Test Report: KVM_Linux_crio 19312

83c70ad5a6aa5486f12c3da7bd4d516b254f0dc6:2024-07-29:35557

Failed tests (31/320)

Order   Failed test   Duration (s)
43 TestAddons/parallel/Ingress 153.28
45 TestAddons/parallel/MetricsServer 350.67
54 TestAddons/StoppedEnableDisable 154.41
148 TestFunctional/parallel/ImageCommands/ImageBuild 7.46
173 TestMultiControlPlane/serial/StopSecondaryNode 141.79
175 TestMultiControlPlane/serial/RestartSecondaryNode 51.78
177 TestMultiControlPlane/serial/RestartClusterKeepsNodes 378.52
180 TestMultiControlPlane/serial/StopCluster 141.67
240 TestMultiNode/serial/RestartKeepsNodes 328.47
242 TestMultiNode/serial/StopMultiNode 141.31
249 TestPreload 273.18
257 TestKubernetesUpgrade 375.97
295 TestPause/serial/SecondStartNoReconfiguration 92.79
323 TestStartStop/group/old-k8s-version/serial/FirstStart 294.61
348 TestStartStop/group/no-preload/serial/Stop 138.98
351 TestStartStop/group/embed-certs/serial/Stop 139.2
354 TestStartStop/group/default-k8s-diff-port/serial/Stop 138.9
355 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 12.38
357 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 12.38
358 TestStartStop/group/old-k8s-version/serial/DeployApp 0.47
359 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 105.7
361 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 12.38
365 TestStartStop/group/old-k8s-version/serial/SecondStart 703.72
366 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 544.13
367 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 544.18
368 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 544.17
369 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 543.37
370 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 440.21
371 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 456.1
372 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 324.08
373 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 151.98
TestAddons/parallel/Ingress (153.28s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-685520 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-685520 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-685520 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [2cde6bfb-dbfb-436d-b105-79bd0f65c822] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [2cde6bfb-dbfb-436d-b105-79bd0f65c822] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.004913707s
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-685520 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-685520 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m11.165674489s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:288: (dbg) Run:  kubectl --context addons-685520 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-685520 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.39.137
addons_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p addons-685520 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:308: (dbg) Done: out/minikube-linux-amd64 -p addons-685520 addons disable ingress-dns --alsologtostderr -v=1: (1.489388862s)
addons_test.go:313: (dbg) Run:  out/minikube-linux-amd64 -p addons-685520 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-linux-amd64 -p addons-685520 addons disable ingress --alsologtostderr -v=1: (7.678675043s)
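The step that fails above is the in-VM probe at addons_test.go:264: minikube ssh relays the remote command, and the stderr line "ssh: Process exited with status 28" is curl's exit status 28, i.e. the request to the ingress controller timed out rather than being refused. A minimal way to re-run the same probe by hand, assuming the addons-685520 profile were still up with the ingress addon enabled (the post-mortem below runs only after the addon has already been disabled), would be:

	# check that the ingress-nginx controller pod is Ready
	kubectl --context addons-685520 -n ingress-nginx get pods \
	  -l app.kubernetes.io/component=controller

	# repeat the probe the test performs; a non-zero exit with
	# "Process exited with status 28" on stderr means the in-VM curl timed out
	out/minikube-linux-amd64 -p addons-685520 ssh \
	  "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"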
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-685520 -n addons-685520
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-685520 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-685520 logs -n 25: (1.134096637s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-009744                                                                     | download-only-009744 | jenkins | v1.33.1 | 29 Jul 24 18:17 UTC | 29 Jul 24 18:17 UTC |
	| delete  | -p download-only-881045                                                                     | download-only-881045 | jenkins | v1.33.1 | 29 Jul 24 18:17 UTC | 29 Jul 24 18:17 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-644927 | jenkins | v1.33.1 | 29 Jul 24 18:17 UTC |                     |
	|         | binary-mirror-644927                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:46501                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-644927                                                                     | binary-mirror-644927 | jenkins | v1.33.1 | 29 Jul 24 18:17 UTC | 29 Jul 24 18:17 UTC |
	| addons  | disable dashboard -p                                                                        | addons-685520        | jenkins | v1.33.1 | 29 Jul 24 18:17 UTC |                     |
	|         | addons-685520                                                                               |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-685520        | jenkins | v1.33.1 | 29 Jul 24 18:17 UTC |                     |
	|         | addons-685520                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-685520 --wait=true                                                                | addons-685520        | jenkins | v1.33.1 | 29 Jul 24 18:17 UTC | 29 Jul 24 18:20 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| addons  | addons-685520 addons disable                                                                | addons-685520        | jenkins | v1.33.1 | 29 Jul 24 18:20 UTC | 29 Jul 24 18:20 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-685520 addons disable                                                                | addons-685520        | jenkins | v1.33.1 | 29 Jul 24 18:20 UTC | 29 Jul 24 18:21 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| ssh     | addons-685520 ssh cat                                                                       | addons-685520        | jenkins | v1.33.1 | 29 Jul 24 18:20 UTC | 29 Jul 24 18:20 UTC |
	|         | /opt/local-path-provisioner/pvc-144acf15-a758-428b-874b-327ac7591c4a_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-685520 addons disable                                                                | addons-685520        | jenkins | v1.33.1 | 29 Jul 24 18:20 UTC | 29 Jul 24 18:21 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-685520 ip                                                                            | addons-685520        | jenkins | v1.33.1 | 29 Jul 24 18:21 UTC | 29 Jul 24 18:21 UTC |
	| addons  | addons-685520 addons disable                                                                | addons-685520        | jenkins | v1.33.1 | 29 Jul 24 18:21 UTC | 29 Jul 24 18:21 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-685520        | jenkins | v1.33.1 | 29 Jul 24 18:21 UTC | 29 Jul 24 18:21 UTC |
	|         | -p addons-685520                                                                            |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-685520        | jenkins | v1.33.1 | 29 Jul 24 18:21 UTC | 29 Jul 24 18:21 UTC |
	|         | -p addons-685520                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-685520        | jenkins | v1.33.1 | 29 Jul 24 18:21 UTC | 29 Jul 24 18:21 UTC |
	|         | addons-685520                                                                               |                      |         |         |                     |                     |
	| addons  | addons-685520 addons disable                                                                | addons-685520        | jenkins | v1.33.1 | 29 Jul 24 18:21 UTC | 29 Jul 24 18:21 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-685520 addons                                                                        | addons-685520        | jenkins | v1.33.1 | 29 Jul 24 18:21 UTC | 29 Jul 24 18:21 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-685520 addons disable                                                                | addons-685520        | jenkins | v1.33.1 | 29 Jul 24 18:21 UTC | 29 Jul 24 18:21 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-685520 addons                                                                        | addons-685520        | jenkins | v1.33.1 | 29 Jul 24 18:21 UTC | 29 Jul 24 18:21 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-685520        | jenkins | v1.33.1 | 29 Jul 24 18:21 UTC | 29 Jul 24 18:21 UTC |
	|         | addons-685520                                                                               |                      |         |         |                     |                     |
	| ssh     | addons-685520 ssh curl -s                                                                   | addons-685520        | jenkins | v1.33.1 | 29 Jul 24 18:21 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| ip      | addons-685520 ip                                                                            | addons-685520        | jenkins | v1.33.1 | 29 Jul 24 18:24 UTC | 29 Jul 24 18:24 UTC |
	| addons  | addons-685520 addons disable                                                                | addons-685520        | jenkins | v1.33.1 | 29 Jul 24 18:24 UTC | 29 Jul 24 18:24 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-685520 addons disable                                                                | addons-685520        | jenkins | v1.33.1 | 29 Jul 24 18:24 UTC | 29 Jul 24 18:24 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 18:17:26
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 18:17:26.633148 1063162 out.go:291] Setting OutFile to fd 1 ...
	I0729 18:17:26.633416 1063162 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 18:17:26.633426 1063162 out.go:304] Setting ErrFile to fd 2...
	I0729 18:17:26.633430 1063162 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 18:17:26.633654 1063162 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19312-1055011/.minikube/bin
	I0729 18:17:26.634291 1063162 out.go:298] Setting JSON to false
	I0729 18:17:26.635375 1063162 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":7199,"bootTime":1722269848,"procs":329,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 18:17:26.635441 1063162 start.go:139] virtualization: kvm guest
	I0729 18:17:26.637297 1063162 out.go:177] * [addons-685520] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 18:17:26.638755 1063162 out.go:177]   - MINIKUBE_LOCATION=19312
	I0729 18:17:26.638788 1063162 notify.go:220] Checking for updates...
	I0729 18:17:26.641112 1063162 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 18:17:26.642122 1063162 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19312-1055011/kubeconfig
	I0729 18:17:26.643200 1063162 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19312-1055011/.minikube
	I0729 18:17:26.644236 1063162 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 18:17:26.645258 1063162 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 18:17:26.646416 1063162 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 18:17:26.677411 1063162 out.go:177] * Using the kvm2 driver based on user configuration
	I0729 18:17:26.678392 1063162 start.go:297] selected driver: kvm2
	I0729 18:17:26.678402 1063162 start.go:901] validating driver "kvm2" against <nil>
	I0729 18:17:26.678413 1063162 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 18:17:26.679131 1063162 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 18:17:26.679198 1063162 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19312-1055011/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 18:17:26.693127 1063162 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0729 18:17:26.693179 1063162 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 18:17:26.693469 1063162 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 18:17:26.693507 1063162 cni.go:84] Creating CNI manager for ""
	I0729 18:17:26.693518 1063162 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 18:17:26.693531 1063162 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 18:17:26.693608 1063162 start.go:340] cluster config:
	{Name:addons-685520 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-685520 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAg
entPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 18:17:26.693730 1063162 iso.go:125] acquiring lock: {Name:mk0af61c0fec1fd47930e548d03010a532c687b7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 18:17:26.695130 1063162 out.go:177] * Starting "addons-685520" primary control-plane node in "addons-685520" cluster
	I0729 18:17:26.696219 1063162 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 18:17:26.696244 1063162 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0729 18:17:26.696254 1063162 cache.go:56] Caching tarball of preloaded images
	I0729 18:17:26.696322 1063162 preload.go:172] Found /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 18:17:26.696335 1063162 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0729 18:17:26.696674 1063162 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/addons-685520/config.json ...
	I0729 18:17:26.696699 1063162 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/addons-685520/config.json: {Name:mkb3f974718ada620a37bb6878ab326cdb2590b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 18:17:26.696837 1063162 start.go:360] acquireMachinesLock for addons-685520: {Name:mk0d8d947666df844b5fc2c0e0eebbfed69b4140 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 18:17:26.696894 1063162 start.go:364] duration metric: took 40.511µs to acquireMachinesLock for "addons-685520"
	I0729 18:17:26.696915 1063162 start.go:93] Provisioning new machine with config: &{Name:addons-685520 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.30.3 ClusterName:addons-685520 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 18:17:26.697023 1063162 start.go:125] createHost starting for "" (driver="kvm2")
	I0729 18:17:26.698277 1063162 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0729 18:17:26.698403 1063162 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:17:26.698437 1063162 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:17:26.712265 1063162 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34849
	I0729 18:17:26.712662 1063162 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:17:26.713191 1063162 main.go:141] libmachine: Using API Version  1
	I0729 18:17:26.713211 1063162 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:17:26.713561 1063162 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:17:26.713791 1063162 main.go:141] libmachine: (addons-685520) Calling .GetMachineName
	I0729 18:17:26.713949 1063162 main.go:141] libmachine: (addons-685520) Calling .DriverName
	I0729 18:17:26.714113 1063162 start.go:159] libmachine.API.Create for "addons-685520" (driver="kvm2")
	I0729 18:17:26.714140 1063162 client.go:168] LocalClient.Create starting
	I0729 18:17:26.714184 1063162 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem
	I0729 18:17:26.771273 1063162 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/cert.pem
	I0729 18:17:26.972281 1063162 main.go:141] libmachine: Running pre-create checks...
	I0729 18:17:26.972307 1063162 main.go:141] libmachine: (addons-685520) Calling .PreCreateCheck
	I0729 18:17:26.972824 1063162 main.go:141] libmachine: (addons-685520) Calling .GetConfigRaw
	I0729 18:17:26.973231 1063162 main.go:141] libmachine: Creating machine...
	I0729 18:17:26.973245 1063162 main.go:141] libmachine: (addons-685520) Calling .Create
	I0729 18:17:26.973378 1063162 main.go:141] libmachine: (addons-685520) Creating KVM machine...
	I0729 18:17:26.974575 1063162 main.go:141] libmachine: (addons-685520) DBG | found existing default KVM network
	I0729 18:17:26.975532 1063162 main.go:141] libmachine: (addons-685520) DBG | I0729 18:17:26.975372 1063183 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015ad0}
	I0729 18:17:26.975575 1063162 main.go:141] libmachine: (addons-685520) DBG | created network xml: 
	I0729 18:17:26.975595 1063162 main.go:141] libmachine: (addons-685520) DBG | <network>
	I0729 18:17:26.975606 1063162 main.go:141] libmachine: (addons-685520) DBG |   <name>mk-addons-685520</name>
	I0729 18:17:26.975616 1063162 main.go:141] libmachine: (addons-685520) DBG |   <dns enable='no'/>
	I0729 18:17:26.975642 1063162 main.go:141] libmachine: (addons-685520) DBG |   
	I0729 18:17:26.975657 1063162 main.go:141] libmachine: (addons-685520) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0729 18:17:26.975663 1063162 main.go:141] libmachine: (addons-685520) DBG |     <dhcp>
	I0729 18:17:26.975671 1063162 main.go:141] libmachine: (addons-685520) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0729 18:17:26.975679 1063162 main.go:141] libmachine: (addons-685520) DBG |     </dhcp>
	I0729 18:17:26.975683 1063162 main.go:141] libmachine: (addons-685520) DBG |   </ip>
	I0729 18:17:26.975690 1063162 main.go:141] libmachine: (addons-685520) DBG |   
	I0729 18:17:26.975694 1063162 main.go:141] libmachine: (addons-685520) DBG | </network>
	I0729 18:17:26.975703 1063162 main.go:141] libmachine: (addons-685520) DBG | 
	I0729 18:17:26.980914 1063162 main.go:141] libmachine: (addons-685520) DBG | trying to create private KVM network mk-addons-685520 192.168.39.0/24...
	I0729 18:17:27.043359 1063162 main.go:141] libmachine: (addons-685520) DBG | private KVM network mk-addons-685520 192.168.39.0/24 created
	I0729 18:17:27.043393 1063162 main.go:141] libmachine: (addons-685520) DBG | I0729 18:17:27.043299 1063183 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19312-1055011/.minikube
	I0729 18:17:27.043407 1063162 main.go:141] libmachine: (addons-685520) Setting up store path in /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/addons-685520 ...
	I0729 18:17:27.043431 1063162 main.go:141] libmachine: (addons-685520) Building disk image from file:///home/jenkins/minikube-integration/19312-1055011/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso
	I0729 18:17:27.043458 1063162 main.go:141] libmachine: (addons-685520) Downloading /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19312-1055011/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso...
	I0729 18:17:27.306761 1063162 main.go:141] libmachine: (addons-685520) DBG | I0729 18:17:27.306633 1063183 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/addons-685520/id_rsa...
	I0729 18:17:27.455576 1063162 main.go:141] libmachine: (addons-685520) DBG | I0729 18:17:27.455436 1063183 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/addons-685520/addons-685520.rawdisk...
	I0729 18:17:27.455610 1063162 main.go:141] libmachine: (addons-685520) DBG | Writing magic tar header
	I0729 18:17:27.455620 1063162 main.go:141] libmachine: (addons-685520) DBG | Writing SSH key tar header
	I0729 18:17:27.455628 1063162 main.go:141] libmachine: (addons-685520) DBG | I0729 18:17:27.455550 1063183 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/addons-685520 ...
	I0729 18:17:27.455639 1063162 main.go:141] libmachine: (addons-685520) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/addons-685520
	I0729 18:17:27.455743 1063162 main.go:141] libmachine: (addons-685520) Setting executable bit set on /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/addons-685520 (perms=drwx------)
	I0729 18:17:27.455767 1063162 main.go:141] libmachine: (addons-685520) Setting executable bit set on /home/jenkins/minikube-integration/19312-1055011/.minikube/machines (perms=drwxr-xr-x)
	I0729 18:17:27.455794 1063162 main.go:141] libmachine: (addons-685520) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19312-1055011/.minikube/machines
	I0729 18:17:27.455806 1063162 main.go:141] libmachine: (addons-685520) Setting executable bit set on /home/jenkins/minikube-integration/19312-1055011/.minikube (perms=drwxr-xr-x)
	I0729 18:17:27.455822 1063162 main.go:141] libmachine: (addons-685520) Setting executable bit set on /home/jenkins/minikube-integration/19312-1055011 (perms=drwxrwxr-x)
	I0729 18:17:27.455831 1063162 main.go:141] libmachine: (addons-685520) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0729 18:17:27.455844 1063162 main.go:141] libmachine: (addons-685520) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0729 18:17:27.455859 1063162 main.go:141] libmachine: (addons-685520) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19312-1055011/.minikube
	I0729 18:17:27.455870 1063162 main.go:141] libmachine: (addons-685520) Creating domain...
	I0729 18:17:27.455884 1063162 main.go:141] libmachine: (addons-685520) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19312-1055011
	I0729 18:17:27.455897 1063162 main.go:141] libmachine: (addons-685520) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0729 18:17:27.455910 1063162 main.go:141] libmachine: (addons-685520) DBG | Checking permissions on dir: /home/jenkins
	I0729 18:17:27.455918 1063162 main.go:141] libmachine: (addons-685520) DBG | Checking permissions on dir: /home
	I0729 18:17:27.455941 1063162 main.go:141] libmachine: (addons-685520) DBG | Skipping /home - not owner
	I0729 18:17:27.456879 1063162 main.go:141] libmachine: (addons-685520) define libvirt domain using xml: 
	I0729 18:17:27.456898 1063162 main.go:141] libmachine: (addons-685520) <domain type='kvm'>
	I0729 18:17:27.456906 1063162 main.go:141] libmachine: (addons-685520)   <name>addons-685520</name>
	I0729 18:17:27.456911 1063162 main.go:141] libmachine: (addons-685520)   <memory unit='MiB'>4000</memory>
	I0729 18:17:27.456916 1063162 main.go:141] libmachine: (addons-685520)   <vcpu>2</vcpu>
	I0729 18:17:27.456924 1063162 main.go:141] libmachine: (addons-685520)   <features>
	I0729 18:17:27.456929 1063162 main.go:141] libmachine: (addons-685520)     <acpi/>
	I0729 18:17:27.456933 1063162 main.go:141] libmachine: (addons-685520)     <apic/>
	I0729 18:17:27.456938 1063162 main.go:141] libmachine: (addons-685520)     <pae/>
	I0729 18:17:27.456942 1063162 main.go:141] libmachine: (addons-685520)     
	I0729 18:17:27.456947 1063162 main.go:141] libmachine: (addons-685520)   </features>
	I0729 18:17:27.456954 1063162 main.go:141] libmachine: (addons-685520)   <cpu mode='host-passthrough'>
	I0729 18:17:27.456959 1063162 main.go:141] libmachine: (addons-685520)   
	I0729 18:17:27.456964 1063162 main.go:141] libmachine: (addons-685520)   </cpu>
	I0729 18:17:27.456969 1063162 main.go:141] libmachine: (addons-685520)   <os>
	I0729 18:17:27.456974 1063162 main.go:141] libmachine: (addons-685520)     <type>hvm</type>
	I0729 18:17:27.456979 1063162 main.go:141] libmachine: (addons-685520)     <boot dev='cdrom'/>
	I0729 18:17:27.456985 1063162 main.go:141] libmachine: (addons-685520)     <boot dev='hd'/>
	I0729 18:17:27.456991 1063162 main.go:141] libmachine: (addons-685520)     <bootmenu enable='no'/>
	I0729 18:17:27.456995 1063162 main.go:141] libmachine: (addons-685520)   </os>
	I0729 18:17:27.457012 1063162 main.go:141] libmachine: (addons-685520)   <devices>
	I0729 18:17:27.457031 1063162 main.go:141] libmachine: (addons-685520)     <disk type='file' device='cdrom'>
	I0729 18:17:27.457041 1063162 main.go:141] libmachine: (addons-685520)       <source file='/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/addons-685520/boot2docker.iso'/>
	I0729 18:17:27.457048 1063162 main.go:141] libmachine: (addons-685520)       <target dev='hdc' bus='scsi'/>
	I0729 18:17:27.457054 1063162 main.go:141] libmachine: (addons-685520)       <readonly/>
	I0729 18:17:27.457061 1063162 main.go:141] libmachine: (addons-685520)     </disk>
	I0729 18:17:27.457067 1063162 main.go:141] libmachine: (addons-685520)     <disk type='file' device='disk'>
	I0729 18:17:27.457075 1063162 main.go:141] libmachine: (addons-685520)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0729 18:17:27.457083 1063162 main.go:141] libmachine: (addons-685520)       <source file='/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/addons-685520/addons-685520.rawdisk'/>
	I0729 18:17:27.457090 1063162 main.go:141] libmachine: (addons-685520)       <target dev='hda' bus='virtio'/>
	I0729 18:17:27.457095 1063162 main.go:141] libmachine: (addons-685520)     </disk>
	I0729 18:17:27.457106 1063162 main.go:141] libmachine: (addons-685520)     <interface type='network'>
	I0729 18:17:27.457126 1063162 main.go:141] libmachine: (addons-685520)       <source network='mk-addons-685520'/>
	I0729 18:17:27.457145 1063162 main.go:141] libmachine: (addons-685520)       <model type='virtio'/>
	I0729 18:17:27.457155 1063162 main.go:141] libmachine: (addons-685520)     </interface>
	I0729 18:17:27.457165 1063162 main.go:141] libmachine: (addons-685520)     <interface type='network'>
	I0729 18:17:27.457177 1063162 main.go:141] libmachine: (addons-685520)       <source network='default'/>
	I0729 18:17:27.457187 1063162 main.go:141] libmachine: (addons-685520)       <model type='virtio'/>
	I0729 18:17:27.457198 1063162 main.go:141] libmachine: (addons-685520)     </interface>
	I0729 18:17:27.457208 1063162 main.go:141] libmachine: (addons-685520)     <serial type='pty'>
	I0729 18:17:27.457219 1063162 main.go:141] libmachine: (addons-685520)       <target port='0'/>
	I0729 18:17:27.457230 1063162 main.go:141] libmachine: (addons-685520)     </serial>
	I0729 18:17:27.457242 1063162 main.go:141] libmachine: (addons-685520)     <console type='pty'>
	I0729 18:17:27.457253 1063162 main.go:141] libmachine: (addons-685520)       <target type='serial' port='0'/>
	I0729 18:17:27.457264 1063162 main.go:141] libmachine: (addons-685520)     </console>
	I0729 18:17:27.457274 1063162 main.go:141] libmachine: (addons-685520)     <rng model='virtio'>
	I0729 18:17:27.457286 1063162 main.go:141] libmachine: (addons-685520)       <backend model='random'>/dev/random</backend>
	I0729 18:17:27.457297 1063162 main.go:141] libmachine: (addons-685520)     </rng>
	I0729 18:17:27.457308 1063162 main.go:141] libmachine: (addons-685520)     
	I0729 18:17:27.457316 1063162 main.go:141] libmachine: (addons-685520)     
	I0729 18:17:27.457327 1063162 main.go:141] libmachine: (addons-685520)   </devices>
	I0729 18:17:27.457337 1063162 main.go:141] libmachine: (addons-685520) </domain>
	I0729 18:17:27.457349 1063162 main.go:141] libmachine: (addons-685520) 
	I0729 18:17:27.462907 1063162 main.go:141] libmachine: (addons-685520) DBG | domain addons-685520 has defined MAC address 52:54:00:1e:40:45 in network default
	I0729 18:17:27.463376 1063162 main.go:141] libmachine: (addons-685520) Ensuring networks are active...
	I0729 18:17:27.463390 1063162 main.go:141] libmachine: (addons-685520) DBG | domain addons-685520 has defined MAC address 52:54:00:5a:98:d7 in network mk-addons-685520
	I0729 18:17:27.464025 1063162 main.go:141] libmachine: (addons-685520) Ensuring network default is active
	I0729 18:17:27.464319 1063162 main.go:141] libmachine: (addons-685520) Ensuring network mk-addons-685520 is active
	I0729 18:17:27.465941 1063162 main.go:141] libmachine: (addons-685520) Getting domain xml...
	I0729 18:17:27.466655 1063162 main.go:141] libmachine: (addons-685520) Creating domain...
	I0729 18:17:28.694227 1063162 main.go:141] libmachine: (addons-685520) Waiting to get IP...
	I0729 18:17:28.694946 1063162 main.go:141] libmachine: (addons-685520) DBG | domain addons-685520 has defined MAC address 52:54:00:5a:98:d7 in network mk-addons-685520
	I0729 18:17:28.695317 1063162 main.go:141] libmachine: (addons-685520) DBG | unable to find current IP address of domain addons-685520 in network mk-addons-685520
	I0729 18:17:28.695357 1063162 main.go:141] libmachine: (addons-685520) DBG | I0729 18:17:28.695289 1063183 retry.go:31] will retry after 285.397876ms: waiting for machine to come up
	I0729 18:17:28.982886 1063162 main.go:141] libmachine: (addons-685520) DBG | domain addons-685520 has defined MAC address 52:54:00:5a:98:d7 in network mk-addons-685520
	I0729 18:17:28.983272 1063162 main.go:141] libmachine: (addons-685520) DBG | unable to find current IP address of domain addons-685520 in network mk-addons-685520
	I0729 18:17:28.983301 1063162 main.go:141] libmachine: (addons-685520) DBG | I0729 18:17:28.983234 1063183 retry.go:31] will retry after 258.835712ms: waiting for machine to come up
	I0729 18:17:29.244997 1063162 main.go:141] libmachine: (addons-685520) DBG | domain addons-685520 has defined MAC address 52:54:00:5a:98:d7 in network mk-addons-685520
	I0729 18:17:29.245418 1063162 main.go:141] libmachine: (addons-685520) DBG | unable to find current IP address of domain addons-685520 in network mk-addons-685520
	I0729 18:17:29.245446 1063162 main.go:141] libmachine: (addons-685520) DBG | I0729 18:17:29.245344 1063183 retry.go:31] will retry after 378.941166ms: waiting for machine to come up
	I0729 18:17:29.626029 1063162 main.go:141] libmachine: (addons-685520) DBG | domain addons-685520 has defined MAC address 52:54:00:5a:98:d7 in network mk-addons-685520
	I0729 18:17:29.626403 1063162 main.go:141] libmachine: (addons-685520) DBG | unable to find current IP address of domain addons-685520 in network mk-addons-685520
	I0729 18:17:29.626427 1063162 main.go:141] libmachine: (addons-685520) DBG | I0729 18:17:29.626344 1063183 retry.go:31] will retry after 593.378281ms: waiting for machine to come up
	I0729 18:17:30.221096 1063162 main.go:141] libmachine: (addons-685520) DBG | domain addons-685520 has defined MAC address 52:54:00:5a:98:d7 in network mk-addons-685520
	I0729 18:17:30.221580 1063162 main.go:141] libmachine: (addons-685520) DBG | unable to find current IP address of domain addons-685520 in network mk-addons-685520
	I0729 18:17:30.221610 1063162 main.go:141] libmachine: (addons-685520) DBG | I0729 18:17:30.221546 1063183 retry.go:31] will retry after 483.770321ms: waiting for machine to come up
	I0729 18:17:30.707391 1063162 main.go:141] libmachine: (addons-685520) DBG | domain addons-685520 has defined MAC address 52:54:00:5a:98:d7 in network mk-addons-685520
	I0729 18:17:30.707819 1063162 main.go:141] libmachine: (addons-685520) DBG | unable to find current IP address of domain addons-685520 in network mk-addons-685520
	I0729 18:17:30.707848 1063162 main.go:141] libmachine: (addons-685520) DBG | I0729 18:17:30.707768 1063183 retry.go:31] will retry after 768.217023ms: waiting for machine to come up
	I0729 18:17:31.477691 1063162 main.go:141] libmachine: (addons-685520) DBG | domain addons-685520 has defined MAC address 52:54:00:5a:98:d7 in network mk-addons-685520
	I0729 18:17:31.478059 1063162 main.go:141] libmachine: (addons-685520) DBG | unable to find current IP address of domain addons-685520 in network mk-addons-685520
	I0729 18:17:31.478111 1063162 main.go:141] libmachine: (addons-685520) DBG | I0729 18:17:31.478010 1063183 retry.go:31] will retry after 853.729951ms: waiting for machine to come up
	I0729 18:17:32.332902 1063162 main.go:141] libmachine: (addons-685520) DBG | domain addons-685520 has defined MAC address 52:54:00:5a:98:d7 in network mk-addons-685520
	I0729 18:17:32.333238 1063162 main.go:141] libmachine: (addons-685520) DBG | unable to find current IP address of domain addons-685520 in network mk-addons-685520
	I0729 18:17:32.333263 1063162 main.go:141] libmachine: (addons-685520) DBG | I0729 18:17:32.333187 1063183 retry.go:31] will retry after 1.462722028s: waiting for machine to come up
	I0729 18:17:33.797920 1063162 main.go:141] libmachine: (addons-685520) DBG | domain addons-685520 has defined MAC address 52:54:00:5a:98:d7 in network mk-addons-685520
	I0729 18:17:33.798240 1063162 main.go:141] libmachine: (addons-685520) DBG | unable to find current IP address of domain addons-685520 in network mk-addons-685520
	I0729 18:17:33.798269 1063162 main.go:141] libmachine: (addons-685520) DBG | I0729 18:17:33.798203 1063183 retry.go:31] will retry after 1.301641374s: waiting for machine to come up
	I0729 18:17:35.101553 1063162 main.go:141] libmachine: (addons-685520) DBG | domain addons-685520 has defined MAC address 52:54:00:5a:98:d7 in network mk-addons-685520
	I0729 18:17:35.101978 1063162 main.go:141] libmachine: (addons-685520) DBG | unable to find current IP address of domain addons-685520 in network mk-addons-685520
	I0729 18:17:35.102008 1063162 main.go:141] libmachine: (addons-685520) DBG | I0729 18:17:35.101914 1063183 retry.go:31] will retry after 1.732879428s: waiting for machine to come up
	I0729 18:17:36.836789 1063162 main.go:141] libmachine: (addons-685520) DBG | domain addons-685520 has defined MAC address 52:54:00:5a:98:d7 in network mk-addons-685520
	I0729 18:17:36.837227 1063162 main.go:141] libmachine: (addons-685520) DBG | unable to find current IP address of domain addons-685520 in network mk-addons-685520
	I0729 18:17:36.837258 1063162 main.go:141] libmachine: (addons-685520) DBG | I0729 18:17:36.837176 1063183 retry.go:31] will retry after 2.830287802s: waiting for machine to come up
	I0729 18:17:39.668551 1063162 main.go:141] libmachine: (addons-685520) DBG | domain addons-685520 has defined MAC address 52:54:00:5a:98:d7 in network mk-addons-685520
	I0729 18:17:39.668906 1063162 main.go:141] libmachine: (addons-685520) DBG | unable to find current IP address of domain addons-685520 in network mk-addons-685520
	I0729 18:17:39.668935 1063162 main.go:141] libmachine: (addons-685520) DBG | I0729 18:17:39.668849 1063183 retry.go:31] will retry after 2.912144664s: waiting for machine to come up
	I0729 18:17:42.582296 1063162 main.go:141] libmachine: (addons-685520) DBG | domain addons-685520 has defined MAC address 52:54:00:5a:98:d7 in network mk-addons-685520
	I0729 18:17:42.582640 1063162 main.go:141] libmachine: (addons-685520) DBG | unable to find current IP address of domain addons-685520 in network mk-addons-685520
	I0729 18:17:42.582664 1063162 main.go:141] libmachine: (addons-685520) DBG | I0729 18:17:42.582616 1063183 retry.go:31] will retry after 4.044303851s: waiting for machine to come up
	I0729 18:17:46.631668 1063162 main.go:141] libmachine: (addons-685520) DBG | domain addons-685520 has defined MAC address 52:54:00:5a:98:d7 in network mk-addons-685520
	I0729 18:17:46.632062 1063162 main.go:141] libmachine: (addons-685520) DBG | unable to find current IP address of domain addons-685520 in network mk-addons-685520
	I0729 18:17:46.632093 1063162 main.go:141] libmachine: (addons-685520) DBG | I0729 18:17:46.632016 1063183 retry.go:31] will retry after 4.332408449s: waiting for machine to come up
	I0729 18:17:50.967922 1063162 main.go:141] libmachine: (addons-685520) DBG | domain addons-685520 has defined MAC address 52:54:00:5a:98:d7 in network mk-addons-685520
	I0729 18:17:50.968392 1063162 main.go:141] libmachine: (addons-685520) Found IP for machine: 192.168.39.137
	I0729 18:17:50.968413 1063162 main.go:141] libmachine: (addons-685520) Reserving static IP address...
	I0729 18:17:50.968427 1063162 main.go:141] libmachine: (addons-685520) DBG | domain addons-685520 has current primary IP address 192.168.39.137 and MAC address 52:54:00:5a:98:d7 in network mk-addons-685520
	I0729 18:17:50.968782 1063162 main.go:141] libmachine: (addons-685520) DBG | unable to find host DHCP lease matching {name: "addons-685520", mac: "52:54:00:5a:98:d7", ip: "192.168.39.137"} in network mk-addons-685520
	I0729 18:17:51.080239 1063162 main.go:141] libmachine: (addons-685520) DBG | Getting to WaitForSSH function...
	I0729 18:17:51.080270 1063162 main.go:141] libmachine: (addons-685520) Reserved static IP address: 192.168.39.137
	I0729 18:17:51.080284 1063162 main.go:141] libmachine: (addons-685520) Waiting for SSH to be available...
	I0729 18:17:51.082620 1063162 main.go:141] libmachine: (addons-685520) DBG | domain addons-685520 has defined MAC address 52:54:00:5a:98:d7 in network mk-addons-685520
	I0729 18:17:51.083014 1063162 main.go:141] libmachine: (addons-685520) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:98:d7", ip: ""} in network mk-addons-685520: {Iface:virbr1 ExpiryTime:2024-07-29 19:17:41 +0000 UTC Type:0 Mac:52:54:00:5a:98:d7 Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:minikube Clientid:01:52:54:00:5a:98:d7}
	I0729 18:17:51.083039 1063162 main.go:141] libmachine: (addons-685520) DBG | domain addons-685520 has defined IP address 192.168.39.137 and MAC address 52:54:00:5a:98:d7 in network mk-addons-685520
	I0729 18:17:51.083269 1063162 main.go:141] libmachine: (addons-685520) DBG | Using SSH client type: external
	I0729 18:17:51.083299 1063162 main.go:141] libmachine: (addons-685520) DBG | Using SSH private key: /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/addons-685520/id_rsa (-rw-------)
	I0729 18:17:51.083331 1063162 main.go:141] libmachine: (addons-685520) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.137 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/addons-685520/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 18:17:51.083352 1063162 main.go:141] libmachine: (addons-685520) DBG | About to run SSH command:
	I0729 18:17:51.083368 1063162 main.go:141] libmachine: (addons-685520) DBG | exit 0
	I0729 18:17:51.211082 1063162 main.go:141] libmachine: (addons-685520) DBG | SSH cmd err, output: <nil>: 
	I0729 18:17:51.211420 1063162 main.go:141] libmachine: (addons-685520) KVM machine creation complete!
	I0729 18:17:51.211688 1063162 main.go:141] libmachine: (addons-685520) Calling .GetConfigRaw
	I0729 18:17:51.225148 1063162 main.go:141] libmachine: (addons-685520) Calling .DriverName
	I0729 18:17:51.225476 1063162 main.go:141] libmachine: (addons-685520) Calling .DriverName
	I0729 18:17:51.225686 1063162 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0729 18:17:51.225702 1063162 main.go:141] libmachine: (addons-685520) Calling .GetState
	I0729 18:17:51.226931 1063162 main.go:141] libmachine: Detecting operating system of created instance...
	I0729 18:17:51.226949 1063162 main.go:141] libmachine: Waiting for SSH to be available...
	I0729 18:17:51.226958 1063162 main.go:141] libmachine: Getting to WaitForSSH function...
	I0729 18:17:51.226966 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHHostname
	I0729 18:17:51.229414 1063162 main.go:141] libmachine: (addons-685520) DBG | domain addons-685520 has defined MAC address 52:54:00:5a:98:d7 in network mk-addons-685520
	I0729 18:17:51.229734 1063162 main.go:141] libmachine: (addons-685520) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:98:d7", ip: ""} in network mk-addons-685520: {Iface:virbr1 ExpiryTime:2024-07-29 19:17:41 +0000 UTC Type:0 Mac:52:54:00:5a:98:d7 Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:addons-685520 Clientid:01:52:54:00:5a:98:d7}
	I0729 18:17:51.229765 1063162 main.go:141] libmachine: (addons-685520) DBG | domain addons-685520 has defined IP address 192.168.39.137 and MAC address 52:54:00:5a:98:d7 in network mk-addons-685520
	I0729 18:17:51.229858 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHPort
	I0729 18:17:51.230009 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHKeyPath
	I0729 18:17:51.230175 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHKeyPath
	I0729 18:17:51.230338 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHUsername
	I0729 18:17:51.230496 1063162 main.go:141] libmachine: Using SSH client type: native
	I0729 18:17:51.230727 1063162 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.137 22 <nil> <nil>}
	I0729 18:17:51.230742 1063162 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0729 18:17:51.330175 1063162 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 18:17:51.330202 1063162 main.go:141] libmachine: Detecting the provisioner...
	I0729 18:17:51.330210 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHHostname
	I0729 18:17:51.332848 1063162 main.go:141] libmachine: (addons-685520) DBG | domain addons-685520 has defined MAC address 52:54:00:5a:98:d7 in network mk-addons-685520
	I0729 18:17:51.333233 1063162 main.go:141] libmachine: (addons-685520) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:98:d7", ip: ""} in network mk-addons-685520: {Iface:virbr1 ExpiryTime:2024-07-29 19:17:41 +0000 UTC Type:0 Mac:52:54:00:5a:98:d7 Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:addons-685520 Clientid:01:52:54:00:5a:98:d7}
	I0729 18:17:51.333265 1063162 main.go:141] libmachine: (addons-685520) DBG | domain addons-685520 has defined IP address 192.168.39.137 and MAC address 52:54:00:5a:98:d7 in network mk-addons-685520
	I0729 18:17:51.333387 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHPort
	I0729 18:17:51.333628 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHKeyPath
	I0729 18:17:51.333806 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHKeyPath
	I0729 18:17:51.333926 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHUsername
	I0729 18:17:51.334086 1063162 main.go:141] libmachine: Using SSH client type: native
	I0729 18:17:51.334306 1063162 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.137 22 <nil> <nil>}
	I0729 18:17:51.334319 1063162 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0729 18:17:51.435676 1063162 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0729 18:17:51.435784 1063162 main.go:141] libmachine: found compatible host: buildroot
	I0729 18:17:51.435794 1063162 main.go:141] libmachine: Provisioning with buildroot...
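
The lines above show the provisioner being picked from the `cat /etc/os-release` output (NAME/ID=buildroot). A minimal sketch of how such a decision can be made from the ID field; the function name and the exact matching rule are assumptions for illustration, not minikube's actual code:

    package main

    import (
    	"bufio"
    	"fmt"
    	"os"
    	"strings"
    )

    // detectProvisioner reads /etc/os-release and returns the ID field,
    // e.g. "buildroot" for the guest above. Simplified sketch only.
    func detectProvisioner(path string) (string, error) {
    	f, err := os.Open(path)
    	if err != nil {
    		return "", err
    	}
    	defer f.Close()
    	sc := bufio.NewScanner(f)
    	for sc.Scan() {
    		line := sc.Text()
    		if strings.HasPrefix(line, "ID=") {
    			return strings.Trim(strings.TrimPrefix(line, "ID="), `"`), nil
    		}
    	}
    	return "", fmt.Errorf("ID not found in %s", path)
    }

    func main() {
    	id, err := detectProvisioner("/etc/os-release")
    	fmt.Println(id, err)
    }
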
	I0729 18:17:51.435802 1063162 main.go:141] libmachine: (addons-685520) Calling .GetMachineName
	I0729 18:17:51.436065 1063162 buildroot.go:166] provisioning hostname "addons-685520"
	I0729 18:17:51.436109 1063162 main.go:141] libmachine: (addons-685520) Calling .GetMachineName
	I0729 18:17:51.436322 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHHostname
	I0729 18:17:51.438999 1063162 main.go:141] libmachine: (addons-685520) DBG | domain addons-685520 has defined MAC address 52:54:00:5a:98:d7 in network mk-addons-685520
	I0729 18:17:51.439359 1063162 main.go:141] libmachine: (addons-685520) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:98:d7", ip: ""} in network mk-addons-685520: {Iface:virbr1 ExpiryTime:2024-07-29 19:17:41 +0000 UTC Type:0 Mac:52:54:00:5a:98:d7 Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:addons-685520 Clientid:01:52:54:00:5a:98:d7}
	I0729 18:17:51.439383 1063162 main.go:141] libmachine: (addons-685520) DBG | domain addons-685520 has defined IP address 192.168.39.137 and MAC address 52:54:00:5a:98:d7 in network mk-addons-685520
	I0729 18:17:51.439538 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHPort
	I0729 18:17:51.439802 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHKeyPath
	I0729 18:17:51.440053 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHKeyPath
	I0729 18:17:51.440215 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHUsername
	I0729 18:17:51.440369 1063162 main.go:141] libmachine: Using SSH client type: native
	I0729 18:17:51.440545 1063162 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.137 22 <nil> <nil>}
	I0729 18:17:51.440556 1063162 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-685520 && echo "addons-685520" | sudo tee /etc/hostname
	I0729 18:17:51.553992 1063162 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-685520
	
	I0729 18:17:51.554026 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHHostname
	I0729 18:17:51.556561 1063162 main.go:141] libmachine: (addons-685520) DBG | domain addons-685520 has defined MAC address 52:54:00:5a:98:d7 in network mk-addons-685520
	I0729 18:17:51.556885 1063162 main.go:141] libmachine: (addons-685520) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:98:d7", ip: ""} in network mk-addons-685520: {Iface:virbr1 ExpiryTime:2024-07-29 19:17:41 +0000 UTC Type:0 Mac:52:54:00:5a:98:d7 Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:addons-685520 Clientid:01:52:54:00:5a:98:d7}
	I0729 18:17:51.556914 1063162 main.go:141] libmachine: (addons-685520) DBG | domain addons-685520 has defined IP address 192.168.39.137 and MAC address 52:54:00:5a:98:d7 in network mk-addons-685520
	I0729 18:17:51.557006 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHPort
	I0729 18:17:51.557215 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHKeyPath
	I0729 18:17:51.557375 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHKeyPath
	I0729 18:17:51.557522 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHUsername
	I0729 18:17:51.557684 1063162 main.go:141] libmachine: Using SSH client type: native
	I0729 18:17:51.557885 1063162 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.137 22 <nil> <nil>}
	I0729 18:17:51.557907 1063162 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-685520' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-685520/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-685520' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 18:17:51.669488 1063162 main.go:141] libmachine: SSH cmd err, output: <nil>: 
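
Hostname provisioning above runs two SSH commands: one sets the hostname and persists it to /etc/hostname, the other makes /etc/hosts map 127.0.1.1 to the new name. A sketch of how those command strings can be assembled (the helper and its name are illustrative, not minikube's real code; the shell text mirrors the log):

    package main

    import "fmt"

    // hostnameCommands returns the two shell commands used to provision a
    // hostname over SSH: set/persist the hostname, then fix up /etc/hosts.
    func hostnameCommands(hostname string) (setCmd, hostsCmd string) {
    	setCmd = fmt.Sprintf("sudo hostname %s && echo %q | sudo tee /etc/hostname", hostname, hostname)
    	hostsCmd = fmt.Sprintf(`
    		if ! grep -xq '.*\s%[1]s' /etc/hosts; then
    			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
    				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
    			else
    				echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
    			fi
    		fi`, hostname)
    	return setCmd, hostsCmd
    }

    func main() {
    	set, hosts := hostnameCommands("addons-685520")
    	fmt.Println(set)
    	fmt.Println(hosts)
    }
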
	I0729 18:17:51.669525 1063162 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19312-1055011/.minikube CaCertPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19312-1055011/.minikube}
	I0729 18:17:51.669575 1063162 buildroot.go:174] setting up certificates
	I0729 18:17:51.669585 1063162 provision.go:84] configureAuth start
	I0729 18:17:51.669596 1063162 main.go:141] libmachine: (addons-685520) Calling .GetMachineName
	I0729 18:17:51.669874 1063162 main.go:141] libmachine: (addons-685520) Calling .GetIP
	I0729 18:17:51.672562 1063162 main.go:141] libmachine: (addons-685520) DBG | domain addons-685520 has defined MAC address 52:54:00:5a:98:d7 in network mk-addons-685520
	I0729 18:17:51.672893 1063162 main.go:141] libmachine: (addons-685520) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:98:d7", ip: ""} in network mk-addons-685520: {Iface:virbr1 ExpiryTime:2024-07-29 19:17:41 +0000 UTC Type:0 Mac:52:54:00:5a:98:d7 Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:addons-685520 Clientid:01:52:54:00:5a:98:d7}
	I0729 18:17:51.672921 1063162 main.go:141] libmachine: (addons-685520) DBG | domain addons-685520 has defined IP address 192.168.39.137 and MAC address 52:54:00:5a:98:d7 in network mk-addons-685520
	I0729 18:17:51.673069 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHHostname
	I0729 18:17:51.674837 1063162 main.go:141] libmachine: (addons-685520) DBG | domain addons-685520 has defined MAC address 52:54:00:5a:98:d7 in network mk-addons-685520
	I0729 18:17:51.675097 1063162 main.go:141] libmachine: (addons-685520) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:98:d7", ip: ""} in network mk-addons-685520: {Iface:virbr1 ExpiryTime:2024-07-29 19:17:41 +0000 UTC Type:0 Mac:52:54:00:5a:98:d7 Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:addons-685520 Clientid:01:52:54:00:5a:98:d7}
	I0729 18:17:51.675120 1063162 main.go:141] libmachine: (addons-685520) DBG | domain addons-685520 has defined IP address 192.168.39.137 and MAC address 52:54:00:5a:98:d7 in network mk-addons-685520
	I0729 18:17:51.675257 1063162 provision.go:143] copyHostCerts
	I0729 18:17:51.675325 1063162 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19312-1055011/.minikube/key.pem (1679 bytes)
	I0729 18:17:51.693643 1063162 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.pem (1082 bytes)
	I0729 18:17:51.693783 1063162 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19312-1055011/.minikube/cert.pem (1123 bytes)
	I0729 18:17:51.693889 1063162 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca-key.pem org=jenkins.addons-685520 san=[127.0.0.1 192.168.39.137 addons-685520 localhost minikube]
	I0729 18:17:51.781189 1063162 provision.go:177] copyRemoteCerts
	I0729 18:17:51.781280 1063162 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 18:17:51.781321 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHHostname
	I0729 18:17:51.783881 1063162 main.go:141] libmachine: (addons-685520) DBG | domain addons-685520 has defined MAC address 52:54:00:5a:98:d7 in network mk-addons-685520
	I0729 18:17:51.784176 1063162 main.go:141] libmachine: (addons-685520) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:98:d7", ip: ""} in network mk-addons-685520: {Iface:virbr1 ExpiryTime:2024-07-29 19:17:41 +0000 UTC Type:0 Mac:52:54:00:5a:98:d7 Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:addons-685520 Clientid:01:52:54:00:5a:98:d7}
	I0729 18:17:51.784209 1063162 main.go:141] libmachine: (addons-685520) DBG | domain addons-685520 has defined IP address 192.168.39.137 and MAC address 52:54:00:5a:98:d7 in network mk-addons-685520
	I0729 18:17:51.784402 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHPort
	I0729 18:17:51.784603 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHKeyPath
	I0729 18:17:51.784784 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHUsername
	I0729 18:17:51.784929 1063162 sshutil.go:53] new ssh client: &{IP:192.168.39.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/addons-685520/id_rsa Username:docker}
	I0729 18:17:51.865611 1063162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0729 18:17:51.889380 1063162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0729 18:17:51.912104 1063162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0729 18:17:51.934755 1063162 provision.go:87] duration metric: took 265.153857ms to configureAuth
	I0729 18:17:51.934788 1063162 buildroot.go:189] setting minikube options for container-runtime
	I0729 18:17:51.935033 1063162 config.go:182] Loaded profile config "addons-685520": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 18:17:51.935154 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHHostname
	I0729 18:17:51.937640 1063162 main.go:141] libmachine: (addons-685520) DBG | domain addons-685520 has defined MAC address 52:54:00:5a:98:d7 in network mk-addons-685520
	I0729 18:17:51.937972 1063162 main.go:141] libmachine: (addons-685520) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:98:d7", ip: ""} in network mk-addons-685520: {Iface:virbr1 ExpiryTime:2024-07-29 19:17:41 +0000 UTC Type:0 Mac:52:54:00:5a:98:d7 Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:addons-685520 Clientid:01:52:54:00:5a:98:d7}
	I0729 18:17:51.937998 1063162 main.go:141] libmachine: (addons-685520) DBG | domain addons-685520 has defined IP address 192.168.39.137 and MAC address 52:54:00:5a:98:d7 in network mk-addons-685520
	I0729 18:17:51.938234 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHPort
	I0729 18:17:51.938433 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHKeyPath
	I0729 18:17:51.938575 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHKeyPath
	I0729 18:17:51.938706 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHUsername
	I0729 18:17:51.938868 1063162 main.go:141] libmachine: Using SSH client type: native
	I0729 18:17:51.939079 1063162 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.137 22 <nil> <nil>}
	I0729 18:17:51.939096 1063162 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 18:17:52.219421 1063162 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
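
The `%!s(MISSING)` marker in the command above appears to be Go's fmt error for a verb logged without its argument; judging by the resulting file content echoed back, the command that actually ran writes CRIO_MINIKUBE_OPTIONS to /etc/sysconfig/crio.minikube and restarts CRI-O. A sketch that rebuilds that command (helper name is illustrative):

    package main

    import "fmt"

    // crioSysconfigCmd reconstructs the provisioning command whose %s verb is
    // shown as %!s(MISSING) in the log above. Sketch only, not minikube code.
    func crioSysconfigCmd(opts string) string {
    	return "sudo mkdir -p /etc/sysconfig && printf %s \"\n" +
    		"CRIO_MINIKUBE_OPTIONS='" + opts + "'\n" +
    		"\" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio"
    }

    func main() {
    	fmt.Println(crioSysconfigCmd("--insecure-registry 10.96.0.0/12 "))
    }
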
	
	I0729 18:17:52.219449 1063162 main.go:141] libmachine: Checking connection to Docker...
	I0729 18:17:52.219457 1063162 main.go:141] libmachine: (addons-685520) Calling .GetURL
	I0729 18:17:52.220680 1063162 main.go:141] libmachine: (addons-685520) DBG | Using libvirt version 6000000
	I0729 18:17:52.222536 1063162 main.go:141] libmachine: (addons-685520) DBG | domain addons-685520 has defined MAC address 52:54:00:5a:98:d7 in network mk-addons-685520
	I0729 18:17:52.222891 1063162 main.go:141] libmachine: (addons-685520) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:98:d7", ip: ""} in network mk-addons-685520: {Iface:virbr1 ExpiryTime:2024-07-29 19:17:41 +0000 UTC Type:0 Mac:52:54:00:5a:98:d7 Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:addons-685520 Clientid:01:52:54:00:5a:98:d7}
	I0729 18:17:52.222922 1063162 main.go:141] libmachine: (addons-685520) DBG | domain addons-685520 has defined IP address 192.168.39.137 and MAC address 52:54:00:5a:98:d7 in network mk-addons-685520
	I0729 18:17:52.223042 1063162 main.go:141] libmachine: Docker is up and running!
	I0729 18:17:52.223059 1063162 main.go:141] libmachine: Reticulating splines...
	I0729 18:17:52.223069 1063162 client.go:171] duration metric: took 25.508917331s to LocalClient.Create
	I0729 18:17:52.223100 1063162 start.go:167] duration metric: took 25.508987948s to libmachine.API.Create "addons-685520"
	I0729 18:17:52.223114 1063162 start.go:293] postStartSetup for "addons-685520" (driver="kvm2")
	I0729 18:17:52.223128 1063162 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 18:17:52.223154 1063162 main.go:141] libmachine: (addons-685520) Calling .DriverName
	I0729 18:17:52.223363 1063162 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 18:17:52.223390 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHHostname
	I0729 18:17:52.225300 1063162 main.go:141] libmachine: (addons-685520) DBG | domain addons-685520 has defined MAC address 52:54:00:5a:98:d7 in network mk-addons-685520
	I0729 18:17:52.225623 1063162 main.go:141] libmachine: (addons-685520) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:98:d7", ip: ""} in network mk-addons-685520: {Iface:virbr1 ExpiryTime:2024-07-29 19:17:41 +0000 UTC Type:0 Mac:52:54:00:5a:98:d7 Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:addons-685520 Clientid:01:52:54:00:5a:98:d7}
	I0729 18:17:52.225654 1063162 main.go:141] libmachine: (addons-685520) DBG | domain addons-685520 has defined IP address 192.168.39.137 and MAC address 52:54:00:5a:98:d7 in network mk-addons-685520
	I0729 18:17:52.225735 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHPort
	I0729 18:17:52.225910 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHKeyPath
	I0729 18:17:52.226053 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHUsername
	I0729 18:17:52.226178 1063162 sshutil.go:53] new ssh client: &{IP:192.168.39.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/addons-685520/id_rsa Username:docker}
	I0729 18:17:52.304794 1063162 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 18:17:52.308972 1063162 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 18:17:52.308998 1063162 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-1055011/.minikube/addons for local assets ...
	I0729 18:17:52.309067 1063162 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-1055011/.minikube/files for local assets ...
	I0729 18:17:52.309095 1063162 start.go:296] duration metric: took 85.973543ms for postStartSetup
	I0729 18:17:52.309154 1063162 main.go:141] libmachine: (addons-685520) Calling .GetConfigRaw
	I0729 18:17:52.309799 1063162 main.go:141] libmachine: (addons-685520) Calling .GetIP
	I0729 18:17:52.312220 1063162 main.go:141] libmachine: (addons-685520) DBG | domain addons-685520 has defined MAC address 52:54:00:5a:98:d7 in network mk-addons-685520
	I0729 18:17:52.312529 1063162 main.go:141] libmachine: (addons-685520) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:98:d7", ip: ""} in network mk-addons-685520: {Iface:virbr1 ExpiryTime:2024-07-29 19:17:41 +0000 UTC Type:0 Mac:52:54:00:5a:98:d7 Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:addons-685520 Clientid:01:52:54:00:5a:98:d7}
	I0729 18:17:52.312563 1063162 main.go:141] libmachine: (addons-685520) DBG | domain addons-685520 has defined IP address 192.168.39.137 and MAC address 52:54:00:5a:98:d7 in network mk-addons-685520
	I0729 18:17:52.312802 1063162 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/addons-685520/config.json ...
	I0729 18:17:52.312991 1063162 start.go:128] duration metric: took 25.615954023s to createHost
	I0729 18:17:52.313017 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHHostname
	I0729 18:17:52.314926 1063162 main.go:141] libmachine: (addons-685520) DBG | domain addons-685520 has defined MAC address 52:54:00:5a:98:d7 in network mk-addons-685520
	I0729 18:17:52.315206 1063162 main.go:141] libmachine: (addons-685520) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:98:d7", ip: ""} in network mk-addons-685520: {Iface:virbr1 ExpiryTime:2024-07-29 19:17:41 +0000 UTC Type:0 Mac:52:54:00:5a:98:d7 Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:addons-685520 Clientid:01:52:54:00:5a:98:d7}
	I0729 18:17:52.315225 1063162 main.go:141] libmachine: (addons-685520) DBG | domain addons-685520 has defined IP address 192.168.39.137 and MAC address 52:54:00:5a:98:d7 in network mk-addons-685520
	I0729 18:17:52.315372 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHPort
	I0729 18:17:52.315557 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHKeyPath
	I0729 18:17:52.315721 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHKeyPath
	I0729 18:17:52.315830 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHUsername
	I0729 18:17:52.315986 1063162 main.go:141] libmachine: Using SSH client type: native
	I0729 18:17:52.316147 1063162 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.137 22 <nil> <nil>}
	I0729 18:17:52.316157 1063162 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0729 18:17:52.415668 1063162 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722277072.398150012
	
	I0729 18:17:52.415697 1063162 fix.go:216] guest clock: 1722277072.398150012
	I0729 18:17:52.415707 1063162 fix.go:229] Guest: 2024-07-29 18:17:52.398150012 +0000 UTC Remote: 2024-07-29 18:17:52.31300445 +0000 UTC m=+25.712117221 (delta=85.145562ms)
	I0729 18:17:52.415769 1063162 fix.go:200] guest clock delta is within tolerance: 85.145562ms
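
The clock check above runs `date +%s.%N` on the guest (logged with the verbs mangled to %!s(MISSING).%!N(MISSING)) and compares it with the host's wall clock, accepting a small delta. A sketch of parsing that output and applying a tolerance check; the 2-second threshold is an assumption, and the parser assumes a full 9-digit nanosecond fraction as in the log:

    package main

    import (
    	"fmt"
    	"math"
    	"strconv"
    	"strings"
    	"time"
    )

    // parseGuestClock turns "1722277072.398150012" into a time.Time.
    func parseGuestClock(s string) (time.Time, error) {
    	parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
    	sec, err := strconv.ParseInt(parts[0], 10, 64)
    	if err != nil {
    		return time.Time{}, err
    	}
    	var nsec int64
    	if len(parts) == 2 {
    		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
    			return time.Time{}, err
    		}
    	}
    	return time.Unix(sec, nsec), nil
    }

    func main() {
    	guest, _ := parseGuestClock("1722277072.398150012")
    	delta := time.Since(guest)
    	fmt.Printf("guest clock delta: %v (within tolerance: %v)\n",
    		delta, math.Abs(delta.Seconds()) < 2)
    }
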
	I0729 18:17:52.415777 1063162 start.go:83] releasing machines lock for "addons-685520", held for 25.718870288s
	I0729 18:17:52.415810 1063162 main.go:141] libmachine: (addons-685520) Calling .DriverName
	I0729 18:17:52.416109 1063162 main.go:141] libmachine: (addons-685520) Calling .GetIP
	I0729 18:17:52.418579 1063162 main.go:141] libmachine: (addons-685520) DBG | domain addons-685520 has defined MAC address 52:54:00:5a:98:d7 in network mk-addons-685520
	I0729 18:17:52.418968 1063162 main.go:141] libmachine: (addons-685520) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:98:d7", ip: ""} in network mk-addons-685520: {Iface:virbr1 ExpiryTime:2024-07-29 19:17:41 +0000 UTC Type:0 Mac:52:54:00:5a:98:d7 Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:addons-685520 Clientid:01:52:54:00:5a:98:d7}
	I0729 18:17:52.419002 1063162 main.go:141] libmachine: (addons-685520) DBG | domain addons-685520 has defined IP address 192.168.39.137 and MAC address 52:54:00:5a:98:d7 in network mk-addons-685520
	I0729 18:17:52.419141 1063162 main.go:141] libmachine: (addons-685520) Calling .DriverName
	I0729 18:17:52.419596 1063162 main.go:141] libmachine: (addons-685520) Calling .DriverName
	I0729 18:17:52.419781 1063162 main.go:141] libmachine: (addons-685520) Calling .DriverName
	I0729 18:17:52.419898 1063162 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 18:17:52.419958 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHHostname
	I0729 18:17:52.420004 1063162 ssh_runner.go:195] Run: cat /version.json
	I0729 18:17:52.420033 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHHostname
	I0729 18:17:52.422381 1063162 main.go:141] libmachine: (addons-685520) DBG | domain addons-685520 has defined MAC address 52:54:00:5a:98:d7 in network mk-addons-685520
	I0729 18:17:52.422618 1063162 main.go:141] libmachine: (addons-685520) DBG | domain addons-685520 has defined MAC address 52:54:00:5a:98:d7 in network mk-addons-685520
	I0729 18:17:52.422725 1063162 main.go:141] libmachine: (addons-685520) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:98:d7", ip: ""} in network mk-addons-685520: {Iface:virbr1 ExpiryTime:2024-07-29 19:17:41 +0000 UTC Type:0 Mac:52:54:00:5a:98:d7 Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:addons-685520 Clientid:01:52:54:00:5a:98:d7}
	I0729 18:17:52.422751 1063162 main.go:141] libmachine: (addons-685520) DBG | domain addons-685520 has defined IP address 192.168.39.137 and MAC address 52:54:00:5a:98:d7 in network mk-addons-685520
	I0729 18:17:52.422884 1063162 main.go:141] libmachine: (addons-685520) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:98:d7", ip: ""} in network mk-addons-685520: {Iface:virbr1 ExpiryTime:2024-07-29 19:17:41 +0000 UTC Type:0 Mac:52:54:00:5a:98:d7 Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:addons-685520 Clientid:01:52:54:00:5a:98:d7}
	I0729 18:17:52.422895 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHPort
	I0729 18:17:52.422910 1063162 main.go:141] libmachine: (addons-685520) DBG | domain addons-685520 has defined IP address 192.168.39.137 and MAC address 52:54:00:5a:98:d7 in network mk-addons-685520
	I0729 18:17:52.423104 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHKeyPath
	I0729 18:17:52.423120 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHPort
	I0729 18:17:52.423292 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHKeyPath
	I0729 18:17:52.423302 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHUsername
	I0729 18:17:52.423473 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHUsername
	I0729 18:17:52.423478 1063162 sshutil.go:53] new ssh client: &{IP:192.168.39.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/addons-685520/id_rsa Username:docker}
	I0729 18:17:52.423585 1063162 sshutil.go:53] new ssh client: &{IP:192.168.39.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/addons-685520/id_rsa Username:docker}
	I0729 18:17:52.495937 1063162 ssh_runner.go:195] Run: systemctl --version
	I0729 18:17:52.521164 1063162 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 18:17:52.674215 1063162 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 18:17:52.679991 1063162 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 18:17:52.680059 1063162 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 18:17:52.695810 1063162 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 18:17:52.695833 1063162 start.go:495] detecting cgroup driver to use...
	I0729 18:17:52.695899 1063162 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 18:17:52.712166 1063162 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 18:17:52.725431 1063162 docker.go:217] disabling cri-docker service (if available) ...
	I0729 18:17:52.725487 1063162 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 18:17:52.738640 1063162 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 18:17:52.751930 1063162 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 18:17:52.859138 1063162 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 18:17:52.984987 1063162 docker.go:233] disabling docker service ...
	I0729 18:17:52.985064 1063162 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 18:17:52.998872 1063162 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 18:17:53.011183 1063162 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 18:17:53.143979 1063162 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 18:17:53.254262 1063162 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 18:17:53.267347 1063162 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 18:17:53.284938 1063162 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0729 18:17:53.285001 1063162 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:17:53.294699 1063162 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 18:17:53.294775 1063162 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:17:53.304377 1063162 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:17:53.313903 1063162 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:17:53.323393 1063162 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 18:17:53.333148 1063162 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:17:53.342512 1063162 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:17:53.358251 1063162 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:17:53.367577 1063162 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 18:17:53.375985 1063162 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 18:17:53.376025 1063162 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 18:17:53.387531 1063162 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 18:17:53.396063 1063162 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 18:17:53.506702 1063162 ssh_runner.go:195] Run: sudo systemctl restart crio
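
Taken together, the block above rewrites /etc/crio/crio.conf.d/02-crio.conf in place (pause image, cgroupfs as cgroup manager, conmon cgroup, unprivileged-port sysctl), loads br_netfilter, enables IPv4 forwarding, and restarts CRI-O. A compact listing of those steps in order; the shell strings are copied from the log, while grouping them like this is purely illustrative:

    package main

    import "fmt"

    func main() {
    	// Ordered runtime-configuration steps from the log above.
    	steps := []string{
    		`sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf`,
    		`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf`,
    		`sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf`,
    		`sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf`,
    		`sudo modprobe br_netfilter`,
    		`sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"`,
    		`sudo systemctl daemon-reload && sudo systemctl restart crio`,
    	}
    	for i, s := range steps {
    		fmt.Printf("%2d. %s\n", i+1, s)
    	}
    }
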
	I0729 18:17:53.637567 1063162 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 18:17:53.637671 1063162 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 18:17:53.642267 1063162 start.go:563] Will wait 60s for crictl version
	I0729 18:17:53.642319 1063162 ssh_runner.go:195] Run: which crictl
	I0729 18:17:53.645902 1063162 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 18:17:53.687089 1063162 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 18:17:53.687212 1063162 ssh_runner.go:195] Run: crio --version
	I0729 18:17:53.713135 1063162 ssh_runner.go:195] Run: crio --version
	I0729 18:17:53.740297 1063162 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0729 18:17:53.741307 1063162 main.go:141] libmachine: (addons-685520) Calling .GetIP
	I0729 18:17:53.743773 1063162 main.go:141] libmachine: (addons-685520) DBG | domain addons-685520 has defined MAC address 52:54:00:5a:98:d7 in network mk-addons-685520
	I0729 18:17:53.744082 1063162 main.go:141] libmachine: (addons-685520) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:98:d7", ip: ""} in network mk-addons-685520: {Iface:virbr1 ExpiryTime:2024-07-29 19:17:41 +0000 UTC Type:0 Mac:52:54:00:5a:98:d7 Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:addons-685520 Clientid:01:52:54:00:5a:98:d7}
	I0729 18:17:53.744128 1063162 main.go:141] libmachine: (addons-685520) DBG | domain addons-685520 has defined IP address 192.168.39.137 and MAC address 52:54:00:5a:98:d7 in network mk-addons-685520
	I0729 18:17:53.744268 1063162 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0729 18:17:53.747913 1063162 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 18:17:53.759660 1063162 kubeadm.go:883] updating cluster {Name:addons-685520 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-685520 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.137 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 18:17:53.759770 1063162 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 18:17:53.759810 1063162 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 18:17:53.790863 1063162 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0729 18:17:53.790955 1063162 ssh_runner.go:195] Run: which lz4
	I0729 18:17:53.794728 1063162 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0729 18:17:53.798704 1063162 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0729 18:17:53.798733 1063162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0729 18:17:55.120480 1063162 crio.go:462] duration metric: took 1.325790418s to copy over tarball
	I0729 18:17:55.120550 1063162 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0729 18:17:57.319798 1063162 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.199214521s)
	I0729 18:17:57.319829 1063162 crio.go:469] duration metric: took 2.199321686s to extract the tarball
	I0729 18:17:57.319837 1063162 ssh_runner.go:146] rm: /preloaded.tar.lz4
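
The preload step above checks whether /preloaded.tar.lz4 already exists on the node, copies the cached tarball over when it does not, extracts it into /var with xattrs preserved, and finally removes it. A sketch of that flow under the assumption that the copy step is out of scope; paths and tar flags mirror the log, and running it needs root plus lz4 support in tar:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    // extractPreload: check, extract, clean up. Illustrative sketch only.
    func extractPreload(tarball, dest string) error {
    	if _, err := os.Stat(tarball); err != nil {
    		return fmt.Errorf("preload not present, would copy it over first: %w", err)
    	}
    	extract := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include",
    		"security.capability", "-I", "lz4", "-C", dest, "-xf", tarball)
    	extract.Stdout, extract.Stderr = os.Stdout, os.Stderr
    	if err := extract.Run(); err != nil {
    		return err
    	}
    	return exec.Command("sudo", "rm", "-f", tarball).Run()
    }

    func main() {
    	if err := extractPreload("/preloaded.tar.lz4", "/var"); err != nil {
    		fmt.Println(err)
    	}
    }
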
	I0729 18:17:57.358781 1063162 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 18:17:57.402948 1063162 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 18:17:57.402977 1063162 cache_images.go:84] Images are preloaded, skipping loading
	I0729 18:17:57.402986 1063162 kubeadm.go:934] updating node { 192.168.39.137 8443 v1.30.3 crio true true} ...
	I0729 18:17:57.403096 1063162 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-685520 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.137
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:addons-685520 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 18:17:57.403164 1063162 ssh_runner.go:195] Run: crio config
	I0729 18:17:57.460297 1063162 cni.go:84] Creating CNI manager for ""
	I0729 18:17:57.460321 1063162 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 18:17:57.460333 1063162 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 18:17:57.460365 1063162 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.137 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-685520 NodeName:addons-685520 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.137"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.137 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}

	I0729 18:17:57.460606 1063162 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.137
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-685520"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.137
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.137"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0729 18:17:57.460686 1063162 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0729 18:17:57.470694 1063162 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 18:17:57.470769 1063162 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 18:17:57.480262 1063162 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0729 18:17:57.496454 1063162 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 18:17:57.512246 1063162 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
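
The kubeadm.yaml written above is a multi-document YAML file: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration. A sketch that just enumerates the document kinds in such a file; it requires the gopkg.in/yaml.v3 module and is not part of minikube:

    package main

    import (
    	"errors"
    	"fmt"
    	"io"
    	"os"

    	"gopkg.in/yaml.v3"
    )

    func main() {
    	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml")
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	defer f.Close()

    	dec := yaml.NewDecoder(f)
    	for {
    		// Decode one YAML document at a time and report its kind.
    		var doc struct {
    			APIVersion string `yaml:"apiVersion"`
    			Kind       string `yaml:"kind"`
    		}
    		if err := dec.Decode(&doc); err != nil {
    			if errors.Is(err, io.EOF) {
    				return
    			}
    			fmt.Println(err)
    			return
    		}
    		fmt.Printf("%s / %s\n", doc.APIVersion, doc.Kind)
    	}
    }
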
	I0729 18:17:57.527977 1063162 ssh_runner.go:195] Run: grep 192.168.39.137	control-plane.minikube.internal$ /etc/hosts
	I0729 18:17:57.531655 1063162 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.137	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 18:17:57.543693 1063162 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 18:17:57.671958 1063162 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 18:17:57.688826 1063162 certs.go:68] Setting up /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/addons-685520 for IP: 192.168.39.137
	I0729 18:17:57.688850 1063162 certs.go:194] generating shared ca certs ...
	I0729 18:17:57.688875 1063162 certs.go:226] acquiring lock for ca certs: {Name:mkd1f0b3d7e82ac23e713dd6b75409e103935b02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 18:17:57.689025 1063162 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.key
	I0729 18:17:57.862993 1063162 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.crt ...
	I0729 18:17:57.863027 1063162 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.crt: {Name:mk9f304cf49c7d2aa9b461e4f3ca18d09f0cad83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 18:17:57.863208 1063162 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.key ...
	I0729 18:17:57.863219 1063162 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.key: {Name:mkab627b76824e32d9f70531bc9f1fd6eeb74b87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 18:17:57.863295 1063162 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/proxy-client-ca.key
	I0729 18:17:57.952352 1063162 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19312-1055011/.minikube/proxy-client-ca.crt ...
	I0729 18:17:57.952381 1063162 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-1055011/.minikube/proxy-client-ca.crt: {Name:mk12989f28a3c6ca3daca4dc40bcb2f8edc6b8a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 18:17:57.952535 1063162 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19312-1055011/.minikube/proxy-client-ca.key ...
	I0729 18:17:57.952548 1063162 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-1055011/.minikube/proxy-client-ca.key: {Name:mk7dace36ae553a84062c4457d59130f9f8809f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 18:17:57.952638 1063162 certs.go:256] generating profile certs ...
	I0729 18:17:57.952711 1063162 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/addons-685520/client.key
	I0729 18:17:57.952725 1063162 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/addons-685520/client.crt with IP's: []
	I0729 18:17:58.011535 1063162 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/addons-685520/client.crt ...
	I0729 18:17:58.011565 1063162 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/addons-685520/client.crt: {Name:mk969dbd6edf45753b9b2fba68004f24b24fa7c2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 18:17:58.011717 1063162 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/addons-685520/client.key ...
	I0729 18:17:58.011728 1063162 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/addons-685520/client.key: {Name:mk12e25d6383e86bb755720ac4be733251d0e975 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 18:17:58.011791 1063162 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/addons-685520/apiserver.key.ef6470c2
	I0729 18:17:58.011809 1063162 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/addons-685520/apiserver.crt.ef6470c2 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.137]
	I0729 18:17:58.107099 1063162 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/addons-685520/apiserver.crt.ef6470c2 ...
	I0729 18:17:58.107130 1063162 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/addons-685520/apiserver.crt.ef6470c2: {Name:mke734cbe519b247c9a5babef04cba4185efc323 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 18:17:58.107289 1063162 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/addons-685520/apiserver.key.ef6470c2 ...
	I0729 18:17:58.107302 1063162 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/addons-685520/apiserver.key.ef6470c2: {Name:mkd3dc52b7d7fc07280178029f636d3cacf4490e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 18:17:58.107362 1063162 certs.go:381] copying /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/addons-685520/apiserver.crt.ef6470c2 -> /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/addons-685520/apiserver.crt
	I0729 18:17:58.107441 1063162 certs.go:385] copying /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/addons-685520/apiserver.key.ef6470c2 -> /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/addons-685520/apiserver.key
	I0729 18:17:58.107489 1063162 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/addons-685520/proxy-client.key
	I0729 18:17:58.107507 1063162 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/addons-685520/proxy-client.crt with IP's: []
	I0729 18:17:58.272506 1063162 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/addons-685520/proxy-client.crt ...
	I0729 18:17:58.272538 1063162 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/addons-685520/proxy-client.crt: {Name:mk9d9c105be286c2d9ecb17af8e01253b559066a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 18:17:58.272695 1063162 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/addons-685520/proxy-client.key ...
	I0729 18:17:58.272708 1063162 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/addons-685520/proxy-client.key: {Name:mk8b87660164a26c8b76ad308286e5e101f93be5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
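
The cert generation above builds a shared CA and then signs profile certificates, including an apiserver certificate whose SANs are the IPs listed in the log (10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.39.137). A minimal crypto/x509 sketch of that pattern, assuming RSA keys and one-year validity; minikube's real helpers differ in detail and error handling is elided for brevity:

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	// Self-signed CA, analogous to "minikubeCA".
    	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	caTmpl := &x509.Certificate{
    		SerialNumber:          big.NewInt(1),
    		Subject:               pkix.Name{CommonName: "minikubeCA"},
    		NotBefore:             time.Now(),
    		NotAfter:              time.Now().Add(365 * 24 * time.Hour),
    		IsCA:                  true,
    		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
    		BasicConstraintsValid: true,
    	}
    	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
    	caCert, _ := x509.ParseCertificate(caDER)

    	// Leaf certificate with the SAN IPs seen in the log, signed by the CA.
    	leafKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	leafTmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(2),
    		Subject:      pkix.Name{CommonName: "minikube"},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(365 * 24 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		IPAddresses: []net.IP{
    			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
    			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.39.137"),
    		},
    	}
    	leafDER, _ := x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey)
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: leafDER})
    }
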
	I0729 18:17:58.272870 1063162 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 18:17:58.272915 1063162 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem (1082 bytes)
	I0729 18:17:58.272940 1063162 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/cert.pem (1123 bytes)
	I0729 18:17:58.272975 1063162 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/key.pem (1679 bytes)
	I0729 18:17:58.273628 1063162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 18:17:58.297868 1063162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0729 18:17:58.321500 1063162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 18:17:58.344382 1063162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0729 18:17:58.368279 1063162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/addons-685520/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0729 18:17:58.401675 1063162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/addons-685520/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0729 18:17:58.429119 1063162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/addons-685520/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 18:17:58.452162 1063162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/addons-685520/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0729 18:17:58.474534 1063162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 18:17:58.496480 1063162 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 18:17:58.512199 1063162 ssh_runner.go:195] Run: openssl version
	I0729 18:17:58.517611 1063162 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 18:17:58.527977 1063162 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 18:17:58.532366 1063162 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 18:17 /usr/share/ca-certificates/minikubeCA.pem
	I0729 18:17:58.532432 1063162 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 18:17:58.538133 1063162 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
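
The two commands above install minikubeCA.pem into the system trust store: `openssl x509 -hash -noout` prints the subject hash, and a symlink named <hash>.0 (here b5213941.0) is created under /etc/ssl/certs. A small sketch reproducing that pairing; it shells out to openssl and only prints the link command instead of running it:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout",
    		"-in", "/usr/share/ca-certificates/minikubeCA.pem").Output()
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	hash := strings.TrimSpace(string(out))
    	// The provisioner would run this with ln -fs; printed here for clarity.
    	fmt.Printf("sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/%s.0\n", hash)
    }
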
	I0729 18:17:58.548825 1063162 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 18:17:58.552803 1063162 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0729 18:17:58.552859 1063162 kubeadm.go:392] StartCluster: {Name:addons-685520 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-685520 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.137 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 18:17:58.552957 1063162 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 18:17:58.553007 1063162 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 18:17:58.588846 1063162 cri.go:89] found id: ""
	I0729 18:17:58.588931 1063162 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 18:17:58.598900 1063162 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 18:17:58.610086 1063162 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 18:17:58.624600 1063162 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 18:17:58.624621 1063162 kubeadm.go:157] found existing configuration files:
	
	I0729 18:17:58.624671 1063162 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 18:17:58.634416 1063162 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 18:17:58.634477 1063162 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 18:17:58.643426 1063162 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 18:17:58.652016 1063162 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 18:17:58.652069 1063162 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 18:17:58.660901 1063162 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 18:17:58.669343 1063162 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 18:17:58.669386 1063162 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 18:17:58.678312 1063162 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 18:17:58.686734 1063162 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 18:17:58.686781 1063162 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
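The block above is minikube's stale-config check: for each of the four kubeconfig files it greps for the expected control-plane endpoint and removes the file when either the string or the file itself is missing, so kubeadm init starts from a clean /etc/kubernetes. A hedged local sketch of that grep-then-remove loop (paths and endpoint are copied from the log; this is not the actual kubeadm.go code):

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8443"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		// Missing file or missing endpoint: remove it so kubeadm regenerates it.
		if err != nil || !strings.Contains(string(data), endpoint) {
			_ = os.Remove(f) // ignore "not exist" errors, matching rm -f
			fmt.Println("removed (or absent):", f)
			continue
		}
		fmt.Println("kept:", f)
	}
}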
	I0729 18:17:58.695954 1063162 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 18:17:58.884970 1063162 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 18:18:09.044876 1063162 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0729 18:18:09.044930 1063162 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 18:18:09.045012 1063162 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 18:18:09.045147 1063162 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 18:18:09.045289 1063162 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0729 18:18:09.045352 1063162 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 18:18:09.046558 1063162 out.go:204]   - Generating certificates and keys ...
	I0729 18:18:09.046649 1063162 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 18:18:09.046724 1063162 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 18:18:09.046809 1063162 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0729 18:18:09.046898 1063162 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0729 18:18:09.046997 1063162 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0729 18:18:09.047042 1063162 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0729 18:18:09.047100 1063162 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0729 18:18:09.047260 1063162 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-685520 localhost] and IPs [192.168.39.137 127.0.0.1 ::1]
	I0729 18:18:09.047315 1063162 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0729 18:18:09.047470 1063162 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-685520 localhost] and IPs [192.168.39.137 127.0.0.1 ::1]
	I0729 18:18:09.047568 1063162 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0729 18:18:09.047659 1063162 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0729 18:18:09.047701 1063162 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0729 18:18:09.047775 1063162 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 18:18:09.047845 1063162 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 18:18:09.047924 1063162 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0729 18:18:09.047998 1063162 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 18:18:09.048085 1063162 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 18:18:09.048167 1063162 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 18:18:09.048278 1063162 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 18:18:09.048347 1063162 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 18:18:09.049417 1063162 out.go:204]   - Booting up control plane ...
	I0729 18:18:09.049503 1063162 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 18:18:09.049577 1063162 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 18:18:09.049634 1063162 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 18:18:09.049730 1063162 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 18:18:09.049816 1063162 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 18:18:09.049872 1063162 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 18:18:09.050010 1063162 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0729 18:18:09.050073 1063162 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0729 18:18:09.050124 1063162 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.02152ms
	I0729 18:18:09.050188 1063162 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0729 18:18:09.050251 1063162 kubeadm.go:310] [api-check] The API server is healthy after 5.001159759s
	I0729 18:18:09.050340 1063162 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0729 18:18:09.050454 1063162 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0729 18:18:09.050517 1063162 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0729 18:18:09.050687 1063162 kubeadm.go:310] [mark-control-plane] Marking the node addons-685520 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0729 18:18:09.050770 1063162 kubeadm.go:310] [bootstrap-token] Using token: h69d3p.copq4a8ve97e77q5
	I0729 18:18:09.052052 1063162 out.go:204]   - Configuring RBAC rules ...
	I0729 18:18:09.052158 1063162 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0729 18:18:09.052260 1063162 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0729 18:18:09.052460 1063162 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0729 18:18:09.052572 1063162 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller to automatically approve CSRs from a Node Bootstrap Token
	I0729 18:18:09.052665 1063162 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0729 18:18:09.052766 1063162 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0729 18:18:09.052907 1063162 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0729 18:18:09.052964 1063162 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0729 18:18:09.053014 1063162 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0729 18:18:09.053023 1063162 kubeadm.go:310] 
	I0729 18:18:09.053087 1063162 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0729 18:18:09.053097 1063162 kubeadm.go:310] 
	I0729 18:18:09.053197 1063162 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0729 18:18:09.053207 1063162 kubeadm.go:310] 
	I0729 18:18:09.053258 1063162 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0729 18:18:09.053342 1063162 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0729 18:18:09.053416 1063162 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0729 18:18:09.053425 1063162 kubeadm.go:310] 
	I0729 18:18:09.053502 1063162 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0729 18:18:09.053510 1063162 kubeadm.go:310] 
	I0729 18:18:09.053585 1063162 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0729 18:18:09.053594 1063162 kubeadm.go:310] 
	I0729 18:18:09.053679 1063162 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0729 18:18:09.053766 1063162 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0729 18:18:09.053869 1063162 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0729 18:18:09.053881 1063162 kubeadm.go:310] 
	I0729 18:18:09.053986 1063162 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0729 18:18:09.054090 1063162 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0729 18:18:09.054100 1063162 kubeadm.go:310] 
	I0729 18:18:09.054213 1063162 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token h69d3p.copq4a8ve97e77q5 \
	I0729 18:18:09.054362 1063162 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:53e118302d1159bf0acd8cfe005edb508199276288ff17d6630ff7f7307f10b1 \
	I0729 18:18:09.054383 1063162 kubeadm.go:310] 	--control-plane 
	I0729 18:18:09.054387 1063162 kubeadm.go:310] 
	I0729 18:18:09.054467 1063162 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0729 18:18:09.054474 1063162 kubeadm.go:310] 
	I0729 18:18:09.054571 1063162 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token h69d3p.copq4a8ve97e77q5 \
	I0729 18:18:09.054686 1063162 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:53e118302d1159bf0acd8cfe005edb508199276288ff17d6630ff7f7307f10b1 
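Everything from the preflight checks down to the join command above is the output of a single kubeadm init run that minikube launches over SSH with a long --ignore-preflight-errors list (see the Start: line preceding it). A minimal local sketch of assembling and running that command with os/exec; the PATH prefix, config path, and ignored checks are copied from the log, the rest is illustrative:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	ignored := []string{
		"DirAvailable--etc-kubernetes-manifests",
		"DirAvailable--var-lib-minikube",
		"DirAvailable--var-lib-minikube-etcd",
		"FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml",
		"FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml",
		"FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml",
		"FileAvailable--etc-kubernetes-manifests-etcd.yaml",
		"Port-10250", "Swap", "NumCPU", "Mem",
	}
	// Mirrors the logged invocation: run kubeadm from the minikube binaries dir
	// against the generated config, skipping checks minikube handles itself.
	cmd := "sudo env PATH=/var/lib/minikube/binaries/v1.30.3:$PATH " +
		"kubeadm init --config /var/tmp/minikube/kubeadm.yaml " +
		"--ignore-preflight-errors=" + strings.Join(ignored, ",")
	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	fmt.Println(string(out))
	if err != nil {
		fmt.Println("kubeadm init failed:", err)
	}
}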
	I0729 18:18:09.054697 1063162 cni.go:84] Creating CNI manager for ""
	I0729 18:18:09.054706 1063162 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 18:18:09.055974 1063162 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 18:18:09.056889 1063162 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 18:18:09.067139 1063162 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
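Because the kvm2 driver is paired with the crio runtime, minikube falls back to a plain bridge CNI and copies a conflist into /etc/cni/net.d (496 bytes in this run). The exact file is generated by minikube; the sketch below only writes an illustrative minimal bridge + portmap conflist of the same shape, not the byte-for-byte 1-k8s.conflist content.

package main

import (
	"fmt"
	"os"
)

// Illustrative bridge CNI config; field values are assumptions, not the exact
// conflist minikube generated in this run.
const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {"type": "bridge", "bridge": "bridge", "isDefaultGateway": true,
     "ipMasq": true, "ipam": {"type": "host-local", "subnet": "10.244.0.0/16"}},
    {"type": "portmap", "capabilities": {"portMappings": true}}
  ]
}`

func main() {
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		fmt.Println("write failed (expected when not run as root):", err)
	}
}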
	I0729 18:18:09.086094 1063162 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0729 18:18:09.086206 1063162 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:18:09.086258 1063162 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-685520 minikube.k8s.io/updated_at=2024_07_29T18_18_09_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=ff26f50543a99741e8a988a34136ffea2b41dbf0 minikube.k8s.io/name=addons-685520 minikube.k8s.io/primary=true
	I0729 18:18:09.122511 1063162 ops.go:34] apiserver oom_adj: -16
	I0729 18:18:09.212623 1063162 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:18:09.712958 1063162 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:18:10.212763 1063162 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:18:10.713244 1063162 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:18:11.213676 1063162 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:18:11.713400 1063162 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:18:12.212862 1063162 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:18:12.713085 1063162 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:18:13.212875 1063162 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:18:13.712988 1063162 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:18:14.213640 1063162 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:18:14.713071 1063162 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:18:15.212659 1063162 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:18:15.712896 1063162 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:18:16.213372 1063162 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:18:16.713525 1063162 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:18:17.212719 1063162 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:18:17.712844 1063162 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:18:18.212963 1063162 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:18:18.713669 1063162 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:18:19.213140 1063162 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:18:19.713429 1063162 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:18:20.213564 1063162 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:18:20.712733 1063162 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:18:21.212707 1063162 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:18:21.713452 1063162 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:18:22.213588 1063162 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:18:22.330351 1063162 kubeadm.go:1113] duration metric: took 13.244198827s to wait for elevateKubeSystemPrivileges
	I0729 18:18:22.330401 1063162 kubeadm.go:394] duration metric: took 23.777548413s to StartCluster
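The burst of identical kubectl get sa default commands above is minikube waiting, roughly every 500ms, for the default service account to exist before it grants kube-system elevated RBAC; the wait took 13.2s here. A hedged sketch of that poll-until-success pattern (the kubectl path and kubeconfig are taken from the log; the retry interval and overall timeout are assumptions):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	const kubectl = "/var/lib/minikube/binaries/v1.30.3/kubectl"
	deadline := time.Now().Add(2 * time.Minute) // assumed timeout
	for time.Now().Before(deadline) {
		// Succeeds only once kube-controller-manager has created the
		// "default" service account in the default namespace.
		err := exec.Command("sudo", kubectl, "get", "sa", "default",
			"--kubeconfig=/var/lib/minikube/kubeconfig").Run()
		if err == nil {
			fmt.Println("default service account is ready")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for the default service account")
}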
	I0729 18:18:22.330430 1063162 settings.go:142] acquiring lock: {Name:mk8657322241b3b1f65443d6cee1b2ccb99f315e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 18:18:22.330642 1063162 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19312-1055011/kubeconfig
	I0729 18:18:22.331096 1063162 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-1055011/kubeconfig: {Name:mkf834b33d9b214f3561db5b8f8958d26700afbb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 18:18:22.331351 1063162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0729 18:18:22.331377 1063162 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
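The toEnable map above is simply addon name → desired state; every entry marked true is then enabled concurrently, which is why the Setting addon ... lines that follow interleave. A tiny illustrative sketch of deriving the enabled set from such a map (only a few entries are copied here):

package main

import (
	"fmt"
	"sort"
)

func main() {
	// A few entries copied from the toEnable map in the log above.
	toEnable := map[string]bool{
		"ingress": true, "ingress-dns": true, "metrics-server": true,
		"csi-hostpath-driver": true, "dashboard": false, "ambassador": false,
	}
	var enabled []string
	for name, want := range toEnable {
		if want {
			enabled = append(enabled, name)
		}
	}
	sort.Strings(enabled) // map iteration order is random, so sort for stable output
	fmt.Println("addons to enable:", enabled)
}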
	I0729 18:18:22.331352 1063162 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.137 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 18:18:22.331498 1063162 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-685520"
	I0729 18:18:22.331503 1063162 addons.go:69] Setting storage-provisioner=true in profile "addons-685520"
	I0729 18:18:22.331530 1063162 addons.go:234] Setting addon storage-provisioner=true in "addons-685520"
	I0729 18:18:22.331548 1063162 addons.go:69] Setting inspektor-gadget=true in profile "addons-685520"
	I0729 18:18:22.331565 1063162 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-685520"
	I0729 18:18:22.331580 1063162 config.go:182] Loaded profile config "addons-685520": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 18:18:22.331597 1063162 host.go:66] Checking if "addons-685520" exists ...
	I0729 18:18:22.331603 1063162 addons.go:69] Setting metrics-server=true in profile "addons-685520"
	I0729 18:18:22.331602 1063162 addons.go:69] Setting volcano=true in profile "addons-685520"
	I0729 18:18:22.331583 1063162 addons.go:234] Setting addon inspektor-gadget=true in "addons-685520"
	I0729 18:18:22.331612 1063162 addons.go:69] Setting volumesnapshots=true in profile "addons-685520"
	I0729 18:18:22.331629 1063162 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-685520"
	I0729 18:18:22.331580 1063162 host.go:66] Checking if "addons-685520" exists ...
	I0729 18:18:22.331650 1063162 host.go:66] Checking if "addons-685520" exists ...
	I0729 18:18:22.331677 1063162 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-685520"
	I0729 18:18:22.331680 1063162 addons.go:234] Setting addon volumesnapshots=true in "addons-685520"
	I0729 18:18:22.331714 1063162 host.go:66] Checking if "addons-685520" exists ...
	I0729 18:18:22.331722 1063162 host.go:66] Checking if "addons-685520" exists ...
	I0729 18:18:22.331491 1063162 addons.go:69] Setting yakd=true in profile "addons-685520"
	I0729 18:18:22.331747 1063162 addons.go:69] Setting helm-tiller=true in profile "addons-685520"
	I0729 18:18:22.331765 1063162 addons.go:234] Setting addon yakd=true in "addons-685520"
	I0729 18:18:22.331785 1063162 host.go:66] Checking if "addons-685520" exists ...
	I0729 18:18:22.331805 1063162 addons.go:234] Setting addon helm-tiller=true in "addons-685520"
	I0729 18:18:22.331832 1063162 host.go:66] Checking if "addons-685520" exists ...
	I0729 18:18:22.331484 1063162 addons.go:69] Setting ingress-dns=true in profile "addons-685520"
	I0729 18:18:22.331942 1063162 addons.go:234] Setting addon ingress-dns=true in "addons-685520"
	I0729 18:18:22.331986 1063162 host.go:66] Checking if "addons-685520" exists ...
	I0729 18:18:22.332107 1063162 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-685520"
	I0729 18:18:22.332130 1063162 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:18:22.332108 1063162 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:18:22.332144 1063162 addons.go:69] Setting default-storageclass=true in profile "addons-685520"
	I0729 18:18:22.332133 1063162 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-685520"
	I0729 18:18:22.332165 1063162 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-685520"
	I0729 18:18:22.332224 1063162 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:18:22.332225 1063162 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:18:22.332253 1063162 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:18:22.332374 1063162 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:18:22.332404 1063162 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:18:22.331788 1063162 addons.go:234] Setting addon volcano=true in "addons-685520"
	I0729 18:18:22.332453 1063162 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:18:22.332455 1063162 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:18:22.332464 1063162 host.go:66] Checking if "addons-685520" exists ...
	I0729 18:18:22.332472 1063162 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:18:22.332479 1063162 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:18:22.332494 1063162 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:18:22.332512 1063162 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:18:22.332793 1063162 addons.go:69] Setting ingress=true in profile "addons-685520"
	I0729 18:18:22.332835 1063162 addons.go:234] Setting addon ingress=true in "addons-685520"
	I0729 18:18:22.331525 1063162 addons.go:69] Setting registry=true in profile "addons-685520"
	I0729 18:18:22.332865 1063162 addons.go:234] Setting addon registry=true in "addons-685520"
	I0729 18:18:22.331621 1063162 addons.go:234] Setting addon metrics-server=true in "addons-685520"
	I0729 18:18:22.332136 1063162 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:18:22.332920 1063162 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:18:22.331492 1063162 addons.go:69] Setting cloud-spanner=true in profile "addons-685520"
	I0729 18:18:22.332985 1063162 addons.go:234] Setting addon cloud-spanner=true in "addons-685520"
	I0729 18:18:22.333013 1063162 addons.go:69] Setting gcp-auth=true in profile "addons-685520"
	I0729 18:18:22.333033 1063162 mustload.go:65] Loading cluster: addons-685520
	I0729 18:18:22.333145 1063162 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:18:22.333184 1063162 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:18:22.333191 1063162 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:18:22.333225 1063162 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:18:22.333252 1063162 host.go:66] Checking if "addons-685520" exists ...
	I0729 18:18:22.333496 1063162 host.go:66] Checking if "addons-685520" exists ...
	I0729 18:18:22.333606 1063162 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:18:22.333650 1063162 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:18:22.333870 1063162 config.go:182] Loaded profile config "addons-685520": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 18:18:22.333996 1063162 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:18:22.334038 1063162 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:18:22.334217 1063162 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:18:22.334253 1063162 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:18:22.334286 1063162 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:18:22.334339 1063162 host.go:66] Checking if "addons-685520" exists ...
	I0729 18:18:22.334890 1063162 host.go:66] Checking if "addons-685520" exists ...
	I0729 18:18:22.335249 1063162 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:18:22.335269 1063162 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:18:22.339619 1063162 out.go:177] * Verifying Kubernetes components...
	I0729 18:18:22.343326 1063162 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:18:22.350963 1063162 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:18:22.352580 1063162 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 18:18:22.356686 1063162 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44083
	I0729 18:18:22.356951 1063162 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39909
	I0729 18:18:22.357497 1063162 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:18:22.357563 1063162 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:18:22.358115 1063162 main.go:141] libmachine: Using API Version  1
	I0729 18:18:22.358134 1063162 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:18:22.358135 1063162 main.go:141] libmachine: Using API Version  1
	I0729 18:18:22.358154 1063162 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:18:22.358552 1063162 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:18:22.358718 1063162 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:18:22.359297 1063162 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:18:22.359328 1063162 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:18:22.360071 1063162 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42703
	I0729 18:18:22.360594 1063162 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:18:22.360616 1063162 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:18:22.360823 1063162 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:18:22.361323 1063162 main.go:141] libmachine: Using API Version  1
	I0729 18:18:22.361351 1063162 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:18:22.361726 1063162 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:18:22.362312 1063162 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:18:22.362341 1063162 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:18:22.364969 1063162 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42611
	I0729 18:18:22.368775 1063162 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35517
	I0729 18:18:22.369851 1063162 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38853
	I0729 18:18:22.371510 1063162 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:18:22.371556 1063162 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:18:22.371728 1063162 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:18:22.371825 1063162 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:18:22.372116 1063162 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:18:22.372200 1063162 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43969
	I0729 18:18:22.372452 1063162 main.go:141] libmachine: Using API Version  1
	I0729 18:18:22.372473 1063162 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:18:22.372617 1063162 main.go:141] libmachine: Using API Version  1
	I0729 18:18:22.372630 1063162 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:18:22.372693 1063162 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:18:22.372879 1063162 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:18:22.372932 1063162 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:18:22.373041 1063162 main.go:141] libmachine: (addons-685520) Calling .GetState
	I0729 18:18:22.373303 1063162 main.go:141] libmachine: Using API Version  1
	I0729 18:18:22.373322 1063162 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:18:22.373400 1063162 main.go:141] libmachine: (addons-685520) Calling .GetState
	I0729 18:18:22.373432 1063162 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46591
	I0729 18:18:22.373661 1063162 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:18:22.373930 1063162 main.go:141] libmachine: Using API Version  1
	I0729 18:18:22.373948 1063162 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:18:22.374429 1063162 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:18:22.374473 1063162 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:18:22.374771 1063162 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:18:22.375430 1063162 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:18:22.375464 1063162 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:18:22.381625 1063162 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43275
	I0729 18:18:22.381767 1063162 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:18:22.382789 1063162 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35113
	I0729 18:18:22.383403 1063162 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-685520"
	I0729 18:18:22.383452 1063162 host.go:66] Checking if "addons-685520" exists ...
	I0729 18:18:22.383657 1063162 main.go:141] libmachine: Using API Version  1
	I0729 18:18:22.383677 1063162 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:18:22.383811 1063162 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:18:22.383840 1063162 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:18:22.384275 1063162 addons.go:234] Setting addon default-storageclass=true in "addons-685520"
	I0729 18:18:22.384321 1063162 host.go:66] Checking if "addons-685520" exists ...
	I0729 18:18:22.384675 1063162 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:18:22.384704 1063162 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:18:22.385396 1063162 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:18:22.385480 1063162 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:18:22.385536 1063162 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:18:22.386413 1063162 main.go:141] libmachine: Using API Version  1
	I0729 18:18:22.386431 1063162 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:18:22.386545 1063162 main.go:141] libmachine: Using API Version  1
	I0729 18:18:22.386555 1063162 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:18:22.386937 1063162 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:18:22.386952 1063162 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:18:22.386979 1063162 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:18:22.387508 1063162 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:18:22.387549 1063162 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:18:22.387734 1063162 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:18:22.388011 1063162 main.go:141] libmachine: (addons-685520) Calling .GetState
	I0729 18:18:22.388841 1063162 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39443
	I0729 18:18:22.389415 1063162 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:18:22.390059 1063162 main.go:141] libmachine: Using API Version  1
	I0729 18:18:22.390081 1063162 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:18:22.390584 1063162 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:18:22.391194 1063162 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:18:22.391230 1063162 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:18:22.395220 1063162 main.go:141] libmachine: (addons-685520) Calling .DriverName
	I0729 18:18:22.397103 1063162 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.30.0
	I0729 18:18:22.398116 1063162 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0729 18:18:22.398134 1063162 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0729 18:18:22.398156 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHHostname
	I0729 18:18:22.401717 1063162 main.go:141] libmachine: (addons-685520) DBG | domain addons-685520 has defined MAC address 52:54:00:5a:98:d7 in network mk-addons-685520
	I0729 18:18:22.402084 1063162 main.go:141] libmachine: (addons-685520) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:98:d7", ip: ""} in network mk-addons-685520: {Iface:virbr1 ExpiryTime:2024-07-29 19:17:41 +0000 UTC Type:0 Mac:52:54:00:5a:98:d7 Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:addons-685520 Clientid:01:52:54:00:5a:98:d7}
	I0729 18:18:22.402107 1063162 main.go:141] libmachine: (addons-685520) DBG | domain addons-685520 has defined IP address 192.168.39.137 and MAC address 52:54:00:5a:98:d7 in network mk-addons-685520
	I0729 18:18:22.402382 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHPort
	I0729 18:18:22.402593 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHKeyPath
	I0729 18:18:22.402766 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHUsername
	I0729 18:18:22.402963 1063162 sshutil.go:53] new ssh client: &{IP:192.168.39.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/addons-685520/id_rsa Username:docker}
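Each addon installer above opens its own SSH session to the node (192.168.39.137:22, key-based auth as the docker user) and scp's its manifest into /etc/kubernetes/addons. A hedged sketch of opening such a connection with the golang.org/x/crypto/ssh package; the host, port, user, and key path are taken from the log line, and everything else (including the host-key handling) is simplified for illustration rather than being minikube's sshutil code.

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/addons-685520/id_rsa")
	if err != nil {
		fmt.Println("read key:", err)
		return
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		fmt.Println("parse key:", err)
		return
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable only for a throwaway test VM
	}
	client, err := ssh.Dial("tcp", "192.168.39.137:22", cfg)
	if err != nil {
		fmt.Println("dial:", err)
		return
	}
	defer client.Close()
	fmt.Println("connected to the minikube node over SSH")
}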
	I0729 18:18:22.409105 1063162 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35381
	I0729 18:18:22.409699 1063162 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:18:22.410306 1063162 main.go:141] libmachine: Using API Version  1
	I0729 18:18:22.410331 1063162 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:18:22.410783 1063162 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:18:22.411064 1063162 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33337
	I0729 18:18:22.411199 1063162 main.go:141] libmachine: (addons-685520) Calling .GetState
	I0729 18:18:22.412206 1063162 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:18:22.412868 1063162 main.go:141] libmachine: Using API Version  1
	I0729 18:18:22.412888 1063162 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:18:22.412954 1063162 main.go:141] libmachine: (addons-685520) Calling .DriverName
	I0729 18:18:22.413719 1063162 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:18:22.415278 1063162 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43095
	I0729 18:18:22.415738 1063162 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:18:22.416307 1063162 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0729 18:18:22.416309 1063162 main.go:141] libmachine: Using API Version  1
	I0729 18:18:22.416453 1063162 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:18:22.416922 1063162 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:18:22.417164 1063162 main.go:141] libmachine: (addons-685520) Calling .GetState
	I0729 18:18:22.417236 1063162 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33191
	I0729 18:18:22.417800 1063162 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0729 18:18:22.417819 1063162 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0729 18:18:22.417840 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHHostname
	I0729 18:18:22.418143 1063162 main.go:141] libmachine: (addons-685520) Calling .GetState
	I0729 18:18:22.418203 1063162 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:18:22.418724 1063162 main.go:141] libmachine: Using API Version  1
	I0729 18:18:22.418779 1063162 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:18:22.419286 1063162 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:18:22.419891 1063162 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:18:22.419928 1063162 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:18:22.420268 1063162 main.go:141] libmachine: (addons-685520) Calling .DriverName
	I0729 18:18:22.421244 1063162 main.go:141] libmachine: (addons-685520) DBG | domain addons-685520 has defined MAC address 52:54:00:5a:98:d7 in network mk-addons-685520
	I0729 18:18:22.421715 1063162 main.go:141] libmachine: (addons-685520) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:98:d7", ip: ""} in network mk-addons-685520: {Iface:virbr1 ExpiryTime:2024-07-29 19:17:41 +0000 UTC Type:0 Mac:52:54:00:5a:98:d7 Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:addons-685520 Clientid:01:52:54:00:5a:98:d7}
	I0729 18:18:22.421749 1063162 main.go:141] libmachine: (addons-685520) DBG | domain addons-685520 has defined IP address 192.168.39.137 and MAC address 52:54:00:5a:98:d7 in network mk-addons-685520
	I0729 18:18:22.421770 1063162 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0729 18:18:22.421955 1063162 main.go:141] libmachine: (addons-685520) Calling .DriverName
	I0729 18:18:22.422698 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHPort
	I0729 18:18:22.422943 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHKeyPath
	I0729 18:18:22.423020 1063162 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0729 18:18:22.423038 1063162 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0729 18:18:22.423060 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHHostname
	I0729 18:18:22.423145 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHUsername
	I0729 18:18:22.423248 1063162 sshutil.go:53] new ssh client: &{IP:192.168.39.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/addons-685520/id_rsa Username:docker}
	I0729 18:18:22.423870 1063162 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0729 18:18:22.424990 1063162 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0729 18:18:22.425986 1063162 main.go:141] libmachine: (addons-685520) DBG | domain addons-685520 has defined MAC address 52:54:00:5a:98:d7 in network mk-addons-685520
	I0729 18:18:22.426989 1063162 main.go:141] libmachine: (addons-685520) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:98:d7", ip: ""} in network mk-addons-685520: {Iface:virbr1 ExpiryTime:2024-07-29 19:17:41 +0000 UTC Type:0 Mac:52:54:00:5a:98:d7 Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:addons-685520 Clientid:01:52:54:00:5a:98:d7}
	I0729 18:18:22.427017 1063162 main.go:141] libmachine: (addons-685520) DBG | domain addons-685520 has defined IP address 192.168.39.137 and MAC address 52:54:00:5a:98:d7 in network mk-addons-685520
	I0729 18:18:22.427239 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHPort
	I0729 18:18:22.427477 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHKeyPath
	I0729 18:18:22.427498 1063162 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44599
	I0729 18:18:22.427709 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHUsername
	I0729 18:18:22.427742 1063162 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43829
	I0729 18:18:22.428021 1063162 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:18:22.428092 1063162 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.1
	I0729 18:18:22.428391 1063162 sshutil.go:53] new ssh client: &{IP:192.168.39.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/addons-685520/id_rsa Username:docker}
	I0729 18:18:22.428693 1063162 main.go:141] libmachine: Using API Version  1
	I0729 18:18:22.428711 1063162 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:18:22.429401 1063162 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0729 18:18:22.429419 1063162 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0729 18:18:22.429435 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHHostname
	I0729 18:18:22.430072 1063162 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:18:22.430273 1063162 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39439
	I0729 18:18:22.430465 1063162 main.go:141] libmachine: (addons-685520) Calling .GetState
	I0729 18:18:22.430749 1063162 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:18:22.430840 1063162 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:18:22.431566 1063162 main.go:141] libmachine: Using API Version  1
	I0729 18:18:22.431592 1063162 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:18:22.431684 1063162 main.go:141] libmachine: Using API Version  1
	I0729 18:18:22.431700 1063162 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:18:22.432091 1063162 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:18:22.432095 1063162 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:18:22.432325 1063162 main.go:141] libmachine: (addons-685520) Calling .GetState
	I0729 18:18:22.432367 1063162 main.go:141] libmachine: (addons-685520) Calling .GetState
	I0729 18:18:22.433899 1063162 main.go:141] libmachine: (addons-685520) DBG | domain addons-685520 has defined MAC address 52:54:00:5a:98:d7 in network mk-addons-685520
	I0729 18:18:22.434445 1063162 main.go:141] libmachine: (addons-685520) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:98:d7", ip: ""} in network mk-addons-685520: {Iface:virbr1 ExpiryTime:2024-07-29 19:17:41 +0000 UTC Type:0 Mac:52:54:00:5a:98:d7 Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:addons-685520 Clientid:01:52:54:00:5a:98:d7}
	I0729 18:18:22.434481 1063162 main.go:141] libmachine: (addons-685520) DBG | domain addons-685520 has defined IP address 192.168.39.137 and MAC address 52:54:00:5a:98:d7 in network mk-addons-685520
	I0729 18:18:22.434732 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHPort
	I0729 18:18:22.435025 1063162 main.go:141] libmachine: (addons-685520) Calling .DriverName
	I0729 18:18:22.435036 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHKeyPath
	I0729 18:18:22.435218 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHUsername
	I0729 18:18:22.435421 1063162 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34347
	I0729 18:18:22.435676 1063162 sshutil.go:53] new ssh client: &{IP:192.168.39.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/addons-685520/id_rsa Username:docker}
	I0729 18:18:22.435771 1063162 main.go:141] libmachine: (addons-685520) Calling .DriverName
	I0729 18:18:22.436068 1063162 main.go:141] libmachine: (addons-685520) Calling .DriverName
	I0729 18:18:22.436088 1063162 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:18:22.436564 1063162 main.go:141] libmachine: Using API Version  1
	I0729 18:18:22.436590 1063162 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:18:22.436878 1063162 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35957
	I0729 18:18:22.436986 1063162 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:18:22.437282 1063162 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:18:22.437353 1063162 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 18:18:22.437366 1063162 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0729 18:18:22.437418 1063162 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0729 18:18:22.437857 1063162 main.go:141] libmachine: Using API Version  1
	I0729 18:18:22.437917 1063162 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:18:22.438335 1063162 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:18:22.438343 1063162 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42329
	I0729 18:18:22.438818 1063162 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:18:22.438898 1063162 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:18:22.438899 1063162 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 18:18:22.438963 1063162 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0729 18:18:22.438977 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHHostname
	I0729 18:18:22.438977 1063162 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:18:22.439372 1063162 main.go:141] libmachine: Using API Version  1
	I0729 18:18:22.439397 1063162 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:18:22.439624 1063162 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:18:22.439664 1063162 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:18:22.439688 1063162 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:18:22.439833 1063162 main.go:141] libmachine: (addons-685520) Calling .GetState
	I0729 18:18:22.439945 1063162 out.go:177]   - Using image docker.io/registry:2.8.3
	I0729 18:18:22.439982 1063162 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0729 18:18:22.441054 1063162 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0729 18:18:22.441061 1063162 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0729 18:18:22.441076 1063162 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0729 18:18:22.441094 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHHostname
	I0729 18:18:22.441919 1063162 main.go:141] libmachine: (addons-685520) Calling .DriverName
	I0729 18:18:22.443237 1063162 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0729 18:18:22.443322 1063162 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0729 18:18:22.443390 1063162 main.go:141] libmachine: (addons-685520) DBG | domain addons-685520 has defined MAC address 52:54:00:5a:98:d7 in network mk-addons-685520
	I0729 18:18:22.443920 1063162 main.go:141] libmachine: (addons-685520) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:98:d7", ip: ""} in network mk-addons-685520: {Iface:virbr1 ExpiryTime:2024-07-29 19:17:41 +0000 UTC Type:0 Mac:52:54:00:5a:98:d7 Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:addons-685520 Clientid:01:52:54:00:5a:98:d7}
	I0729 18:18:22.443944 1063162 main.go:141] libmachine: (addons-685520) DBG | domain addons-685520 has defined IP address 192.168.39.137 and MAC address 52:54:00:5a:98:d7 in network mk-addons-685520
	I0729 18:18:22.444183 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHPort
	I0729 18:18:22.444497 1063162 main.go:141] libmachine: (addons-685520) DBG | domain addons-685520 has defined MAC address 52:54:00:5a:98:d7 in network mk-addons-685520
	I0729 18:18:22.444516 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHKeyPath
	I0729 18:18:22.444704 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHUsername
	I0729 18:18:22.444950 1063162 sshutil.go:53] new ssh client: &{IP:192.168.39.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/addons-685520/id_rsa Username:docker}
	I0729 18:18:22.444986 1063162 main.go:141] libmachine: (addons-685520) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:98:d7", ip: ""} in network mk-addons-685520: {Iface:virbr1 ExpiryTime:2024-07-29 19:17:41 +0000 UTC Type:0 Mac:52:54:00:5a:98:d7 Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:addons-685520 Clientid:01:52:54:00:5a:98:d7}
	I0729 18:18:22.445009 1063162 main.go:141] libmachine: (addons-685520) DBG | domain addons-685520 has defined IP address 192.168.39.137 and MAC address 52:54:00:5a:98:d7 in network mk-addons-685520
	I0729 18:18:22.445304 1063162 out.go:177]   - Using image docker.io/busybox:stable
	I0729 18:18:22.445407 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHPort
	I0729 18:18:22.445569 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHKeyPath
	I0729 18:18:22.445609 1063162 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0729 18:18:22.445676 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHUsername
	I0729 18:18:22.445765 1063162 sshutil.go:53] new ssh client: &{IP:192.168.39.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/addons-685520/id_rsa Username:docker}
	I0729 18:18:22.446760 1063162 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0729 18:18:22.446779 1063162 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0729 18:18:22.446798 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHHostname
	I0729 18:18:22.448446 1063162 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0729 18:18:22.449574 1063162 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0729 18:18:22.450169 1063162 main.go:141] libmachine: (addons-685520) DBG | domain addons-685520 has defined MAC address 52:54:00:5a:98:d7 in network mk-addons-685520
	I0729 18:18:22.450571 1063162 main.go:141] libmachine: (addons-685520) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:98:d7", ip: ""} in network mk-addons-685520: {Iface:virbr1 ExpiryTime:2024-07-29 19:17:41 +0000 UTC Type:0 Mac:52:54:00:5a:98:d7 Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:addons-685520 Clientid:01:52:54:00:5a:98:d7}
	I0729 18:18:22.450615 1063162 main.go:141] libmachine: (addons-685520) DBG | domain addons-685520 has defined IP address 192.168.39.137 and MAC address 52:54:00:5a:98:d7 in network mk-addons-685520
	I0729 18:18:22.450779 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHPort
	I0729 18:18:22.451002 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHKeyPath
	I0729 18:18:22.451182 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHUsername
	I0729 18:18:22.451304 1063162 sshutil.go:53] new ssh client: &{IP:192.168.39.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/addons-685520/id_rsa Username:docker}
	I0729 18:18:22.451499 1063162 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0729 18:18:22.452429 1063162 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43893
	I0729 18:18:22.452435 1063162 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0729 18:18:22.452450 1063162 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0729 18:18:22.452469 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHHostname
	I0729 18:18:22.453379 1063162 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:18:22.453966 1063162 main.go:141] libmachine: Using API Version  1
	I0729 18:18:22.453987 1063162 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:18:22.454358 1063162 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:18:22.454997 1063162 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:18:22.455034 1063162 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:18:22.456426 1063162 main.go:141] libmachine: (addons-685520) DBG | domain addons-685520 has defined MAC address 52:54:00:5a:98:d7 in network mk-addons-685520
	I0729 18:18:22.456649 1063162 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46715
	I0729 18:18:22.456800 1063162 main.go:141] libmachine: (addons-685520) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:98:d7", ip: ""} in network mk-addons-685520: {Iface:virbr1 ExpiryTime:2024-07-29 19:17:41 +0000 UTC Type:0 Mac:52:54:00:5a:98:d7 Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:addons-685520 Clientid:01:52:54:00:5a:98:d7}
	I0729 18:18:22.456819 1063162 main.go:141] libmachine: (addons-685520) DBG | domain addons-685520 has defined IP address 192.168.39.137 and MAC address 52:54:00:5a:98:d7 in network mk-addons-685520
	I0729 18:18:22.457016 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHPort
	I0729 18:18:22.457088 1063162 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:18:22.457297 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHKeyPath
	I0729 18:18:22.457481 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHUsername
	I0729 18:18:22.457659 1063162 sshutil.go:53] new ssh client: &{IP:192.168.39.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/addons-685520/id_rsa Username:docker}
	I0729 18:18:22.458728 1063162 main.go:141] libmachine: Using API Version  1
	I0729 18:18:22.458751 1063162 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:18:22.459012 1063162 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44411
	I0729 18:18:22.459163 1063162 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:18:22.459678 1063162 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:18:22.460280 1063162 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:18:22.460321 1063162 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:18:22.460840 1063162 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44999
	I0729 18:18:22.461183 1063162 main.go:141] libmachine: Using API Version  1
	I0729 18:18:22.461200 1063162 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:18:22.461307 1063162 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:18:22.461731 1063162 main.go:141] libmachine: Using API Version  1
	I0729 18:18:22.461749 1063162 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:18:22.461857 1063162 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43711
	I0729 18:18:22.462167 1063162 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:18:22.462376 1063162 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:18:22.462449 1063162 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:18:22.462726 1063162 main.go:141] libmachine: (addons-685520) Calling .GetState
	I0729 18:18:22.463047 1063162 main.go:141] libmachine: Using API Version  1
	I0729 18:18:22.463071 1063162 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:18:22.463518 1063162 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:18:22.463560 1063162 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:18:22.463700 1063162 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:18:22.464522 1063162 main.go:141] libmachine: (addons-685520) Calling .DriverName
	I0729 18:18:22.464630 1063162 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:18:22.464669 1063162 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:18:22.464794 1063162 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39805
	I0729 18:18:22.465333 1063162 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:18:22.465482 1063162 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42327
	I0729 18:18:22.465858 1063162 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:18:22.466021 1063162 main.go:141] libmachine: Using API Version  1
	I0729 18:18:22.466038 1063162 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:18:22.466408 1063162 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:18:22.466455 1063162 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0729 18:18:22.466633 1063162 main.go:141] libmachine: Using API Version  1
	I0729 18:18:22.466655 1063162 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:18:22.466755 1063162 main.go:141] libmachine: (addons-685520) Calling .GetState
	I0729 18:18:22.467036 1063162 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:18:22.467188 1063162 main.go:141] libmachine: (addons-685520) Calling .GetState
	I0729 18:18:22.467590 1063162 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0729 18:18:22.467611 1063162 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0729 18:18:22.467630 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHHostname
	I0729 18:18:22.468994 1063162 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45531
	I0729 18:18:22.469189 1063162 host.go:66] Checking if "addons-685520" exists ...
	I0729 18:18:22.469558 1063162 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:18:22.469601 1063162 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:18:22.469834 1063162 main.go:141] libmachine: (addons-685520) Calling .DriverName
	I0729 18:18:22.470391 1063162 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:18:22.471038 1063162 main.go:141] libmachine: Using API Version  1
	I0729 18:18:22.471063 1063162 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:18:22.471296 1063162 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0729 18:18:22.471411 1063162 main.go:141] libmachine: (addons-685520) DBG | domain addons-685520 has defined MAC address 52:54:00:5a:98:d7 in network mk-addons-685520
	I0729 18:18:22.471593 1063162 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:18:22.471834 1063162 main.go:141] libmachine: (addons-685520) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:98:d7", ip: ""} in network mk-addons-685520: {Iface:virbr1 ExpiryTime:2024-07-29 19:17:41 +0000 UTC Type:0 Mac:52:54:00:5a:98:d7 Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:addons-685520 Clientid:01:52:54:00:5a:98:d7}
	I0729 18:18:22.471853 1063162 main.go:141] libmachine: (addons-685520) DBG | domain addons-685520 has defined IP address 192.168.39.137 and MAC address 52:54:00:5a:98:d7 in network mk-addons-685520
	I0729 18:18:22.471890 1063162 main.go:141] libmachine: (addons-685520) Calling .GetState
	I0729 18:18:22.472027 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHPort
	I0729 18:18:22.472204 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHKeyPath
	I0729 18:18:22.472353 1063162 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0729 18:18:22.472368 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHUsername
	I0729 18:18:22.472372 1063162 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0729 18:18:22.472391 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHHostname
	I0729 18:18:22.472535 1063162 sshutil.go:53] new ssh client: &{IP:192.168.39.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/addons-685520/id_rsa Username:docker}
	I0729 18:18:22.474451 1063162 main.go:141] libmachine: (addons-685520) Calling .DriverName
	I0729 18:18:22.475956 1063162 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.22
	I0729 18:18:22.476159 1063162 main.go:141] libmachine: (addons-685520) DBG | domain addons-685520 has defined MAC address 52:54:00:5a:98:d7 in network mk-addons-685520
	I0729 18:18:22.476732 1063162 main.go:141] libmachine: (addons-685520) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:98:d7", ip: ""} in network mk-addons-685520: {Iface:virbr1 ExpiryTime:2024-07-29 19:17:41 +0000 UTC Type:0 Mac:52:54:00:5a:98:d7 Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:addons-685520 Clientid:01:52:54:00:5a:98:d7}
	I0729 18:18:22.476760 1063162 main.go:141] libmachine: (addons-685520) DBG | domain addons-685520 has defined IP address 192.168.39.137 and MAC address 52:54:00:5a:98:d7 in network mk-addons-685520
	I0729 18:18:22.476989 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHPort
	I0729 18:18:22.477157 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHKeyPath
	I0729 18:18:22.477183 1063162 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0729 18:18:22.477198 1063162 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0729 18:18:22.477215 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHHostname
	I0729 18:18:22.477275 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHUsername
	I0729 18:18:22.477367 1063162 sshutil.go:53] new ssh client: &{IP:192.168.39.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/addons-685520/id_rsa Username:docker}
	I0729 18:18:22.480533 1063162 main.go:141] libmachine: (addons-685520) DBG | domain addons-685520 has defined MAC address 52:54:00:5a:98:d7 in network mk-addons-685520
	I0729 18:18:22.480979 1063162 main.go:141] libmachine: (addons-685520) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:98:d7", ip: ""} in network mk-addons-685520: {Iface:virbr1 ExpiryTime:2024-07-29 19:17:41 +0000 UTC Type:0 Mac:52:54:00:5a:98:d7 Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:addons-685520 Clientid:01:52:54:00:5a:98:d7}
	I0729 18:18:22.481007 1063162 main.go:141] libmachine: (addons-685520) DBG | domain addons-685520 has defined IP address 192.168.39.137 and MAC address 52:54:00:5a:98:d7 in network mk-addons-685520
	I0729 18:18:22.481393 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHPort
	I0729 18:18:22.481806 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHKeyPath
	I0729 18:18:22.482008 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHUsername
	I0729 18:18:22.482172 1063162 sshutil.go:53] new ssh client: &{IP:192.168.39.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/addons-685520/id_rsa Username:docker}
	I0729 18:18:22.482606 1063162 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45459
	I0729 18:18:22.483062 1063162 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:18:22.483555 1063162 main.go:141] libmachine: Using API Version  1
	I0729 18:18:22.483579 1063162 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:18:22.483979 1063162 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:18:22.484183 1063162 main.go:141] libmachine: (addons-685520) Calling .GetState
	I0729 18:18:22.486057 1063162 main.go:141] libmachine: (addons-685520) Calling .DriverName
	I0729 18:18:22.487411 1063162 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.1
	I0729 18:18:22.488571 1063162 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46115
	I0729 18:18:22.488591 1063162 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0729 18:18:22.488607 1063162 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0729 18:18:22.488625 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHHostname
	I0729 18:18:22.489275 1063162 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:18:22.489286 1063162 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37443
	I0729 18:18:22.489736 1063162 main.go:141] libmachine: Using API Version  1
	I0729 18:18:22.489766 1063162 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:18:22.490163 1063162 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:18:22.490369 1063162 main.go:141] libmachine: (addons-685520) Calling .GetState
	I0729 18:18:22.491376 1063162 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:18:22.492033 1063162 main.go:141] libmachine: (addons-685520) Calling .DriverName
	I0729 18:18:22.492130 1063162 main.go:141] libmachine: (addons-685520) DBG | domain addons-685520 has defined MAC address 52:54:00:5a:98:d7 in network mk-addons-685520
	I0729 18:18:22.492295 1063162 main.go:141] libmachine: Making call to close driver server
	I0729 18:18:22.492307 1063162 main.go:141] libmachine: (addons-685520) Calling .Close
	I0729 18:18:22.492473 1063162 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:18:22.492485 1063162 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:18:22.492493 1063162 main.go:141] libmachine: Making call to close driver server
	I0729 18:18:22.492501 1063162 main.go:141] libmachine: (addons-685520) Calling .Close
	I0729 18:18:22.493999 1063162 main.go:141] libmachine: (addons-685520) DBG | Closing plugin on server side
	I0729 18:18:22.494005 1063162 main.go:141] libmachine: (addons-685520) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:98:d7", ip: ""} in network mk-addons-685520: {Iface:virbr1 ExpiryTime:2024-07-29 19:17:41 +0000 UTC Type:0 Mac:52:54:00:5a:98:d7 Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:addons-685520 Clientid:01:52:54:00:5a:98:d7}
	I0729 18:18:22.494016 1063162 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:18:22.494021 1063162 main.go:141] libmachine: (addons-685520) DBG | domain addons-685520 has defined IP address 192.168.39.137 and MAC address 52:54:00:5a:98:d7 in network mk-addons-685520
	I0729 18:18:22.494032 1063162 main.go:141] libmachine: Making call to close connection to plugin binary
	W0729 18:18:22.494109 1063162 out.go:239] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0729 18:18:22.494175 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHPort
	I0729 18:18:22.494322 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHKeyPath
	I0729 18:18:22.494441 1063162 main.go:141] libmachine: Using API Version  1
	I0729 18:18:22.494460 1063162 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:18:22.494514 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHUsername
	I0729 18:18:22.494688 1063162 sshutil.go:53] new ssh client: &{IP:192.168.39.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/addons-685520/id_rsa Username:docker}
	I0729 18:18:22.494791 1063162 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:18:22.494881 1063162 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33093
	I0729 18:18:22.495024 1063162 main.go:141] libmachine: (addons-685520) Calling .DriverName
	I0729 18:18:22.495369 1063162 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:18:22.495908 1063162 main.go:141] libmachine: Using API Version  1
	I0729 18:18:22.495925 1063162 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:18:22.496286 1063162 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:18:22.496492 1063162 main.go:141] libmachine: (addons-685520) Calling .GetState
	I0729 18:18:22.497742 1063162 main.go:141] libmachine: (addons-685520) Calling .DriverName
	I0729 18:18:22.498124 1063162 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0729 18:18:22.498144 1063162 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0729 18:18:22.498163 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHHostname
	I0729 18:18:22.500766 1063162 main.go:141] libmachine: (addons-685520) DBG | domain addons-685520 has defined MAC address 52:54:00:5a:98:d7 in network mk-addons-685520
	I0729 18:18:22.501169 1063162 main.go:141] libmachine: (addons-685520) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:98:d7", ip: ""} in network mk-addons-685520: {Iface:virbr1 ExpiryTime:2024-07-29 19:17:41 +0000 UTC Type:0 Mac:52:54:00:5a:98:d7 Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:addons-685520 Clientid:01:52:54:00:5a:98:d7}
	I0729 18:18:22.501230 1063162 main.go:141] libmachine: (addons-685520) DBG | domain addons-685520 has defined IP address 192.168.39.137 and MAC address 52:54:00:5a:98:d7 in network mk-addons-685520
	I0729 18:18:22.501390 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHPort
	I0729 18:18:22.501561 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHKeyPath
	I0729 18:18:22.501735 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHUsername
	I0729 18:18:22.501893 1063162 sshutil.go:53] new ssh client: &{IP:192.168.39.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/addons-685520/id_rsa Username:docker}
	I0729 18:18:22.518836 1063162 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38661
	I0729 18:18:22.519317 1063162 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:18:22.519836 1063162 main.go:141] libmachine: Using API Version  1
	I0729 18:18:22.519861 1063162 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:18:22.520244 1063162 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:18:22.520476 1063162 main.go:141] libmachine: (addons-685520) Calling .GetState
	I0729 18:18:22.522243 1063162 main.go:141] libmachine: (addons-685520) Calling .DriverName
	I0729 18:18:22.523934 1063162 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0729 18:18:22.525061 1063162 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0729 18:18:22.525084 1063162 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0729 18:18:22.525104 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHHostname
	I0729 18:18:22.528071 1063162 main.go:141] libmachine: (addons-685520) DBG | domain addons-685520 has defined MAC address 52:54:00:5a:98:d7 in network mk-addons-685520
	I0729 18:18:22.528558 1063162 main.go:141] libmachine: (addons-685520) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:98:d7", ip: ""} in network mk-addons-685520: {Iface:virbr1 ExpiryTime:2024-07-29 19:17:41 +0000 UTC Type:0 Mac:52:54:00:5a:98:d7 Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:addons-685520 Clientid:01:52:54:00:5a:98:d7}
	I0729 18:18:22.528593 1063162 main.go:141] libmachine: (addons-685520) DBG | domain addons-685520 has defined IP address 192.168.39.137 and MAC address 52:54:00:5a:98:d7 in network mk-addons-685520
	I0729 18:18:22.528690 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHPort
	I0729 18:18:22.528862 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHKeyPath
	I0729 18:18:22.529156 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHUsername
	I0729 18:18:22.529309 1063162 sshutil.go:53] new ssh client: &{IP:192.168.39.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/addons-685520/id_rsa Username:docker}
	I0729 18:18:22.944375 1063162 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0729 18:18:22.963063 1063162 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0729 18:18:22.963093 1063162 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0729 18:18:22.981106 1063162 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0729 18:18:22.981126 1063162 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0729 18:18:22.990559 1063162 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0729 18:18:23.066003 1063162 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0729 18:18:23.066029 1063162 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0729 18:18:23.075865 1063162 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0729 18:18:23.095979 1063162 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0729 18:18:23.113734 1063162 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0729 18:18:23.113759 1063162 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0729 18:18:23.113766 1063162 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0729 18:18:23.113786 1063162 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0729 18:18:23.118367 1063162 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0729 18:18:23.118386 1063162 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0729 18:18:23.119623 1063162 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 18:18:23.119705 1063162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0729 18:18:23.150916 1063162 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0729 18:18:23.150939 1063162 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0729 18:18:23.155300 1063162 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0729 18:18:23.155317 1063162 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0729 18:18:23.195270 1063162 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0729 18:18:23.200443 1063162 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0729 18:18:23.206235 1063162 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0729 18:18:23.206251 1063162 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0729 18:18:23.224064 1063162 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 18:18:23.270943 1063162 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0729 18:18:23.270976 1063162 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0729 18:18:23.296250 1063162 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0729 18:18:23.296277 1063162 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0729 18:18:23.317006 1063162 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0729 18:18:23.317029 1063162 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0729 18:18:23.330777 1063162 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 18:18:23.330798 1063162 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0729 18:18:23.359688 1063162 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0729 18:18:23.359709 1063162 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0729 18:18:23.381594 1063162 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0729 18:18:23.381615 1063162 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0729 18:18:23.398315 1063162 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 18:18:23.444635 1063162 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0729 18:18:23.444671 1063162 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0729 18:18:23.495974 1063162 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0729 18:18:23.496003 1063162 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0729 18:18:23.594683 1063162 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0729 18:18:23.603558 1063162 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0729 18:18:23.603590 1063162 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0729 18:18:23.613263 1063162 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0729 18:18:23.613288 1063162 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0729 18:18:23.639271 1063162 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0729 18:18:23.639295 1063162 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0729 18:18:23.761845 1063162 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0729 18:18:23.761869 1063162 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0729 18:18:23.778256 1063162 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0729 18:18:23.778286 1063162 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0729 18:18:23.818019 1063162 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0729 18:18:23.860333 1063162 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0729 18:18:23.860375 1063162 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0729 18:18:23.892807 1063162 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0729 18:18:23.892836 1063162 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0729 18:18:23.971529 1063162 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0729 18:18:23.971557 1063162 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0729 18:18:24.049351 1063162 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0729 18:18:24.049381 1063162 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0729 18:18:24.086003 1063162 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0729 18:18:24.086031 1063162 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0729 18:18:24.090441 1063162 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0729 18:18:24.126601 1063162 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0729 18:18:24.126627 1063162 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0729 18:18:24.236682 1063162 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0729 18:18:24.236706 1063162 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0729 18:18:24.402644 1063162 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0729 18:18:24.402672 1063162 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0729 18:18:24.410730 1063162 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0729 18:18:24.410754 1063162 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0729 18:18:24.567737 1063162 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0729 18:18:24.671744 1063162 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0729 18:18:24.671771 1063162 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0729 18:18:24.758100 1063162 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0729 18:18:24.878770 1063162 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0729 18:18:24.878805 1063162 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0729 18:18:25.137378 1063162 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0729 18:18:25.466020 1063162 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (2.521599565s)
	I0729 18:18:25.466089 1063162 main.go:141] libmachine: Making call to close driver server
	I0729 18:18:25.466103 1063162 main.go:141] libmachine: (addons-685520) Calling .Close
	I0729 18:18:25.466127 1063162 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (2.475543607s)
	I0729 18:18:25.466152 1063162 main.go:141] libmachine: Making call to close driver server
	I0729 18:18:25.466170 1063162 main.go:141] libmachine: (addons-685520) Calling .Close
	I0729 18:18:25.466188 1063162 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.390300928s)
	I0729 18:18:25.466212 1063162 main.go:141] libmachine: Making call to close driver server
	I0729 18:18:25.466223 1063162 main.go:141] libmachine: (addons-685520) Calling .Close
	I0729 18:18:25.466538 1063162 main.go:141] libmachine: (addons-685520) DBG | Closing plugin on server side
	I0729 18:18:25.466578 1063162 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:18:25.466592 1063162 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:18:25.466602 1063162 main.go:141] libmachine: Making call to close driver server
	I0729 18:18:25.466614 1063162 main.go:141] libmachine: (addons-685520) Calling .Close
	I0729 18:18:25.466664 1063162 main.go:141] libmachine: (addons-685520) DBG | Closing plugin on server side
	I0729 18:18:25.466695 1063162 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:18:25.466700 1063162 main.go:141] libmachine: (addons-685520) DBG | Closing plugin on server side
	I0729 18:18:25.466715 1063162 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:18:25.466746 1063162 main.go:141] libmachine: Making call to close driver server
	I0729 18:18:25.466763 1063162 main.go:141] libmachine: (addons-685520) Calling .Close
	I0729 18:18:25.466973 1063162 main.go:141] libmachine: (addons-685520) DBG | Closing plugin on server side
	I0729 18:18:25.467012 1063162 main.go:141] libmachine: (addons-685520) DBG | Closing plugin on server side
	I0729 18:18:25.467073 1063162 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:18:25.467093 1063162 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:18:25.467078 1063162 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:18:25.467140 1063162 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:18:25.466583 1063162 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:18:25.468075 1063162 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:18:25.468099 1063162 main.go:141] libmachine: Making call to close driver server
	I0729 18:18:25.468122 1063162 main.go:141] libmachine: (addons-685520) Calling .Close
	I0729 18:18:25.468822 1063162 main.go:141] libmachine: (addons-685520) DBG | Closing plugin on server side
	I0729 18:18:25.468841 1063162 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:18:25.468854 1063162 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:18:25.489651 1063162 main.go:141] libmachine: Making call to close driver server
	I0729 18:18:25.489677 1063162 main.go:141] libmachine: (addons-685520) Calling .Close
	I0729 18:18:25.489962 1063162 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:18:25.489991 1063162 main.go:141] libmachine: (addons-685520) DBG | Closing plugin on server side
	I0729 18:18:25.489978 1063162 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:18:27.692958 1063162 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.596941662s)
	I0729 18:18:27.693012 1063162 main.go:141] libmachine: Making call to close driver server
	I0729 18:18:27.693027 1063162 main.go:141] libmachine: (addons-685520) Calling .Close
	I0729 18:18:27.693046 1063162 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (4.573391477s)
	I0729 18:18:27.693112 1063162 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (4.573381774s)
	I0729 18:18:27.693189 1063162 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0729 18:18:27.693315 1063162 main.go:141] libmachine: (addons-685520) DBG | Closing plugin on server side
	I0729 18:18:27.693359 1063162 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:18:27.693367 1063162 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:18:27.693380 1063162 main.go:141] libmachine: Making call to close driver server
	I0729 18:18:27.693387 1063162 main.go:141] libmachine: (addons-685520) Calling .Close
	I0729 18:18:27.694261 1063162 node_ready.go:35] waiting up to 6m0s for node "addons-685520" to be "Ready" ...
	I0729 18:18:27.694407 1063162 main.go:141] libmachine: (addons-685520) DBG | Closing plugin on server side
	I0729 18:18:27.694426 1063162 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:18:27.694451 1063162 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:18:27.715696 1063162 node_ready.go:49] node "addons-685520" has status "Ready":"True"
	I0729 18:18:27.715724 1063162 node_ready.go:38] duration metric: took 21.432494ms for node "addons-685520" to be "Ready" ...
	I0729 18:18:27.715736 1063162 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 18:18:27.762161 1063162 main.go:141] libmachine: Making call to close driver server
	I0729 18:18:27.762184 1063162 main.go:141] libmachine: (addons-685520) Calling .Close
	I0729 18:18:27.762601 1063162 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:18:27.762651 1063162 main.go:141] libmachine: (addons-685520) DBG | Closing plugin on server side
	I0729 18:18:27.762665 1063162 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:18:27.770000 1063162 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-75qr8" in "kube-system" namespace to be "Ready" ...
	I0729 18:18:28.226346 1063162 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-685520" context rescaled to 1 replicas
	I0729 18:18:29.571799 1063162 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0729 18:18:29.571868 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHHostname
	I0729 18:18:29.575097 1063162 main.go:141] libmachine: (addons-685520) DBG | domain addons-685520 has defined MAC address 52:54:00:5a:98:d7 in network mk-addons-685520
	I0729 18:18:29.575609 1063162 main.go:141] libmachine: (addons-685520) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:98:d7", ip: ""} in network mk-addons-685520: {Iface:virbr1 ExpiryTime:2024-07-29 19:17:41 +0000 UTC Type:0 Mac:52:54:00:5a:98:d7 Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:addons-685520 Clientid:01:52:54:00:5a:98:d7}
	I0729 18:18:29.575642 1063162 main.go:141] libmachine: (addons-685520) DBG | domain addons-685520 has defined IP address 192.168.39.137 and MAC address 52:54:00:5a:98:d7 in network mk-addons-685520
	I0729 18:18:29.575809 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHPort
	I0729 18:18:29.576019 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHKeyPath
	I0729 18:18:29.576221 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHUsername
	I0729 18:18:29.576390 1063162 sshutil.go:53] new ssh client: &{IP:192.168.39.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/addons-685520/id_rsa Username:docker}
	I0729 18:18:29.794883 1063162 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0729 18:18:29.850291 1063162 addons.go:234] Setting addon gcp-auth=true in "addons-685520"
	I0729 18:18:29.850366 1063162 host.go:66] Checking if "addons-685520" exists ...
	I0729 18:18:29.850717 1063162 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:18:29.850758 1063162 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:18:29.866878 1063162 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42599
	I0729 18:18:29.867367 1063162 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:18:29.867975 1063162 main.go:141] libmachine: Using API Version  1
	I0729 18:18:29.868008 1063162 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:18:29.868418 1063162 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:18:29.869023 1063162 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:18:29.869051 1063162 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:18:29.884975 1063162 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41383
	I0729 18:18:29.885399 1063162 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:18:29.885864 1063162 main.go:141] libmachine: Using API Version  1
	I0729 18:18:29.885881 1063162 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:18:29.886288 1063162 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:18:29.886493 1063162 main.go:141] libmachine: (addons-685520) Calling .GetState
	I0729 18:18:29.888360 1063162 main.go:141] libmachine: (addons-685520) Calling .DriverName
	I0729 18:18:29.888598 1063162 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0729 18:18:29.888618 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHHostname
	I0729 18:18:29.891425 1063162 main.go:141] libmachine: (addons-685520) DBG | domain addons-685520 has defined MAC address 52:54:00:5a:98:d7 in network mk-addons-685520
	I0729 18:18:29.891888 1063162 main.go:141] libmachine: (addons-685520) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:98:d7", ip: ""} in network mk-addons-685520: {Iface:virbr1 ExpiryTime:2024-07-29 19:17:41 +0000 UTC Type:0 Mac:52:54:00:5a:98:d7 Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:addons-685520 Clientid:01:52:54:00:5a:98:d7}
	I0729 18:18:29.891917 1063162 main.go:141] libmachine: (addons-685520) DBG | domain addons-685520 has defined IP address 192.168.39.137 and MAC address 52:54:00:5a:98:d7 in network mk-addons-685520
	I0729 18:18:29.892031 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHPort
	I0729 18:18:29.892189 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHKeyPath
	I0729 18:18:29.892300 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHUsername
	I0729 18:18:29.892422 1063162 sshutil.go:53] new ssh client: &{IP:192.168.39.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/addons-685520/id_rsa Username:docker}
	I0729 18:18:29.901878 1063162 pod_ready.go:102] pod "coredns-7db6d8ff4d-75qr8" in "kube-system" namespace has status "Ready":"False"
	I0729 18:18:30.425683 1063162 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.23037748s)
	I0729 18:18:30.425742 1063162 main.go:141] libmachine: Making call to close driver server
	I0729 18:18:30.425753 1063162 main.go:141] libmachine: (addons-685520) Calling .Close
	I0729 18:18:30.425765 1063162 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (7.225294136s)
	I0729 18:18:30.425803 1063162 main.go:141] libmachine: Making call to close driver server
	I0729 18:18:30.425817 1063162 main.go:141] libmachine: (addons-685520) Calling .Close
	I0729 18:18:30.425874 1063162 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.201790205s)
	I0729 18:18:30.425899 1063162 main.go:141] libmachine: Making call to close driver server
	I0729 18:18:30.425908 1063162 main.go:141] libmachine: (addons-685520) Calling .Close
	I0729 18:18:30.425938 1063162 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.027587455s)
	I0729 18:18:30.425960 1063162 main.go:141] libmachine: Making call to close driver server
	I0729 18:18:30.425975 1063162 main.go:141] libmachine: (addons-685520) Calling .Close
	I0729 18:18:30.426061 1063162 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (6.831340202s)
	I0729 18:18:30.426083 1063162 main.go:141] libmachine: Making call to close driver server
	I0729 18:18:30.426093 1063162 main.go:141] libmachine: (addons-685520) Calling .Close
	I0729 18:18:30.426105 1063162 main.go:141] libmachine: (addons-685520) DBG | Closing plugin on server side
	I0729 18:18:30.426134 1063162 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:18:30.426141 1063162 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:18:30.426149 1063162 main.go:141] libmachine: Making call to close driver server
	I0729 18:18:30.426156 1063162 main.go:141] libmachine: (addons-685520) Calling .Close
	I0729 18:18:30.426159 1063162 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (6.608115824s)
	I0729 18:18:30.426060 1063162 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:18:30.426176 1063162 main.go:141] libmachine: Making call to close driver server
	I0729 18:18:30.426185 1063162 main.go:141] libmachine: (addons-685520) Calling .Close
	I0729 18:18:30.426184 1063162 main.go:141] libmachine: (addons-685520) DBG | Closing plugin on server side
	I0729 18:18:30.426187 1063162 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:18:30.426189 1063162 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:18:30.426203 1063162 main.go:141] libmachine: Making call to close driver server
	I0729 18:18:30.426211 1063162 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:18:30.426222 1063162 main.go:141] libmachine: Making call to close driver server
	I0729 18:18:30.426212 1063162 main.go:141] libmachine: (addons-685520) Calling .Close
	I0729 18:18:30.426231 1063162 main.go:141] libmachine: (addons-685520) Calling .Close
	I0729 18:18:30.426254 1063162 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.335791379s)
	I0729 18:18:30.426269 1063162 main.go:141] libmachine: Making call to close driver server
	I0729 18:18:30.426277 1063162 main.go:141] libmachine: (addons-685520) Calling .Close
	I0729 18:18:30.426409 1063162 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (5.858630332s)
	W0729 18:18:30.426451 1063162 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0729 18:18:30.426479 1063162 retry.go:31] will retry after 372.084847ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0729 18:18:30.426563 1063162 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (5.66843055s)
	I0729 18:18:30.426585 1063162 main.go:141] libmachine: Making call to close driver server
	I0729 18:18:30.426594 1063162 main.go:141] libmachine: (addons-685520) Calling .Close
	I0729 18:18:30.430989 1063162 main.go:141] libmachine: (addons-685520) DBG | Closing plugin on server side
	I0729 18:18:30.430996 1063162 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:18:30.431020 1063162 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:18:30.431019 1063162 main.go:141] libmachine: (addons-685520) DBG | Closing plugin on server side
	I0729 18:18:30.431029 1063162 main.go:141] libmachine: Making call to close driver server
	I0729 18:18:30.431039 1063162 main.go:141] libmachine: (addons-685520) Calling .Close
	I0729 18:18:30.431046 1063162 main.go:141] libmachine: (addons-685520) DBG | Closing plugin on server side
	I0729 18:18:30.431047 1063162 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:18:30.431068 1063162 main.go:141] libmachine: (addons-685520) DBG | Closing plugin on server side
	I0729 18:18:30.431070 1063162 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:18:30.431087 1063162 addons.go:475] Verifying addon ingress=true in "addons-685520"
	I0729 18:18:30.431100 1063162 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:18:30.431106 1063162 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:18:30.431111 1063162 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:18:30.431115 1063162 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:18:30.431123 1063162 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:18:30.431134 1063162 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:18:30.431143 1063162 main.go:141] libmachine: Making call to close driver server
	I0729 18:18:30.431150 1063162 main.go:141] libmachine: (addons-685520) Calling .Close
	I0729 18:18:30.431150 1063162 main.go:141] libmachine: (addons-685520) DBG | Closing plugin on server side
	I0729 18:18:30.431188 1063162 main.go:141] libmachine: (addons-685520) DBG | Closing plugin on server side
	I0729 18:18:30.431173 1063162 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:18:30.431202 1063162 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:18:30.431209 1063162 main.go:141] libmachine: Making call to close driver server
	I0729 18:18:30.431215 1063162 main.go:141] libmachine: (addons-685520) Calling .Close
	I0729 18:18:30.431221 1063162 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:18:30.431192 1063162 main.go:141] libmachine: (addons-685520) DBG | Closing plugin on server side
	I0729 18:18:30.431028 1063162 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:18:30.431245 1063162 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:18:30.431255 1063162 main.go:141] libmachine: Making call to close driver server
	I0729 18:18:30.431231 1063162 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:18:30.431271 1063162 main.go:141] libmachine: (addons-685520) Calling .Close
	I0729 18:18:30.431284 1063162 main.go:141] libmachine: Making call to close driver server
	I0729 18:18:30.431292 1063162 main.go:141] libmachine: (addons-685520) Calling .Close
	I0729 18:18:30.431501 1063162 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:18:30.431516 1063162 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:18:30.431491 1063162 main.go:141] libmachine: (addons-685520) DBG | Closing plugin on server side
	I0729 18:18:30.431538 1063162 main.go:141] libmachine: (addons-685520) DBG | Closing plugin on server side
	I0729 18:18:30.431568 1063162 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:18:30.431576 1063162 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:18:30.431578 1063162 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:18:30.431589 1063162 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:18:30.431599 1063162 addons.go:475] Verifying addon metrics-server=true in "addons-685520"
	I0729 18:18:30.431599 1063162 addons.go:475] Verifying addon registry=true in "addons-685520"
	I0729 18:18:30.431651 1063162 main.go:141] libmachine: (addons-685520) DBG | Closing plugin on server side
	I0729 18:18:30.431902 1063162 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:18:30.431929 1063162 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:18:30.433016 1063162 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-685520 service yakd-dashboard -n yakd-dashboard
	
	I0729 18:18:30.433028 1063162 out.go:177] * Verifying registry addon...
	I0729 18:18:30.433088 1063162 out.go:177] * Verifying ingress addon...
	I0729 18:18:30.431683 1063162 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:18:30.433142 1063162 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:18:30.434765 1063162 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0729 18:18:30.434833 1063162 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0729 18:18:30.473814 1063162 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0729 18:18:30.473838 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 18:18:30.479830 1063162 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0729 18:18:30.479863 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
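The repeating kapi.go:96 lines that follow are this poll loop in action: list the pods matching a label selector in a namespace and keep waiting until each one reports the Ready condition. A rough client-go sketch of that pattern is below; it is an approximation, not the actual kapi.go code, and the function name and 2-second poll interval are assumptions.

```go
// Approximate sketch of the "waiting for pod <selector>" loop logged above.
// Not minikube's kapi.go; function name and poll interval are illustrative.
package readiness

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForLabeledPodsReady polls until every pod matching selector in ns is Ready.
func waitForLabeledPodsReady(ctx context.Context, cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 2*time.Second, timeout, true, func(ctx context.Context) (bool, error) {
		pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
		if err != nil || len(pods.Items) == 0 {
			return false, nil // keep polling through transient errors and empty lists
		}
		for i := range pods.Items {
			if !isPodReady(&pods.Items[i]) {
				fmt.Printf("waiting for pod %q, current state: %s\n", selector, pods.Items[i].Status.Phase)
				return false, nil
			}
		}
		return true, nil
	})
}

func isPodReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}
```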
	I0729 18:18:30.798763 1063162 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0729 18:18:30.941792 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 18:18:30.941870 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:18:31.481787 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 18:18:31.482256 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:18:31.544680 1063162 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.407225434s)
	I0729 18:18:31.544708 1063162 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.656085901s)
	I0729 18:18:31.544750 1063162 main.go:141] libmachine: Making call to close driver server
	I0729 18:18:31.544765 1063162 main.go:141] libmachine: (addons-685520) Calling .Close
	I0729 18:18:31.545088 1063162 main.go:141] libmachine: (addons-685520) DBG | Closing plugin on server side
	I0729 18:18:31.545109 1063162 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:18:31.545123 1063162 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:18:31.545135 1063162 main.go:141] libmachine: Making call to close driver server
	I0729 18:18:31.545149 1063162 main.go:141] libmachine: (addons-685520) Calling .Close
	I0729 18:18:31.545387 1063162 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:18:31.545404 1063162 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:18:31.545414 1063162 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-685520"
	I0729 18:18:31.545389 1063162 main.go:141] libmachine: (addons-685520) DBG | Closing plugin on server side
	I0729 18:18:31.546272 1063162 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0729 18:18:31.546988 1063162 out.go:177] * Verifying csi-hostpath-driver addon...
	I0729 18:18:31.548140 1063162 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0729 18:18:31.548907 1063162 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0729 18:18:31.549091 1063162 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0729 18:18:31.549105 1063162 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0729 18:18:31.588944 1063162 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0729 18:18:31.588968 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:18:31.667966 1063162 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0729 18:18:31.667990 1063162 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0729 18:18:31.747307 1063162 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0729 18:18:31.747329 1063162 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0729 18:18:31.824265 1063162 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0729 18:18:31.941309 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 18:18:31.945324 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:18:32.056231 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:18:32.285629 1063162 pod_ready.go:102] pod "coredns-7db6d8ff4d-75qr8" in "kube-system" namespace has status "Ready":"False"
	I0729 18:18:32.440690 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:18:32.441659 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 18:18:32.555831 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:18:32.869259 1063162 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.070436089s)
	I0729 18:18:32.869315 1063162 main.go:141] libmachine: Making call to close driver server
	I0729 18:18:32.869330 1063162 main.go:141] libmachine: (addons-685520) Calling .Close
	I0729 18:18:32.869793 1063162 main.go:141] libmachine: (addons-685520) DBG | Closing plugin on server side
	I0729 18:18:32.869813 1063162 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:18:32.869830 1063162 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:18:32.869847 1063162 main.go:141] libmachine: Making call to close driver server
	I0729 18:18:32.869858 1063162 main.go:141] libmachine: (addons-685520) Calling .Close
	I0729 18:18:32.870153 1063162 main.go:141] libmachine: (addons-685520) DBG | Closing plugin on server side
	I0729 18:18:32.870216 1063162 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:18:32.870242 1063162 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:18:32.945050 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:18:32.945684 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 18:18:33.054807 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:18:33.378077 1063162 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.553773854s)
	I0729 18:18:33.378144 1063162 main.go:141] libmachine: Making call to close driver server
	I0729 18:18:33.378162 1063162 main.go:141] libmachine: (addons-685520) Calling .Close
	I0729 18:18:33.378436 1063162 main.go:141] libmachine: (addons-685520) DBG | Closing plugin on server side
	I0729 18:18:33.378474 1063162 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:18:33.378481 1063162 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:18:33.378488 1063162 main.go:141] libmachine: Making call to close driver server
	I0729 18:18:33.378498 1063162 main.go:141] libmachine: (addons-685520) Calling .Close
	I0729 18:18:33.378723 1063162 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:18:33.378735 1063162 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:18:33.378762 1063162 main.go:141] libmachine: (addons-685520) DBG | Closing plugin on server side
	I0729 18:18:33.380845 1063162 addons.go:475] Verifying addon gcp-auth=true in "addons-685520"
	I0729 18:18:33.382093 1063162 out.go:177] * Verifying gcp-auth addon...
	I0729 18:18:33.383822 1063162 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0729 18:18:33.406186 1063162 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0729 18:18:33.406205 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:18:33.447822 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 18:18:33.452533 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:18:33.585415 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:18:33.888376 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:18:33.939942 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:18:33.940234 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 18:18:34.072640 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:18:34.388208 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:18:34.446422 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:18:34.449929 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 18:18:34.555081 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:18:34.777043 1063162 pod_ready.go:102] pod "coredns-7db6d8ff4d-75qr8" in "kube-system" namespace has status "Ready":"False"
	I0729 18:18:34.889448 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:18:34.944666 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:18:34.947877 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 18:18:35.057628 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:18:35.276074 1063162 pod_ready.go:97] pod "coredns-7db6d8ff4d-75qr8" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-07-29 18:18:34 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-07-29 18:18:21 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-07-29 18:18:21 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-07-29 18:18:21 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-07-29 18:18:21 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.137 HostIPs:[{IP:192.168.39.137}] PodIP: PodIPs:[] StartTime:2024-07-29 18:18:21 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-07-29 18:18:23 +0000 UTC,FinishedAt:2024-07-29 18:18:33 +0000 UTC,ContainerID:cri-o://fe394149a1657f103574c3162756ef412b0fa90ae7d63d6d0c80b12ac296e923,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.1 ImageID:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4 ContainerID:cri-o://fe394149a1657f103574c3162756ef412b0fa90ae7d63d6d0c80b12ac296e923 Started:0xc0021801c0 AllocatedResources:map[] Resources:nil VolumeMounts:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0729 18:18:35.276114 1063162 pod_ready.go:81] duration metric: took 7.506080732s for pod "coredns-7db6d8ff4d-75qr8" in "kube-system" namespace to be "Ready" ...
	E0729 18:18:35.276131 1063162 pod_ready.go:66] WaitExtra: waitPodCondition: pod "coredns-7db6d8ff4d-75qr8" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-07-29 18:18:34 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-07-29 18:18:21 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-07-29 18:18:21 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-07-29 18:18:21 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-07-29 18:18:21 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.137 HostIPs:[{IP:192.168.39.137}] PodIP: PodIPs:[] StartTime:2024-07-29 18:18:21 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-07-29 18:18:23 +0000 UTC,FinishedAt:2024-07-29 18:18:33 +0000 UTC,ContainerID:cri-o://fe394149a1657f103574c3162756ef412b0fa90ae7d63d6d0c80b12ac296e923,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.1 ImageID:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4 ContainerID:cri-o://fe394149a1657f103574c3162756ef412b0fa90ae7d63d6d0c80b12ac296e923 Started:0xc0021801c0 AllocatedResources:map[] Resources:nil VolumeMounts:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0729 18:18:35.276142 1063162 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-zrfkz" in "kube-system" namespace to be "Ready" ...
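The pod_ready.go lines above show the other half of the pattern: a named pod is polled for the Ready condition, and a pod that has already terminated with phase Succeeded (the replaced coredns-7db6d8ff4d-75qr8 pod) is abandoned rather than waited on, since it can never become Ready. The sketch below illustrates that check under the same client-go assumptions as the previous snippet; waitNamedPodReady is not the real minikube helper.

```go
// Illustrative sketch of the per-pod readiness wait logged by pod_ready.go;
// not the actual minikube helper. A pod in a terminal phase is reported and
// skipped because it will never reach the Ready condition.
package readiness

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

func waitNamedPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 2*time.Second, timeout, true, func(ctx context.Context) (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, nil // retry on transient API errors
		}
		if pod.Status.Phase == corev1.PodSucceeded || pod.Status.Phase == corev1.PodFailed {
			return false, fmt.Errorf("pod %q has terminal phase %q, skipping", name, pod.Status.Phase)
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				return true, nil
			}
		}
		return false, nil
	})
}
```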
	I0729 18:18:35.387735 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:18:35.440649 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:18:35.442153 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 18:18:35.555031 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:18:35.888331 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:18:35.939619 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 18:18:35.941246 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:18:36.054564 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:18:36.387668 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:18:36.440783 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:18:36.440869 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 18:18:36.554489 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:18:36.887590 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:18:36.940813 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:18:36.943550 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 18:18:37.054187 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:18:37.282460 1063162 pod_ready.go:102] pod "coredns-7db6d8ff4d-zrfkz" in "kube-system" namespace has status "Ready":"False"
	I0729 18:18:37.387780 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:18:37.652350 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:18:37.652441 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 18:18:37.654774 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:18:37.887889 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:18:37.940548 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:18:37.942298 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 18:18:38.055160 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:18:38.388086 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:18:38.440372 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:18:38.440599 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 18:18:38.554825 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:18:38.889004 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:18:38.942092 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 18:18:38.942272 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:18:39.054825 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:18:39.282981 1063162 pod_ready.go:102] pod "coredns-7db6d8ff4d-zrfkz" in "kube-system" namespace has status "Ready":"False"
	I0729 18:18:39.386817 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:18:39.446437 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:18:39.447685 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 18:18:39.554890 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:18:39.887723 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:18:39.940508 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 18:18:39.941874 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:18:40.054425 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:18:40.388065 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:18:40.441762 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:18:40.448420 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 18:18:40.567120 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:18:40.887070 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:18:40.942718 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 18:18:40.949257 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:18:41.054220 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:18:41.387440 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:18:41.439804 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:18:41.440125 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 18:18:41.554289 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:18:41.782015 1063162 pod_ready.go:102] pod "coredns-7db6d8ff4d-zrfkz" in "kube-system" namespace has status "Ready":"False"
	I0729 18:18:41.886961 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:18:41.941540 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 18:18:41.941887 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:18:42.057266 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:18:42.664989 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:18:42.665388 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 18:18:42.665563 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:18:42.667438 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:18:42.888017 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:18:42.940423 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:18:42.942180 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 18:18:43.054559 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:18:43.388423 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:18:43.439585 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:18:43.440050 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 18:18:43.554868 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:18:43.782303 1063162 pod_ready.go:102] pod "coredns-7db6d8ff4d-zrfkz" in "kube-system" namespace has status "Ready":"False"
	I0729 18:18:43.887661 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:18:43.941109 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:18:43.943928 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 18:18:44.055192 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:18:44.388092 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:18:44.440884 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 18:18:44.441025 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:18:44.554629 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:18:44.887386 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:18:44.940001 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:18:44.941512 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 18:18:45.054356 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:18:45.387832 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:18:45.439790 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:18:45.440150 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 18:18:45.554657 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:18:45.963746 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 18:18:45.965252 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:18:45.972406 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:18:45.973237 1063162 pod_ready.go:102] pod "coredns-7db6d8ff4d-zrfkz" in "kube-system" namespace has status "Ready":"False"
	I0729 18:18:46.054973 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:18:46.388044 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:18:46.439750 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 18:18:46.441004 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:18:46.557509 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:18:46.888220 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:18:46.941202 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:18:46.941376 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 18:18:47.057107 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:18:47.387693 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:18:47.440965 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:18:47.442072 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 18:18:47.557125 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:18:47.887589 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:18:47.940838 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:18:47.941570 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 18:18:48.054447 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:18:48.283469 1063162 pod_ready.go:102] pod "coredns-7db6d8ff4d-zrfkz" in "kube-system" namespace has status "Ready":"False"
	I0729 18:18:48.387608 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:18:48.441061 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 18:18:48.441804 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:18:48.555173 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:18:48.887230 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:18:48.939240 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:18:48.940716 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 18:18:49.054976 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:18:49.388304 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:18:49.442644 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 18:18:49.442821 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:18:49.555185 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:18:49.887788 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:18:49.941386 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 18:18:49.941913 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:18:50.054641 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:18:50.388513 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:18:50.441616 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 18:18:50.441693 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:18:50.554450 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:18:50.782157 1063162 pod_ready.go:102] pod "coredns-7db6d8ff4d-zrfkz" in "kube-system" namespace has status "Ready":"False"
	I0729 18:18:50.887896 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:18:50.940144 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:18:50.941384 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 18:18:51.055331 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:18:51.390537 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:18:51.456887 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:18:51.457380 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 18:18:51.554289 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:18:51.888020 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:18:51.941718 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 18:18:51.942102 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:18:52.055000 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:18:52.387247 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:18:52.442199 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:18:52.444451 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 18:18:52.553888 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:18:52.887966 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:18:52.942545 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 18:18:52.942767 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:18:53.054365 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:18:53.656646 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:18:53.659090 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:18:53.660684 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 18:18:53.662712 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:18:53.663897 1063162 pod_ready.go:102] pod "coredns-7db6d8ff4d-zrfkz" in "kube-system" namespace has status "Ready":"False"
	I0729 18:18:53.887479 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:18:53.941838 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:18:53.942315 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 18:18:54.062331 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:18:54.387656 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:18:54.446690 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 18:18:54.447311 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:18:54.554311 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:18:54.892053 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:18:54.940708 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 18:18:54.946804 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:18:55.053827 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:18:55.388093 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:18:55.438400 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:18:55.441171 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 18:18:55.554994 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:18:55.781885 1063162 pod_ready.go:102] pod "coredns-7db6d8ff4d-zrfkz" in "kube-system" namespace has status "Ready":"False"
	I0729 18:18:55.921933 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:18:55.941088 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 18:18:55.943298 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:18:56.055534 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:18:56.388931 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:18:56.440192 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:18:56.441007 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 18:18:56.555396 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:18:56.887781 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:18:56.940252 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:18:56.941685 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 18:18:57.054411 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:18:57.387676 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:18:57.441291 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 18:18:57.441447 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:18:57.555104 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:18:58.035985 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:18:58.036677 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:18:58.037201 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 18:18:58.038234 1063162 pod_ready.go:102] pod "coredns-7db6d8ff4d-zrfkz" in "kube-system" namespace has status "Ready":"False"
	I0729 18:18:58.053160 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:18:58.387424 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:18:58.440987 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 18:18:58.441029 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:18:58.557087 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:18:58.886830 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:18:58.939748 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:18:58.940003 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 18:18:59.054866 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:18:59.387323 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:18:59.438833 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:18:59.440095 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 18:18:59.555228 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:18:59.889105 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:18:59.943980 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 18:18:59.945096 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:19:00.054526 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:19:00.282904 1063162 pod_ready.go:102] pod "coredns-7db6d8ff4d-zrfkz" in "kube-system" namespace has status "Ready":"False"
	I0729 18:19:00.388284 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:19:00.439314 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:19:00.442329 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 18:19:00.554114 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:19:00.892796 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:19:00.940600 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 18:19:00.940636 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:19:01.053539 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:19:01.387165 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:19:01.440458 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 18:19:01.440566 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:19:01.554387 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:19:02.016328 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:19:02.016358 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:19:02.017818 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 18:19:02.054377 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:19:02.281178 1063162 pod_ready.go:92] pod "coredns-7db6d8ff4d-zrfkz" in "kube-system" namespace has status "Ready":"True"
	I0729 18:19:02.281201 1063162 pod_ready.go:81] duration metric: took 27.005051176s for pod "coredns-7db6d8ff4d-zrfkz" in "kube-system" namespace to be "Ready" ...
	I0729 18:19:02.281211 1063162 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-685520" in "kube-system" namespace to be "Ready" ...
	I0729 18:19:02.285247 1063162 pod_ready.go:92] pod "etcd-addons-685520" in "kube-system" namespace has status "Ready":"True"
	I0729 18:19:02.285264 1063162 pod_ready.go:81] duration metric: took 4.047747ms for pod "etcd-addons-685520" in "kube-system" namespace to be "Ready" ...
	I0729 18:19:02.285273 1063162 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-685520" in "kube-system" namespace to be "Ready" ...
	I0729 18:19:02.289340 1063162 pod_ready.go:92] pod "kube-apiserver-addons-685520" in "kube-system" namespace has status "Ready":"True"
	I0729 18:19:02.289355 1063162 pod_ready.go:81] duration metric: took 4.076496ms for pod "kube-apiserver-addons-685520" in "kube-system" namespace to be "Ready" ...
	I0729 18:19:02.289362 1063162 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-685520" in "kube-system" namespace to be "Ready" ...
	I0729 18:19:02.293640 1063162 pod_ready.go:92] pod "kube-controller-manager-addons-685520" in "kube-system" namespace has status "Ready":"True"
	I0729 18:19:02.293653 1063162 pod_ready.go:81] duration metric: took 4.285177ms for pod "kube-controller-manager-addons-685520" in "kube-system" namespace to be "Ready" ...
	I0729 18:19:02.293662 1063162 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-bnslr" in "kube-system" namespace to be "Ready" ...
	I0729 18:19:02.297070 1063162 pod_ready.go:92] pod "kube-proxy-bnslr" in "kube-system" namespace has status "Ready":"True"
	I0729 18:19:02.297082 1063162 pod_ready.go:81] duration metric: took 3.415233ms for pod "kube-proxy-bnslr" in "kube-system" namespace to be "Ready" ...
	I0729 18:19:02.297088 1063162 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-685520" in "kube-system" namespace to be "Ready" ...
	I0729 18:19:02.387067 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:19:02.438720 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:19:02.439990 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 18:19:02.553866 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:19:02.679836 1063162 pod_ready.go:92] pod "kube-scheduler-addons-685520" in "kube-system" namespace has status "Ready":"True"
	I0729 18:19:02.679859 1063162 pod_ready.go:81] duration metric: took 382.764517ms for pod "kube-scheduler-addons-685520" in "kube-system" namespace to be "Ready" ...
	I0729 18:19:02.679868 1063162 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-c59844bb4-qt4qg" in "kube-system" namespace to be "Ready" ...
	I0729 18:19:02.898928 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:19:02.951105 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 18:19:02.951549 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:19:03.054043 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:19:03.387546 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:19:03.439204 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:19:03.439720 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 18:19:03.554334 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:19:03.887221 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:19:03.940442 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 18:19:03.940801 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:19:04.053805 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:19:04.387545 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:19:04.440456 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 18:19:04.440813 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:19:04.554525 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:19:04.686029 1063162 pod_ready.go:102] pod "metrics-server-c59844bb4-qt4qg" in "kube-system" namespace has status "Ready":"False"
	I0729 18:19:04.887181 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:19:04.941196 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 18:19:04.941612 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:19:05.054202 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:19:05.388721 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:19:05.441037 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 18:19:05.441538 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:19:05.554996 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:19:05.887901 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:19:05.941775 1063162 kapi.go:107] duration metric: took 35.506938989s to wait for kubernetes.io/minikube-addons=registry ...
	I0729 18:19:05.943046 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:19:06.054069 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:19:06.387559 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:19:06.439228 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:19:06.554781 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:19:06.687322 1063162 pod_ready.go:102] pod "metrics-server-c59844bb4-qt4qg" in "kube-system" namespace has status "Ready":"False"
	I0729 18:19:06.887950 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:19:06.950969 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:19:07.054223 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:19:07.387306 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:19:07.561092 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:19:07.562224 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:19:07.887809 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:19:07.938961 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:19:08.054398 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:19:08.387390 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:19:08.439509 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:19:08.554380 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:19:08.887244 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:19:08.938745 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:19:09.388792 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:19:09.391383 1063162 pod_ready.go:102] pod "metrics-server-c59844bb4-qt4qg" in "kube-system" namespace has status "Ready":"False"
	I0729 18:19:09.392780 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:19:09.439904 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:19:09.554018 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:19:09.887306 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:19:09.939310 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:19:10.055790 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:19:10.388435 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:19:10.439811 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:19:10.554655 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:19:10.887506 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:19:10.939038 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:19:11.054749 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:19:11.387349 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:19:11.438834 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:19:11.555053 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:19:11.685797 1063162 pod_ready.go:102] pod "metrics-server-c59844bb4-qt4qg" in "kube-system" namespace has status "Ready":"False"
	I0729 18:19:11.891182 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:19:11.940568 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:19:12.055875 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:19:12.387787 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:19:12.442204 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:19:12.556026 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:19:12.887815 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:19:12.939800 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:19:13.054158 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:19:13.387913 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:19:13.439631 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:19:13.554666 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:19:13.687186 1063162 pod_ready.go:102] pod "metrics-server-c59844bb4-qt4qg" in "kube-system" namespace has status "Ready":"False"
	I0729 18:19:13.888677 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:19:13.938960 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:19:14.056286 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:19:14.388008 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:19:14.439735 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:19:14.555083 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:19:14.887904 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:19:14.939910 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:19:15.064703 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:19:15.387659 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:19:15.447046 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:19:15.561675 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:19:15.887021 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:19:15.939193 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:19:16.053957 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:19:16.185600 1063162 pod_ready.go:102] pod "metrics-server-c59844bb4-qt4qg" in "kube-system" namespace has status "Ready":"False"
	I0729 18:19:16.388027 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:19:16.439800 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:19:16.555210 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:19:16.886994 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:19:16.940065 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:19:17.053475 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:19:17.387849 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:19:17.438728 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:19:17.554826 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:19:17.889045 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:19:17.939519 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:19:18.054548 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:19:18.547102 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:19:18.551175 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:19:18.553503 1063162 pod_ready.go:102] pod "metrics-server-c59844bb4-qt4qg" in "kube-system" namespace has status "Ready":"False"
	I0729 18:19:18.556009 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:19:18.887339 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:19:18.940563 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:19:19.063735 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:19:19.387289 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:19:19.450671 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:19:19.555719 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:19:19.888392 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:19:19.940047 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:19:20.054589 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:19:20.387308 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:19:20.442668 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:19:20.555341 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:19:20.685162 1063162 pod_ready.go:102] pod "metrics-server-c59844bb4-qt4qg" in "kube-system" namespace has status "Ready":"False"
	I0729 18:19:20.887241 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:19:20.938815 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:19:21.059680 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:19:21.386983 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:19:21.439564 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:19:21.555108 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:19:21.887325 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:19:21.938705 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:19:22.054704 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:19:22.387861 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:19:22.439524 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:19:22.557464 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:19:22.687127 1063162 pod_ready.go:102] pod "metrics-server-c59844bb4-qt4qg" in "kube-system" namespace has status "Ready":"False"
	I0729 18:19:22.887067 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:19:22.940135 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:19:23.054045 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:19:23.387908 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:19:23.441177 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:19:23.554783 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:19:23.887666 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:19:23.939430 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:19:24.054185 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:19:24.387149 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:19:24.442100 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:19:24.554549 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:19:24.887445 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:19:24.939617 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:19:25.057306 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:19:25.187311 1063162 pod_ready.go:102] pod "metrics-server-c59844bb4-qt4qg" in "kube-system" namespace has status "Ready":"False"
	I0729 18:19:25.387293 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:19:25.438460 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:19:25.555709 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:19:25.888849 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:19:25.939278 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:19:26.054706 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:19:26.387929 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:19:26.439326 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:19:26.555025 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:19:26.887797 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:19:26.939251 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:19:27.055884 1063162 kapi.go:107] duration metric: took 55.506971084s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0729 18:19:27.387935 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:19:27.439640 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:19:27.688009 1063162 pod_ready.go:102] pod "metrics-server-c59844bb4-qt4qg" in "kube-system" namespace has status "Ready":"False"
	I0729 18:19:27.887318 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:19:27.938920 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:19:28.387968 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:19:28.439424 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:19:28.888139 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:19:28.938248 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:19:29.386946 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:19:29.439503 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:19:29.887876 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:19:29.939403 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:19:30.186416 1063162 pod_ready.go:102] pod "metrics-server-c59844bb4-qt4qg" in "kube-system" namespace has status "Ready":"False"
	I0729 18:19:30.387331 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:19:30.438756 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:19:30.888362 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:19:30.938958 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:19:31.387900 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:19:31.439792 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:19:31.887862 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:19:31.939407 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:19:32.387675 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:19:32.439038 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:19:32.685657 1063162 pod_ready.go:102] pod "metrics-server-c59844bb4-qt4qg" in "kube-system" namespace has status "Ready":"False"
	I0729 18:19:32.887749 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:19:32.939262 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:19:33.387817 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:19:33.438970 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:19:33.888007 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:19:33.939512 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:19:34.387641 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:19:34.439265 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:19:34.686864 1063162 pod_ready.go:102] pod "metrics-server-c59844bb4-qt4qg" in "kube-system" namespace has status "Ready":"False"
	I0729 18:19:34.887841 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:19:34.940783 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:19:35.388805 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:19:35.440644 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:19:35.888572 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:19:35.940416 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:19:36.394173 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:19:36.440668 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:19:36.888755 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:19:36.939727 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:19:37.186576 1063162 pod_ready.go:102] pod "metrics-server-c59844bb4-qt4qg" in "kube-system" namespace has status "Ready":"False"
	I0729 18:19:37.388483 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:19:37.439463 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:19:38.358600 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:19:38.359024 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:19:38.387546 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:19:38.439020 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:19:38.887289 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:19:38.939278 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:19:39.387770 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:19:39.440494 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:19:39.688774 1063162 pod_ready.go:102] pod "metrics-server-c59844bb4-qt4qg" in "kube-system" namespace has status "Ready":"False"
	I0729 18:19:39.887811 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:19:39.940057 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:19:40.388592 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:19:40.439676 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:19:40.887130 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:19:40.940565 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:19:41.391445 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:19:41.439617 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:19:41.887528 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:19:41.940550 1063162 kapi.go:107] duration metric: took 1m11.505776888s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0729 18:19:42.186458 1063162 pod_ready.go:102] pod "metrics-server-c59844bb4-qt4qg" in "kube-system" namespace has status "Ready":"False"
	I0729 18:19:42.387429 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:19:42.889238 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:19:43.388329 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:19:43.891131 1063162 kapi.go:107] duration metric: took 1m10.507305023s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0729 18:19:43.892454 1063162 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-685520 cluster.
	I0729 18:19:43.893796 1063162 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0729 18:19:43.895367 1063162 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0729 18:19:43.896948 1063162 out.go:177] * Enabled addons: cloud-spanner, nvidia-device-plugin, default-storageclass, storage-provisioner-rancher, storage-provisioner, ingress-dns, inspektor-gadget, metrics-server, helm-tiller, yakd, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0729 18:19:43.898042 1063162 addons.go:510] duration metric: took 1m21.566659903s for enable addons: enabled=[cloud-spanner nvidia-device-plugin default-storageclass storage-provisioner-rancher storage-provisioner ingress-dns inspektor-gadget metrics-server helm-tiller yakd volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
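The gcp-auth notes above describe how a pod opts out of credential mounting: give it the `gcp-auth-skip-secret` label at creation time so the addon's webhook skips it. A minimal sketch, assuming the same cluster context; the pod name and image are hypothetical, and the label value "true" is an assumption (the message above only names the key):

    # hypothetical pod created with the skip label already set
    kubectl --context addons-685520 run skip-demo --image=busybox \
      --labels="gcp-auth-skip-secret=true" -- sleep 3600

Pods created without that label keep receiving the mounted credentials, per the messages above.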
	I0729 18:19:44.686139 1063162 pod_ready.go:102] pod "metrics-server-c59844bb4-qt4qg" in "kube-system" namespace has status "Ready":"False"
	I0729 18:19:47.186932 1063162 pod_ready.go:102] pod "metrics-server-c59844bb4-qt4qg" in "kube-system" namespace has status "Ready":"False"
	I0729 18:19:49.685648 1063162 pod_ready.go:102] pod "metrics-server-c59844bb4-qt4qg" in "kube-system" namespace has status "Ready":"False"
	I0729 18:19:51.686038 1063162 pod_ready.go:102] pod "metrics-server-c59844bb4-qt4qg" in "kube-system" namespace has status "Ready":"False"
	I0729 18:19:54.187333 1063162 pod_ready.go:102] pod "metrics-server-c59844bb4-qt4qg" in "kube-system" namespace has status "Ready":"False"
	I0729 18:19:56.687987 1063162 pod_ready.go:102] pod "metrics-server-c59844bb4-qt4qg" in "kube-system" namespace has status "Ready":"False"
	I0729 18:19:58.185576 1063162 pod_ready.go:92] pod "metrics-server-c59844bb4-qt4qg" in "kube-system" namespace has status "Ready":"True"
	I0729 18:19:58.185608 1063162 pod_ready.go:81] duration metric: took 55.50573426s for pod "metrics-server-c59844bb4-qt4qg" in "kube-system" namespace to be "Ready" ...
	I0729 18:19:58.185619 1063162 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-4bzd5" in "kube-system" namespace to be "Ready" ...
	I0729 18:19:58.190007 1063162 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-4bzd5" in "kube-system" namespace has status "Ready":"True"
	I0729 18:19:58.190024 1063162 pod_ready.go:81] duration metric: took 4.398682ms for pod "nvidia-device-plugin-daemonset-4bzd5" in "kube-system" namespace to be "Ready" ...
	I0729 18:19:58.190039 1063162 pod_ready.go:38] duration metric: took 1m30.474290108s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 18:19:58.190070 1063162 api_server.go:52] waiting for apiserver process to appear ...
	I0729 18:19:58.190100 1063162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:19:58.190149 1063162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:19:58.243586 1063162 cri.go:89] found id: "2bdbc0aba106d0a990794004e16fc961b45b6457011649bfa631942df4131828"
	I0729 18:19:58.243616 1063162 cri.go:89] found id: ""
	I0729 18:19:58.243628 1063162 logs.go:276] 1 containers: [2bdbc0aba106d0a990794004e16fc961b45b6457011649bfa631942df4131828]
	I0729 18:19:58.243696 1063162 ssh_runner.go:195] Run: which crictl
	I0729 18:19:58.250303 1063162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:19:58.250369 1063162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:19:58.288964 1063162 cri.go:89] found id: "793fd521a6ea14fb86b4264bd92de2b14aaf7a97303a8d5b6772e91540985c36"
	I0729 18:19:58.288992 1063162 cri.go:89] found id: ""
	I0729 18:19:58.289001 1063162 logs.go:276] 1 containers: [793fd521a6ea14fb86b4264bd92de2b14aaf7a97303a8d5b6772e91540985c36]
	I0729 18:19:58.289051 1063162 ssh_runner.go:195] Run: which crictl
	I0729 18:19:58.293025 1063162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:19:58.293094 1063162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:19:58.333426 1063162 cri.go:89] found id: "0159416a2ffac2dd9631cfc5f2b67fa1f6485c8ec1207fc9cf2cce2639054ffa"
	I0729 18:19:58.333464 1063162 cri.go:89] found id: ""
	I0729 18:19:58.333474 1063162 logs.go:276] 1 containers: [0159416a2ffac2dd9631cfc5f2b67fa1f6485c8ec1207fc9cf2cce2639054ffa]
	I0729 18:19:58.333542 1063162 ssh_runner.go:195] Run: which crictl
	I0729 18:19:58.337610 1063162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:19:58.337691 1063162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:19:58.376345 1063162 cri.go:89] found id: "49bf3e5a91fe303d566b5efd6ac023629a37b3b8219cfbde451cccfcb2606a30"
	I0729 18:19:58.376385 1063162 cri.go:89] found id: ""
	I0729 18:19:58.376396 1063162 logs.go:276] 1 containers: [49bf3e5a91fe303d566b5efd6ac023629a37b3b8219cfbde451cccfcb2606a30]
	I0729 18:19:58.376462 1063162 ssh_runner.go:195] Run: which crictl
	I0729 18:19:58.380677 1063162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:19:58.380735 1063162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:19:58.424109 1063162 cri.go:89] found id: "56985357a76b9e9abb7aa60d08ffac9e1a728c47e6cbd48dfeef0b7068d90540"
	I0729 18:19:58.424134 1063162 cri.go:89] found id: ""
	I0729 18:19:58.424142 1063162 logs.go:276] 1 containers: [56985357a76b9e9abb7aa60d08ffac9e1a728c47e6cbd48dfeef0b7068d90540]
	I0729 18:19:58.424195 1063162 ssh_runner.go:195] Run: which crictl
	I0729 18:19:58.428267 1063162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:19:58.428339 1063162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:19:58.464573 1063162 cri.go:89] found id: "b87f1d7ad226dbe3077b779dd4782c91b43154f698936d97b2b7b66fe5e00732"
	I0729 18:19:58.464592 1063162 cri.go:89] found id: ""
	I0729 18:19:58.464603 1063162 logs.go:276] 1 containers: [b87f1d7ad226dbe3077b779dd4782c91b43154f698936d97b2b7b66fe5e00732]
	I0729 18:19:58.464666 1063162 ssh_runner.go:195] Run: which crictl
	I0729 18:19:58.469665 1063162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:19:58.469731 1063162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:19:58.527618 1063162 cri.go:89] found id: ""
	I0729 18:19:58.527654 1063162 logs.go:276] 0 containers: []
	W0729 18:19:58.527667 1063162 logs.go:278] No container was found matching "kindnet"
	I0729 18:19:58.527680 1063162 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:19:58.527703 1063162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 18:19:58.717570 1063162 logs.go:123] Gathering logs for kube-scheduler [49bf3e5a91fe303d566b5efd6ac023629a37b3b8219cfbde451cccfcb2606a30] ...
	I0729 18:19:58.717600 1063162 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 49bf3e5a91fe303d566b5efd6ac023629a37b3b8219cfbde451cccfcb2606a30"
	I0729 18:19:58.761822 1063162 logs.go:123] Gathering logs for kube-controller-manager [b87f1d7ad226dbe3077b779dd4782c91b43154f698936d97b2b7b66fe5e00732] ...
	I0729 18:19:58.761855 1063162 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b87f1d7ad226dbe3077b779dd4782c91b43154f698936d97b2b7b66fe5e00732"
	I0729 18:19:58.825529 1063162 logs.go:123] Gathering logs for kubelet ...
	I0729 18:19:58.825567 1063162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0729 18:19:58.878700 1063162 logs.go:138] Found kubelet problem: Jul 29 18:18:27 addons-685520 kubelet[1275]: W0729 18:18:27.577757    1275 reflector.go:547] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-685520" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-685520' and this object
	W0729 18:19:58.878882 1063162 logs.go:138] Found kubelet problem: Jul 29 18:18:27 addons-685520 kubelet[1275]: E0729 18:18:27.577812    1275 reflector.go:150] object-"local-path-storage"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-685520" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-685520' and this object
	W0729 18:19:58.879025 1063162 logs.go:138] Found kubelet problem: Jul 29 18:18:27 addons-685520 kubelet[1275]: W0729 18:18:27.577861    1275 reflector.go:547] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-685520" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-685520' and this object
	W0729 18:19:58.879181 1063162 logs.go:138] Found kubelet problem: Jul 29 18:18:27 addons-685520 kubelet[1275]: E0729 18:18:27.577873    1275 reflector.go:150] object-"local-path-storage"/"local-path-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-685520" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-685520' and this object
	W0729 18:19:58.880067 1063162 logs.go:138] Found kubelet problem: Jul 29 18:18:28 addons-685520 kubelet[1275]: W0729 18:18:28.391954    1275 reflector.go:547] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-685520" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-685520' and this object
	W0729 18:19:58.880220 1063162 logs.go:138] Found kubelet problem: Jul 29 18:18:28 addons-685520 kubelet[1275]: E0729 18:18:28.391984    1275 reflector.go:150] object-"yakd-dashboard"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-685520" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-685520' and this object
	I0729 18:19:58.908031 1063162 logs.go:123] Gathering logs for dmesg ...
	I0729 18:19:58.908063 1063162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:19:58.922757 1063162 logs.go:123] Gathering logs for kube-apiserver [2bdbc0aba106d0a990794004e16fc961b45b6457011649bfa631942df4131828] ...
	I0729 18:19:58.922784 1063162 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2bdbc0aba106d0a990794004e16fc961b45b6457011649bfa631942df4131828"
	I0729 18:19:58.981605 1063162 logs.go:123] Gathering logs for etcd [793fd521a6ea14fb86b4264bd92de2b14aaf7a97303a8d5b6772e91540985c36] ...
	I0729 18:19:58.981639 1063162 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 793fd521a6ea14fb86b4264bd92de2b14aaf7a97303a8d5b6772e91540985c36"
	I0729 18:19:59.055315 1063162 logs.go:123] Gathering logs for coredns [0159416a2ffac2dd9631cfc5f2b67fa1f6485c8ec1207fc9cf2cce2639054ffa] ...
	I0729 18:19:59.055348 1063162 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0159416a2ffac2dd9631cfc5f2b67fa1f6485c8ec1207fc9cf2cce2639054ffa"
	I0729 18:19:59.110816 1063162 logs.go:123] Gathering logs for kube-proxy [56985357a76b9e9abb7aa60d08ffac9e1a728c47e6cbd48dfeef0b7068d90540] ...
	I0729 18:19:59.110858 1063162 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 56985357a76b9e9abb7aa60d08ffac9e1a728c47e6cbd48dfeef0b7068d90540"
	I0729 18:19:59.151952 1063162 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:19:59.151982 1063162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:20:00.063422 1063162 logs.go:123] Gathering logs for container status ...
	I0729 18:20:00.063494 1063162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:20:00.123881 1063162 out.go:304] Setting ErrFile to fd 2...
	I0729 18:20:00.123920 1063162 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0729 18:20:00.124000 1063162 out.go:239] X Problems detected in kubelet:
	W0729 18:20:00.124018 1063162 out.go:239]   Jul 29 18:18:27 addons-685520 kubelet[1275]: E0729 18:18:27.577812    1275 reflector.go:150] object-"local-path-storage"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-685520" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-685520' and this object
	W0729 18:20:00.124035 1063162 out.go:239]   Jul 29 18:18:27 addons-685520 kubelet[1275]: W0729 18:18:27.577861    1275 reflector.go:547] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-685520" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-685520' and this object
	W0729 18:20:00.124047 1063162 out.go:239]   Jul 29 18:18:27 addons-685520 kubelet[1275]: E0729 18:18:27.577873    1275 reflector.go:150] object-"local-path-storage"/"local-path-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-685520" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-685520' and this object
	W0729 18:20:00.124055 1063162 out.go:239]   Jul 29 18:18:28 addons-685520 kubelet[1275]: W0729 18:18:28.391954    1275 reflector.go:547] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-685520" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-685520' and this object
	W0729 18:20:00.124064 1063162 out.go:239]   Jul 29 18:18:28 addons-685520 kubelet[1275]: E0729 18:18:28.391984    1275 reflector.go:150] object-"yakd-dashboard"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-685520" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-685520' and this object
	I0729 18:20:00.124072 1063162 out.go:304] Setting ErrFile to fd 2...
	I0729 18:20:00.124083 1063162 out.go:338] TERM=,COLORTERM=, which probably does not support color
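The log-gathering pass above resolves each container ID with `crictl ps -a --quiet --name=<name>` and then tails it with `crictl logs --tail 400`. A rough by-hand equivalent, run inside the node (e.g. via `minikube ssh`) and using etcd as the example container:

    # mirrors the gathering commands in the log above
    ID=$(sudo crictl ps -a --quiet --name=etcd)
    sudo crictl logs --tail 400 "$ID"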
	I0729 18:20:10.124592 1063162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:20:10.158696 1063162 api_server.go:72] duration metric: took 1m47.82716557s to wait for apiserver process to appear ...
	I0729 18:20:10.158731 1063162 api_server.go:88] waiting for apiserver healthz status ...
	I0729 18:20:10.158774 1063162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:20:10.158834 1063162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:20:10.233393 1063162 cri.go:89] found id: "2bdbc0aba106d0a990794004e16fc961b45b6457011649bfa631942df4131828"
	I0729 18:20:10.233422 1063162 cri.go:89] found id: ""
	I0729 18:20:10.233433 1063162 logs.go:276] 1 containers: [2bdbc0aba106d0a990794004e16fc961b45b6457011649bfa631942df4131828]
	I0729 18:20:10.233502 1063162 ssh_runner.go:195] Run: which crictl
	I0729 18:20:10.238607 1063162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:20:10.238679 1063162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:20:10.311518 1063162 cri.go:89] found id: "793fd521a6ea14fb86b4264bd92de2b14aaf7a97303a8d5b6772e91540985c36"
	I0729 18:20:10.311541 1063162 cri.go:89] found id: ""
	I0729 18:20:10.311553 1063162 logs.go:276] 1 containers: [793fd521a6ea14fb86b4264bd92de2b14aaf7a97303a8d5b6772e91540985c36]
	I0729 18:20:10.311610 1063162 ssh_runner.go:195] Run: which crictl
	I0729 18:20:10.317247 1063162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:20:10.317307 1063162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:20:10.416836 1063162 cri.go:89] found id: "0159416a2ffac2dd9631cfc5f2b67fa1f6485c8ec1207fc9cf2cce2639054ffa"
	I0729 18:20:10.416868 1063162 cri.go:89] found id: ""
	I0729 18:20:10.416878 1063162 logs.go:276] 1 containers: [0159416a2ffac2dd9631cfc5f2b67fa1f6485c8ec1207fc9cf2cce2639054ffa]
	I0729 18:20:10.416952 1063162 ssh_runner.go:195] Run: which crictl
	I0729 18:20:10.425550 1063162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:20:10.425624 1063162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:20:10.490746 1063162 cri.go:89] found id: "49bf3e5a91fe303d566b5efd6ac023629a37b3b8219cfbde451cccfcb2606a30"
	I0729 18:20:10.490768 1063162 cri.go:89] found id: ""
	I0729 18:20:10.490777 1063162 logs.go:276] 1 containers: [49bf3e5a91fe303d566b5efd6ac023629a37b3b8219cfbde451cccfcb2606a30]
	I0729 18:20:10.490840 1063162 ssh_runner.go:195] Run: which crictl
	I0729 18:20:10.497973 1063162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:20:10.498036 1063162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:20:10.551296 1063162 cri.go:89] found id: "56985357a76b9e9abb7aa60d08ffac9e1a728c47e6cbd48dfeef0b7068d90540"
	I0729 18:20:10.551318 1063162 cri.go:89] found id: ""
	I0729 18:20:10.551326 1063162 logs.go:276] 1 containers: [56985357a76b9e9abb7aa60d08ffac9e1a728c47e6cbd48dfeef0b7068d90540]
	I0729 18:20:10.551381 1063162 ssh_runner.go:195] Run: which crictl
	I0729 18:20:10.562105 1063162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:20:10.562170 1063162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:20:10.600175 1063162 cri.go:89] found id: "b87f1d7ad226dbe3077b779dd4782c91b43154f698936d97b2b7b66fe5e00732"
	I0729 18:20:10.600198 1063162 cri.go:89] found id: ""
	I0729 18:20:10.600207 1063162 logs.go:276] 1 containers: [b87f1d7ad226dbe3077b779dd4782c91b43154f698936d97b2b7b66fe5e00732]
	I0729 18:20:10.600261 1063162 ssh_runner.go:195] Run: which crictl
	I0729 18:20:10.604568 1063162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:20:10.604646 1063162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:20:10.645461 1063162 cri.go:89] found id: ""
	I0729 18:20:10.645494 1063162 logs.go:276] 0 containers: []
	W0729 18:20:10.645506 1063162 logs.go:278] No container was found matching "kindnet"
	I0729 18:20:10.645518 1063162 logs.go:123] Gathering logs for kube-scheduler [49bf3e5a91fe303d566b5efd6ac023629a37b3b8219cfbde451cccfcb2606a30] ...
	I0729 18:20:10.645532 1063162 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 49bf3e5a91fe303d566b5efd6ac023629a37b3b8219cfbde451cccfcb2606a30"
	I0729 18:20:10.687274 1063162 logs.go:123] Gathering logs for kube-proxy [56985357a76b9e9abb7aa60d08ffac9e1a728c47e6cbd48dfeef0b7068d90540] ...
	I0729 18:20:10.687304 1063162 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 56985357a76b9e9abb7aa60d08ffac9e1a728c47e6cbd48dfeef0b7068d90540"
	I0729 18:20:10.724258 1063162 logs.go:123] Gathering logs for kube-controller-manager [b87f1d7ad226dbe3077b779dd4782c91b43154f698936d97b2b7b66fe5e00732] ...
	I0729 18:20:10.724288 1063162 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b87f1d7ad226dbe3077b779dd4782c91b43154f698936d97b2b7b66fe5e00732"
	I0729 18:20:10.782014 1063162 logs.go:123] Gathering logs for container status ...
	I0729 18:20:10.782053 1063162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:20:10.834133 1063162 logs.go:123] Gathering logs for kubelet ...
	I0729 18:20:10.834168 1063162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0729 18:20:10.892131 1063162 logs.go:138] Found kubelet problem: Jul 29 18:18:27 addons-685520 kubelet[1275]: W0729 18:18:27.577757    1275 reflector.go:547] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-685520" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-685520' and this object
	W0729 18:20:10.892318 1063162 logs.go:138] Found kubelet problem: Jul 29 18:18:27 addons-685520 kubelet[1275]: E0729 18:18:27.577812    1275 reflector.go:150] object-"local-path-storage"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-685520" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-685520' and this object
	W0729 18:20:10.892478 1063162 logs.go:138] Found kubelet problem: Jul 29 18:18:27 addons-685520 kubelet[1275]: W0729 18:18:27.577861    1275 reflector.go:547] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-685520" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-685520' and this object
	W0729 18:20:10.892657 1063162 logs.go:138] Found kubelet problem: Jul 29 18:18:27 addons-685520 kubelet[1275]: E0729 18:18:27.577873    1275 reflector.go:150] object-"local-path-storage"/"local-path-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-685520" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-685520' and this object
	W0729 18:20:10.893536 1063162 logs.go:138] Found kubelet problem: Jul 29 18:18:28 addons-685520 kubelet[1275]: W0729 18:18:28.391954    1275 reflector.go:547] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-685520" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-685520' and this object
	W0729 18:20:10.893703 1063162 logs.go:138] Found kubelet problem: Jul 29 18:18:28 addons-685520 kubelet[1275]: E0729 18:18:28.391984    1275 reflector.go:150] object-"yakd-dashboard"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-685520" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-685520' and this object
	I0729 18:20:10.921132 1063162 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:20:10.921157 1063162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 18:20:11.059580 1063162 logs.go:123] Gathering logs for etcd [793fd521a6ea14fb86b4264bd92de2b14aaf7a97303a8d5b6772e91540985c36] ...
	I0729 18:20:11.059610 1063162 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 793fd521a6ea14fb86b4264bd92de2b14aaf7a97303a8d5b6772e91540985c36"
	I0729 18:20:11.175756 1063162 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:20:11.175791 1063162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:20:11.828336 1063162 logs.go:123] Gathering logs for dmesg ...
	I0729 18:20:11.828445 1063162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:20:11.844887 1063162 logs.go:123] Gathering logs for kube-apiserver [2bdbc0aba106d0a990794004e16fc961b45b6457011649bfa631942df4131828] ...
	I0729 18:20:11.844922 1063162 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2bdbc0aba106d0a990794004e16fc961b45b6457011649bfa631942df4131828"
	I0729 18:20:11.887472 1063162 logs.go:123] Gathering logs for coredns [0159416a2ffac2dd9631cfc5f2b67fa1f6485c8ec1207fc9cf2cce2639054ffa] ...
	I0729 18:20:11.887505 1063162 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0159416a2ffac2dd9631cfc5f2b67fa1f6485c8ec1207fc9cf2cce2639054ffa"
	I0729 18:20:11.928316 1063162 out.go:304] Setting ErrFile to fd 2...
	I0729 18:20:11.928344 1063162 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0729 18:20:11.928401 1063162 out.go:239] X Problems detected in kubelet:
	W0729 18:20:11.928412 1063162 out.go:239]   Jul 29 18:18:27 addons-685520 kubelet[1275]: E0729 18:18:27.577812    1275 reflector.go:150] object-"local-path-storage"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-685520" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-685520' and this object
	W0729 18:20:11.928419 1063162 out.go:239]   Jul 29 18:18:27 addons-685520 kubelet[1275]: W0729 18:18:27.577861    1275 reflector.go:547] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-685520" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-685520' and this object
	W0729 18:20:11.928434 1063162 out.go:239]   Jul 29 18:18:27 addons-685520 kubelet[1275]: E0729 18:18:27.577873    1275 reflector.go:150] object-"local-path-storage"/"local-path-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-685520" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-685520' and this object
	W0729 18:20:11.928447 1063162 out.go:239]   Jul 29 18:18:28 addons-685520 kubelet[1275]: W0729 18:18:28.391954    1275 reflector.go:547] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-685520" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-685520' and this object
	W0729 18:20:11.928460 1063162 out.go:239]   Jul 29 18:18:28 addons-685520 kubelet[1275]: E0729 18:18:28.391984    1275 reflector.go:150] object-"yakd-dashboard"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-685520" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-685520' and this object
	I0729 18:20:11.928471 1063162 out.go:304] Setting ErrFile to fd 2...
	I0729 18:20:11.928480 1063162 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 18:20:21.929595 1063162 api_server.go:253] Checking apiserver healthz at https://192.168.39.137:8443/healthz ...
	I0729 18:20:21.935957 1063162 api_server.go:279] https://192.168.39.137:8443/healthz returned 200:
	ok
	I0729 18:20:21.938368 1063162 api_server.go:141] control plane version: v1.30.3
	I0729 18:20:21.938388 1063162 api_server.go:131] duration metric: took 11.779651063s to wait for apiserver health ...
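The healthz probe above hits the API server's /healthz endpoint and expects the literal body "ok". The same check can be reproduced from the client side with standard kubectl, assuming kubeconfig access to this cluster:

    # prints "ok" when the API server is healthy
    kubectl --context addons-685520 get --raw=/healthz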
	I0729 18:20:21.938397 1063162 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 18:20:21.938427 1063162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:20:21.938482 1063162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:20:21.999694 1063162 cri.go:89] found id: "2bdbc0aba106d0a990794004e16fc961b45b6457011649bfa631942df4131828"
	I0729 18:20:21.999721 1063162 cri.go:89] found id: ""
	I0729 18:20:21.999732 1063162 logs.go:276] 1 containers: [2bdbc0aba106d0a990794004e16fc961b45b6457011649bfa631942df4131828]
	I0729 18:20:21.999803 1063162 ssh_runner.go:195] Run: which crictl
	I0729 18:20:22.004054 1063162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:20:22.004104 1063162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:20:22.042177 1063162 cri.go:89] found id: "793fd521a6ea14fb86b4264bd92de2b14aaf7a97303a8d5b6772e91540985c36"
	I0729 18:20:22.042206 1063162 cri.go:89] found id: ""
	I0729 18:20:22.042217 1063162 logs.go:276] 1 containers: [793fd521a6ea14fb86b4264bd92de2b14aaf7a97303a8d5b6772e91540985c36]
	I0729 18:20:22.042275 1063162 ssh_runner.go:195] Run: which crictl
	I0729 18:20:22.046502 1063162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:20:22.046578 1063162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:20:22.084443 1063162 cri.go:89] found id: "0159416a2ffac2dd9631cfc5f2b67fa1f6485c8ec1207fc9cf2cce2639054ffa"
	I0729 18:20:22.084471 1063162 cri.go:89] found id: ""
	I0729 18:20:22.084480 1063162 logs.go:276] 1 containers: [0159416a2ffac2dd9631cfc5f2b67fa1f6485c8ec1207fc9cf2cce2639054ffa]
	I0729 18:20:22.084543 1063162 ssh_runner.go:195] Run: which crictl
	I0729 18:20:22.088882 1063162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:20:22.088962 1063162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:20:22.126414 1063162 cri.go:89] found id: "49bf3e5a91fe303d566b5efd6ac023629a37b3b8219cfbde451cccfcb2606a30"
	I0729 18:20:22.126436 1063162 cri.go:89] found id: ""
	I0729 18:20:22.126447 1063162 logs.go:276] 1 containers: [49bf3e5a91fe303d566b5efd6ac023629a37b3b8219cfbde451cccfcb2606a30]
	I0729 18:20:22.126512 1063162 ssh_runner.go:195] Run: which crictl
	I0729 18:20:22.131166 1063162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:20:22.131245 1063162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:20:22.181997 1063162 cri.go:89] found id: "56985357a76b9e9abb7aa60d08ffac9e1a728c47e6cbd48dfeef0b7068d90540"
	I0729 18:20:22.182019 1063162 cri.go:89] found id: ""
	I0729 18:20:22.182027 1063162 logs.go:276] 1 containers: [56985357a76b9e9abb7aa60d08ffac9e1a728c47e6cbd48dfeef0b7068d90540]
	I0729 18:20:22.182080 1063162 ssh_runner.go:195] Run: which crictl
	I0729 18:20:22.186268 1063162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:20:22.186322 1063162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:20:22.229450 1063162 cri.go:89] found id: "b87f1d7ad226dbe3077b779dd4782c91b43154f698936d97b2b7b66fe5e00732"
	I0729 18:20:22.229471 1063162 cri.go:89] found id: ""
	I0729 18:20:22.229480 1063162 logs.go:276] 1 containers: [b87f1d7ad226dbe3077b779dd4782c91b43154f698936d97b2b7b66fe5e00732]
	I0729 18:20:22.229532 1063162 ssh_runner.go:195] Run: which crictl
	I0729 18:20:22.233827 1063162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:20:22.233891 1063162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:20:22.274011 1063162 cri.go:89] found id: ""
	I0729 18:20:22.274040 1063162 logs.go:276] 0 containers: []
	W0729 18:20:22.274048 1063162 logs.go:278] No container was found matching "kindnet"
	I0729 18:20:22.274058 1063162 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:20:22.274072 1063162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 18:20:22.394236 1063162 logs.go:123] Gathering logs for etcd [793fd521a6ea14fb86b4264bd92de2b14aaf7a97303a8d5b6772e91540985c36] ...
	I0729 18:20:22.394269 1063162 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 793fd521a6ea14fb86b4264bd92de2b14aaf7a97303a8d5b6772e91540985c36"
	I0729 18:20:22.452095 1063162 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:20:22.452136 1063162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:20:23.327908 1063162 logs.go:123] Gathering logs for container status ...
	I0729 18:20:23.327956 1063162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:20:23.384187 1063162 logs.go:123] Gathering logs for kube-controller-manager [b87f1d7ad226dbe3077b779dd4782c91b43154f698936d97b2b7b66fe5e00732] ...
	I0729 18:20:23.384220 1063162 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b87f1d7ad226dbe3077b779dd4782c91b43154f698936d97b2b7b66fe5e00732"
	I0729 18:20:23.445115 1063162 logs.go:123] Gathering logs for kubelet ...
	I0729 18:20:23.445157 1063162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0729 18:20:23.496324 1063162 logs.go:138] Found kubelet problem: Jul 29 18:18:27 addons-685520 kubelet[1275]: W0729 18:18:27.577757    1275 reflector.go:547] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-685520" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-685520' and this object
	W0729 18:20:23.496498 1063162 logs.go:138] Found kubelet problem: Jul 29 18:18:27 addons-685520 kubelet[1275]: E0729 18:18:27.577812    1275 reflector.go:150] object-"local-path-storage"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-685520" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-685520' and this object
	W0729 18:20:23.496637 1063162 logs.go:138] Found kubelet problem: Jul 29 18:18:27 addons-685520 kubelet[1275]: W0729 18:18:27.577861    1275 reflector.go:547] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-685520" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-685520' and this object
	W0729 18:20:23.496787 1063162 logs.go:138] Found kubelet problem: Jul 29 18:18:27 addons-685520 kubelet[1275]: E0729 18:18:27.577873    1275 reflector.go:150] object-"local-path-storage"/"local-path-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-685520" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-685520' and this object
	W0729 18:20:23.497642 1063162 logs.go:138] Found kubelet problem: Jul 29 18:18:28 addons-685520 kubelet[1275]: W0729 18:18:28.391954    1275 reflector.go:547] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-685520" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-685520' and this object
	W0729 18:20:23.497792 1063162 logs.go:138] Found kubelet problem: Jul 29 18:18:28 addons-685520 kubelet[1275]: E0729 18:18:28.391984    1275 reflector.go:150] object-"yakd-dashboard"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-685520" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-685520' and this object
	I0729 18:20:23.526388 1063162 logs.go:123] Gathering logs for dmesg ...
	I0729 18:20:23.526417 1063162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:20:23.542197 1063162 logs.go:123] Gathering logs for kube-apiserver [2bdbc0aba106d0a990794004e16fc961b45b6457011649bfa631942df4131828] ...
	I0729 18:20:23.542233 1063162 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2bdbc0aba106d0a990794004e16fc961b45b6457011649bfa631942df4131828"
	I0729 18:20:23.588900 1063162 logs.go:123] Gathering logs for coredns [0159416a2ffac2dd9631cfc5f2b67fa1f6485c8ec1207fc9cf2cce2639054ffa] ...
	I0729 18:20:23.588932 1063162 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0159416a2ffac2dd9631cfc5f2b67fa1f6485c8ec1207fc9cf2cce2639054ffa"
	I0729 18:20:23.627768 1063162 logs.go:123] Gathering logs for kube-scheduler [49bf3e5a91fe303d566b5efd6ac023629a37b3b8219cfbde451cccfcb2606a30] ...
	I0729 18:20:23.627802 1063162 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 49bf3e5a91fe303d566b5efd6ac023629a37b3b8219cfbde451cccfcb2606a30"
	I0729 18:20:23.669642 1063162 logs.go:123] Gathering logs for kube-proxy [56985357a76b9e9abb7aa60d08ffac9e1a728c47e6cbd48dfeef0b7068d90540] ...
	I0729 18:20:23.669678 1063162 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 56985357a76b9e9abb7aa60d08ffac9e1a728c47e6cbd48dfeef0b7068d90540"
	I0729 18:20:23.706702 1063162 out.go:304] Setting ErrFile to fd 2...
	I0729 18:20:23.706731 1063162 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0729 18:20:23.706797 1063162 out.go:239] X Problems detected in kubelet:
	W0729 18:20:23.706811 1063162 out.go:239]   Jul 29 18:18:27 addons-685520 kubelet[1275]: E0729 18:18:27.577812    1275 reflector.go:150] object-"local-path-storage"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-685520" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-685520' and this object
	W0729 18:20:23.706825 1063162 out.go:239]   Jul 29 18:18:27 addons-685520 kubelet[1275]: W0729 18:18:27.577861    1275 reflector.go:547] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-685520" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-685520' and this object
	W0729 18:20:23.706834 1063162 out.go:239]   Jul 29 18:18:27 addons-685520 kubelet[1275]: E0729 18:18:27.577873    1275 reflector.go:150] object-"local-path-storage"/"local-path-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-685520" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-685520' and this object
	W0729 18:20:23.706842 1063162 out.go:239]   Jul 29 18:18:28 addons-685520 kubelet[1275]: W0729 18:18:28.391954    1275 reflector.go:547] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-685520" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-685520' and this object
	W0729 18:20:23.706883 1063162 out.go:239]   Jul 29 18:18:28 addons-685520 kubelet[1275]: E0729 18:18:28.391984    1275 reflector.go:150] object-"yakd-dashboard"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-685520" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-685520' and this object
	I0729 18:20:23.706891 1063162 out.go:304] Setting ErrFile to fd 2...
	I0729 18:20:23.706902 1063162 out.go:338] TERM=,COLORTERM=, which probably does not support color
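	The "Problems detected in kubelet" entries above show the node being denied list/watch on ConfigMaps in the local-path-storage and yakd-dashboard namespaces: the node authorizer only lets a kubelet read objects referenced by pods already bound to that node, so these reflector errors are typically transient while the addon pods are still being scheduled. A rough way to observe the same denial from outside the node, sketched with kubectl impersonation (assumes admin credentials for the cluster; namespace and node name are taken from the log lines above):

	  kubectl --context addons-685520 auth can-i list configmaps \
	    --namespace local-path-storage \
	    --as system:node:addons-685520 --as-group system:nodes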
	I0729 18:20:33.718719 1063162 system_pods.go:59] 18 kube-system pods found
	I0729 18:20:33.718753 1063162 system_pods.go:61] "coredns-7db6d8ff4d-zrfkz" [8f1412dd-5eec-49c8-88ea-9725e2ecc017] Running
	I0729 18:20:33.718758 1063162 system_pods.go:61] "csi-hostpath-attacher-0" [2c53773b-3b70-4b61-a9fa-242a1091f327] Running
	I0729 18:20:33.718762 1063162 system_pods.go:61] "csi-hostpath-resizer-0" [612cf202-3d1a-4859-ad72-0b5bfc16aec6] Running
	I0729 18:20:33.718765 1063162 system_pods.go:61] "csi-hostpathplugin-sfz6c" [19694c0f-aaad-4ada-be53-34f11202d797] Running
	I0729 18:20:33.718769 1063162 system_pods.go:61] "etcd-addons-685520" [2ad20938-ce5a-499d-a013-72d8b49e61fb] Running
	I0729 18:20:33.718772 1063162 system_pods.go:61] "kube-apiserver-addons-685520" [3559744b-f0ab-4459-a201-ce4e37003789] Running
	I0729 18:20:33.718775 1063162 system_pods.go:61] "kube-controller-manager-addons-685520" [66f49f54-d749-452d-8f01-675f6f16e53c] Running
	I0729 18:20:33.718778 1063162 system_pods.go:61] "kube-ingress-dns-minikube" [a22a2df5-68df-492e-8478-b1fa2ed6d45a] Running
	I0729 18:20:33.718781 1063162 system_pods.go:61] "kube-proxy-bnslr" [dea08c83-eebf-47be-ba32-65ae4fd51a9b] Running
	I0729 18:20:33.718784 1063162 system_pods.go:61] "kube-scheduler-addons-685520" [d88158ca-7d50-455d-aa7b-9fc2ae7883d0] Running
	I0729 18:20:33.718789 1063162 system_pods.go:61] "metrics-server-c59844bb4-qt4qg" [46b5fee1-ed94-4adc-a131-a0d90438dbaf] Running
	I0729 18:20:33.718794 1063162 system_pods.go:61] "nvidia-device-plugin-daemonset-4bzd5" [0edbc902-4717-462e-8c98-1e0af3da0c72] Running
	I0729 18:20:33.718798 1063162 system_pods.go:61] "registry-698f998955-grn4f" [ae9be054-2ae9-4bb2-91af-3a601d969805] Running
	I0729 18:20:33.718803 1063162 system_pods.go:61] "registry-proxy-sxvm2" [07822b9d-56b6-4aab-bce3-512310b7497f] Running
	I0729 18:20:33.718807 1063162 system_pods.go:61] "snapshot-controller-745499f584-4x8xg" [5218abd3-b463-4a1f-9f77-df15193cea8f] Running
	I0729 18:20:33.718811 1063162 system_pods.go:61] "snapshot-controller-745499f584-8wwkm" [da494473-4096-4488-ace0-8361335052a0] Running
	I0729 18:20:33.718821 1063162 system_pods.go:61] "storage-provisioner" [6b5d2240-cf56-4fdd-b28f-4c1ca6f5c6ea] Running
	I0729 18:20:33.718829 1063162 system_pods.go:61] "tiller-deploy-6677d64bcd-nl6s4" [018ede57-0c16-4231-aab9-8a15f104da71] Running
	I0729 18:20:33.718838 1063162 system_pods.go:74] duration metric: took 11.780431776s to wait for pod list to return data ...
	I0729 18:20:33.718860 1063162 default_sa.go:34] waiting for default service account to be created ...
	I0729 18:20:33.720733 1063162 default_sa.go:45] found service account: "default"
	I0729 18:20:33.720750 1063162 default_sa.go:55] duration metric: took 1.882401ms for default service account to be created ...
	I0729 18:20:33.720757 1063162 system_pods.go:116] waiting for k8s-apps to be running ...
	I0729 18:20:33.728981 1063162 system_pods.go:86] 18 kube-system pods found
	I0729 18:20:33.729003 1063162 system_pods.go:89] "coredns-7db6d8ff4d-zrfkz" [8f1412dd-5eec-49c8-88ea-9725e2ecc017] Running
	I0729 18:20:33.729008 1063162 system_pods.go:89] "csi-hostpath-attacher-0" [2c53773b-3b70-4b61-a9fa-242a1091f327] Running
	I0729 18:20:33.729014 1063162 system_pods.go:89] "csi-hostpath-resizer-0" [612cf202-3d1a-4859-ad72-0b5bfc16aec6] Running
	I0729 18:20:33.729019 1063162 system_pods.go:89] "csi-hostpathplugin-sfz6c" [19694c0f-aaad-4ada-be53-34f11202d797] Running
	I0729 18:20:33.729023 1063162 system_pods.go:89] "etcd-addons-685520" [2ad20938-ce5a-499d-a013-72d8b49e61fb] Running
	I0729 18:20:33.729027 1063162 system_pods.go:89] "kube-apiserver-addons-685520" [3559744b-f0ab-4459-a201-ce4e37003789] Running
	I0729 18:20:33.729031 1063162 system_pods.go:89] "kube-controller-manager-addons-685520" [66f49f54-d749-452d-8f01-675f6f16e53c] Running
	I0729 18:20:33.729035 1063162 system_pods.go:89] "kube-ingress-dns-minikube" [a22a2df5-68df-492e-8478-b1fa2ed6d45a] Running
	I0729 18:20:33.729039 1063162 system_pods.go:89] "kube-proxy-bnslr" [dea08c83-eebf-47be-ba32-65ae4fd51a9b] Running
	I0729 18:20:33.729044 1063162 system_pods.go:89] "kube-scheduler-addons-685520" [d88158ca-7d50-455d-aa7b-9fc2ae7883d0] Running
	I0729 18:20:33.729047 1063162 system_pods.go:89] "metrics-server-c59844bb4-qt4qg" [46b5fee1-ed94-4adc-a131-a0d90438dbaf] Running
	I0729 18:20:33.729052 1063162 system_pods.go:89] "nvidia-device-plugin-daemonset-4bzd5" [0edbc902-4717-462e-8c98-1e0af3da0c72] Running
	I0729 18:20:33.729055 1063162 system_pods.go:89] "registry-698f998955-grn4f" [ae9be054-2ae9-4bb2-91af-3a601d969805] Running
	I0729 18:20:33.729059 1063162 system_pods.go:89] "registry-proxy-sxvm2" [07822b9d-56b6-4aab-bce3-512310b7497f] Running
	I0729 18:20:33.729063 1063162 system_pods.go:89] "snapshot-controller-745499f584-4x8xg" [5218abd3-b463-4a1f-9f77-df15193cea8f] Running
	I0729 18:20:33.729068 1063162 system_pods.go:89] "snapshot-controller-745499f584-8wwkm" [da494473-4096-4488-ace0-8361335052a0] Running
	I0729 18:20:33.729071 1063162 system_pods.go:89] "storage-provisioner" [6b5d2240-cf56-4fdd-b28f-4c1ca6f5c6ea] Running
	I0729 18:20:33.729077 1063162 system_pods.go:89] "tiller-deploy-6677d64bcd-nl6s4" [018ede57-0c16-4231-aab9-8a15f104da71] Running
	I0729 18:20:33.729082 1063162 system_pods.go:126] duration metric: took 8.320881ms to wait for k8s-apps to be running ...
	I0729 18:20:33.729090 1063162 system_svc.go:44] waiting for kubelet service to be running ....
	I0729 18:20:33.729136 1063162 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 18:20:33.745304 1063162 system_svc.go:56] duration metric: took 16.208296ms WaitForService to wait for kubelet
	I0729 18:20:33.745332 1063162 kubeadm.go:582] duration metric: took 2m11.413807992s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 18:20:33.745360 1063162 node_conditions.go:102] verifying NodePressure condition ...
	I0729 18:20:33.748290 1063162 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 18:20:33.748329 1063162 node_conditions.go:123] node cpu capacity is 2
	I0729 18:20:33.748357 1063162 node_conditions.go:105] duration metric: took 2.9898ms to run NodePressure ...
	I0729 18:20:33.748373 1063162 start.go:241] waiting for startup goroutines ...
	I0729 18:20:33.748397 1063162 start.go:246] waiting for cluster config update ...
	I0729 18:20:33.748424 1063162 start.go:255] writing updated cluster config ...
	I0729 18:20:33.748791 1063162 ssh_runner.go:195] Run: rm -f paused
	I0729 18:20:33.798446 1063162 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0729 18:20:33.801078 1063162 out.go:177] * Done! kubectl is now configured to use "addons-685520" cluster and "default" namespace by default
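	The wait loop logged above probes the apiserver health endpoint, lists the kube-system pods, checks for the default service account and verifies the kubelet systemd unit before declaring the cluster ready. Roughly equivalent hand-run checks, as a minimal sketch (assuming a kubeconfig with the addons-685520 context on the host and a shell inside the VM, e.g. via minikube ssh -p addons-685520):

	  # apiserver health probe (same endpoint as api_server.go above; expects HTTP 200 and "ok")
	  curl -k https://192.168.39.137:8443/healthz

	  # the kube-system pods and default service account waited on above
	  kubectl --context addons-685520 -n kube-system get pods
	  kubectl --context addons-685520 -n default get serviceaccount default

	  # kubelet service check, run inside the VM
	  sudo systemctl is-active kubelet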
	
	
	==> CRI-O <==
	Jul 29 18:24:15 addons-685520 crio[681]: time="2024-07-29 18:24:15.466838261Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722277455466808799,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:589581,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0592122e-3aba-46ad-bf9e-d8e6f4420e87 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 18:24:15 addons-685520 crio[681]: time="2024-07-29 18:24:15.467500542Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bf3836a7-fa6e-452b-8885-ff7dd9282a4d name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:24:15 addons-685520 crio[681]: time="2024-07-29 18:24:15.467551231Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bf3836a7-fa6e-452b-8885-ff7dd9282a4d name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:24:15 addons-685520 crio[681]: time="2024-07-29 18:24:15.467927106Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5a8934f2fd17eb7aeed1eebd0393380a0b3a46e187fd4a5d8178f954b53a59e8,PodSandboxId:e5b59743c51ca97221b9e1237ec95c9b571a53c76a932348ad860480d564cbce,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1722277446510671601,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-6778b5fc9f-tp7mw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f668e519-eafa-4af0-91c7-bc71c008c159,},Annotations:map[string]string{io.kubernetes.container.hash: 171ce219,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4970f8a9ad04f54400c017cb45f6fdf4136f1ef1f0cc1419a0bf5845ae97e53,PodSandboxId:73e5373798b82861828370f0aae8dcb06688d30284269259979ce39f37a55941,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ae23480369fa4139f6dec668d7a5a941b56ea174e9cf75e09771988fe621c95,State:CONTAINER_RUNNING,CreatedAt:1722277306277574828,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2cde6bfb-dbfb-436d-b105-79bd0f65c822,},Annotations:map[string]string{io.kubernet
es.container.hash: 9940d6d8,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55f968b97ba4fd5509ae0ac2af93bad127a82d537cef315f0f72527ad6afc60e,PodSandboxId:ff677a118669926b7e64fabd470753497627d2abc882e76f4a0972a88da1804a,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722277235494938408,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d29216ef-0904-4580-b
03b-d6f4c55f78b7,},Annotations:map[string]string{io.kubernetes.container.hash: 95b36df2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8a4c8627baffaff64062f07a71f2b908d397d3c7b74ddfc6fa7037b306112f2,PodSandboxId:58e6a7d9552550996310f09299427f4d5c890743b4ce8eba3d52c71584347b38,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1722277133754035144,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-qt4qg,io.kubernetes.pod.namespace: kube-system,i
o.kubernetes.pod.uid: 46b5fee1-ed94-4adc-a131-a0d90438dbaf,},Annotations:map[string]string{io.kubernetes.container.hash: dd7246b5,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8128f8da90970f1f83aa7828eaad9cf5165a283fc93854b8c4d0658039c5477,PodSandboxId:b9c555c1f6c67d5677fb34bc9ad478705570a75c1444ca7435a1c92cf40f78e5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722277107744781570,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,
io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b5d2240-cf56-4fdd-b28f-4c1ca6f5c6ea,},Annotations:map[string]string{io.kubernetes.container.hash: 3819d528,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56985357a76b9e9abb7aa60d08ffac9e1a728c47e6cbd48dfeef0b7068d90540,PodSandboxId:5600374c0d1453df209a935b7ed098e7b08ddf5a7baa11eecaa0db3e8558e086,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722277103144210889,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-p
roxy-bnslr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dea08c83-eebf-47be-ba32-65ae4fd51a9b,},Annotations:map[string]string{io.kubernetes.container.hash: 9b304237,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0159416a2ffac2dd9631cfc5f2b67fa1f6485c8ec1207fc9cf2cce2639054ffa,PodSandboxId:126407fabe6d2bfc6cd7a2510d065002abad2282d494f921ba67a3588e510287,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722277102823901528,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-zrfkz,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: 8f1412dd-5eec-49c8-88ea-9725e2ecc017,},Annotations:map[string]string{io.kubernetes.container.hash: 9411a72c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:793fd521a6ea14fb86b4264bd92de2b14aaf7a97303a8d5b6772e91540985c36,PodSandboxId:97f9cc8513240b3d26c6bf2c62bc27f9693592dd308269d937cbf76bf5b54a8e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a7
5a899,State:CONTAINER_RUNNING,CreatedAt:1722277083103421677,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-685520,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67a9280b99b456ca990083164e350b9e,},Annotations:map[string]string{io.kubernetes.container.hash: a60ee63c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49bf3e5a91fe303d566b5efd6ac023629a37b3b8219cfbde451cccfcb2606a30,PodSandboxId:93f73f83097ee38087613c5f687d7b4055a6c520a469348c1365288aa26b1465,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,Created
At:1722277083174249369,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-685520,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0288ad41c0ee8d9ea0ff1636b97bd48,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b87f1d7ad226dbe3077b779dd4782c91b43154f698936d97b2b7b66fe5e00732,PodSandboxId:086ec296b7437ab007d75393a0b6aac607ddb5bf34e2992ad95a782683406f5e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:17222
77083092451627,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-685520,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1de74c0a20b3468eeb23ba96e48abfd5,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2bdbc0aba106d0a990794004e16fc961b45b6457011649bfa631942df4131828,PodSandboxId:83777ac8e6d35d378209adefd0b7ee4677d83e5f9f3c83510afce6969bc93f91,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722
277083040704692,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-685520,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8b8a94201da8594e6707eda1d6d8252,},Annotations:map[string]string{io.kubernetes.container.hash: 923d5c6f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=bf3836a7-fa6e-452b-8885-ff7dd9282a4d name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:24:15 addons-685520 crio[681]: time="2024-07-29 18:24:15.510567335Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0d2b84bd-71b9-437c-b255-8567689b9af3 name=/runtime.v1.RuntimeService/Version
	Jul 29 18:24:15 addons-685520 crio[681]: time="2024-07-29 18:24:15.510635415Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0d2b84bd-71b9-437c-b255-8567689b9af3 name=/runtime.v1.RuntimeService/Version
	Jul 29 18:24:15 addons-685520 crio[681]: time="2024-07-29 18:24:15.512006774Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c08de190-dddc-45d4-8343-023e7c68b7ba name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 18:24:15 addons-685520 crio[681]: time="2024-07-29 18:24:15.513478597Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722277455513450612,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:589581,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c08de190-dddc-45d4-8343-023e7c68b7ba name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 18:24:15 addons-685520 crio[681]: time="2024-07-29 18:24:15.514214374Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1f6c2a99-bade-42fc-96b1-2e00e3382d9c name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:24:15 addons-685520 crio[681]: time="2024-07-29 18:24:15.514266579Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1f6c2a99-bade-42fc-96b1-2e00e3382d9c name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:24:15 addons-685520 crio[681]: time="2024-07-29 18:24:15.514549601Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5a8934f2fd17eb7aeed1eebd0393380a0b3a46e187fd4a5d8178f954b53a59e8,PodSandboxId:e5b59743c51ca97221b9e1237ec95c9b571a53c76a932348ad860480d564cbce,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1722277446510671601,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-6778b5fc9f-tp7mw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f668e519-eafa-4af0-91c7-bc71c008c159,},Annotations:map[string]string{io.kubernetes.container.hash: 171ce219,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4970f8a9ad04f54400c017cb45f6fdf4136f1ef1f0cc1419a0bf5845ae97e53,PodSandboxId:73e5373798b82861828370f0aae8dcb06688d30284269259979ce39f37a55941,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ae23480369fa4139f6dec668d7a5a941b56ea174e9cf75e09771988fe621c95,State:CONTAINER_RUNNING,CreatedAt:1722277306277574828,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2cde6bfb-dbfb-436d-b105-79bd0f65c822,},Annotations:map[string]string{io.kubernet
es.container.hash: 9940d6d8,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55f968b97ba4fd5509ae0ac2af93bad127a82d537cef315f0f72527ad6afc60e,PodSandboxId:ff677a118669926b7e64fabd470753497627d2abc882e76f4a0972a88da1804a,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722277235494938408,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d29216ef-0904-4580-b
03b-d6f4c55f78b7,},Annotations:map[string]string{io.kubernetes.container.hash: 95b36df2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8a4c8627baffaff64062f07a71f2b908d397d3c7b74ddfc6fa7037b306112f2,PodSandboxId:58e6a7d9552550996310f09299427f4d5c890743b4ce8eba3d52c71584347b38,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1722277133754035144,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-qt4qg,io.kubernetes.pod.namespace: kube-system,i
o.kubernetes.pod.uid: 46b5fee1-ed94-4adc-a131-a0d90438dbaf,},Annotations:map[string]string{io.kubernetes.container.hash: dd7246b5,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8128f8da90970f1f83aa7828eaad9cf5165a283fc93854b8c4d0658039c5477,PodSandboxId:b9c555c1f6c67d5677fb34bc9ad478705570a75c1444ca7435a1c92cf40f78e5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722277107744781570,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,
io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b5d2240-cf56-4fdd-b28f-4c1ca6f5c6ea,},Annotations:map[string]string{io.kubernetes.container.hash: 3819d528,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56985357a76b9e9abb7aa60d08ffac9e1a728c47e6cbd48dfeef0b7068d90540,PodSandboxId:5600374c0d1453df209a935b7ed098e7b08ddf5a7baa11eecaa0db3e8558e086,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722277103144210889,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-p
roxy-bnslr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dea08c83-eebf-47be-ba32-65ae4fd51a9b,},Annotations:map[string]string{io.kubernetes.container.hash: 9b304237,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0159416a2ffac2dd9631cfc5f2b67fa1f6485c8ec1207fc9cf2cce2639054ffa,PodSandboxId:126407fabe6d2bfc6cd7a2510d065002abad2282d494f921ba67a3588e510287,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722277102823901528,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-zrfkz,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: 8f1412dd-5eec-49c8-88ea-9725e2ecc017,},Annotations:map[string]string{io.kubernetes.container.hash: 9411a72c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:793fd521a6ea14fb86b4264bd92de2b14aaf7a97303a8d5b6772e91540985c36,PodSandboxId:97f9cc8513240b3d26c6bf2c62bc27f9693592dd308269d937cbf76bf5b54a8e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a7
5a899,State:CONTAINER_RUNNING,CreatedAt:1722277083103421677,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-685520,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67a9280b99b456ca990083164e350b9e,},Annotations:map[string]string{io.kubernetes.container.hash: a60ee63c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49bf3e5a91fe303d566b5efd6ac023629a37b3b8219cfbde451cccfcb2606a30,PodSandboxId:93f73f83097ee38087613c5f687d7b4055a6c520a469348c1365288aa26b1465,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,Created
At:1722277083174249369,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-685520,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0288ad41c0ee8d9ea0ff1636b97bd48,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b87f1d7ad226dbe3077b779dd4782c91b43154f698936d97b2b7b66fe5e00732,PodSandboxId:086ec296b7437ab007d75393a0b6aac607ddb5bf34e2992ad95a782683406f5e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:17222
77083092451627,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-685520,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1de74c0a20b3468eeb23ba96e48abfd5,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2bdbc0aba106d0a990794004e16fc961b45b6457011649bfa631942df4131828,PodSandboxId:83777ac8e6d35d378209adefd0b7ee4677d83e5f9f3c83510afce6969bc93f91,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722
277083040704692,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-685520,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8b8a94201da8594e6707eda1d6d8252,},Annotations:map[string]string{io.kubernetes.container.hash: 923d5c6f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1f6c2a99-bade-42fc-96b1-2e00e3382d9c name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:24:15 addons-685520 crio[681]: time="2024-07-29 18:24:15.554550533Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=32f2a2f3-590b-44f5-baf9-a195d1f2a496 name=/runtime.v1.RuntimeService/Version
	Jul 29 18:24:15 addons-685520 crio[681]: time="2024-07-29 18:24:15.554616460Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=32f2a2f3-590b-44f5-baf9-a195d1f2a496 name=/runtime.v1.RuntimeService/Version
	Jul 29 18:24:15 addons-685520 crio[681]: time="2024-07-29 18:24:15.555886888Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=98803f12-26b2-4f26-b6b8-b45ef947f869 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 18:24:15 addons-685520 crio[681]: time="2024-07-29 18:24:15.557289181Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722277455557262308,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:589581,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=98803f12-26b2-4f26-b6b8-b45ef947f869 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 18:24:15 addons-685520 crio[681]: time="2024-07-29 18:24:15.557886877Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8b2826fd-db8d-475a-a2e6-0a6cab965583 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:24:15 addons-685520 crio[681]: time="2024-07-29 18:24:15.557940751Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8b2826fd-db8d-475a-a2e6-0a6cab965583 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:24:15 addons-685520 crio[681]: time="2024-07-29 18:24:15.558154121Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5a8934f2fd17eb7aeed1eebd0393380a0b3a46e187fd4a5d8178f954b53a59e8,PodSandboxId:e5b59743c51ca97221b9e1237ec95c9b571a53c76a932348ad860480d564cbce,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1722277446510671601,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-6778b5fc9f-tp7mw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f668e519-eafa-4af0-91c7-bc71c008c159,},Annotations:map[string]string{io.kubernetes.container.hash: 171ce219,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4970f8a9ad04f54400c017cb45f6fdf4136f1ef1f0cc1419a0bf5845ae97e53,PodSandboxId:73e5373798b82861828370f0aae8dcb06688d30284269259979ce39f37a55941,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ae23480369fa4139f6dec668d7a5a941b56ea174e9cf75e09771988fe621c95,State:CONTAINER_RUNNING,CreatedAt:1722277306277574828,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2cde6bfb-dbfb-436d-b105-79bd0f65c822,},Annotations:map[string]string{io.kubernet
es.container.hash: 9940d6d8,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55f968b97ba4fd5509ae0ac2af93bad127a82d537cef315f0f72527ad6afc60e,PodSandboxId:ff677a118669926b7e64fabd470753497627d2abc882e76f4a0972a88da1804a,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722277235494938408,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d29216ef-0904-4580-b
03b-d6f4c55f78b7,},Annotations:map[string]string{io.kubernetes.container.hash: 95b36df2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8a4c8627baffaff64062f07a71f2b908d397d3c7b74ddfc6fa7037b306112f2,PodSandboxId:58e6a7d9552550996310f09299427f4d5c890743b4ce8eba3d52c71584347b38,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1722277133754035144,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-qt4qg,io.kubernetes.pod.namespace: kube-system,i
o.kubernetes.pod.uid: 46b5fee1-ed94-4adc-a131-a0d90438dbaf,},Annotations:map[string]string{io.kubernetes.container.hash: dd7246b5,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8128f8da90970f1f83aa7828eaad9cf5165a283fc93854b8c4d0658039c5477,PodSandboxId:b9c555c1f6c67d5677fb34bc9ad478705570a75c1444ca7435a1c92cf40f78e5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722277107744781570,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,
io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b5d2240-cf56-4fdd-b28f-4c1ca6f5c6ea,},Annotations:map[string]string{io.kubernetes.container.hash: 3819d528,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56985357a76b9e9abb7aa60d08ffac9e1a728c47e6cbd48dfeef0b7068d90540,PodSandboxId:5600374c0d1453df209a935b7ed098e7b08ddf5a7baa11eecaa0db3e8558e086,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722277103144210889,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-p
roxy-bnslr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dea08c83-eebf-47be-ba32-65ae4fd51a9b,},Annotations:map[string]string{io.kubernetes.container.hash: 9b304237,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0159416a2ffac2dd9631cfc5f2b67fa1f6485c8ec1207fc9cf2cce2639054ffa,PodSandboxId:126407fabe6d2bfc6cd7a2510d065002abad2282d494f921ba67a3588e510287,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722277102823901528,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-zrfkz,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: 8f1412dd-5eec-49c8-88ea-9725e2ecc017,},Annotations:map[string]string{io.kubernetes.container.hash: 9411a72c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:793fd521a6ea14fb86b4264bd92de2b14aaf7a97303a8d5b6772e91540985c36,PodSandboxId:97f9cc8513240b3d26c6bf2c62bc27f9693592dd308269d937cbf76bf5b54a8e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a7
5a899,State:CONTAINER_RUNNING,CreatedAt:1722277083103421677,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-685520,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67a9280b99b456ca990083164e350b9e,},Annotations:map[string]string{io.kubernetes.container.hash: a60ee63c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49bf3e5a91fe303d566b5efd6ac023629a37b3b8219cfbde451cccfcb2606a30,PodSandboxId:93f73f83097ee38087613c5f687d7b4055a6c520a469348c1365288aa26b1465,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,Created
At:1722277083174249369,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-685520,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0288ad41c0ee8d9ea0ff1636b97bd48,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b87f1d7ad226dbe3077b779dd4782c91b43154f698936d97b2b7b66fe5e00732,PodSandboxId:086ec296b7437ab007d75393a0b6aac607ddb5bf34e2992ad95a782683406f5e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:17222
77083092451627,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-685520,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1de74c0a20b3468eeb23ba96e48abfd5,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2bdbc0aba106d0a990794004e16fc961b45b6457011649bfa631942df4131828,PodSandboxId:83777ac8e6d35d378209adefd0b7ee4677d83e5f9f3c83510afce6969bc93f91,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722
277083040704692,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-685520,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8b8a94201da8594e6707eda1d6d8252,},Annotations:map[string]string{io.kubernetes.container.hash: 923d5c6f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8b2826fd-db8d-475a-a2e6-0a6cab965583 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:24:15 addons-685520 crio[681]: time="2024-07-29 18:24:15.591212045Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=63f018ce-fa2a-41bf-a764-b76c9147fa29 name=/runtime.v1.RuntimeService/Version
	Jul 29 18:24:15 addons-685520 crio[681]: time="2024-07-29 18:24:15.591276825Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=63f018ce-fa2a-41bf-a764-b76c9147fa29 name=/runtime.v1.RuntimeService/Version
	Jul 29 18:24:15 addons-685520 crio[681]: time="2024-07-29 18:24:15.592505012Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=37341aa1-887f-44a1-93de-16b7676ad6c8 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 18:24:15 addons-685520 crio[681]: time="2024-07-29 18:24:15.593742381Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722277455593717142,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:589581,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=37341aa1-887f-44a1-93de-16b7676ad6c8 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 18:24:15 addons-685520 crio[681]: time="2024-07-29 18:24:15.594383182Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=93f153f8-3f9e-4489-9f4f-e3ab3f1da99e name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:24:15 addons-685520 crio[681]: time="2024-07-29 18:24:15.594448651Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=93f153f8-3f9e-4489-9f4f-e3ab3f1da99e name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:24:15 addons-685520 crio[681]: time="2024-07-29 18:24:15.594678688Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5a8934f2fd17eb7aeed1eebd0393380a0b3a46e187fd4a5d8178f954b53a59e8,PodSandboxId:e5b59743c51ca97221b9e1237ec95c9b571a53c76a932348ad860480d564cbce,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1722277446510671601,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-6778b5fc9f-tp7mw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f668e519-eafa-4af0-91c7-bc71c008c159,},Annotations:map[string]string{io.kubernetes.container.hash: 171ce219,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4970f8a9ad04f54400c017cb45f6fdf4136f1ef1f0cc1419a0bf5845ae97e53,PodSandboxId:73e5373798b82861828370f0aae8dcb06688d30284269259979ce39f37a55941,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ae23480369fa4139f6dec668d7a5a941b56ea174e9cf75e09771988fe621c95,State:CONTAINER_RUNNING,CreatedAt:1722277306277574828,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2cde6bfb-dbfb-436d-b105-79bd0f65c822,},Annotations:map[string]string{io.kubernet
es.container.hash: 9940d6d8,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55f968b97ba4fd5509ae0ac2af93bad127a82d537cef315f0f72527ad6afc60e,PodSandboxId:ff677a118669926b7e64fabd470753497627d2abc882e76f4a0972a88da1804a,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722277235494938408,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d29216ef-0904-4580-b
03b-d6f4c55f78b7,},Annotations:map[string]string{io.kubernetes.container.hash: 95b36df2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8a4c8627baffaff64062f07a71f2b908d397d3c7b74ddfc6fa7037b306112f2,PodSandboxId:58e6a7d9552550996310f09299427f4d5c890743b4ce8eba3d52c71584347b38,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1722277133754035144,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-qt4qg,io.kubernetes.pod.namespace: kube-system,i
o.kubernetes.pod.uid: 46b5fee1-ed94-4adc-a131-a0d90438dbaf,},Annotations:map[string]string{io.kubernetes.container.hash: dd7246b5,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8128f8da90970f1f83aa7828eaad9cf5165a283fc93854b8c4d0658039c5477,PodSandboxId:b9c555c1f6c67d5677fb34bc9ad478705570a75c1444ca7435a1c92cf40f78e5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722277107744781570,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,
io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b5d2240-cf56-4fdd-b28f-4c1ca6f5c6ea,},Annotations:map[string]string{io.kubernetes.container.hash: 3819d528,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56985357a76b9e9abb7aa60d08ffac9e1a728c47e6cbd48dfeef0b7068d90540,PodSandboxId:5600374c0d1453df209a935b7ed098e7b08ddf5a7baa11eecaa0db3e8558e086,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722277103144210889,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-p
roxy-bnslr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dea08c83-eebf-47be-ba32-65ae4fd51a9b,},Annotations:map[string]string{io.kubernetes.container.hash: 9b304237,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0159416a2ffac2dd9631cfc5f2b67fa1f6485c8ec1207fc9cf2cce2639054ffa,PodSandboxId:126407fabe6d2bfc6cd7a2510d065002abad2282d494f921ba67a3588e510287,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722277102823901528,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-zrfkz,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: 8f1412dd-5eec-49c8-88ea-9725e2ecc017,},Annotations:map[string]string{io.kubernetes.container.hash: 9411a72c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:793fd521a6ea14fb86b4264bd92de2b14aaf7a97303a8d5b6772e91540985c36,PodSandboxId:97f9cc8513240b3d26c6bf2c62bc27f9693592dd308269d937cbf76bf5b54a8e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a7
5a899,State:CONTAINER_RUNNING,CreatedAt:1722277083103421677,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-685520,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67a9280b99b456ca990083164e350b9e,},Annotations:map[string]string{io.kubernetes.container.hash: a60ee63c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49bf3e5a91fe303d566b5efd6ac023629a37b3b8219cfbde451cccfcb2606a30,PodSandboxId:93f73f83097ee38087613c5f687d7b4055a6c520a469348c1365288aa26b1465,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,Created
At:1722277083174249369,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-685520,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0288ad41c0ee8d9ea0ff1636b97bd48,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b87f1d7ad226dbe3077b779dd4782c91b43154f698936d97b2b7b66fe5e00732,PodSandboxId:086ec296b7437ab007d75393a0b6aac607ddb5bf34e2992ad95a782683406f5e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:17222
77083092451627,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-685520,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1de74c0a20b3468eeb23ba96e48abfd5,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2bdbc0aba106d0a990794004e16fc961b45b6457011649bfa631942df4131828,PodSandboxId:83777ac8e6d35d378209adefd0b7ee4677d83e5f9f3c83510afce6969bc93f91,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722
277083040704692,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-685520,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8b8a94201da8594e6707eda1d6d8252,},Annotations:map[string]string{io.kubernetes.container.hash: 923d5c6f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=93f153f8-3f9e-4489-9f4f-e3ab3f1da99e name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                   CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	5a8934f2fd17e       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                   9 seconds ago       Running             hello-world-app           0                   e5b59743c51ca       hello-world-app-6778b5fc9f-tp7mw
	e4970f8a9ad04       docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9                         2 minutes ago       Running             nginx                     0                   73e5373798b82       nginx
	55f968b97ba4f       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                     3 minutes ago       Running             busybox                   0                   ff677a1186699       busybox
	a8a4c8627baff       registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872   5 minutes ago       Running             metrics-server            0                   58e6a7d955255       metrics-server-c59844bb4-qt4qg
	e8128f8da9097       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                        5 minutes ago       Running             storage-provisioner       0                   b9c555c1f6c67       storage-provisioner
	56985357a76b9       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                                        5 minutes ago       Running             kube-proxy                0                   5600374c0d145       kube-proxy-bnslr
	0159416a2ffac       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                                        5 minutes ago       Running             coredns                   0                   126407fabe6d2       coredns-7db6d8ff4d-zrfkz
	49bf3e5a91fe3       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                                        6 minutes ago       Running             kube-scheduler            0                   93f73f83097ee       kube-scheduler-addons-685520
	793fd521a6ea1       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                                        6 minutes ago       Running             etcd                      0                   97f9cc8513240       etcd-addons-685520
	b87f1d7ad226d       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                                        6 minutes ago       Running             kube-controller-manager   0                   086ec296b7437       kube-controller-manager-addons-685520
	2bdbc0aba106d       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                                        6 minutes ago       Running             kube-apiserver            0                   83777ac8e6d35       kube-apiserver-addons-685520
	
	
	==> coredns [0159416a2ffac2dd9631cfc5f2b67fa1f6485c8ec1207fc9cf2cce2639054ffa] <==
	[INFO] 10.244.0.7:39281 - 7376 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000079172s
	[INFO] 10.244.0.7:57008 - 62999 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000125036s
	[INFO] 10.244.0.7:57008 - 31509 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00015527s
	[INFO] 10.244.0.7:36784 - 53223 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000088208s
	[INFO] 10.244.0.7:36784 - 5609 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000094027s
	[INFO] 10.244.0.7:56731 - 3156 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000347373s
	[INFO] 10.244.0.7:56731 - 64853 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000258199s
	[INFO] 10.244.0.7:51827 - 65463 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000083192s
	[INFO] 10.244.0.7:51827 - 24746 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000108735s
	[INFO] 10.244.0.7:56101 - 602 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000091257s
	[INFO] 10.244.0.7:56101 - 9300 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000116538s
	[INFO] 10.244.0.7:48396 - 22087 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000053048s
	[INFO] 10.244.0.7:48396 - 47174 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000142858s
	[INFO] 10.244.0.7:52967 - 28914 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000055137s
	[INFO] 10.244.0.7:52967 - 59632 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000124939s
	[INFO] 10.244.0.22:40405 - 57447 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000393952s
	[INFO] 10.244.0.22:37578 - 19919 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00014584s
	[INFO] 10.244.0.22:35083 - 37305 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000128154s
	[INFO] 10.244.0.22:60865 - 40006 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000201672s
	[INFO] 10.244.0.22:42794 - 5524 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000111693s
	[INFO] 10.244.0.22:48192 - 9971 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00013231s
	[INFO] 10.244.0.22:43797 - 56541 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 458 0.000789147s
	[INFO] 10.244.0.22:53124 - 34367 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000764487s
	[INFO] 10.244.0.27:51107 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000509229s
	[INFO] 10.244.0.27:45153 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000113603s
	
	
	==> describe nodes <==
	Name:               addons-685520
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-685520
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ff26f50543a99741e8a988a34136ffea2b41dbf0
	                    minikube.k8s.io/name=addons-685520
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_29T18_18_09_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-685520
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 18:18:05 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-685520
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 18:24:06 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 18:22:14 +0000   Mon, 29 Jul 2024 18:18:03 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 18:22:14 +0000   Mon, 29 Jul 2024 18:18:03 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 18:22:14 +0000   Mon, 29 Jul 2024 18:18:03 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 18:22:14 +0000   Mon, 29 Jul 2024 18:18:09 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.137
	  Hostname:    addons-685520
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912788Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912788Ki
	  pods:               110
	System Info:
	  Machine ID:                 7c4826ffa24b4f319f34facf10037875
	  System UUID:                7c4826ff-a24b-4f31-9f34-facf10037875
	  Boot ID:                    c1f46fab-e4b8-441b-80ae-779aec887efb
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m41s
	  default                     hello-world-app-6778b5fc9f-tp7mw         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m32s
	  kube-system                 coredns-7db6d8ff4d-zrfkz                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     5m54s
	  kube-system                 etcd-addons-685520                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         6m7s
	  kube-system                 kube-apiserver-addons-685520             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m7s
	  kube-system                 kube-controller-manager-addons-685520    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m7s
	  kube-system                 kube-proxy-bnslr                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m54s
	  kube-system                 kube-scheduler-addons-685520             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m7s
	  kube-system                 metrics-server-c59844bb4-qt4qg           100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         5m48s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m49s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             370Mi (9%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 5m51s  kube-proxy       
	  Normal  Starting                 6m7s   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m7s   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m7s   kubelet          Node addons-685520 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m7s   kubelet          Node addons-685520 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m7s   kubelet          Node addons-685520 status is now: NodeHasSufficientPID
	  Normal  NodeReady                6m6s   kubelet          Node addons-685520 status is now: NodeReady
	  Normal  RegisteredNode           5m54s  node-controller  Node addons-685520 event: Registered Node addons-685520 in Controller
	
	
	==> dmesg <==
	[  +0.306093] systemd-fstab-generator[1656]: Ignoring "noauto" option for root device
	[  +4.840312] kauditd_printk_skb: 109 callbacks suppressed
	[  +5.019022] kauditd_printk_skb: 142 callbacks suppressed
	[  +7.567498] kauditd_printk_skb: 73 callbacks suppressed
	[ +17.303906] kauditd_printk_skb: 11 callbacks suppressed
	[Jul29 18:19] kauditd_printk_skb: 27 callbacks suppressed
	[  +7.712660] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.446440] kauditd_printk_skb: 20 callbacks suppressed
	[  +5.032827] kauditd_printk_skb: 61 callbacks suppressed
	[  +6.078661] kauditd_printk_skb: 31 callbacks suppressed
	[ +12.899398] kauditd_printk_skb: 3 callbacks suppressed
	[ +14.989518] kauditd_printk_skb: 52 callbacks suppressed
	[Jul29 18:20] kauditd_printk_skb: 24 callbacks suppressed
	[ +12.101362] kauditd_printk_skb: 7 callbacks suppressed
	[  +5.873329] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.907354] kauditd_printk_skb: 66 callbacks suppressed
	[Jul29 18:21] kauditd_printk_skb: 36 callbacks suppressed
	[  +5.369384] kauditd_printk_skb: 30 callbacks suppressed
	[  +5.593467] kauditd_printk_skb: 36 callbacks suppressed
	[  +9.080625] kauditd_printk_skb: 17 callbacks suppressed
	[  +5.293047] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.421398] kauditd_printk_skb: 15 callbacks suppressed
	[  +5.028511] kauditd_printk_skb: 66 callbacks suppressed
	[Jul29 18:24] kauditd_printk_skb: 27 callbacks suppressed
	[  +5.318772] kauditd_printk_skb: 19 callbacks suppressed
	
	
	==> etcd [793fd521a6ea14fb86b4264bd92de2b14aaf7a97303a8d5b6772e91540985c36] <==
	{"level":"warn","ts":"2024-07-29T18:19:18.532396Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-29T18:19:18.133912Z","time spent":"398.459745ms","remote":"127.0.0.1:53408","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":677,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/apiserver-mnvtp4wy23smf56sqghgjopwaq\" mod_revision:973 > success:<request_put:<key:\"/registry/leases/kube-system/apiserver-mnvtp4wy23smf56sqghgjopwaq\" value_size:604 >> failure:<request_range:<key:\"/registry/leases/kube-system/apiserver-mnvtp4wy23smf56sqghgjopwaq\" > >"}
	{"level":"warn","ts":"2024-07-29T18:19:18.532459Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"155.432197ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:3 size:11167"}
	{"level":"info","ts":"2024-07-29T18:19:18.532479Z","caller":"traceutil/trace.go:171","msg":"trace[2025520249] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:3; response_revision:1032; }","duration":"155.47173ms","start":"2024-07-29T18:19:18.377002Z","end":"2024-07-29T18:19:18.532473Z","steps":["trace[2025520249] 'agreement among raft nodes before linearized reading'  (duration: 155.403444ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-29T18:19:18.532665Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"104.419687ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" ","response":"range_response_count:3 size:14465"}
	{"level":"info","ts":"2024-07-29T18:19:18.532681Z","caller":"traceutil/trace.go:171","msg":"trace[1053654952] range","detail":"{range_begin:/registry/pods/ingress-nginx/; range_end:/registry/pods/ingress-nginx0; response_count:3; response_revision:1032; }","duration":"104.460347ms","start":"2024-07-29T18:19:18.428216Z","end":"2024-07-29T18:19:18.532676Z","steps":["trace[1053654952] 'agreement among raft nodes before linearized reading'  (duration: 104.402746ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-29T18:19:38.343748Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"466.735214ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:3 size:11453"}
	{"level":"info","ts":"2024-07-29T18:19:38.343829Z","caller":"traceutil/trace.go:171","msg":"trace[59053337] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:3; response_revision:1133; }","duration":"466.876332ms","start":"2024-07-29T18:19:37.876934Z","end":"2024-07-29T18:19:38.34381Z","steps":["trace[59053337] 'range keys from in-memory index tree'  (duration: 466.629382ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-29T18:19:38.343862Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-29T18:19:37.876921Z","time spent":"466.929983ms","remote":"127.0.0.1:53338","response type":"/etcdserverpb.KV/Range","request count":0,"request size":52,"response count":3,"response size":11476,"request content":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" "}
	{"level":"warn","ts":"2024-07-29T18:19:38.344061Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"416.565605ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" ","response":"range_response_count:3 size:14363"}
	{"level":"info","ts":"2024-07-29T18:19:38.344082Z","caller":"traceutil/trace.go:171","msg":"trace[1714875697] range","detail":"{range_begin:/registry/pods/ingress-nginx/; range_end:/registry/pods/ingress-nginx0; response_count:3; response_revision:1133; }","duration":"416.604745ms","start":"2024-07-29T18:19:37.927469Z","end":"2024-07-29T18:19:38.344074Z","steps":["trace[1714875697] 'range keys from in-memory index tree'  (duration: 416.475899ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-29T18:19:38.344097Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-29T18:19:37.927457Z","time spent":"416.636919ms","remote":"127.0.0.1:53338","response type":"/etcdserverpb.KV/Range","request count":0,"request size":62,"response count":3,"response size":14386,"request content":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" "}
	{"level":"warn","ts":"2024-07-29T18:19:38.34421Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"297.360619ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/volumeattachments/\" range_end:\"/registry/volumeattachments0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-29T18:19:38.344227Z","caller":"traceutil/trace.go:171","msg":"trace[1584107679] range","detail":"{range_begin:/registry/volumeattachments/; range_end:/registry/volumeattachments0; response_count:0; response_revision:1133; }","duration":"297.402774ms","start":"2024-07-29T18:19:38.046819Z","end":"2024-07-29T18:19:38.344221Z","steps":["trace[1584107679] 'count revisions from in-memory index tree'  (duration: 297.32246ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-29T18:19:38.344468Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"172.630555ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/metrics-server-c59844bb4-qt4qg\" ","response":"range_response_count:1 size:4458"}
	{"level":"info","ts":"2024-07-29T18:19:38.344511Z","caller":"traceutil/trace.go:171","msg":"trace[497087657] range","detail":"{range_begin:/registry/pods/kube-system/metrics-server-c59844bb4-qt4qg; range_end:; response_count:1; response_revision:1133; }","duration":"172.69272ms","start":"2024-07-29T18:19:38.171812Z","end":"2024-07-29T18:19:38.344505Z","steps":["trace[497087657] 'range keys from in-memory index tree'  (duration: 172.559159ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-29T18:19:38.344635Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"161.781215ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" ","response":"range_response_count:1 size:552"}
	{"level":"info","ts":"2024-07-29T18:19:38.344652Z","caller":"traceutil/trace.go:171","msg":"trace[409822624] range","detail":"{range_begin:/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io; range_end:; response_count:1; response_revision:1133; }","duration":"161.816262ms","start":"2024-07-29T18:19:38.182829Z","end":"2024-07-29T18:19:38.344645Z","steps":["trace[409822624] 'range keys from in-memory index tree'  (duration: 161.714776ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-29T18:20:04.288606Z","caller":"traceutil/trace.go:171","msg":"trace[423974404] transaction","detail":"{read_only:false; response_revision:1234; number_of_response:1; }","duration":"118.893405ms","start":"2024-07-29T18:20:04.169688Z","end":"2024-07-29T18:20:04.288582Z","steps":["trace[423974404] 'process raft request'  (duration: 118.797098ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-29T18:21:10.513738Z","caller":"traceutil/trace.go:171","msg":"trace[2100858930] linearizableReadLoop","detail":"{readStateIndex:1603; appliedIndex:1602; }","duration":"354.987839ms","start":"2024-07-29T18:21:10.158726Z","end":"2024-07-29T18:21:10.513713Z","steps":["trace[2100858930] 'read index received'  (duration: 354.830313ms)","trace[2100858930] 'applied index is now lower than readState.Index'  (duration: 157.066µs)"],"step_count":2}
	{"level":"info","ts":"2024-07-29T18:21:10.513827Z","caller":"traceutil/trace.go:171","msg":"trace[1666936031] transaction","detail":"{read_only:false; response_revision:1546; number_of_response:1; }","duration":"432.23568ms","start":"2024-07-29T18:21:10.081584Z","end":"2024-07-29T18:21:10.513819Z","steps":["trace[1666936031] 'process raft request'  (duration: 432.018871ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-29T18:21:10.513948Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-29T18:21:10.08157Z","time spent":"432.276269ms","remote":"127.0.0.1:53310","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1539 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2024-07-29T18:21:10.514155Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"355.436333ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" ","response":"range_response_count:3 size:8910"}
	{"level":"info","ts":"2024-07-29T18:21:10.514195Z","caller":"traceutil/trace.go:171","msg":"trace[42663486] range","detail":"{range_begin:/registry/pods/default/; range_end:/registry/pods/default0; response_count:3; response_revision:1546; }","duration":"355.503247ms","start":"2024-07-29T18:21:10.158683Z","end":"2024-07-29T18:21:10.514187Z","steps":["trace[42663486] 'agreement among raft nodes before linearized reading'  (duration: 355.335226ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-29T18:21:10.514216Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-29T18:21:10.15867Z","time spent":"355.542216ms","remote":"127.0.0.1:53338","response type":"/etcdserverpb.KV/Range","request count":0,"request size":50,"response count":3,"response size":8933,"request content":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" "}
	{"level":"info","ts":"2024-07-29T18:21:14.210035Z","caller":"traceutil/trace.go:171","msg":"trace[75383448] transaction","detail":"{read_only:false; number_of_response:1; response_revision:1593; }","duration":"134.447971ms","start":"2024-07-29T18:21:14.075572Z","end":"2024-07-29T18:21:14.21002Z","steps":["trace[75383448] 'process raft request'  (duration: 134.324863ms)"],"step_count":1}
	
	
	==> kernel <==
	 18:24:15 up 6 min,  0 users,  load average: 0.12, 0.57, 0.35
	Linux addons-685520 5.10.207 #1 SMP Tue Jul 23 04:25:44 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [2bdbc0aba106d0a990794004e16fc961b45b6457011649bfa631942df4131828] <==
	E0729 18:19:57.757028       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.104.101.170:443/apis/metrics.k8s.io/v1beta1: Get "https://10.104.101.170:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.104.101.170:443: connect: connection refused
	I0729 18:19:57.822448       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E0729 18:20:43.260846       1 conn.go:339] Error on socket receive: read tcp 192.168.39.137:8443->192.168.39.1:57106: use of closed network connection
	E0729 18:20:43.475955       1 conn.go:339] Error on socket receive: read tcp 192.168.39.137:8443->192.168.39.1:57132: use of closed network connection
	I0729 18:21:10.613935       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.107.148.30"}
	E0729 18:21:15.558203       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0729 18:21:17.136578       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0729 18:21:43.635216       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0729 18:21:43.649147       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0729 18:21:43.649188       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0729 18:21:43.713247       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0729 18:21:43.713358       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0729 18:21:43.717232       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0729 18:21:43.717825       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0729 18:21:43.745560       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0729 18:21:43.745648       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0729 18:21:43.766704       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0729 18:21:43.766751       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0729 18:21:43.822024       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	I0729 18:21:43.873986       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.97.104.252"}
	W0729 18:21:44.718625       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0729 18:21:44.767289       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0729 18:21:44.790446       1 cacher.go:168] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	W0729 18:21:44.885518       1 cacher.go:168] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0729 18:24:05.370994       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.97.86.176"}
	
	
	==> kube-controller-manager [b87f1d7ad226dbe3077b779dd4782c91b43154f698936d97b2b7b66fe5e00732] <==
	W0729 18:22:49.827929       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0729 18:22:49.827979       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0729 18:23:02.297790       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0729 18:23:02.297841       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0729 18:23:11.837951       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0729 18:23:11.838035       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0729 18:23:26.563600       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0729 18:23:26.563640       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0729 18:23:33.864045       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0729 18:23:33.864116       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0729 18:23:49.496997       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0729 18:23:49.497286       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0729 18:24:01.176122       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0729 18:24:01.176279       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0729 18:24:05.212255       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="39.315276ms"
	I0729 18:24:05.225772       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="13.408144ms"
	I0729 18:24:05.245622       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="19.801199ms"
	I0729 18:24:05.245710       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="45.541µs"
	I0729 18:24:07.046169       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="24.555454ms"
	I0729 18:24:07.046628       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="44.181µs"
	I0729 18:24:07.637221       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-create"
	I0729 18:24:07.641897       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-patch"
	I0729 18:24:07.642803       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-6d9bd977d4" duration="5.575µs"
	W0729 18:24:07.980078       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0729 18:24:07.980184       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	
	
	==> kube-proxy [56985357a76b9e9abb7aa60d08ffac9e1a728c47e6cbd48dfeef0b7068d90540] <==
	I0729 18:18:24.042740       1 server_linux.go:69] "Using iptables proxy"
	I0729 18:18:24.057503       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.137"]
	I0729 18:18:24.139959       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0729 18:18:24.140009       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0729 18:18:24.140029       1 server_linux.go:165] "Using iptables Proxier"
	I0729 18:18:24.147563       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0729 18:18:24.147798       1 server.go:872] "Version info" version="v1.30.3"
	I0729 18:18:24.147811       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 18:18:24.151343       1 config.go:192] "Starting service config controller"
	I0729 18:18:24.151352       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0729 18:18:24.154417       1 config.go:319] "Starting node config controller"
	I0729 18:18:24.154473       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0729 18:18:24.155646       1 config.go:101] "Starting endpoint slice config controller"
	I0729 18:18:24.155671       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0729 18:18:24.155678       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0729 18:18:24.251566       1 shared_informer.go:320] Caches are synced for service config
	I0729 18:18:24.254751       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [49bf3e5a91fe303d566b5efd6ac023629a37b3b8219cfbde451cccfcb2606a30] <==
	W0729 18:18:05.776890       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0729 18:18:05.777694       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0729 18:18:05.776923       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0729 18:18:05.777791       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0729 18:18:05.776929       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0729 18:18:05.777889       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0729 18:18:05.776943       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0729 18:18:05.777937       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0729 18:18:06.585850       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0729 18:18:06.585901       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0729 18:18:06.587841       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0729 18:18:06.587885       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0729 18:18:06.810793       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0729 18:18:06.810917       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0729 18:18:06.836044       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0729 18:18:06.836097       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0729 18:18:06.870997       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0729 18:18:06.871047       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0729 18:18:06.907395       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0729 18:18:06.907442       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0729 18:18:06.979760       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0729 18:18:06.979814       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0729 18:18:07.026632       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0729 18:18:07.027353       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0729 18:18:07.368670       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 29 18:24:07 addons-685520 kubelet[1275]: I0729 18:24:07.033041    1275 scope.go:117] "RemoveContainer" containerID="0e80b526c600d907282e6ed534033af8d122b5c84d08a012c6c2fdf30a9d05da"
	Jul 29 18:24:07 addons-685520 kubelet[1275]: E0729 18:24:07.038082    1275 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0e80b526c600d907282e6ed534033af8d122b5c84d08a012c6c2fdf30a9d05da\": container with ID starting with 0e80b526c600d907282e6ed534033af8d122b5c84d08a012c6c2fdf30a9d05da not found: ID does not exist" containerID="0e80b526c600d907282e6ed534033af8d122b5c84d08a012c6c2fdf30a9d05da"
	Jul 29 18:24:07 addons-685520 kubelet[1275]: I0729 18:24:07.038119    1275 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0e80b526c600d907282e6ed534033af8d122b5c84d08a012c6c2fdf30a9d05da"} err="failed to get container status \"0e80b526c600d907282e6ed534033af8d122b5c84d08a012c6c2fdf30a9d05da\": rpc error: code = NotFound desc = could not find container \"0e80b526c600d907282e6ed534033af8d122b5c84d08a012c6c2fdf30a9d05da\": container with ID starting with 0e80b526c600d907282e6ed534033af8d122b5c84d08a012c6c2fdf30a9d05da not found: ID does not exist"
	Jul 29 18:24:08 addons-685520 kubelet[1275]: I0729 18:24:08.346241    1275 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="70f514b8-87ca-4af2-9047-4e5806ee59e2" path="/var/lib/kubelet/pods/70f514b8-87ca-4af2-9047-4e5806ee59e2/volumes"
	Jul 29 18:24:08 addons-685520 kubelet[1275]: I0729 18:24:08.347044    1275 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a22a2df5-68df-492e-8478-b1fa2ed6d45a" path="/var/lib/kubelet/pods/a22a2df5-68df-492e-8478-b1fa2ed6d45a/volumes"
	Jul 29 18:24:08 addons-685520 kubelet[1275]: I0729 18:24:08.347532    1275 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d8df85f5-c504-4eb4-97e2-8fd28207508a" path="/var/lib/kubelet/pods/d8df85f5-c504-4eb4-97e2-8fd28207508a/volumes"
	Jul 29 18:24:08 addons-685520 kubelet[1275]: E0729 18:24:08.359372    1275 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 18:24:08 addons-685520 kubelet[1275]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 18:24:08 addons-685520 kubelet[1275]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 18:24:08 addons-685520 kubelet[1275]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 18:24:08 addons-685520 kubelet[1275]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 18:24:10 addons-685520 kubelet[1275]: I0729 18:24:10.855467    1275 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8pk77\" (UniqueName: \"kubernetes.io/projected/fad08fc5-b102-45b7-8f82-4cd1aaf999bb-kube-api-access-8pk77\") pod \"fad08fc5-b102-45b7-8f82-4cd1aaf999bb\" (UID: \"fad08fc5-b102-45b7-8f82-4cd1aaf999bb\") "
	Jul 29 18:24:10 addons-685520 kubelet[1275]: I0729 18:24:10.855508    1275 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fad08fc5-b102-45b7-8f82-4cd1aaf999bb-webhook-cert\") pod \"fad08fc5-b102-45b7-8f82-4cd1aaf999bb\" (UID: \"fad08fc5-b102-45b7-8f82-4cd1aaf999bb\") "
	Jul 29 18:24:10 addons-685520 kubelet[1275]: I0729 18:24:10.857940    1275 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fad08fc5-b102-45b7-8f82-4cd1aaf999bb-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "fad08fc5-b102-45b7-8f82-4cd1aaf999bb" (UID: "fad08fc5-b102-45b7-8f82-4cd1aaf999bb"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jul 29 18:24:10 addons-685520 kubelet[1275]: I0729 18:24:10.858781    1275 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fad08fc5-b102-45b7-8f82-4cd1aaf999bb-kube-api-access-8pk77" (OuterVolumeSpecName: "kube-api-access-8pk77") pod "fad08fc5-b102-45b7-8f82-4cd1aaf999bb" (UID: "fad08fc5-b102-45b7-8f82-4cd1aaf999bb"). InnerVolumeSpecName "kube-api-access-8pk77". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jul 29 18:24:10 addons-685520 kubelet[1275]: I0729 18:24:10.956439    1275 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-8pk77\" (UniqueName: \"kubernetes.io/projected/fad08fc5-b102-45b7-8f82-4cd1aaf999bb-kube-api-access-8pk77\") on node \"addons-685520\" DevicePath \"\""
	Jul 29 18:24:10 addons-685520 kubelet[1275]: I0729 18:24:10.956512    1275 reconciler_common.go:289] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fad08fc5-b102-45b7-8f82-4cd1aaf999bb-webhook-cert\") on node \"addons-685520\" DevicePath \"\""
	Jul 29 18:24:11 addons-685520 kubelet[1275]: I0729 18:24:11.028772    1275 scope.go:117] "RemoveContainer" containerID="15f56c52c48ac8551543a22231035e790558a246beaed433283bea5f08700b6e"
	Jul 29 18:24:11 addons-685520 kubelet[1275]: I0729 18:24:11.044203    1275 scope.go:117] "RemoveContainer" containerID="15f56c52c48ac8551543a22231035e790558a246beaed433283bea5f08700b6e"
	Jul 29 18:24:11 addons-685520 kubelet[1275]: E0729 18:24:11.044649    1275 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"15f56c52c48ac8551543a22231035e790558a246beaed433283bea5f08700b6e\": container with ID starting with 15f56c52c48ac8551543a22231035e790558a246beaed433283bea5f08700b6e not found: ID does not exist" containerID="15f56c52c48ac8551543a22231035e790558a246beaed433283bea5f08700b6e"
	Jul 29 18:24:11 addons-685520 kubelet[1275]: I0729 18:24:11.044674    1275 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"15f56c52c48ac8551543a22231035e790558a246beaed433283bea5f08700b6e"} err="failed to get container status \"15f56c52c48ac8551543a22231035e790558a246beaed433283bea5f08700b6e\": rpc error: code = NotFound desc = could not find container \"15f56c52c48ac8551543a22231035e790558a246beaed433283bea5f08700b6e\": container with ID starting with 15f56c52c48ac8551543a22231035e790558a246beaed433283bea5f08700b6e not found: ID does not exist"
	Jul 29 18:24:11 addons-685520 kubelet[1275]: I0729 18:24:11.722920    1275 scope.go:117] "RemoveContainer" containerID="21b0fa7df727282e3ec85149e1cae6bee4d43497d9cb289863a35bf0a37df5b3"
	Jul 29 18:24:11 addons-685520 kubelet[1275]: I0729 18:24:11.743592    1275 scope.go:117] "RemoveContainer" containerID="e525f0c011f56e33f137d81ef557697b5f948b96b56aed44c33c6575d844ac70"
	Jul 29 18:24:12 addons-685520 kubelet[1275]: I0729 18:24:12.346871    1275 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fad08fc5-b102-45b7-8f82-4cd1aaf999bb" path="/var/lib/kubelet/pods/fad08fc5-b102-45b7-8f82-4cd1aaf999bb/volumes"
	Jul 29 18:24:15 addons-685520 kubelet[1275]: I0729 18:24:15.342170    1275 kubelet_pods.go:988] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	
	
	==> storage-provisioner [e8128f8da90970f1f83aa7828eaad9cf5165a283fc93854b8c4d0658039c5477] <==
	I0729 18:18:28.552454       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0729 18:18:28.576116       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0729 18:18:28.576241       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0729 18:18:28.593261       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0729 18:18:28.593732       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"017dbfb7-331e-4c81-9d3c-fe968cce6ad0", APIVersion:"v1", ResourceVersion:"598", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-685520_75fd24fa-48da-4857-b54d-b7c09c3a14d8 became leader
	I0729 18:18:28.593766       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-685520_75fd24fa-48da-4857-b54d-b7c09c3a14d8!
	I0729 18:18:28.696379       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-685520_75fd24fa-48da-4857-b54d-b7c09c3a14d8!
	

                                                
                                                
-- /stdout --
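The run of "forbidden" warnings in the kube-scheduler log above is the usual start-up race: the scheduler's informers begin listing resources before its RBAC grants are visible, and the final "Caches are synced" line shows it recovered on its own. If the warnings persisted instead, a quick check (a sketch using kubectl impersonation; not part of this test run) would be:

    kubectl --context addons-685520 auth can-i list nodes --as=system:kube-scheduler
    kubectl --context addons-685520 auth can-i watch namespaces --as=system:kube-scheduler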
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-685520 -n addons-685520
helpers_test.go:261: (dbg) Run:  kubectl --context addons-685520 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (153.28s)
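The failing step is the curl probe through the ingress (Host: nginx.example.com), which never returned before the timeout. A minimal triage sketch, assuming the upstream default namespace and deployment names (ingress-nginx, ingress-nginx-controller), which this log does not confirm:

    # Is the controller still running, and was the Ingress object admitted?
    kubectl --context addons-685520 -n ingress-nginx get pods -o wide
    kubectl --context addons-685520 get ingress -A
    kubectl --context addons-685520 -n ingress-nginx logs deploy/ingress-nginx-controller --tail=50
    # Re-run the same probe the test uses, verbosely, from inside the node:
    out/minikube-linux-amd64 -p addons-685520 ssh "curl -sv http://127.0.0.1/ -H 'Host: nginx.example.com'"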

                                                
                                    
x
+
TestAddons/parallel/MetricsServer (350.67s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 2.750338ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-c59844bb4-qt4qg" [46b5fee1-ed94-4adc-a131-a0d90438dbaf] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.005231082s
addons_test.go:417: (dbg) Run:  kubectl --context addons-685520 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-685520 top pods -n kube-system: exit status 1 (72.853775ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-zrfkz, age: 2m58.459152687s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-685520 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-685520 top pods -n kube-system: exit status 1 (73.297299ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-zrfkz, age: 3m1.358220646s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-685520 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-685520 top pods -n kube-system: exit status 1 (72.065318ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-zrfkz, age: 3m8.052376738s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-685520 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-685520 top pods -n kube-system: exit status 1 (73.583139ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-zrfkz, age: 3m17.428057004s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-685520 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-685520 top pods -n kube-system: exit status 1 (63.973302ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-zrfkz, age: 3m28.258424434s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-685520 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-685520 top pods -n kube-system: exit status 1 (65.710013ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-zrfkz, age: 3m45.996523178s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-685520 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-685520 top pods -n kube-system: exit status 1 (67.963484ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-zrfkz, age: 4m5.082074576s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-685520 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-685520 top pods -n kube-system: exit status 1 (64.81315ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-zrfkz, age: 4m52.765541577s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-685520 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-685520 top pods -n kube-system: exit status 1 (66.295934ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-zrfkz, age: 6m5.800820729s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-685520 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-685520 top pods -n kube-system: exit status 1 (69.199012ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-zrfkz, age: 7m12.944145018s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-685520 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-685520 top pods -n kube-system: exit status 1 (65.988828ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-zrfkz, age: 7m53.448243298s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-685520 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-685520 top pods -n kube-system: exit status 1 (62.840114ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-zrfkz, age: 8m41.495533848s

                                                
                                                
** /stderr **
addons_test.go:431: failed checking metric server: exit status 1
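The metrics-server pod itself reports healthy within five seconds, yet every poll of kubectl top pods fails because no sample has been reported for the coredns pod even after more than eight minutes. A minimal follow-up sketch, assuming the upstream default object names (the v1beta1.metrics.k8s.io APIService and a metrics-server Deployment in kube-system), which are not shown in this log:

    # Is the aggregated metrics API registered and Available?
    kubectl --context addons-685520 get apiservice v1beta1.metrics.k8s.io
    # What does metrics-server itself log?
    kubectl --context addons-685520 -n kube-system logs deploy/metrics-server --tail=50
    # Query the metrics API directly instead of going through kubectl top:
    kubectl --context addons-685520 get --raw /apis/metrics.k8s.io/v1beta1/pods | head -c 300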
addons_test.go:434: (dbg) Run:  out/minikube-linux-amd64 -p addons-685520 addons disable metrics-server --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-685520 -n addons-685520
helpers_test.go:244: <<< TestAddons/parallel/MetricsServer FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/MetricsServer]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-685520 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-685520 logs -n 25: (1.247050337s)
helpers_test.go:252: TestAddons/parallel/MetricsServer logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-881045                                                                     | download-only-881045 | jenkins | v1.33.1 | 29 Jul 24 18:17 UTC | 29 Jul 24 18:17 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-644927 | jenkins | v1.33.1 | 29 Jul 24 18:17 UTC |                     |
	|         | binary-mirror-644927                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:46501                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-644927                                                                     | binary-mirror-644927 | jenkins | v1.33.1 | 29 Jul 24 18:17 UTC | 29 Jul 24 18:17 UTC |
	| addons  | disable dashboard -p                                                                        | addons-685520        | jenkins | v1.33.1 | 29 Jul 24 18:17 UTC |                     |
	|         | addons-685520                                                                               |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-685520        | jenkins | v1.33.1 | 29 Jul 24 18:17 UTC |                     |
	|         | addons-685520                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-685520 --wait=true                                                                | addons-685520        | jenkins | v1.33.1 | 29 Jul 24 18:17 UTC | 29 Jul 24 18:20 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| addons  | addons-685520 addons disable                                                                | addons-685520        | jenkins | v1.33.1 | 29 Jul 24 18:20 UTC | 29 Jul 24 18:20 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-685520 addons disable                                                                | addons-685520        | jenkins | v1.33.1 | 29 Jul 24 18:20 UTC | 29 Jul 24 18:21 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| ssh     | addons-685520 ssh cat                                                                       | addons-685520        | jenkins | v1.33.1 | 29 Jul 24 18:20 UTC | 29 Jul 24 18:20 UTC |
	|         | /opt/local-path-provisioner/pvc-144acf15-a758-428b-874b-327ac7591c4a_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-685520 addons disable                                                                | addons-685520        | jenkins | v1.33.1 | 29 Jul 24 18:20 UTC | 29 Jul 24 18:21 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-685520 ip                                                                            | addons-685520        | jenkins | v1.33.1 | 29 Jul 24 18:21 UTC | 29 Jul 24 18:21 UTC |
	| addons  | addons-685520 addons disable                                                                | addons-685520        | jenkins | v1.33.1 | 29 Jul 24 18:21 UTC | 29 Jul 24 18:21 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-685520        | jenkins | v1.33.1 | 29 Jul 24 18:21 UTC | 29 Jul 24 18:21 UTC |
	|         | -p addons-685520                                                                            |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-685520        | jenkins | v1.33.1 | 29 Jul 24 18:21 UTC | 29 Jul 24 18:21 UTC |
	|         | -p addons-685520                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-685520        | jenkins | v1.33.1 | 29 Jul 24 18:21 UTC | 29 Jul 24 18:21 UTC |
	|         | addons-685520                                                                               |                      |         |         |                     |                     |
	| addons  | addons-685520 addons disable                                                                | addons-685520        | jenkins | v1.33.1 | 29 Jul 24 18:21 UTC | 29 Jul 24 18:21 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-685520 addons                                                                        | addons-685520        | jenkins | v1.33.1 | 29 Jul 24 18:21 UTC | 29 Jul 24 18:21 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-685520 addons disable                                                                | addons-685520        | jenkins | v1.33.1 | 29 Jul 24 18:21 UTC | 29 Jul 24 18:21 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-685520 addons                                                                        | addons-685520        | jenkins | v1.33.1 | 29 Jul 24 18:21 UTC | 29 Jul 24 18:21 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-685520        | jenkins | v1.33.1 | 29 Jul 24 18:21 UTC | 29 Jul 24 18:21 UTC |
	|         | addons-685520                                                                               |                      |         |         |                     |                     |
	| ssh     | addons-685520 ssh curl -s                                                                   | addons-685520        | jenkins | v1.33.1 | 29 Jul 24 18:21 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| ip      | addons-685520 ip                                                                            | addons-685520        | jenkins | v1.33.1 | 29 Jul 24 18:24 UTC | 29 Jul 24 18:24 UTC |
	| addons  | addons-685520 addons disable                                                                | addons-685520        | jenkins | v1.33.1 | 29 Jul 24 18:24 UTC | 29 Jul 24 18:24 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-685520 addons disable                                                                | addons-685520        | jenkins | v1.33.1 | 29 Jul 24 18:24 UTC | 29 Jul 24 18:24 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	| addons  | addons-685520 addons                                                                        | addons-685520        | jenkins | v1.33.1 | 29 Jul 24 18:27 UTC | 29 Jul 24 18:27 UTC |
	|         | disable metrics-server                                                                      |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 18:17:26
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 18:17:26.633148 1063162 out.go:291] Setting OutFile to fd 1 ...
	I0729 18:17:26.633416 1063162 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 18:17:26.633426 1063162 out.go:304] Setting ErrFile to fd 2...
	I0729 18:17:26.633430 1063162 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 18:17:26.633654 1063162 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19312-1055011/.minikube/bin
	I0729 18:17:26.634291 1063162 out.go:298] Setting JSON to false
	I0729 18:17:26.635375 1063162 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":7199,"bootTime":1722269848,"procs":329,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 18:17:26.635441 1063162 start.go:139] virtualization: kvm guest
	I0729 18:17:26.637297 1063162 out.go:177] * [addons-685520] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 18:17:26.638755 1063162 out.go:177]   - MINIKUBE_LOCATION=19312
	I0729 18:17:26.638788 1063162 notify.go:220] Checking for updates...
	I0729 18:17:26.641112 1063162 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 18:17:26.642122 1063162 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19312-1055011/kubeconfig
	I0729 18:17:26.643200 1063162 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19312-1055011/.minikube
	I0729 18:17:26.644236 1063162 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 18:17:26.645258 1063162 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 18:17:26.646416 1063162 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 18:17:26.677411 1063162 out.go:177] * Using the kvm2 driver based on user configuration
	I0729 18:17:26.678392 1063162 start.go:297] selected driver: kvm2
	I0729 18:17:26.678402 1063162 start.go:901] validating driver "kvm2" against <nil>
	I0729 18:17:26.678413 1063162 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 18:17:26.679131 1063162 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 18:17:26.679198 1063162 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19312-1055011/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 18:17:26.693127 1063162 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0729 18:17:26.693179 1063162 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 18:17:26.693469 1063162 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 18:17:26.693507 1063162 cni.go:84] Creating CNI manager for ""
	I0729 18:17:26.693518 1063162 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 18:17:26.693531 1063162 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 18:17:26.693608 1063162 start.go:340] cluster config:
	{Name:addons-685520 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-685520 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 18:17:26.693730 1063162 iso.go:125] acquiring lock: {Name:mk0af61c0fec1fd47930e548d03010a532c687b7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 18:17:26.695130 1063162 out.go:177] * Starting "addons-685520" primary control-plane node in "addons-685520" cluster
	I0729 18:17:26.696219 1063162 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 18:17:26.696244 1063162 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0729 18:17:26.696254 1063162 cache.go:56] Caching tarball of preloaded images
	I0729 18:17:26.696322 1063162 preload.go:172] Found /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 18:17:26.696335 1063162 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0729 18:17:26.696674 1063162 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/addons-685520/config.json ...
	I0729 18:17:26.696699 1063162 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/addons-685520/config.json: {Name:mkb3f974718ada620a37bb6878ab326cdb2590b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 18:17:26.696837 1063162 start.go:360] acquireMachinesLock for addons-685520: {Name:mk0d8d947666df844b5fc2c0e0eebbfed69b4140 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 18:17:26.696894 1063162 start.go:364] duration metric: took 40.511µs to acquireMachinesLock for "addons-685520"
	I0729 18:17:26.696915 1063162 start.go:93] Provisioning new machine with config: &{Name:addons-685520 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-685520 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 18:17:26.697023 1063162 start.go:125] createHost starting for "" (driver="kvm2")
	I0729 18:17:26.698277 1063162 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0729 18:17:26.698403 1063162 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:17:26.698437 1063162 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:17:26.712265 1063162 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34849
	I0729 18:17:26.712662 1063162 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:17:26.713191 1063162 main.go:141] libmachine: Using API Version  1
	I0729 18:17:26.713211 1063162 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:17:26.713561 1063162 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:17:26.713791 1063162 main.go:141] libmachine: (addons-685520) Calling .GetMachineName
	I0729 18:17:26.713949 1063162 main.go:141] libmachine: (addons-685520) Calling .DriverName
	I0729 18:17:26.714113 1063162 start.go:159] libmachine.API.Create for "addons-685520" (driver="kvm2")
	I0729 18:17:26.714140 1063162 client.go:168] LocalClient.Create starting
	I0729 18:17:26.714184 1063162 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem
	I0729 18:17:26.771273 1063162 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/cert.pem
	I0729 18:17:26.972281 1063162 main.go:141] libmachine: Running pre-create checks...
	I0729 18:17:26.972307 1063162 main.go:141] libmachine: (addons-685520) Calling .PreCreateCheck
	I0729 18:17:26.972824 1063162 main.go:141] libmachine: (addons-685520) Calling .GetConfigRaw
	I0729 18:17:26.973231 1063162 main.go:141] libmachine: Creating machine...
	I0729 18:17:26.973245 1063162 main.go:141] libmachine: (addons-685520) Calling .Create
	I0729 18:17:26.973378 1063162 main.go:141] libmachine: (addons-685520) Creating KVM machine...
	I0729 18:17:26.974575 1063162 main.go:141] libmachine: (addons-685520) DBG | found existing default KVM network
	I0729 18:17:26.975532 1063162 main.go:141] libmachine: (addons-685520) DBG | I0729 18:17:26.975372 1063183 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015ad0}
	I0729 18:17:26.975575 1063162 main.go:141] libmachine: (addons-685520) DBG | created network xml: 
	I0729 18:17:26.975595 1063162 main.go:141] libmachine: (addons-685520) DBG | <network>
	I0729 18:17:26.975606 1063162 main.go:141] libmachine: (addons-685520) DBG |   <name>mk-addons-685520</name>
	I0729 18:17:26.975616 1063162 main.go:141] libmachine: (addons-685520) DBG |   <dns enable='no'/>
	I0729 18:17:26.975642 1063162 main.go:141] libmachine: (addons-685520) DBG |   
	I0729 18:17:26.975657 1063162 main.go:141] libmachine: (addons-685520) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0729 18:17:26.975663 1063162 main.go:141] libmachine: (addons-685520) DBG |     <dhcp>
	I0729 18:17:26.975671 1063162 main.go:141] libmachine: (addons-685520) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0729 18:17:26.975679 1063162 main.go:141] libmachine: (addons-685520) DBG |     </dhcp>
	I0729 18:17:26.975683 1063162 main.go:141] libmachine: (addons-685520) DBG |   </ip>
	I0729 18:17:26.975690 1063162 main.go:141] libmachine: (addons-685520) DBG |   
	I0729 18:17:26.975694 1063162 main.go:141] libmachine: (addons-685520) DBG | </network>
	I0729 18:17:26.975703 1063162 main.go:141] libmachine: (addons-685520) DBG | 
	I0729 18:17:26.980914 1063162 main.go:141] libmachine: (addons-685520) DBG | trying to create private KVM network mk-addons-685520 192.168.39.0/24...
	I0729 18:17:27.043359 1063162 main.go:141] libmachine: (addons-685520) DBG | private KVM network mk-addons-685520 192.168.39.0/24 created
	I0729 18:17:27.043393 1063162 main.go:141] libmachine: (addons-685520) DBG | I0729 18:17:27.043299 1063183 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19312-1055011/.minikube
	I0729 18:17:27.043407 1063162 main.go:141] libmachine: (addons-685520) Setting up store path in /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/addons-685520 ...
	I0729 18:17:27.043431 1063162 main.go:141] libmachine: (addons-685520) Building disk image from file:///home/jenkins/minikube-integration/19312-1055011/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso
	I0729 18:17:27.043458 1063162 main.go:141] libmachine: (addons-685520) Downloading /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19312-1055011/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso...
	I0729 18:17:27.306761 1063162 main.go:141] libmachine: (addons-685520) DBG | I0729 18:17:27.306633 1063183 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/addons-685520/id_rsa...
	I0729 18:17:27.455576 1063162 main.go:141] libmachine: (addons-685520) DBG | I0729 18:17:27.455436 1063183 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/addons-685520/addons-685520.rawdisk...
	I0729 18:17:27.455610 1063162 main.go:141] libmachine: (addons-685520) DBG | Writing magic tar header
	I0729 18:17:27.455620 1063162 main.go:141] libmachine: (addons-685520) DBG | Writing SSH key tar header
	I0729 18:17:27.455628 1063162 main.go:141] libmachine: (addons-685520) DBG | I0729 18:17:27.455550 1063183 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/addons-685520 ...
	I0729 18:17:27.455639 1063162 main.go:141] libmachine: (addons-685520) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/addons-685520
	I0729 18:17:27.455743 1063162 main.go:141] libmachine: (addons-685520) Setting executable bit set on /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/addons-685520 (perms=drwx------)
	I0729 18:17:27.455767 1063162 main.go:141] libmachine: (addons-685520) Setting executable bit set on /home/jenkins/minikube-integration/19312-1055011/.minikube/machines (perms=drwxr-xr-x)
	I0729 18:17:27.455794 1063162 main.go:141] libmachine: (addons-685520) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19312-1055011/.minikube/machines
	I0729 18:17:27.455806 1063162 main.go:141] libmachine: (addons-685520) Setting executable bit set on /home/jenkins/minikube-integration/19312-1055011/.minikube (perms=drwxr-xr-x)
	I0729 18:17:27.455822 1063162 main.go:141] libmachine: (addons-685520) Setting executable bit set on /home/jenkins/minikube-integration/19312-1055011 (perms=drwxrwxr-x)
	I0729 18:17:27.455831 1063162 main.go:141] libmachine: (addons-685520) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0729 18:17:27.455844 1063162 main.go:141] libmachine: (addons-685520) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0729 18:17:27.455859 1063162 main.go:141] libmachine: (addons-685520) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19312-1055011/.minikube
	I0729 18:17:27.455870 1063162 main.go:141] libmachine: (addons-685520) Creating domain...
	I0729 18:17:27.455884 1063162 main.go:141] libmachine: (addons-685520) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19312-1055011
	I0729 18:17:27.455897 1063162 main.go:141] libmachine: (addons-685520) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0729 18:17:27.455910 1063162 main.go:141] libmachine: (addons-685520) DBG | Checking permissions on dir: /home/jenkins
	I0729 18:17:27.455918 1063162 main.go:141] libmachine: (addons-685520) DBG | Checking permissions on dir: /home
	I0729 18:17:27.455941 1063162 main.go:141] libmachine: (addons-685520) DBG | Skipping /home - not owner
	I0729 18:17:27.456879 1063162 main.go:141] libmachine: (addons-685520) define libvirt domain using xml: 
	I0729 18:17:27.456898 1063162 main.go:141] libmachine: (addons-685520) <domain type='kvm'>
	I0729 18:17:27.456906 1063162 main.go:141] libmachine: (addons-685520)   <name>addons-685520</name>
	I0729 18:17:27.456911 1063162 main.go:141] libmachine: (addons-685520)   <memory unit='MiB'>4000</memory>
	I0729 18:17:27.456916 1063162 main.go:141] libmachine: (addons-685520)   <vcpu>2</vcpu>
	I0729 18:17:27.456924 1063162 main.go:141] libmachine: (addons-685520)   <features>
	I0729 18:17:27.456929 1063162 main.go:141] libmachine: (addons-685520)     <acpi/>
	I0729 18:17:27.456933 1063162 main.go:141] libmachine: (addons-685520)     <apic/>
	I0729 18:17:27.456938 1063162 main.go:141] libmachine: (addons-685520)     <pae/>
	I0729 18:17:27.456942 1063162 main.go:141] libmachine: (addons-685520)     
	I0729 18:17:27.456947 1063162 main.go:141] libmachine: (addons-685520)   </features>
	I0729 18:17:27.456954 1063162 main.go:141] libmachine: (addons-685520)   <cpu mode='host-passthrough'>
	I0729 18:17:27.456959 1063162 main.go:141] libmachine: (addons-685520)   
	I0729 18:17:27.456964 1063162 main.go:141] libmachine: (addons-685520)   </cpu>
	I0729 18:17:27.456969 1063162 main.go:141] libmachine: (addons-685520)   <os>
	I0729 18:17:27.456974 1063162 main.go:141] libmachine: (addons-685520)     <type>hvm</type>
	I0729 18:17:27.456979 1063162 main.go:141] libmachine: (addons-685520)     <boot dev='cdrom'/>
	I0729 18:17:27.456985 1063162 main.go:141] libmachine: (addons-685520)     <boot dev='hd'/>
	I0729 18:17:27.456991 1063162 main.go:141] libmachine: (addons-685520)     <bootmenu enable='no'/>
	I0729 18:17:27.456995 1063162 main.go:141] libmachine: (addons-685520)   </os>
	I0729 18:17:27.457012 1063162 main.go:141] libmachine: (addons-685520)   <devices>
	I0729 18:17:27.457031 1063162 main.go:141] libmachine: (addons-685520)     <disk type='file' device='cdrom'>
	I0729 18:17:27.457041 1063162 main.go:141] libmachine: (addons-685520)       <source file='/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/addons-685520/boot2docker.iso'/>
	I0729 18:17:27.457048 1063162 main.go:141] libmachine: (addons-685520)       <target dev='hdc' bus='scsi'/>
	I0729 18:17:27.457054 1063162 main.go:141] libmachine: (addons-685520)       <readonly/>
	I0729 18:17:27.457061 1063162 main.go:141] libmachine: (addons-685520)     </disk>
	I0729 18:17:27.457067 1063162 main.go:141] libmachine: (addons-685520)     <disk type='file' device='disk'>
	I0729 18:17:27.457075 1063162 main.go:141] libmachine: (addons-685520)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0729 18:17:27.457083 1063162 main.go:141] libmachine: (addons-685520)       <source file='/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/addons-685520/addons-685520.rawdisk'/>
	I0729 18:17:27.457090 1063162 main.go:141] libmachine: (addons-685520)       <target dev='hda' bus='virtio'/>
	I0729 18:17:27.457095 1063162 main.go:141] libmachine: (addons-685520)     </disk>
	I0729 18:17:27.457106 1063162 main.go:141] libmachine: (addons-685520)     <interface type='network'>
	I0729 18:17:27.457126 1063162 main.go:141] libmachine: (addons-685520)       <source network='mk-addons-685520'/>
	I0729 18:17:27.457145 1063162 main.go:141] libmachine: (addons-685520)       <model type='virtio'/>
	I0729 18:17:27.457155 1063162 main.go:141] libmachine: (addons-685520)     </interface>
	I0729 18:17:27.457165 1063162 main.go:141] libmachine: (addons-685520)     <interface type='network'>
	I0729 18:17:27.457177 1063162 main.go:141] libmachine: (addons-685520)       <source network='default'/>
	I0729 18:17:27.457187 1063162 main.go:141] libmachine: (addons-685520)       <model type='virtio'/>
	I0729 18:17:27.457198 1063162 main.go:141] libmachine: (addons-685520)     </interface>
	I0729 18:17:27.457208 1063162 main.go:141] libmachine: (addons-685520)     <serial type='pty'>
	I0729 18:17:27.457219 1063162 main.go:141] libmachine: (addons-685520)       <target port='0'/>
	I0729 18:17:27.457230 1063162 main.go:141] libmachine: (addons-685520)     </serial>
	I0729 18:17:27.457242 1063162 main.go:141] libmachine: (addons-685520)     <console type='pty'>
	I0729 18:17:27.457253 1063162 main.go:141] libmachine: (addons-685520)       <target type='serial' port='0'/>
	I0729 18:17:27.457264 1063162 main.go:141] libmachine: (addons-685520)     </console>
	I0729 18:17:27.457274 1063162 main.go:141] libmachine: (addons-685520)     <rng model='virtio'>
	I0729 18:17:27.457286 1063162 main.go:141] libmachine: (addons-685520)       <backend model='random'>/dev/random</backend>
	I0729 18:17:27.457297 1063162 main.go:141] libmachine: (addons-685520)     </rng>
	I0729 18:17:27.457308 1063162 main.go:141] libmachine: (addons-685520)     
	I0729 18:17:27.457316 1063162 main.go:141] libmachine: (addons-685520)     
	I0729 18:17:27.457327 1063162 main.go:141] libmachine: (addons-685520)   </devices>
	I0729 18:17:27.457337 1063162 main.go:141] libmachine: (addons-685520) </domain>
	I0729 18:17:27.457349 1063162 main.go:141] libmachine: (addons-685520) 
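
For reference only (not part of the captured log): a minimal sketch of defining and starting a domain like the one dumped above through the libvirt Go bindings. The import path, connection URI and placeholder XML are illustrative assumptions, not the kvm2 driver's actual code path.

package main

import (
	"log"

	"libvirt.org/go/libvirt"
)

// domainXML would hold a <domain type='kvm'> definition similar to the one
// logged above (name, memory, vcpu, disks, network interfaces, ...).
const domainXML = `<domain type='kvm'>...</domain>`

func main() {
	// Connect to the local system libvirt daemon, as the kvm2 driver does.
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		log.Fatalf("connect: %v", err)
	}
	defer conn.Close()

	// Define the persistent domain from XML, then boot it.
	dom, err := conn.DomainDefineXML(domainXML)
	if err != nil {
		log.Fatalf("define: %v", err)
	}
	defer dom.Free()

	if err := dom.Create(); err != nil {
		log.Fatalf("start: %v", err)
	}
}
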
	I0729 18:17:27.462907 1063162 main.go:141] libmachine: (addons-685520) DBG | domain addons-685520 has defined MAC address 52:54:00:1e:40:45 in network default
	I0729 18:17:27.463376 1063162 main.go:141] libmachine: (addons-685520) Ensuring networks are active...
	I0729 18:17:27.463390 1063162 main.go:141] libmachine: (addons-685520) DBG | domain addons-685520 has defined MAC address 52:54:00:5a:98:d7 in network mk-addons-685520
	I0729 18:17:27.464025 1063162 main.go:141] libmachine: (addons-685520) Ensuring network default is active
	I0729 18:17:27.464319 1063162 main.go:141] libmachine: (addons-685520) Ensuring network mk-addons-685520 is active
	I0729 18:17:27.465941 1063162 main.go:141] libmachine: (addons-685520) Getting domain xml...
	I0729 18:17:27.466655 1063162 main.go:141] libmachine: (addons-685520) Creating domain...
	I0729 18:17:28.694227 1063162 main.go:141] libmachine: (addons-685520) Waiting to get IP...
	I0729 18:17:28.694946 1063162 main.go:141] libmachine: (addons-685520) DBG | domain addons-685520 has defined MAC address 52:54:00:5a:98:d7 in network mk-addons-685520
	I0729 18:17:28.695317 1063162 main.go:141] libmachine: (addons-685520) DBG | unable to find current IP address of domain addons-685520 in network mk-addons-685520
	I0729 18:17:28.695357 1063162 main.go:141] libmachine: (addons-685520) DBG | I0729 18:17:28.695289 1063183 retry.go:31] will retry after 285.397876ms: waiting for machine to come up
	I0729 18:17:28.982886 1063162 main.go:141] libmachine: (addons-685520) DBG | domain addons-685520 has defined MAC address 52:54:00:5a:98:d7 in network mk-addons-685520
	I0729 18:17:28.983272 1063162 main.go:141] libmachine: (addons-685520) DBG | unable to find current IP address of domain addons-685520 in network mk-addons-685520
	I0729 18:17:28.983301 1063162 main.go:141] libmachine: (addons-685520) DBG | I0729 18:17:28.983234 1063183 retry.go:31] will retry after 258.835712ms: waiting for machine to come up
	I0729 18:17:29.244997 1063162 main.go:141] libmachine: (addons-685520) DBG | domain addons-685520 has defined MAC address 52:54:00:5a:98:d7 in network mk-addons-685520
	I0729 18:17:29.245418 1063162 main.go:141] libmachine: (addons-685520) DBG | unable to find current IP address of domain addons-685520 in network mk-addons-685520
	I0729 18:17:29.245446 1063162 main.go:141] libmachine: (addons-685520) DBG | I0729 18:17:29.245344 1063183 retry.go:31] will retry after 378.941166ms: waiting for machine to come up
	I0729 18:17:29.626029 1063162 main.go:141] libmachine: (addons-685520) DBG | domain addons-685520 has defined MAC address 52:54:00:5a:98:d7 in network mk-addons-685520
	I0729 18:17:29.626403 1063162 main.go:141] libmachine: (addons-685520) DBG | unable to find current IP address of domain addons-685520 in network mk-addons-685520
	I0729 18:17:29.626427 1063162 main.go:141] libmachine: (addons-685520) DBG | I0729 18:17:29.626344 1063183 retry.go:31] will retry after 593.378281ms: waiting for machine to come up
	I0729 18:17:30.221096 1063162 main.go:141] libmachine: (addons-685520) DBG | domain addons-685520 has defined MAC address 52:54:00:5a:98:d7 in network mk-addons-685520
	I0729 18:17:30.221580 1063162 main.go:141] libmachine: (addons-685520) DBG | unable to find current IP address of domain addons-685520 in network mk-addons-685520
	I0729 18:17:30.221610 1063162 main.go:141] libmachine: (addons-685520) DBG | I0729 18:17:30.221546 1063183 retry.go:31] will retry after 483.770321ms: waiting for machine to come up
	I0729 18:17:30.707391 1063162 main.go:141] libmachine: (addons-685520) DBG | domain addons-685520 has defined MAC address 52:54:00:5a:98:d7 in network mk-addons-685520
	I0729 18:17:30.707819 1063162 main.go:141] libmachine: (addons-685520) DBG | unable to find current IP address of domain addons-685520 in network mk-addons-685520
	I0729 18:17:30.707848 1063162 main.go:141] libmachine: (addons-685520) DBG | I0729 18:17:30.707768 1063183 retry.go:31] will retry after 768.217023ms: waiting for machine to come up
	I0729 18:17:31.477691 1063162 main.go:141] libmachine: (addons-685520) DBG | domain addons-685520 has defined MAC address 52:54:00:5a:98:d7 in network mk-addons-685520
	I0729 18:17:31.478059 1063162 main.go:141] libmachine: (addons-685520) DBG | unable to find current IP address of domain addons-685520 in network mk-addons-685520
	I0729 18:17:31.478111 1063162 main.go:141] libmachine: (addons-685520) DBG | I0729 18:17:31.478010 1063183 retry.go:31] will retry after 853.729951ms: waiting for machine to come up
	I0729 18:17:32.332902 1063162 main.go:141] libmachine: (addons-685520) DBG | domain addons-685520 has defined MAC address 52:54:00:5a:98:d7 in network mk-addons-685520
	I0729 18:17:32.333238 1063162 main.go:141] libmachine: (addons-685520) DBG | unable to find current IP address of domain addons-685520 in network mk-addons-685520
	I0729 18:17:32.333263 1063162 main.go:141] libmachine: (addons-685520) DBG | I0729 18:17:32.333187 1063183 retry.go:31] will retry after 1.462722028s: waiting for machine to come up
	I0729 18:17:33.797920 1063162 main.go:141] libmachine: (addons-685520) DBG | domain addons-685520 has defined MAC address 52:54:00:5a:98:d7 in network mk-addons-685520
	I0729 18:17:33.798240 1063162 main.go:141] libmachine: (addons-685520) DBG | unable to find current IP address of domain addons-685520 in network mk-addons-685520
	I0729 18:17:33.798269 1063162 main.go:141] libmachine: (addons-685520) DBG | I0729 18:17:33.798203 1063183 retry.go:31] will retry after 1.301641374s: waiting for machine to come up
	I0729 18:17:35.101553 1063162 main.go:141] libmachine: (addons-685520) DBG | domain addons-685520 has defined MAC address 52:54:00:5a:98:d7 in network mk-addons-685520
	I0729 18:17:35.101978 1063162 main.go:141] libmachine: (addons-685520) DBG | unable to find current IP address of domain addons-685520 in network mk-addons-685520
	I0729 18:17:35.102008 1063162 main.go:141] libmachine: (addons-685520) DBG | I0729 18:17:35.101914 1063183 retry.go:31] will retry after 1.732879428s: waiting for machine to come up
	I0729 18:17:36.836789 1063162 main.go:141] libmachine: (addons-685520) DBG | domain addons-685520 has defined MAC address 52:54:00:5a:98:d7 in network mk-addons-685520
	I0729 18:17:36.837227 1063162 main.go:141] libmachine: (addons-685520) DBG | unable to find current IP address of domain addons-685520 in network mk-addons-685520
	I0729 18:17:36.837258 1063162 main.go:141] libmachine: (addons-685520) DBG | I0729 18:17:36.837176 1063183 retry.go:31] will retry after 2.830287802s: waiting for machine to come up
	I0729 18:17:39.668551 1063162 main.go:141] libmachine: (addons-685520) DBG | domain addons-685520 has defined MAC address 52:54:00:5a:98:d7 in network mk-addons-685520
	I0729 18:17:39.668906 1063162 main.go:141] libmachine: (addons-685520) DBG | unable to find current IP address of domain addons-685520 in network mk-addons-685520
	I0729 18:17:39.668935 1063162 main.go:141] libmachine: (addons-685520) DBG | I0729 18:17:39.668849 1063183 retry.go:31] will retry after 2.912144664s: waiting for machine to come up
	I0729 18:17:42.582296 1063162 main.go:141] libmachine: (addons-685520) DBG | domain addons-685520 has defined MAC address 52:54:00:5a:98:d7 in network mk-addons-685520
	I0729 18:17:42.582640 1063162 main.go:141] libmachine: (addons-685520) DBG | unable to find current IP address of domain addons-685520 in network mk-addons-685520
	I0729 18:17:42.582664 1063162 main.go:141] libmachine: (addons-685520) DBG | I0729 18:17:42.582616 1063183 retry.go:31] will retry after 4.044303851s: waiting for machine to come up
	I0729 18:17:46.631668 1063162 main.go:141] libmachine: (addons-685520) DBG | domain addons-685520 has defined MAC address 52:54:00:5a:98:d7 in network mk-addons-685520
	I0729 18:17:46.632062 1063162 main.go:141] libmachine: (addons-685520) DBG | unable to find current IP address of domain addons-685520 in network mk-addons-685520
	I0729 18:17:46.632093 1063162 main.go:141] libmachine: (addons-685520) DBG | I0729 18:17:46.632016 1063183 retry.go:31] will retry after 4.332408449s: waiting for machine to come up
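
The repeated "will retry after ...: waiting for machine to come up" lines come from a grow-the-delay retry loop around the DHCP-lease lookup. A self-contained Go sketch of that pattern follows (the lookup stub, delays and cap are illustrative assumptions, not the driver's exact values); growing the delay keeps the polling cheap while the guest is still booting.

package main

import (
	"errors"
	"fmt"
	"time"
)

var errNoLease = errors.New("no DHCP lease yet")

// lookupIP stands in for querying libvirt's DHCP leases for the domain's MAC.
func lookupIP(attempt int) (string, error) {
	if attempt < 5 {
		return "", errNoLease
	}
	return "192.168.39.137", nil
}

func main() {
	delay := 250 * time.Millisecond
	for attempt := 1; ; attempt++ {
		ip, err := lookupIP(attempt)
		if err == nil {
			fmt.Println("found IP:", ip)
			return
		}
		fmt.Printf("will retry after %s: waiting for machine to come up\n", delay)
		time.Sleep(delay)
		// Grow the delay, capped so a single wait never exceeds a few seconds.
		delay = delay * 3 / 2
		if delay > 5*time.Second {
			delay = 5 * time.Second
		}
	}
}
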
	I0729 18:17:50.967922 1063162 main.go:141] libmachine: (addons-685520) DBG | domain addons-685520 has defined MAC address 52:54:00:5a:98:d7 in network mk-addons-685520
	I0729 18:17:50.968392 1063162 main.go:141] libmachine: (addons-685520) Found IP for machine: 192.168.39.137
	I0729 18:17:50.968413 1063162 main.go:141] libmachine: (addons-685520) Reserving static IP address...
	I0729 18:17:50.968427 1063162 main.go:141] libmachine: (addons-685520) DBG | domain addons-685520 has current primary IP address 192.168.39.137 and MAC address 52:54:00:5a:98:d7 in network mk-addons-685520
	I0729 18:17:50.968782 1063162 main.go:141] libmachine: (addons-685520) DBG | unable to find host DHCP lease matching {name: "addons-685520", mac: "52:54:00:5a:98:d7", ip: "192.168.39.137"} in network mk-addons-685520
	I0729 18:17:51.080239 1063162 main.go:141] libmachine: (addons-685520) DBG | Getting to WaitForSSH function...
	I0729 18:17:51.080270 1063162 main.go:141] libmachine: (addons-685520) Reserved static IP address: 192.168.39.137
	I0729 18:17:51.080284 1063162 main.go:141] libmachine: (addons-685520) Waiting for SSH to be available...
	I0729 18:17:51.082620 1063162 main.go:141] libmachine: (addons-685520) DBG | domain addons-685520 has defined MAC address 52:54:00:5a:98:d7 in network mk-addons-685520
	I0729 18:17:51.083014 1063162 main.go:141] libmachine: (addons-685520) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:98:d7", ip: ""} in network mk-addons-685520: {Iface:virbr1 ExpiryTime:2024-07-29 19:17:41 +0000 UTC Type:0 Mac:52:54:00:5a:98:d7 Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:minikube Clientid:01:52:54:00:5a:98:d7}
	I0729 18:17:51.083039 1063162 main.go:141] libmachine: (addons-685520) DBG | domain addons-685520 has defined IP address 192.168.39.137 and MAC address 52:54:00:5a:98:d7 in network mk-addons-685520
	I0729 18:17:51.083269 1063162 main.go:141] libmachine: (addons-685520) DBG | Using SSH client type: external
	I0729 18:17:51.083299 1063162 main.go:141] libmachine: (addons-685520) DBG | Using SSH private key: /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/addons-685520/id_rsa (-rw-------)
	I0729 18:17:51.083331 1063162 main.go:141] libmachine: (addons-685520) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.137 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/addons-685520/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 18:17:51.083352 1063162 main.go:141] libmachine: (addons-685520) DBG | About to run SSH command:
	I0729 18:17:51.083368 1063162 main.go:141] libmachine: (addons-685520) DBG | exit 0
	I0729 18:17:51.211082 1063162 main.go:141] libmachine: (addons-685520) DBG | SSH cmd err, output: <nil>: 
	I0729 18:17:51.211420 1063162 main.go:141] libmachine: (addons-685520) KVM machine creation complete!
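
The "Using SSH client type: external" probe above shells out to ssh and treats a successful "exit 0" as proof the guest is reachable. A rough Go sketch of such a probe (host, key path and retry interval are placeholders, not minikube's implementation):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// sshReady reports whether `ssh ... exit 0` succeeds against the guest.
func sshReady(host, keyPath string) bool {
	cmd := exec.Command("ssh",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "ConnectTimeout=10",
		"-i", keyPath,
		"docker@"+host,
		"exit 0")
	return cmd.Run() == nil
}

func main() {
	for !sshReady("192.168.39.137", "/path/to/id_rsa") {
		fmt.Println("waiting for SSH to be available...")
		time.Sleep(2 * time.Second)
	}
	fmt.Println("SSH is available")
}
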
	I0729 18:17:51.211688 1063162 main.go:141] libmachine: (addons-685520) Calling .GetConfigRaw
	I0729 18:17:51.225148 1063162 main.go:141] libmachine: (addons-685520) Calling .DriverName
	I0729 18:17:51.225476 1063162 main.go:141] libmachine: (addons-685520) Calling .DriverName
	I0729 18:17:51.225686 1063162 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0729 18:17:51.225702 1063162 main.go:141] libmachine: (addons-685520) Calling .GetState
	I0729 18:17:51.226931 1063162 main.go:141] libmachine: Detecting operating system of created instance...
	I0729 18:17:51.226949 1063162 main.go:141] libmachine: Waiting for SSH to be available...
	I0729 18:17:51.226958 1063162 main.go:141] libmachine: Getting to WaitForSSH function...
	I0729 18:17:51.226966 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHHostname
	I0729 18:17:51.229414 1063162 main.go:141] libmachine: (addons-685520) DBG | domain addons-685520 has defined MAC address 52:54:00:5a:98:d7 in network mk-addons-685520
	I0729 18:17:51.229734 1063162 main.go:141] libmachine: (addons-685520) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:98:d7", ip: ""} in network mk-addons-685520: {Iface:virbr1 ExpiryTime:2024-07-29 19:17:41 +0000 UTC Type:0 Mac:52:54:00:5a:98:d7 Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:addons-685520 Clientid:01:52:54:00:5a:98:d7}
	I0729 18:17:51.229765 1063162 main.go:141] libmachine: (addons-685520) DBG | domain addons-685520 has defined IP address 192.168.39.137 and MAC address 52:54:00:5a:98:d7 in network mk-addons-685520
	I0729 18:17:51.229858 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHPort
	I0729 18:17:51.230009 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHKeyPath
	I0729 18:17:51.230175 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHKeyPath
	I0729 18:17:51.230338 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHUsername
	I0729 18:17:51.230496 1063162 main.go:141] libmachine: Using SSH client type: native
	I0729 18:17:51.230727 1063162 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.137 22 <nil> <nil>}
	I0729 18:17:51.230742 1063162 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0729 18:17:51.330175 1063162 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 18:17:51.330202 1063162 main.go:141] libmachine: Detecting the provisioner...
	I0729 18:17:51.330210 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHHostname
	I0729 18:17:51.332848 1063162 main.go:141] libmachine: (addons-685520) DBG | domain addons-685520 has defined MAC address 52:54:00:5a:98:d7 in network mk-addons-685520
	I0729 18:17:51.333233 1063162 main.go:141] libmachine: (addons-685520) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:98:d7", ip: ""} in network mk-addons-685520: {Iface:virbr1 ExpiryTime:2024-07-29 19:17:41 +0000 UTC Type:0 Mac:52:54:00:5a:98:d7 Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:addons-685520 Clientid:01:52:54:00:5a:98:d7}
	I0729 18:17:51.333265 1063162 main.go:141] libmachine: (addons-685520) DBG | domain addons-685520 has defined IP address 192.168.39.137 and MAC address 52:54:00:5a:98:d7 in network mk-addons-685520
	I0729 18:17:51.333387 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHPort
	I0729 18:17:51.333628 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHKeyPath
	I0729 18:17:51.333806 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHKeyPath
	I0729 18:17:51.333926 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHUsername
	I0729 18:17:51.334086 1063162 main.go:141] libmachine: Using SSH client type: native
	I0729 18:17:51.334306 1063162 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.137 22 <nil> <nil>}
	I0729 18:17:51.334319 1063162 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0729 18:17:51.435676 1063162 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0729 18:17:51.435784 1063162 main.go:141] libmachine: found compatible host: buildroot
	I0729 18:17:51.435794 1063162 main.go:141] libmachine: Provisioning with buildroot...
	I0729 18:17:51.435802 1063162 main.go:141] libmachine: (addons-685520) Calling .GetMachineName
	I0729 18:17:51.436065 1063162 buildroot.go:166] provisioning hostname "addons-685520"
	I0729 18:17:51.436109 1063162 main.go:141] libmachine: (addons-685520) Calling .GetMachineName
	I0729 18:17:51.436322 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHHostname
	I0729 18:17:51.438999 1063162 main.go:141] libmachine: (addons-685520) DBG | domain addons-685520 has defined MAC address 52:54:00:5a:98:d7 in network mk-addons-685520
	I0729 18:17:51.439359 1063162 main.go:141] libmachine: (addons-685520) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:98:d7", ip: ""} in network mk-addons-685520: {Iface:virbr1 ExpiryTime:2024-07-29 19:17:41 +0000 UTC Type:0 Mac:52:54:00:5a:98:d7 Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:addons-685520 Clientid:01:52:54:00:5a:98:d7}
	I0729 18:17:51.439383 1063162 main.go:141] libmachine: (addons-685520) DBG | domain addons-685520 has defined IP address 192.168.39.137 and MAC address 52:54:00:5a:98:d7 in network mk-addons-685520
	I0729 18:17:51.439538 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHPort
	I0729 18:17:51.439802 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHKeyPath
	I0729 18:17:51.440053 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHKeyPath
	I0729 18:17:51.440215 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHUsername
	I0729 18:17:51.440369 1063162 main.go:141] libmachine: Using SSH client type: native
	I0729 18:17:51.440545 1063162 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.137 22 <nil> <nil>}
	I0729 18:17:51.440556 1063162 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-685520 && echo "addons-685520" | sudo tee /etc/hostname
	I0729 18:17:51.553992 1063162 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-685520
	
	I0729 18:17:51.554026 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHHostname
	I0729 18:17:51.556561 1063162 main.go:141] libmachine: (addons-685520) DBG | domain addons-685520 has defined MAC address 52:54:00:5a:98:d7 in network mk-addons-685520
	I0729 18:17:51.556885 1063162 main.go:141] libmachine: (addons-685520) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:98:d7", ip: ""} in network mk-addons-685520: {Iface:virbr1 ExpiryTime:2024-07-29 19:17:41 +0000 UTC Type:0 Mac:52:54:00:5a:98:d7 Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:addons-685520 Clientid:01:52:54:00:5a:98:d7}
	I0729 18:17:51.556914 1063162 main.go:141] libmachine: (addons-685520) DBG | domain addons-685520 has defined IP address 192.168.39.137 and MAC address 52:54:00:5a:98:d7 in network mk-addons-685520
	I0729 18:17:51.557006 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHPort
	I0729 18:17:51.557215 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHKeyPath
	I0729 18:17:51.557375 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHKeyPath
	I0729 18:17:51.557522 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHUsername
	I0729 18:17:51.557684 1063162 main.go:141] libmachine: Using SSH client type: native
	I0729 18:17:51.557885 1063162 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.137 22 <nil> <nil>}
	I0729 18:17:51.557907 1063162 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-685520' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-685520/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-685520' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 18:17:51.669488 1063162 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 18:17:51.669525 1063162 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19312-1055011/.minikube CaCertPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19312-1055011/.minikube}
	I0729 18:17:51.669575 1063162 buildroot.go:174] setting up certificates
	I0729 18:17:51.669585 1063162 provision.go:84] configureAuth start
	I0729 18:17:51.669596 1063162 main.go:141] libmachine: (addons-685520) Calling .GetMachineName
	I0729 18:17:51.669874 1063162 main.go:141] libmachine: (addons-685520) Calling .GetIP
	I0729 18:17:51.672562 1063162 main.go:141] libmachine: (addons-685520) DBG | domain addons-685520 has defined MAC address 52:54:00:5a:98:d7 in network mk-addons-685520
	I0729 18:17:51.672893 1063162 main.go:141] libmachine: (addons-685520) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:98:d7", ip: ""} in network mk-addons-685520: {Iface:virbr1 ExpiryTime:2024-07-29 19:17:41 +0000 UTC Type:0 Mac:52:54:00:5a:98:d7 Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:addons-685520 Clientid:01:52:54:00:5a:98:d7}
	I0729 18:17:51.672921 1063162 main.go:141] libmachine: (addons-685520) DBG | domain addons-685520 has defined IP address 192.168.39.137 and MAC address 52:54:00:5a:98:d7 in network mk-addons-685520
	I0729 18:17:51.673069 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHHostname
	I0729 18:17:51.674837 1063162 main.go:141] libmachine: (addons-685520) DBG | domain addons-685520 has defined MAC address 52:54:00:5a:98:d7 in network mk-addons-685520
	I0729 18:17:51.675097 1063162 main.go:141] libmachine: (addons-685520) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:98:d7", ip: ""} in network mk-addons-685520: {Iface:virbr1 ExpiryTime:2024-07-29 19:17:41 +0000 UTC Type:0 Mac:52:54:00:5a:98:d7 Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:addons-685520 Clientid:01:52:54:00:5a:98:d7}
	I0729 18:17:51.675120 1063162 main.go:141] libmachine: (addons-685520) DBG | domain addons-685520 has defined IP address 192.168.39.137 and MAC address 52:54:00:5a:98:d7 in network mk-addons-685520
	I0729 18:17:51.675257 1063162 provision.go:143] copyHostCerts
	I0729 18:17:51.675325 1063162 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19312-1055011/.minikube/key.pem (1679 bytes)
	I0729 18:17:51.693643 1063162 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.pem (1082 bytes)
	I0729 18:17:51.693783 1063162 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19312-1055011/.minikube/cert.pem (1123 bytes)
	I0729 18:17:51.693889 1063162 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca-key.pem org=jenkins.addons-685520 san=[127.0.0.1 192.168.39.137 addons-685520 localhost minikube]
	I0729 18:17:51.781189 1063162 provision.go:177] copyRemoteCerts
	I0729 18:17:51.781280 1063162 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 18:17:51.781321 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHHostname
	I0729 18:17:51.783881 1063162 main.go:141] libmachine: (addons-685520) DBG | domain addons-685520 has defined MAC address 52:54:00:5a:98:d7 in network mk-addons-685520
	I0729 18:17:51.784176 1063162 main.go:141] libmachine: (addons-685520) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:98:d7", ip: ""} in network mk-addons-685520: {Iface:virbr1 ExpiryTime:2024-07-29 19:17:41 +0000 UTC Type:0 Mac:52:54:00:5a:98:d7 Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:addons-685520 Clientid:01:52:54:00:5a:98:d7}
	I0729 18:17:51.784209 1063162 main.go:141] libmachine: (addons-685520) DBG | domain addons-685520 has defined IP address 192.168.39.137 and MAC address 52:54:00:5a:98:d7 in network mk-addons-685520
	I0729 18:17:51.784402 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHPort
	I0729 18:17:51.784603 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHKeyPath
	I0729 18:17:51.784784 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHUsername
	I0729 18:17:51.784929 1063162 sshutil.go:53] new ssh client: &{IP:192.168.39.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/addons-685520/id_rsa Username:docker}
	I0729 18:17:51.865611 1063162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0729 18:17:51.889380 1063162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0729 18:17:51.912104 1063162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0729 18:17:51.934755 1063162 provision.go:87] duration metric: took 265.153857ms to configureAuth
	I0729 18:17:51.934788 1063162 buildroot.go:189] setting minikube options for container-runtime
	I0729 18:17:51.935033 1063162 config.go:182] Loaded profile config "addons-685520": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 18:17:51.935154 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHHostname
	I0729 18:17:51.937640 1063162 main.go:141] libmachine: (addons-685520) DBG | domain addons-685520 has defined MAC address 52:54:00:5a:98:d7 in network mk-addons-685520
	I0729 18:17:51.937972 1063162 main.go:141] libmachine: (addons-685520) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:98:d7", ip: ""} in network mk-addons-685520: {Iface:virbr1 ExpiryTime:2024-07-29 19:17:41 +0000 UTC Type:0 Mac:52:54:00:5a:98:d7 Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:addons-685520 Clientid:01:52:54:00:5a:98:d7}
	I0729 18:17:51.937998 1063162 main.go:141] libmachine: (addons-685520) DBG | domain addons-685520 has defined IP address 192.168.39.137 and MAC address 52:54:00:5a:98:d7 in network mk-addons-685520
	I0729 18:17:51.938234 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHPort
	I0729 18:17:51.938433 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHKeyPath
	I0729 18:17:51.938575 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHKeyPath
	I0729 18:17:51.938706 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHUsername
	I0729 18:17:51.938868 1063162 main.go:141] libmachine: Using SSH client type: native
	I0729 18:17:51.939079 1063162 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.137 22 <nil> <nil>}
	I0729 18:17:51.939096 1063162 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 18:17:52.219421 1063162 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 18:17:52.219449 1063162 main.go:141] libmachine: Checking connection to Docker...
	I0729 18:17:52.219457 1063162 main.go:141] libmachine: (addons-685520) Calling .GetURL
	I0729 18:17:52.220680 1063162 main.go:141] libmachine: (addons-685520) DBG | Using libvirt version 6000000
	I0729 18:17:52.222536 1063162 main.go:141] libmachine: (addons-685520) DBG | domain addons-685520 has defined MAC address 52:54:00:5a:98:d7 in network mk-addons-685520
	I0729 18:17:52.222891 1063162 main.go:141] libmachine: (addons-685520) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:98:d7", ip: ""} in network mk-addons-685520: {Iface:virbr1 ExpiryTime:2024-07-29 19:17:41 +0000 UTC Type:0 Mac:52:54:00:5a:98:d7 Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:addons-685520 Clientid:01:52:54:00:5a:98:d7}
	I0729 18:17:52.222922 1063162 main.go:141] libmachine: (addons-685520) DBG | domain addons-685520 has defined IP address 192.168.39.137 and MAC address 52:54:00:5a:98:d7 in network mk-addons-685520
	I0729 18:17:52.223042 1063162 main.go:141] libmachine: Docker is up and running!
	I0729 18:17:52.223059 1063162 main.go:141] libmachine: Reticulating splines...
	I0729 18:17:52.223069 1063162 client.go:171] duration metric: took 25.508917331s to LocalClient.Create
	I0729 18:17:52.223100 1063162 start.go:167] duration metric: took 25.508987948s to libmachine.API.Create "addons-685520"
	I0729 18:17:52.223114 1063162 start.go:293] postStartSetup for "addons-685520" (driver="kvm2")
	I0729 18:17:52.223128 1063162 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 18:17:52.223154 1063162 main.go:141] libmachine: (addons-685520) Calling .DriverName
	I0729 18:17:52.223363 1063162 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 18:17:52.223390 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHHostname
	I0729 18:17:52.225300 1063162 main.go:141] libmachine: (addons-685520) DBG | domain addons-685520 has defined MAC address 52:54:00:5a:98:d7 in network mk-addons-685520
	I0729 18:17:52.225623 1063162 main.go:141] libmachine: (addons-685520) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:98:d7", ip: ""} in network mk-addons-685520: {Iface:virbr1 ExpiryTime:2024-07-29 19:17:41 +0000 UTC Type:0 Mac:52:54:00:5a:98:d7 Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:addons-685520 Clientid:01:52:54:00:5a:98:d7}
	I0729 18:17:52.225654 1063162 main.go:141] libmachine: (addons-685520) DBG | domain addons-685520 has defined IP address 192.168.39.137 and MAC address 52:54:00:5a:98:d7 in network mk-addons-685520
	I0729 18:17:52.225735 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHPort
	I0729 18:17:52.225910 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHKeyPath
	I0729 18:17:52.226053 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHUsername
	I0729 18:17:52.226178 1063162 sshutil.go:53] new ssh client: &{IP:192.168.39.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/addons-685520/id_rsa Username:docker}
	I0729 18:17:52.304794 1063162 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 18:17:52.308972 1063162 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 18:17:52.308998 1063162 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-1055011/.minikube/addons for local assets ...
	I0729 18:17:52.309067 1063162 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-1055011/.minikube/files for local assets ...
	I0729 18:17:52.309095 1063162 start.go:296] duration metric: took 85.973543ms for postStartSetup
	I0729 18:17:52.309154 1063162 main.go:141] libmachine: (addons-685520) Calling .GetConfigRaw
	I0729 18:17:52.309799 1063162 main.go:141] libmachine: (addons-685520) Calling .GetIP
	I0729 18:17:52.312220 1063162 main.go:141] libmachine: (addons-685520) DBG | domain addons-685520 has defined MAC address 52:54:00:5a:98:d7 in network mk-addons-685520
	I0729 18:17:52.312529 1063162 main.go:141] libmachine: (addons-685520) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:98:d7", ip: ""} in network mk-addons-685520: {Iface:virbr1 ExpiryTime:2024-07-29 19:17:41 +0000 UTC Type:0 Mac:52:54:00:5a:98:d7 Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:addons-685520 Clientid:01:52:54:00:5a:98:d7}
	I0729 18:17:52.312563 1063162 main.go:141] libmachine: (addons-685520) DBG | domain addons-685520 has defined IP address 192.168.39.137 and MAC address 52:54:00:5a:98:d7 in network mk-addons-685520
	I0729 18:17:52.312802 1063162 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/addons-685520/config.json ...
	I0729 18:17:52.312991 1063162 start.go:128] duration metric: took 25.615954023s to createHost
	I0729 18:17:52.313017 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHHostname
	I0729 18:17:52.314926 1063162 main.go:141] libmachine: (addons-685520) DBG | domain addons-685520 has defined MAC address 52:54:00:5a:98:d7 in network mk-addons-685520
	I0729 18:17:52.315206 1063162 main.go:141] libmachine: (addons-685520) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:98:d7", ip: ""} in network mk-addons-685520: {Iface:virbr1 ExpiryTime:2024-07-29 19:17:41 +0000 UTC Type:0 Mac:52:54:00:5a:98:d7 Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:addons-685520 Clientid:01:52:54:00:5a:98:d7}
	I0729 18:17:52.315225 1063162 main.go:141] libmachine: (addons-685520) DBG | domain addons-685520 has defined IP address 192.168.39.137 and MAC address 52:54:00:5a:98:d7 in network mk-addons-685520
	I0729 18:17:52.315372 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHPort
	I0729 18:17:52.315557 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHKeyPath
	I0729 18:17:52.315721 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHKeyPath
	I0729 18:17:52.315830 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHUsername
	I0729 18:17:52.315986 1063162 main.go:141] libmachine: Using SSH client type: native
	I0729 18:17:52.316147 1063162 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.137 22 <nil> <nil>}
	I0729 18:17:52.316157 1063162 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0729 18:17:52.415668 1063162 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722277072.398150012
	
	I0729 18:17:52.415697 1063162 fix.go:216] guest clock: 1722277072.398150012
	I0729 18:17:52.415707 1063162 fix.go:229] Guest: 2024-07-29 18:17:52.398150012 +0000 UTC Remote: 2024-07-29 18:17:52.31300445 +0000 UTC m=+25.712117221 (delta=85.145562ms)
	I0729 18:17:52.415769 1063162 fix.go:200] guest clock delta is within tolerance: 85.145562ms
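
The clock check above compares the guest's wall clock (read with "date +%s.%N" over SSH) against the host-side timestamp and accepts small skew. A tiny illustrative Go version using the two timestamps from the log (the one-second tolerance is an assumption; the real threshold lives in fix.go):

package main

import (
	"fmt"
	"time"
)

func main() {
	// Guest clock as reported by the SSH command (value taken from the log above).
	guest := time.Unix(0, int64(1722277072.398150012*float64(time.Second)))
	// Host-side timestamp recorded around the same moment.
	remote := time.Date(2024, 7, 29, 18, 17, 52, 313004450, time.UTC)

	delta := guest.Sub(remote)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = time.Second // assumed threshold for this sketch
	fmt.Printf("guest clock delta %v within tolerance: %v\n", delta, delta <= tolerance)
}
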
	I0729 18:17:52.415777 1063162 start.go:83] releasing machines lock for "addons-685520", held for 25.718870288s
	I0729 18:17:52.415810 1063162 main.go:141] libmachine: (addons-685520) Calling .DriverName
	I0729 18:17:52.416109 1063162 main.go:141] libmachine: (addons-685520) Calling .GetIP
	I0729 18:17:52.418579 1063162 main.go:141] libmachine: (addons-685520) DBG | domain addons-685520 has defined MAC address 52:54:00:5a:98:d7 in network mk-addons-685520
	I0729 18:17:52.418968 1063162 main.go:141] libmachine: (addons-685520) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:98:d7", ip: ""} in network mk-addons-685520: {Iface:virbr1 ExpiryTime:2024-07-29 19:17:41 +0000 UTC Type:0 Mac:52:54:00:5a:98:d7 Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:addons-685520 Clientid:01:52:54:00:5a:98:d7}
	I0729 18:17:52.419002 1063162 main.go:141] libmachine: (addons-685520) DBG | domain addons-685520 has defined IP address 192.168.39.137 and MAC address 52:54:00:5a:98:d7 in network mk-addons-685520
	I0729 18:17:52.419141 1063162 main.go:141] libmachine: (addons-685520) Calling .DriverName
	I0729 18:17:52.419596 1063162 main.go:141] libmachine: (addons-685520) Calling .DriverName
	I0729 18:17:52.419781 1063162 main.go:141] libmachine: (addons-685520) Calling .DriverName
	I0729 18:17:52.419898 1063162 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 18:17:52.419958 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHHostname
	I0729 18:17:52.420004 1063162 ssh_runner.go:195] Run: cat /version.json
	I0729 18:17:52.420033 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHHostname
	I0729 18:17:52.422381 1063162 main.go:141] libmachine: (addons-685520) DBG | domain addons-685520 has defined MAC address 52:54:00:5a:98:d7 in network mk-addons-685520
	I0729 18:17:52.422618 1063162 main.go:141] libmachine: (addons-685520) DBG | domain addons-685520 has defined MAC address 52:54:00:5a:98:d7 in network mk-addons-685520
	I0729 18:17:52.422725 1063162 main.go:141] libmachine: (addons-685520) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:98:d7", ip: ""} in network mk-addons-685520: {Iface:virbr1 ExpiryTime:2024-07-29 19:17:41 +0000 UTC Type:0 Mac:52:54:00:5a:98:d7 Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:addons-685520 Clientid:01:52:54:00:5a:98:d7}
	I0729 18:17:52.422751 1063162 main.go:141] libmachine: (addons-685520) DBG | domain addons-685520 has defined IP address 192.168.39.137 and MAC address 52:54:00:5a:98:d7 in network mk-addons-685520
	I0729 18:17:52.422884 1063162 main.go:141] libmachine: (addons-685520) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:98:d7", ip: ""} in network mk-addons-685520: {Iface:virbr1 ExpiryTime:2024-07-29 19:17:41 +0000 UTC Type:0 Mac:52:54:00:5a:98:d7 Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:addons-685520 Clientid:01:52:54:00:5a:98:d7}
	I0729 18:17:52.422895 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHPort
	I0729 18:17:52.422910 1063162 main.go:141] libmachine: (addons-685520) DBG | domain addons-685520 has defined IP address 192.168.39.137 and MAC address 52:54:00:5a:98:d7 in network mk-addons-685520
	I0729 18:17:52.423104 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHKeyPath
	I0729 18:17:52.423120 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHPort
	I0729 18:17:52.423292 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHKeyPath
	I0729 18:17:52.423302 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHUsername
	I0729 18:17:52.423473 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHUsername
	I0729 18:17:52.423478 1063162 sshutil.go:53] new ssh client: &{IP:192.168.39.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/addons-685520/id_rsa Username:docker}
	I0729 18:17:52.423585 1063162 sshutil.go:53] new ssh client: &{IP:192.168.39.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/addons-685520/id_rsa Username:docker}
	I0729 18:17:52.495937 1063162 ssh_runner.go:195] Run: systemctl --version
	I0729 18:17:52.521164 1063162 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 18:17:52.674215 1063162 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 18:17:52.679991 1063162 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 18:17:52.680059 1063162 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 18:17:52.695810 1063162 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 18:17:52.695833 1063162 start.go:495] detecting cgroup driver to use...
	I0729 18:17:52.695899 1063162 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 18:17:52.712166 1063162 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 18:17:52.725431 1063162 docker.go:217] disabling cri-docker service (if available) ...
	I0729 18:17:52.725487 1063162 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 18:17:52.738640 1063162 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 18:17:52.751930 1063162 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 18:17:52.859138 1063162 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 18:17:52.984987 1063162 docker.go:233] disabling docker service ...
	I0729 18:17:52.985064 1063162 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 18:17:52.998872 1063162 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 18:17:53.011183 1063162 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 18:17:53.143979 1063162 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 18:17:53.254262 1063162 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 18:17:53.267347 1063162 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 18:17:53.284938 1063162 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0729 18:17:53.285001 1063162 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:17:53.294699 1063162 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 18:17:53.294775 1063162 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:17:53.304377 1063162 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:17:53.313903 1063162 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:17:53.323393 1063162 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 18:17:53.333148 1063162 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:17:53.342512 1063162 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:17:53.358251 1063162 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:17:53.367577 1063162 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 18:17:53.375985 1063162 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 18:17:53.376025 1063162 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 18:17:53.387531 1063162 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 18:17:53.396063 1063162 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 18:17:53.506702 1063162 ssh_runner.go:195] Run: sudo systemctl restart crio
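
The series of "sed -i" edits above rewrites /etc/crio/crio.conf.d/02-crio.conf in place (pause image, cgroup manager, conmon cgroup, sysctls) before crio is restarted. The following Go snippet sketches the same effect for the first two substitutions; paths and values mirror the log, but this is an illustration, not minikube's code.

package main

import (
	"log"
	"os"
	"regexp"
)

func main() {
	const path = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(path)
	if err != nil {
		log.Fatal(err)
	}

	// Point CRI-O at the pause image kubeadm expects.
	data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.9"`))
	// Match the kubelet's cgroup driver.
	data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(data, []byte(`cgroup_manager = "cgroupfs"`))

	if err := os.WriteFile(path, data, 0o644); err != nil {
		log.Fatal(err)
	}
}
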
	I0729 18:17:53.637567 1063162 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 18:17:53.637671 1063162 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 18:17:53.642267 1063162 start.go:563] Will wait 60s for crictl version
	I0729 18:17:53.642319 1063162 ssh_runner.go:195] Run: which crictl
	I0729 18:17:53.645902 1063162 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 18:17:53.687089 1063162 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 18:17:53.687212 1063162 ssh_runner.go:195] Run: crio --version
	I0729 18:17:53.713135 1063162 ssh_runner.go:195] Run: crio --version
	I0729 18:17:53.740297 1063162 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0729 18:17:53.741307 1063162 main.go:141] libmachine: (addons-685520) Calling .GetIP
	I0729 18:17:53.743773 1063162 main.go:141] libmachine: (addons-685520) DBG | domain addons-685520 has defined MAC address 52:54:00:5a:98:d7 in network mk-addons-685520
	I0729 18:17:53.744082 1063162 main.go:141] libmachine: (addons-685520) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:98:d7", ip: ""} in network mk-addons-685520: {Iface:virbr1 ExpiryTime:2024-07-29 19:17:41 +0000 UTC Type:0 Mac:52:54:00:5a:98:d7 Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:addons-685520 Clientid:01:52:54:00:5a:98:d7}
	I0729 18:17:53.744128 1063162 main.go:141] libmachine: (addons-685520) DBG | domain addons-685520 has defined IP address 192.168.39.137 and MAC address 52:54:00:5a:98:d7 in network mk-addons-685520
	I0729 18:17:53.744268 1063162 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0729 18:17:53.747913 1063162 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 18:17:53.759660 1063162 kubeadm.go:883] updating cluster {Name:addons-685520 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.
3 ClusterName:addons-685520 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.137 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountT
ype:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 18:17:53.759770 1063162 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 18:17:53.759810 1063162 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 18:17:53.790863 1063162 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0729 18:17:53.790955 1063162 ssh_runner.go:195] Run: which lz4
	I0729 18:17:53.794728 1063162 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0729 18:17:53.798704 1063162 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0729 18:17:53.798733 1063162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0729 18:17:55.120480 1063162 crio.go:462] duration metric: took 1.325790418s to copy over tarball
	I0729 18:17:55.120550 1063162 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0729 18:17:57.319798 1063162 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.199214521s)
	I0729 18:17:57.319829 1063162 crio.go:469] duration metric: took 2.199321686s to extract the tarball
	I0729 18:17:57.319837 1063162 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0729 18:17:57.358781 1063162 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 18:17:57.402948 1063162 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 18:17:57.402977 1063162 cache_images.go:84] Images are preloaded, skipping loading
	I0729 18:17:57.402986 1063162 kubeadm.go:934] updating node { 192.168.39.137 8443 v1.30.3 crio true true} ...
	I0729 18:17:57.403096 1063162 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-685520 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.137
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:addons-685520 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 18:17:57.403164 1063162 ssh_runner.go:195] Run: crio config
	I0729 18:17:57.460297 1063162 cni.go:84] Creating CNI manager for ""
	I0729 18:17:57.460321 1063162 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 18:17:57.460333 1063162 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 18:17:57.460365 1063162 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.137 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-685520 NodeName:addons-685520 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.137"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.137 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/k
ubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 18:17:57.460606 1063162 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.137
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-685520"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.137
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.137"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0729 18:17:57.460686 1063162 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0729 18:17:57.470694 1063162 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 18:17:57.470769 1063162 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 18:17:57.480262 1063162 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0729 18:17:57.496454 1063162 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 18:17:57.512246 1063162 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
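The three YAML documents dumped above are staged on the node as /var/tmp/minikube/kubeadm.yaml.new (2157 bytes here) and later promoted to /var/tmp/minikube/kubeadm.yaml before kubeadm runs. As a minimal sketch (binary and config paths are taken from this log), the generated config can be exercised without committing the node to a real init:

# Show kubeadm's built-in defaults for comparison with the generated config
sudo /var/lib/minikube/binaries/v1.30.3/kubeadm config print init-defaults

# Dry-run the init against the generated file; intended not to persist cluster state
sudo /var/lib/minikube/binaries/v1.30.3/kubeadm init \
  --config /var/tmp/minikube/kubeadm.yaml --dry-run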
	I0729 18:17:57.527977 1063162 ssh_runner.go:195] Run: grep 192.168.39.137	control-plane.minikube.internal$ /etc/hosts
	I0729 18:17:57.531655 1063162 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.137	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 18:17:57.543693 1063162 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 18:17:57.671958 1063162 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 18:17:57.688826 1063162 certs.go:68] Setting up /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/addons-685520 for IP: 192.168.39.137
	I0729 18:17:57.688850 1063162 certs.go:194] generating shared ca certs ...
	I0729 18:17:57.688875 1063162 certs.go:226] acquiring lock for ca certs: {Name:mkd1f0b3d7e82ac23e713dd6b75409e103935b02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 18:17:57.689025 1063162 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.key
	I0729 18:17:57.862993 1063162 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.crt ...
	I0729 18:17:57.863027 1063162 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.crt: {Name:mk9f304cf49c7d2aa9b461e4f3ca18d09f0cad83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 18:17:57.863208 1063162 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.key ...
	I0729 18:17:57.863219 1063162 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.key: {Name:mkab627b76824e32d9f70531bc9f1fd6eeb74b87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 18:17:57.863295 1063162 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/proxy-client-ca.key
	I0729 18:17:57.952352 1063162 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19312-1055011/.minikube/proxy-client-ca.crt ...
	I0729 18:17:57.952381 1063162 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-1055011/.minikube/proxy-client-ca.crt: {Name:mk12989f28a3c6ca3daca4dc40bcb2f8edc6b8a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 18:17:57.952535 1063162 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19312-1055011/.minikube/proxy-client-ca.key ...
	I0729 18:17:57.952548 1063162 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-1055011/.minikube/proxy-client-ca.key: {Name:mk7dace36ae553a84062c4457d59130f9f8809f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 18:17:57.952638 1063162 certs.go:256] generating profile certs ...
	I0729 18:17:57.952711 1063162 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/addons-685520/client.key
	I0729 18:17:57.952725 1063162 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/addons-685520/client.crt with IP's: []
	I0729 18:17:58.011535 1063162 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/addons-685520/client.crt ...
	I0729 18:17:58.011565 1063162 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/addons-685520/client.crt: {Name:mk969dbd6edf45753b9b2fba68004f24b24fa7c2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 18:17:58.011717 1063162 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/addons-685520/client.key ...
	I0729 18:17:58.011728 1063162 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/addons-685520/client.key: {Name:mk12e25d6383e86bb755720ac4be733251d0e975 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 18:17:58.011791 1063162 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/addons-685520/apiserver.key.ef6470c2
	I0729 18:17:58.011809 1063162 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/addons-685520/apiserver.crt.ef6470c2 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.137]
	I0729 18:17:58.107099 1063162 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/addons-685520/apiserver.crt.ef6470c2 ...
	I0729 18:17:58.107130 1063162 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/addons-685520/apiserver.crt.ef6470c2: {Name:mke734cbe519b247c9a5babef04cba4185efc323 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 18:17:58.107289 1063162 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/addons-685520/apiserver.key.ef6470c2 ...
	I0729 18:17:58.107302 1063162 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/addons-685520/apiserver.key.ef6470c2: {Name:mkd3dc52b7d7fc07280178029f636d3cacf4490e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 18:17:58.107362 1063162 certs.go:381] copying /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/addons-685520/apiserver.crt.ef6470c2 -> /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/addons-685520/apiserver.crt
	I0729 18:17:58.107441 1063162 certs.go:385] copying /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/addons-685520/apiserver.key.ef6470c2 -> /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/addons-685520/apiserver.key
	I0729 18:17:58.107489 1063162 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/addons-685520/proxy-client.key
	I0729 18:17:58.107507 1063162 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/addons-685520/proxy-client.crt with IP's: []
	I0729 18:17:58.272506 1063162 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/addons-685520/proxy-client.crt ...
	I0729 18:17:58.272538 1063162 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/addons-685520/proxy-client.crt: {Name:mk9d9c105be286c2d9ecb17af8e01253b559066a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 18:17:58.272695 1063162 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/addons-685520/proxy-client.key ...
	I0729 18:17:58.272708 1063162 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/addons-685520/proxy-client.key: {Name:mk8b87660164a26c8b76ad308286e5e101f93be5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 18:17:58.272870 1063162 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 18:17:58.272915 1063162 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem (1082 bytes)
	I0729 18:17:58.272940 1063162 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/cert.pem (1123 bytes)
	I0729 18:17:58.272975 1063162 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/key.pem (1679 bytes)
	I0729 18:17:58.273628 1063162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 18:17:58.297868 1063162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0729 18:17:58.321500 1063162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 18:17:58.344382 1063162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0729 18:17:58.368279 1063162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/addons-685520/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0729 18:17:58.401675 1063162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/addons-685520/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0729 18:17:58.429119 1063162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/addons-685520/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 18:17:58.452162 1063162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/addons-685520/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
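With the shared CA, profile, and proxy-client material now copied into /var/lib/minikube/certs, the SANs requested when the API server certificate was generated (10.96.0.1, 127.0.0.1, 10.0.0.1 and 192.168.39.137) can be confirmed on the node, for example:

sudo openssl x509 -in /var/lib/minikube/certs/apiserver.crt -noout -text \
  | grep -A1 'Subject Alternative Name'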
	I0729 18:17:58.474534 1063162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 18:17:58.496480 1063162 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 18:17:58.512199 1063162 ssh_runner.go:195] Run: openssl version
	I0729 18:17:58.517611 1063162 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 18:17:58.527977 1063162 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 18:17:58.532366 1063162 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 18:17 /usr/share/ca-certificates/minikubeCA.pem
	I0729 18:17:58.532432 1063162 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 18:17:58.538133 1063162 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
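The two commands above follow the standard OpenSSL trust-store convention: the certificate's subject hash becomes the symlink name under /etc/ssl/certs (b5213941.0 for minikubeCA.pem). A stand-alone equivalent of the same idiom, with the hash computed rather than hard-coded:

# Link the CA into the trust store under its subject-hash name
hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"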
	I0729 18:17:58.548825 1063162 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 18:17:58.552803 1063162 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0729 18:17:58.552859 1063162 kubeadm.go:392] StartCluster: {Name:addons-685520 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-685520 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.137 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 18:17:58.552957 1063162 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 18:17:58.553007 1063162 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 18:17:58.588846 1063162 cri.go:89] found id: ""
	I0729 18:17:58.588931 1063162 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 18:17:58.598900 1063162 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 18:17:58.610086 1063162 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 18:17:58.624600 1063162 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 18:17:58.624621 1063162 kubeadm.go:157] found existing configuration files:
	
	I0729 18:17:58.624671 1063162 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 18:17:58.634416 1063162 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 18:17:58.634477 1063162 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 18:17:58.643426 1063162 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 18:17:58.652016 1063162 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 18:17:58.652069 1063162 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 18:17:58.660901 1063162 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 18:17:58.669343 1063162 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 18:17:58.669386 1063162 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 18:17:58.678312 1063162 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 18:17:58.686734 1063162 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 18:17:58.686781 1063162 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 18:17:58.695954 1063162 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 18:17:58.884970 1063162 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 18:18:09.044876 1063162 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0729 18:18:09.044930 1063162 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 18:18:09.045012 1063162 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 18:18:09.045147 1063162 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 18:18:09.045289 1063162 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0729 18:18:09.045352 1063162 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 18:18:09.046558 1063162 out.go:204]   - Generating certificates and keys ...
	I0729 18:18:09.046649 1063162 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 18:18:09.046724 1063162 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 18:18:09.046809 1063162 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0729 18:18:09.046898 1063162 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0729 18:18:09.046997 1063162 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0729 18:18:09.047042 1063162 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0729 18:18:09.047100 1063162 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0729 18:18:09.047260 1063162 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-685520 localhost] and IPs [192.168.39.137 127.0.0.1 ::1]
	I0729 18:18:09.047315 1063162 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0729 18:18:09.047470 1063162 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-685520 localhost] and IPs [192.168.39.137 127.0.0.1 ::1]
	I0729 18:18:09.047568 1063162 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0729 18:18:09.047659 1063162 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0729 18:18:09.047701 1063162 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0729 18:18:09.047775 1063162 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 18:18:09.047845 1063162 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 18:18:09.047924 1063162 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0729 18:18:09.047998 1063162 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 18:18:09.048085 1063162 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 18:18:09.048167 1063162 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 18:18:09.048278 1063162 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 18:18:09.048347 1063162 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 18:18:09.049417 1063162 out.go:204]   - Booting up control plane ...
	I0729 18:18:09.049503 1063162 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 18:18:09.049577 1063162 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 18:18:09.049634 1063162 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 18:18:09.049730 1063162 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 18:18:09.049816 1063162 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 18:18:09.049872 1063162 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 18:18:09.050010 1063162 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0729 18:18:09.050073 1063162 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0729 18:18:09.050124 1063162 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.02152ms
	I0729 18:18:09.050188 1063162 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0729 18:18:09.050251 1063162 kubeadm.go:310] [api-check] The API server is healthy after 5.001159759s
	I0729 18:18:09.050340 1063162 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0729 18:18:09.050454 1063162 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0729 18:18:09.050517 1063162 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0729 18:18:09.050687 1063162 kubeadm.go:310] [mark-control-plane] Marking the node addons-685520 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0729 18:18:09.050770 1063162 kubeadm.go:310] [bootstrap-token] Using token: h69d3p.copq4a8ve97e77q5
	I0729 18:18:09.052052 1063162 out.go:204]   - Configuring RBAC rules ...
	I0729 18:18:09.052158 1063162 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0729 18:18:09.052260 1063162 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0729 18:18:09.052460 1063162 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0729 18:18:09.052572 1063162 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0729 18:18:09.052665 1063162 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0729 18:18:09.052766 1063162 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0729 18:18:09.052907 1063162 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0729 18:18:09.052964 1063162 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0729 18:18:09.053014 1063162 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0729 18:18:09.053023 1063162 kubeadm.go:310] 
	I0729 18:18:09.053087 1063162 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0729 18:18:09.053097 1063162 kubeadm.go:310] 
	I0729 18:18:09.053197 1063162 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0729 18:18:09.053207 1063162 kubeadm.go:310] 
	I0729 18:18:09.053258 1063162 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0729 18:18:09.053342 1063162 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0729 18:18:09.053416 1063162 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0729 18:18:09.053425 1063162 kubeadm.go:310] 
	I0729 18:18:09.053502 1063162 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0729 18:18:09.053510 1063162 kubeadm.go:310] 
	I0729 18:18:09.053585 1063162 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0729 18:18:09.053594 1063162 kubeadm.go:310] 
	I0729 18:18:09.053679 1063162 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0729 18:18:09.053766 1063162 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0729 18:18:09.053869 1063162 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0729 18:18:09.053881 1063162 kubeadm.go:310] 
	I0729 18:18:09.053986 1063162 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0729 18:18:09.054090 1063162 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0729 18:18:09.054100 1063162 kubeadm.go:310] 
	I0729 18:18:09.054213 1063162 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token h69d3p.copq4a8ve97e77q5 \
	I0729 18:18:09.054362 1063162 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:53e118302d1159bf0acd8cfe005edb508199276288ff17d6630ff7f7307f10b1 \
	I0729 18:18:09.054383 1063162 kubeadm.go:310] 	--control-plane 
	I0729 18:18:09.054387 1063162 kubeadm.go:310] 
	I0729 18:18:09.054467 1063162 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0729 18:18:09.054474 1063162 kubeadm.go:310] 
	I0729 18:18:09.054571 1063162 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token h69d3p.copq4a8ve97e77q5 \
	I0729 18:18:09.054686 1063162 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:53e118302d1159bf0acd8cfe005edb508199276288ff17d6630ff7f7307f10b1 
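At this point kubeadm has written the admin kubeconfig to /etc/kubernetes/admin.conf (see the instructions it prints above), so the control plane can be queried directly on the node, for example:

sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig /etc/kubernetes/admin.conf get nodes -o wide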
	I0729 18:18:09.054697 1063162 cni.go:84] Creating CNI manager for ""
	I0729 18:18:09.054706 1063162 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 18:18:09.055974 1063162 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 18:18:09.056889 1063162 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 18:18:09.067139 1063162 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
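The 496-byte conflist written to /etc/cni/net.d/1-k8s.conflist is not echoed in the log. A representative bridge configuration for the 10.244.0.0/16 pod CIDR chosen earlier looks roughly like the sketch below; the field names come from the upstream bridge, host-local and portmap CNI plugins, and the exact values minikube writes are assumed rather than copied:

# Illustrative bridge CNI conflist (contents assumed, not taken from the log)
sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
{
  "cniVersion": "0.4.0",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
EOF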
	I0729 18:18:09.086094 1063162 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0729 18:18:09.086206 1063162 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:18:09.086258 1063162 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-685520 minikube.k8s.io/updated_at=2024_07_29T18_18_09_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=ff26f50543a99741e8a988a34136ffea2b41dbf0 minikube.k8s.io/name=addons-685520 minikube.k8s.io/primary=true
	I0729 18:18:09.122511 1063162 ops.go:34] apiserver oom_adj: -16
	I0729 18:18:09.212623 1063162 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:18:09.712958 1063162 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:18:10.212763 1063162 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:18:10.713244 1063162 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:18:11.213676 1063162 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:18:11.713400 1063162 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:18:12.212862 1063162 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:18:12.713085 1063162 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:18:13.212875 1063162 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:18:13.712988 1063162 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:18:14.213640 1063162 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:18:14.713071 1063162 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:18:15.212659 1063162 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:18:15.712896 1063162 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:18:16.213372 1063162 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:18:16.713525 1063162 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:18:17.212719 1063162 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:18:17.712844 1063162 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:18:18.212963 1063162 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:18:18.713669 1063162 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:18:19.213140 1063162 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:18:19.713429 1063162 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:18:20.213564 1063162 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:18:20.712733 1063162 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:18:21.212707 1063162 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:18:21.713452 1063162 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:18:22.213588 1063162 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:18:22.330351 1063162 kubeadm.go:1113] duration metric: took 13.244198827s to wait for elevateKubeSystemPrivileges
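The burst of `kubectl get sa default` calls above (one every ~500ms) is how minikube waits for the default service account to exist before granting kube-system elevated privileges; the controller-manager only creates it once the control plane is fully functional. The same wait can be written as a small polling loop; the 60-second budget below is an assumption, while the command and kubeconfig path are taken from the log:

# Poll until the "default" ServiceAccount appears (assumed 60s budget)
for _ in $(seq 1 120); do
  if sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default \
       --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; then
    echo "default service account present"
    break
  fi
  sleep 0.5
done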
	I0729 18:18:22.330401 1063162 kubeadm.go:394] duration metric: took 23.777548413s to StartCluster
	I0729 18:18:22.330430 1063162 settings.go:142] acquiring lock: {Name:mk8657322241b3b1f65443d6cee1b2ccb99f315e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 18:18:22.330642 1063162 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19312-1055011/kubeconfig
	I0729 18:18:22.331096 1063162 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-1055011/kubeconfig: {Name:mkf834b33d9b214f3561db5b8f8958d26700afbb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 18:18:22.331351 1063162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0729 18:18:22.331377 1063162 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0729 18:18:22.331352 1063162 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.137 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
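Every entry set to true in the toEnable map above is an addon that will be installed into this profile; the same switches are available from the minikube CLI, for example:

# Toggle individual addons on the addons-685520 profile
minikube -p addons-685520 addons enable metrics-server
minikube -p addons-685520 addons disable volcano
minikube -p addons-685520 addons list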
	I0729 18:18:22.331498 1063162 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-685520"
	I0729 18:18:22.331503 1063162 addons.go:69] Setting storage-provisioner=true in profile "addons-685520"
	I0729 18:18:22.331530 1063162 addons.go:234] Setting addon storage-provisioner=true in "addons-685520"
	I0729 18:18:22.331548 1063162 addons.go:69] Setting inspektor-gadget=true in profile "addons-685520"
	I0729 18:18:22.331565 1063162 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-685520"
	I0729 18:18:22.331580 1063162 config.go:182] Loaded profile config "addons-685520": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 18:18:22.331597 1063162 host.go:66] Checking if "addons-685520" exists ...
	I0729 18:18:22.331603 1063162 addons.go:69] Setting metrics-server=true in profile "addons-685520"
	I0729 18:18:22.331602 1063162 addons.go:69] Setting volcano=true in profile "addons-685520"
	I0729 18:18:22.331583 1063162 addons.go:234] Setting addon inspektor-gadget=true in "addons-685520"
	I0729 18:18:22.331612 1063162 addons.go:69] Setting volumesnapshots=true in profile "addons-685520"
	I0729 18:18:22.331629 1063162 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-685520"
	I0729 18:18:22.331580 1063162 host.go:66] Checking if "addons-685520" exists ...
	I0729 18:18:22.331650 1063162 host.go:66] Checking if "addons-685520" exists ...
	I0729 18:18:22.331677 1063162 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-685520"
	I0729 18:18:22.331680 1063162 addons.go:234] Setting addon volumesnapshots=true in "addons-685520"
	I0729 18:18:22.331714 1063162 host.go:66] Checking if "addons-685520" exists ...
	I0729 18:18:22.331722 1063162 host.go:66] Checking if "addons-685520" exists ...
	I0729 18:18:22.331491 1063162 addons.go:69] Setting yakd=true in profile "addons-685520"
	I0729 18:18:22.331747 1063162 addons.go:69] Setting helm-tiller=true in profile "addons-685520"
	I0729 18:18:22.331765 1063162 addons.go:234] Setting addon yakd=true in "addons-685520"
	I0729 18:18:22.331785 1063162 host.go:66] Checking if "addons-685520" exists ...
	I0729 18:18:22.331805 1063162 addons.go:234] Setting addon helm-tiller=true in "addons-685520"
	I0729 18:18:22.331832 1063162 host.go:66] Checking if "addons-685520" exists ...
	I0729 18:18:22.331484 1063162 addons.go:69] Setting ingress-dns=true in profile "addons-685520"
	I0729 18:18:22.331942 1063162 addons.go:234] Setting addon ingress-dns=true in "addons-685520"
	I0729 18:18:22.331986 1063162 host.go:66] Checking if "addons-685520" exists ...
	I0729 18:18:22.332107 1063162 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-685520"
	I0729 18:18:22.332130 1063162 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:18:22.332108 1063162 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:18:22.332144 1063162 addons.go:69] Setting default-storageclass=true in profile "addons-685520"
	I0729 18:18:22.332133 1063162 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-685520"
	I0729 18:18:22.332165 1063162 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-685520"
	I0729 18:18:22.332224 1063162 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:18:22.332225 1063162 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:18:22.332253 1063162 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:18:22.332374 1063162 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:18:22.332404 1063162 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:18:22.331788 1063162 addons.go:234] Setting addon volcano=true in "addons-685520"
	I0729 18:18:22.332453 1063162 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:18:22.332455 1063162 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:18:22.332464 1063162 host.go:66] Checking if "addons-685520" exists ...
	I0729 18:18:22.332472 1063162 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:18:22.332479 1063162 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:18:22.332494 1063162 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:18:22.332512 1063162 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:18:22.332793 1063162 addons.go:69] Setting ingress=true in profile "addons-685520"
	I0729 18:18:22.332835 1063162 addons.go:234] Setting addon ingress=true in "addons-685520"
	I0729 18:18:22.331525 1063162 addons.go:69] Setting registry=true in profile "addons-685520"
	I0729 18:18:22.332865 1063162 addons.go:234] Setting addon registry=true in "addons-685520"
	I0729 18:18:22.331621 1063162 addons.go:234] Setting addon metrics-server=true in "addons-685520"
	I0729 18:18:22.332136 1063162 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:18:22.332920 1063162 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:18:22.331492 1063162 addons.go:69] Setting cloud-spanner=true in profile "addons-685520"
	I0729 18:18:22.332985 1063162 addons.go:234] Setting addon cloud-spanner=true in "addons-685520"
	I0729 18:18:22.333013 1063162 addons.go:69] Setting gcp-auth=true in profile "addons-685520"
	I0729 18:18:22.333033 1063162 mustload.go:65] Loading cluster: addons-685520
	I0729 18:18:22.333145 1063162 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:18:22.333184 1063162 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:18:22.333191 1063162 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:18:22.333225 1063162 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:18:22.333252 1063162 host.go:66] Checking if "addons-685520" exists ...
	I0729 18:18:22.333496 1063162 host.go:66] Checking if "addons-685520" exists ...
	I0729 18:18:22.333606 1063162 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:18:22.333650 1063162 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:18:22.333870 1063162 config.go:182] Loaded profile config "addons-685520": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 18:18:22.333996 1063162 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:18:22.334038 1063162 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:18:22.334217 1063162 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:18:22.334253 1063162 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:18:22.334286 1063162 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:18:22.334339 1063162 host.go:66] Checking if "addons-685520" exists ...
	I0729 18:18:22.334890 1063162 host.go:66] Checking if "addons-685520" exists ...
	I0729 18:18:22.335249 1063162 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:18:22.335269 1063162 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:18:22.339619 1063162 out.go:177] * Verifying Kubernetes components...
	I0729 18:18:22.343326 1063162 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:18:22.350963 1063162 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:18:22.352580 1063162 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 18:18:22.356686 1063162 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44083
	I0729 18:18:22.356951 1063162 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39909
	I0729 18:18:22.357497 1063162 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:18:22.357563 1063162 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:18:22.358115 1063162 main.go:141] libmachine: Using API Version  1
	I0729 18:18:22.358134 1063162 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:18:22.358135 1063162 main.go:141] libmachine: Using API Version  1
	I0729 18:18:22.358154 1063162 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:18:22.358552 1063162 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:18:22.358718 1063162 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:18:22.359297 1063162 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:18:22.359328 1063162 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:18:22.360071 1063162 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42703
	I0729 18:18:22.360594 1063162 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:18:22.360616 1063162 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:18:22.360823 1063162 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:18:22.361323 1063162 main.go:141] libmachine: Using API Version  1
	I0729 18:18:22.361351 1063162 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:18:22.361726 1063162 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:18:22.362312 1063162 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:18:22.362341 1063162 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:18:22.364969 1063162 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42611
	I0729 18:18:22.368775 1063162 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35517
	I0729 18:18:22.369851 1063162 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38853
	I0729 18:18:22.371510 1063162 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:18:22.371556 1063162 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:18:22.371728 1063162 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:18:22.371825 1063162 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:18:22.372116 1063162 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:18:22.372200 1063162 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43969
	I0729 18:18:22.372452 1063162 main.go:141] libmachine: Using API Version  1
	I0729 18:18:22.372473 1063162 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:18:22.372617 1063162 main.go:141] libmachine: Using API Version  1
	I0729 18:18:22.372630 1063162 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:18:22.372693 1063162 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:18:22.372879 1063162 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:18:22.372932 1063162 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:18:22.373041 1063162 main.go:141] libmachine: (addons-685520) Calling .GetState
	I0729 18:18:22.373303 1063162 main.go:141] libmachine: Using API Version  1
	I0729 18:18:22.373322 1063162 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:18:22.373400 1063162 main.go:141] libmachine: (addons-685520) Calling .GetState
	I0729 18:18:22.373432 1063162 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46591
	I0729 18:18:22.373661 1063162 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:18:22.373930 1063162 main.go:141] libmachine: Using API Version  1
	I0729 18:18:22.373948 1063162 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:18:22.374429 1063162 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:18:22.374473 1063162 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:18:22.374771 1063162 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:18:22.375430 1063162 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:18:22.375464 1063162 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:18:22.381625 1063162 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43275
	I0729 18:18:22.381767 1063162 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:18:22.382789 1063162 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35113
	I0729 18:18:22.383403 1063162 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-685520"
	I0729 18:18:22.383452 1063162 host.go:66] Checking if "addons-685520" exists ...
	I0729 18:18:22.383657 1063162 main.go:141] libmachine: Using API Version  1
	I0729 18:18:22.383677 1063162 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:18:22.383811 1063162 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:18:22.383840 1063162 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:18:22.384275 1063162 addons.go:234] Setting addon default-storageclass=true in "addons-685520"
	I0729 18:18:22.384321 1063162 host.go:66] Checking if "addons-685520" exists ...
	I0729 18:18:22.384675 1063162 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:18:22.384704 1063162 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:18:22.385396 1063162 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:18:22.385480 1063162 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:18:22.385536 1063162 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:18:22.386413 1063162 main.go:141] libmachine: Using API Version  1
	I0729 18:18:22.386431 1063162 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:18:22.386545 1063162 main.go:141] libmachine: Using API Version  1
	I0729 18:18:22.386555 1063162 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:18:22.386937 1063162 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:18:22.386952 1063162 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:18:22.386979 1063162 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:18:22.387508 1063162 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:18:22.387549 1063162 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:18:22.387734 1063162 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:18:22.388011 1063162 main.go:141] libmachine: (addons-685520) Calling .GetState
	I0729 18:18:22.388841 1063162 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39443
	I0729 18:18:22.389415 1063162 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:18:22.390059 1063162 main.go:141] libmachine: Using API Version  1
	I0729 18:18:22.390081 1063162 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:18:22.390584 1063162 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:18:22.391194 1063162 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:18:22.391230 1063162 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:18:22.395220 1063162 main.go:141] libmachine: (addons-685520) Calling .DriverName
	I0729 18:18:22.397103 1063162 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.30.0
	I0729 18:18:22.398116 1063162 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0729 18:18:22.398134 1063162 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0729 18:18:22.398156 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHHostname
	I0729 18:18:22.401717 1063162 main.go:141] libmachine: (addons-685520) DBG | domain addons-685520 has defined MAC address 52:54:00:5a:98:d7 in network mk-addons-685520
	I0729 18:18:22.402084 1063162 main.go:141] libmachine: (addons-685520) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:98:d7", ip: ""} in network mk-addons-685520: {Iface:virbr1 ExpiryTime:2024-07-29 19:17:41 +0000 UTC Type:0 Mac:52:54:00:5a:98:d7 Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:addons-685520 Clientid:01:52:54:00:5a:98:d7}
	I0729 18:18:22.402107 1063162 main.go:141] libmachine: (addons-685520) DBG | domain addons-685520 has defined IP address 192.168.39.137 and MAC address 52:54:00:5a:98:d7 in network mk-addons-685520
	I0729 18:18:22.402382 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHPort
	I0729 18:18:22.402593 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHKeyPath
	I0729 18:18:22.402766 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHUsername
	I0729 18:18:22.402963 1063162 sshutil.go:53] new ssh client: &{IP:192.168.39.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/addons-685520/id_rsa Username:docker}
	I0729 18:18:22.409105 1063162 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35381
	I0729 18:18:22.409699 1063162 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:18:22.410306 1063162 main.go:141] libmachine: Using API Version  1
	I0729 18:18:22.410331 1063162 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:18:22.410783 1063162 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:18:22.411064 1063162 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33337
	I0729 18:18:22.411199 1063162 main.go:141] libmachine: (addons-685520) Calling .GetState
	I0729 18:18:22.412206 1063162 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:18:22.412868 1063162 main.go:141] libmachine: Using API Version  1
	I0729 18:18:22.412888 1063162 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:18:22.412954 1063162 main.go:141] libmachine: (addons-685520) Calling .DriverName
	I0729 18:18:22.413719 1063162 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:18:22.415278 1063162 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43095
	I0729 18:18:22.415738 1063162 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:18:22.416307 1063162 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0729 18:18:22.416309 1063162 main.go:141] libmachine: Using API Version  1
	I0729 18:18:22.416453 1063162 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:18:22.416922 1063162 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:18:22.417164 1063162 main.go:141] libmachine: (addons-685520) Calling .GetState
	I0729 18:18:22.417236 1063162 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33191
	I0729 18:18:22.417800 1063162 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0729 18:18:22.417819 1063162 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0729 18:18:22.417840 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHHostname
	I0729 18:18:22.418143 1063162 main.go:141] libmachine: (addons-685520) Calling .GetState
	I0729 18:18:22.418203 1063162 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:18:22.418724 1063162 main.go:141] libmachine: Using API Version  1
	I0729 18:18:22.418779 1063162 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:18:22.419286 1063162 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:18:22.419891 1063162 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:18:22.419928 1063162 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:18:22.420268 1063162 main.go:141] libmachine: (addons-685520) Calling .DriverName
	I0729 18:18:22.421244 1063162 main.go:141] libmachine: (addons-685520) DBG | domain addons-685520 has defined MAC address 52:54:00:5a:98:d7 in network mk-addons-685520
	I0729 18:18:22.421715 1063162 main.go:141] libmachine: (addons-685520) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:98:d7", ip: ""} in network mk-addons-685520: {Iface:virbr1 ExpiryTime:2024-07-29 19:17:41 +0000 UTC Type:0 Mac:52:54:00:5a:98:d7 Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:addons-685520 Clientid:01:52:54:00:5a:98:d7}
	I0729 18:18:22.421749 1063162 main.go:141] libmachine: (addons-685520) DBG | domain addons-685520 has defined IP address 192.168.39.137 and MAC address 52:54:00:5a:98:d7 in network mk-addons-685520
	I0729 18:18:22.421770 1063162 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0729 18:18:22.421955 1063162 main.go:141] libmachine: (addons-685520) Calling .DriverName
	I0729 18:18:22.422698 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHPort
	I0729 18:18:22.422943 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHKeyPath
	I0729 18:18:22.423020 1063162 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0729 18:18:22.423038 1063162 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0729 18:18:22.423060 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHHostname
	I0729 18:18:22.423145 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHUsername
	I0729 18:18:22.423248 1063162 sshutil.go:53] new ssh client: &{IP:192.168.39.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/addons-685520/id_rsa Username:docker}
	I0729 18:18:22.423870 1063162 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0729 18:18:22.424990 1063162 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0729 18:18:22.425986 1063162 main.go:141] libmachine: (addons-685520) DBG | domain addons-685520 has defined MAC address 52:54:00:5a:98:d7 in network mk-addons-685520
	I0729 18:18:22.426989 1063162 main.go:141] libmachine: (addons-685520) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:98:d7", ip: ""} in network mk-addons-685520: {Iface:virbr1 ExpiryTime:2024-07-29 19:17:41 +0000 UTC Type:0 Mac:52:54:00:5a:98:d7 Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:addons-685520 Clientid:01:52:54:00:5a:98:d7}
	I0729 18:18:22.427017 1063162 main.go:141] libmachine: (addons-685520) DBG | domain addons-685520 has defined IP address 192.168.39.137 and MAC address 52:54:00:5a:98:d7 in network mk-addons-685520
	I0729 18:18:22.427239 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHPort
	I0729 18:18:22.427477 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHKeyPath
	I0729 18:18:22.427498 1063162 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44599
	I0729 18:18:22.427709 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHUsername
	I0729 18:18:22.427742 1063162 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43829
	I0729 18:18:22.428021 1063162 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:18:22.428092 1063162 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.1
	I0729 18:18:22.428391 1063162 sshutil.go:53] new ssh client: &{IP:192.168.39.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/addons-685520/id_rsa Username:docker}
	I0729 18:18:22.428693 1063162 main.go:141] libmachine: Using API Version  1
	I0729 18:18:22.428711 1063162 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:18:22.429401 1063162 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0729 18:18:22.429419 1063162 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0729 18:18:22.429435 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHHostname
	I0729 18:18:22.430072 1063162 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:18:22.430273 1063162 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39439
	I0729 18:18:22.430465 1063162 main.go:141] libmachine: (addons-685520) Calling .GetState
	I0729 18:18:22.430749 1063162 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:18:22.430840 1063162 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:18:22.431566 1063162 main.go:141] libmachine: Using API Version  1
	I0729 18:18:22.431592 1063162 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:18:22.431684 1063162 main.go:141] libmachine: Using API Version  1
	I0729 18:18:22.431700 1063162 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:18:22.432091 1063162 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:18:22.432095 1063162 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:18:22.432325 1063162 main.go:141] libmachine: (addons-685520) Calling .GetState
	I0729 18:18:22.432367 1063162 main.go:141] libmachine: (addons-685520) Calling .GetState
	I0729 18:18:22.433899 1063162 main.go:141] libmachine: (addons-685520) DBG | domain addons-685520 has defined MAC address 52:54:00:5a:98:d7 in network mk-addons-685520
	I0729 18:18:22.434445 1063162 main.go:141] libmachine: (addons-685520) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:98:d7", ip: ""} in network mk-addons-685520: {Iface:virbr1 ExpiryTime:2024-07-29 19:17:41 +0000 UTC Type:0 Mac:52:54:00:5a:98:d7 Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:addons-685520 Clientid:01:52:54:00:5a:98:d7}
	I0729 18:18:22.434481 1063162 main.go:141] libmachine: (addons-685520) DBG | domain addons-685520 has defined IP address 192.168.39.137 and MAC address 52:54:00:5a:98:d7 in network mk-addons-685520
	I0729 18:18:22.434732 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHPort
	I0729 18:18:22.435025 1063162 main.go:141] libmachine: (addons-685520) Calling .DriverName
	I0729 18:18:22.435036 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHKeyPath
	I0729 18:18:22.435218 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHUsername
	I0729 18:18:22.435421 1063162 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34347
	I0729 18:18:22.435676 1063162 sshutil.go:53] new ssh client: &{IP:192.168.39.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/addons-685520/id_rsa Username:docker}
	I0729 18:18:22.435771 1063162 main.go:141] libmachine: (addons-685520) Calling .DriverName
	I0729 18:18:22.436068 1063162 main.go:141] libmachine: (addons-685520) Calling .DriverName
	I0729 18:18:22.436088 1063162 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:18:22.436564 1063162 main.go:141] libmachine: Using API Version  1
	I0729 18:18:22.436590 1063162 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:18:22.436878 1063162 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35957
	I0729 18:18:22.436986 1063162 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:18:22.437282 1063162 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:18:22.437353 1063162 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 18:18:22.437366 1063162 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0729 18:18:22.437418 1063162 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0729 18:18:22.437857 1063162 main.go:141] libmachine: Using API Version  1
	I0729 18:18:22.437917 1063162 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:18:22.438335 1063162 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:18:22.438343 1063162 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42329
	I0729 18:18:22.438818 1063162 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:18:22.438898 1063162 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:18:22.438899 1063162 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 18:18:22.438963 1063162 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0729 18:18:22.438977 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHHostname
	I0729 18:18:22.438977 1063162 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:18:22.439372 1063162 main.go:141] libmachine: Using API Version  1
	I0729 18:18:22.439397 1063162 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:18:22.439624 1063162 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:18:22.439664 1063162 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:18:22.439688 1063162 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:18:22.439833 1063162 main.go:141] libmachine: (addons-685520) Calling .GetState
	I0729 18:18:22.439945 1063162 out.go:177]   - Using image docker.io/registry:2.8.3
	I0729 18:18:22.439982 1063162 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0729 18:18:22.441054 1063162 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0729 18:18:22.441061 1063162 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0729 18:18:22.441076 1063162 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0729 18:18:22.441094 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHHostname
	I0729 18:18:22.441919 1063162 main.go:141] libmachine: (addons-685520) Calling .DriverName
	I0729 18:18:22.443237 1063162 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0729 18:18:22.443322 1063162 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0729 18:18:22.443390 1063162 main.go:141] libmachine: (addons-685520) DBG | domain addons-685520 has defined MAC address 52:54:00:5a:98:d7 in network mk-addons-685520
	I0729 18:18:22.443920 1063162 main.go:141] libmachine: (addons-685520) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:98:d7", ip: ""} in network mk-addons-685520: {Iface:virbr1 ExpiryTime:2024-07-29 19:17:41 +0000 UTC Type:0 Mac:52:54:00:5a:98:d7 Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:addons-685520 Clientid:01:52:54:00:5a:98:d7}
	I0729 18:18:22.443944 1063162 main.go:141] libmachine: (addons-685520) DBG | domain addons-685520 has defined IP address 192.168.39.137 and MAC address 52:54:00:5a:98:d7 in network mk-addons-685520
	I0729 18:18:22.444183 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHPort
	I0729 18:18:22.444497 1063162 main.go:141] libmachine: (addons-685520) DBG | domain addons-685520 has defined MAC address 52:54:00:5a:98:d7 in network mk-addons-685520
	I0729 18:18:22.444516 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHKeyPath
	I0729 18:18:22.444704 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHUsername
	I0729 18:18:22.444950 1063162 sshutil.go:53] new ssh client: &{IP:192.168.39.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/addons-685520/id_rsa Username:docker}
	I0729 18:18:22.444986 1063162 main.go:141] libmachine: (addons-685520) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:98:d7", ip: ""} in network mk-addons-685520: {Iface:virbr1 ExpiryTime:2024-07-29 19:17:41 +0000 UTC Type:0 Mac:52:54:00:5a:98:d7 Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:addons-685520 Clientid:01:52:54:00:5a:98:d7}
	I0729 18:18:22.445009 1063162 main.go:141] libmachine: (addons-685520) DBG | domain addons-685520 has defined IP address 192.168.39.137 and MAC address 52:54:00:5a:98:d7 in network mk-addons-685520
	I0729 18:18:22.445304 1063162 out.go:177]   - Using image docker.io/busybox:stable
	I0729 18:18:22.445407 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHPort
	I0729 18:18:22.445569 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHKeyPath
	I0729 18:18:22.445609 1063162 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0729 18:18:22.445676 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHUsername
	I0729 18:18:22.445765 1063162 sshutil.go:53] new ssh client: &{IP:192.168.39.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/addons-685520/id_rsa Username:docker}
	I0729 18:18:22.446760 1063162 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0729 18:18:22.446779 1063162 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0729 18:18:22.446798 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHHostname
	I0729 18:18:22.448446 1063162 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0729 18:18:22.449574 1063162 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0729 18:18:22.450169 1063162 main.go:141] libmachine: (addons-685520) DBG | domain addons-685520 has defined MAC address 52:54:00:5a:98:d7 in network mk-addons-685520
	I0729 18:18:22.450571 1063162 main.go:141] libmachine: (addons-685520) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:98:d7", ip: ""} in network mk-addons-685520: {Iface:virbr1 ExpiryTime:2024-07-29 19:17:41 +0000 UTC Type:0 Mac:52:54:00:5a:98:d7 Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:addons-685520 Clientid:01:52:54:00:5a:98:d7}
	I0729 18:18:22.450615 1063162 main.go:141] libmachine: (addons-685520) DBG | domain addons-685520 has defined IP address 192.168.39.137 and MAC address 52:54:00:5a:98:d7 in network mk-addons-685520
	I0729 18:18:22.450779 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHPort
	I0729 18:18:22.451002 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHKeyPath
	I0729 18:18:22.451182 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHUsername
	I0729 18:18:22.451304 1063162 sshutil.go:53] new ssh client: &{IP:192.168.39.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/addons-685520/id_rsa Username:docker}
	I0729 18:18:22.451499 1063162 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0729 18:18:22.452429 1063162 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43893
	I0729 18:18:22.452435 1063162 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0729 18:18:22.452450 1063162 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0729 18:18:22.452469 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHHostname
	I0729 18:18:22.453379 1063162 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:18:22.453966 1063162 main.go:141] libmachine: Using API Version  1
	I0729 18:18:22.453987 1063162 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:18:22.454358 1063162 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:18:22.454997 1063162 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:18:22.455034 1063162 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:18:22.456426 1063162 main.go:141] libmachine: (addons-685520) DBG | domain addons-685520 has defined MAC address 52:54:00:5a:98:d7 in network mk-addons-685520
	I0729 18:18:22.456649 1063162 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46715
	I0729 18:18:22.456800 1063162 main.go:141] libmachine: (addons-685520) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:98:d7", ip: ""} in network mk-addons-685520: {Iface:virbr1 ExpiryTime:2024-07-29 19:17:41 +0000 UTC Type:0 Mac:52:54:00:5a:98:d7 Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:addons-685520 Clientid:01:52:54:00:5a:98:d7}
	I0729 18:18:22.456819 1063162 main.go:141] libmachine: (addons-685520) DBG | domain addons-685520 has defined IP address 192.168.39.137 and MAC address 52:54:00:5a:98:d7 in network mk-addons-685520
	I0729 18:18:22.457016 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHPort
	I0729 18:18:22.457088 1063162 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:18:22.457297 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHKeyPath
	I0729 18:18:22.457481 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHUsername
	I0729 18:18:22.457659 1063162 sshutil.go:53] new ssh client: &{IP:192.168.39.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/addons-685520/id_rsa Username:docker}
	I0729 18:18:22.458728 1063162 main.go:141] libmachine: Using API Version  1
	I0729 18:18:22.458751 1063162 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:18:22.459012 1063162 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44411
	I0729 18:18:22.459163 1063162 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:18:22.459678 1063162 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:18:22.460280 1063162 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:18:22.460321 1063162 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:18:22.460840 1063162 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44999
	I0729 18:18:22.461183 1063162 main.go:141] libmachine: Using API Version  1
	I0729 18:18:22.461200 1063162 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:18:22.461307 1063162 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:18:22.461731 1063162 main.go:141] libmachine: Using API Version  1
	I0729 18:18:22.461749 1063162 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:18:22.461857 1063162 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43711
	I0729 18:18:22.462167 1063162 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:18:22.462376 1063162 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:18:22.462449 1063162 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:18:22.462726 1063162 main.go:141] libmachine: (addons-685520) Calling .GetState
	I0729 18:18:22.463047 1063162 main.go:141] libmachine: Using API Version  1
	I0729 18:18:22.463071 1063162 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:18:22.463518 1063162 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:18:22.463560 1063162 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:18:22.463700 1063162 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:18:22.464522 1063162 main.go:141] libmachine: (addons-685520) Calling .DriverName
	I0729 18:18:22.464630 1063162 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:18:22.464669 1063162 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:18:22.464794 1063162 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39805
	I0729 18:18:22.465333 1063162 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:18:22.465482 1063162 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42327
	I0729 18:18:22.465858 1063162 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:18:22.466021 1063162 main.go:141] libmachine: Using API Version  1
	I0729 18:18:22.466038 1063162 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:18:22.466408 1063162 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:18:22.466455 1063162 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0729 18:18:22.466633 1063162 main.go:141] libmachine: Using API Version  1
	I0729 18:18:22.466655 1063162 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:18:22.466755 1063162 main.go:141] libmachine: (addons-685520) Calling .GetState
	I0729 18:18:22.467036 1063162 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:18:22.467188 1063162 main.go:141] libmachine: (addons-685520) Calling .GetState
	I0729 18:18:22.467590 1063162 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0729 18:18:22.467611 1063162 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0729 18:18:22.467630 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHHostname
	I0729 18:18:22.468994 1063162 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45531
	I0729 18:18:22.469189 1063162 host.go:66] Checking if "addons-685520" exists ...
	I0729 18:18:22.469558 1063162 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:18:22.469601 1063162 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:18:22.469834 1063162 main.go:141] libmachine: (addons-685520) Calling .DriverName
	I0729 18:18:22.470391 1063162 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:18:22.471038 1063162 main.go:141] libmachine: Using API Version  1
	I0729 18:18:22.471063 1063162 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:18:22.471296 1063162 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0729 18:18:22.471411 1063162 main.go:141] libmachine: (addons-685520) DBG | domain addons-685520 has defined MAC address 52:54:00:5a:98:d7 in network mk-addons-685520
	I0729 18:18:22.471593 1063162 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:18:22.471834 1063162 main.go:141] libmachine: (addons-685520) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:98:d7", ip: ""} in network mk-addons-685520: {Iface:virbr1 ExpiryTime:2024-07-29 19:17:41 +0000 UTC Type:0 Mac:52:54:00:5a:98:d7 Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:addons-685520 Clientid:01:52:54:00:5a:98:d7}
	I0729 18:18:22.471853 1063162 main.go:141] libmachine: (addons-685520) DBG | domain addons-685520 has defined IP address 192.168.39.137 and MAC address 52:54:00:5a:98:d7 in network mk-addons-685520
	I0729 18:18:22.471890 1063162 main.go:141] libmachine: (addons-685520) Calling .GetState
	I0729 18:18:22.472027 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHPort
	I0729 18:18:22.472204 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHKeyPath
	I0729 18:18:22.472353 1063162 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0729 18:18:22.472368 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHUsername
	I0729 18:18:22.472372 1063162 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0729 18:18:22.472391 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHHostname
	I0729 18:18:22.472535 1063162 sshutil.go:53] new ssh client: &{IP:192.168.39.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/addons-685520/id_rsa Username:docker}
	I0729 18:18:22.474451 1063162 main.go:141] libmachine: (addons-685520) Calling .DriverName
	I0729 18:18:22.475956 1063162 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.22
	I0729 18:18:22.476159 1063162 main.go:141] libmachine: (addons-685520) DBG | domain addons-685520 has defined MAC address 52:54:00:5a:98:d7 in network mk-addons-685520
	I0729 18:18:22.476732 1063162 main.go:141] libmachine: (addons-685520) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:98:d7", ip: ""} in network mk-addons-685520: {Iface:virbr1 ExpiryTime:2024-07-29 19:17:41 +0000 UTC Type:0 Mac:52:54:00:5a:98:d7 Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:addons-685520 Clientid:01:52:54:00:5a:98:d7}
	I0729 18:18:22.476760 1063162 main.go:141] libmachine: (addons-685520) DBG | domain addons-685520 has defined IP address 192.168.39.137 and MAC address 52:54:00:5a:98:d7 in network mk-addons-685520
	I0729 18:18:22.476989 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHPort
	I0729 18:18:22.477157 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHKeyPath
	I0729 18:18:22.477183 1063162 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0729 18:18:22.477198 1063162 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0729 18:18:22.477215 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHHostname
	I0729 18:18:22.477275 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHUsername
	I0729 18:18:22.477367 1063162 sshutil.go:53] new ssh client: &{IP:192.168.39.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/addons-685520/id_rsa Username:docker}
	I0729 18:18:22.480533 1063162 main.go:141] libmachine: (addons-685520) DBG | domain addons-685520 has defined MAC address 52:54:00:5a:98:d7 in network mk-addons-685520
	I0729 18:18:22.480979 1063162 main.go:141] libmachine: (addons-685520) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:98:d7", ip: ""} in network mk-addons-685520: {Iface:virbr1 ExpiryTime:2024-07-29 19:17:41 +0000 UTC Type:0 Mac:52:54:00:5a:98:d7 Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:addons-685520 Clientid:01:52:54:00:5a:98:d7}
	I0729 18:18:22.481007 1063162 main.go:141] libmachine: (addons-685520) DBG | domain addons-685520 has defined IP address 192.168.39.137 and MAC address 52:54:00:5a:98:d7 in network mk-addons-685520
	I0729 18:18:22.481393 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHPort
	I0729 18:18:22.481806 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHKeyPath
	I0729 18:18:22.482008 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHUsername
	I0729 18:18:22.482172 1063162 sshutil.go:53] new ssh client: &{IP:192.168.39.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/addons-685520/id_rsa Username:docker}
	I0729 18:18:22.482606 1063162 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45459
	I0729 18:18:22.483062 1063162 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:18:22.483555 1063162 main.go:141] libmachine: Using API Version  1
	I0729 18:18:22.483579 1063162 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:18:22.483979 1063162 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:18:22.484183 1063162 main.go:141] libmachine: (addons-685520) Calling .GetState
	I0729 18:18:22.486057 1063162 main.go:141] libmachine: (addons-685520) Calling .DriverName
	I0729 18:18:22.487411 1063162 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.1
	I0729 18:18:22.488571 1063162 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46115
	I0729 18:18:22.488591 1063162 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0729 18:18:22.488607 1063162 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0729 18:18:22.488625 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHHostname
	I0729 18:18:22.489275 1063162 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:18:22.489286 1063162 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37443
	I0729 18:18:22.489736 1063162 main.go:141] libmachine: Using API Version  1
	I0729 18:18:22.489766 1063162 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:18:22.490163 1063162 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:18:22.490369 1063162 main.go:141] libmachine: (addons-685520) Calling .GetState
	I0729 18:18:22.491376 1063162 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:18:22.492033 1063162 main.go:141] libmachine: (addons-685520) Calling .DriverName
	I0729 18:18:22.492130 1063162 main.go:141] libmachine: (addons-685520) DBG | domain addons-685520 has defined MAC address 52:54:00:5a:98:d7 in network mk-addons-685520
	I0729 18:18:22.492295 1063162 main.go:141] libmachine: Making call to close driver server
	I0729 18:18:22.492307 1063162 main.go:141] libmachine: (addons-685520) Calling .Close
	I0729 18:18:22.492473 1063162 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:18:22.492485 1063162 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:18:22.492493 1063162 main.go:141] libmachine: Making call to close driver server
	I0729 18:18:22.492501 1063162 main.go:141] libmachine: (addons-685520) Calling .Close
	I0729 18:18:22.493999 1063162 main.go:141] libmachine: (addons-685520) DBG | Closing plugin on server side
	I0729 18:18:22.494005 1063162 main.go:141] libmachine: (addons-685520) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:98:d7", ip: ""} in network mk-addons-685520: {Iface:virbr1 ExpiryTime:2024-07-29 19:17:41 +0000 UTC Type:0 Mac:52:54:00:5a:98:d7 Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:addons-685520 Clientid:01:52:54:00:5a:98:d7}
	I0729 18:18:22.494016 1063162 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:18:22.494021 1063162 main.go:141] libmachine: (addons-685520) DBG | domain addons-685520 has defined IP address 192.168.39.137 and MAC address 52:54:00:5a:98:d7 in network mk-addons-685520
	I0729 18:18:22.494032 1063162 main.go:141] libmachine: Making call to close connection to plugin binary
	W0729 18:18:22.494109 1063162 out.go:239] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0729 18:18:22.494175 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHPort
	I0729 18:18:22.494322 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHKeyPath
	I0729 18:18:22.494441 1063162 main.go:141] libmachine: Using API Version  1
	I0729 18:18:22.494460 1063162 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:18:22.494514 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHUsername
	I0729 18:18:22.494688 1063162 sshutil.go:53] new ssh client: &{IP:192.168.39.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/addons-685520/id_rsa Username:docker}
	I0729 18:18:22.494791 1063162 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:18:22.494881 1063162 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33093
	I0729 18:18:22.495024 1063162 main.go:141] libmachine: (addons-685520) Calling .DriverName
	I0729 18:18:22.495369 1063162 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:18:22.495908 1063162 main.go:141] libmachine: Using API Version  1
	I0729 18:18:22.495925 1063162 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:18:22.496286 1063162 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:18:22.496492 1063162 main.go:141] libmachine: (addons-685520) Calling .GetState
	I0729 18:18:22.497742 1063162 main.go:141] libmachine: (addons-685520) Calling .DriverName
	I0729 18:18:22.498124 1063162 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0729 18:18:22.498144 1063162 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0729 18:18:22.498163 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHHostname
	I0729 18:18:22.500766 1063162 main.go:141] libmachine: (addons-685520) DBG | domain addons-685520 has defined MAC address 52:54:00:5a:98:d7 in network mk-addons-685520
	I0729 18:18:22.501169 1063162 main.go:141] libmachine: (addons-685520) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:98:d7", ip: ""} in network mk-addons-685520: {Iface:virbr1 ExpiryTime:2024-07-29 19:17:41 +0000 UTC Type:0 Mac:52:54:00:5a:98:d7 Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:addons-685520 Clientid:01:52:54:00:5a:98:d7}
	I0729 18:18:22.501230 1063162 main.go:141] libmachine: (addons-685520) DBG | domain addons-685520 has defined IP address 192.168.39.137 and MAC address 52:54:00:5a:98:d7 in network mk-addons-685520
	I0729 18:18:22.501390 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHPort
	I0729 18:18:22.501561 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHKeyPath
	I0729 18:18:22.501735 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHUsername
	I0729 18:18:22.501893 1063162 sshutil.go:53] new ssh client: &{IP:192.168.39.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/addons-685520/id_rsa Username:docker}
	I0729 18:18:22.518836 1063162 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38661
	I0729 18:18:22.519317 1063162 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:18:22.519836 1063162 main.go:141] libmachine: Using API Version  1
	I0729 18:18:22.519861 1063162 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:18:22.520244 1063162 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:18:22.520476 1063162 main.go:141] libmachine: (addons-685520) Calling .GetState
	I0729 18:18:22.522243 1063162 main.go:141] libmachine: (addons-685520) Calling .DriverName
	I0729 18:18:22.523934 1063162 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0729 18:18:22.525061 1063162 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0729 18:18:22.525084 1063162 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0729 18:18:22.525104 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHHostname
	I0729 18:18:22.528071 1063162 main.go:141] libmachine: (addons-685520) DBG | domain addons-685520 has defined MAC address 52:54:00:5a:98:d7 in network mk-addons-685520
	I0729 18:18:22.528558 1063162 main.go:141] libmachine: (addons-685520) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:98:d7", ip: ""} in network mk-addons-685520: {Iface:virbr1 ExpiryTime:2024-07-29 19:17:41 +0000 UTC Type:0 Mac:52:54:00:5a:98:d7 Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:addons-685520 Clientid:01:52:54:00:5a:98:d7}
	I0729 18:18:22.528593 1063162 main.go:141] libmachine: (addons-685520) DBG | domain addons-685520 has defined IP address 192.168.39.137 and MAC address 52:54:00:5a:98:d7 in network mk-addons-685520
	I0729 18:18:22.528690 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHPort
	I0729 18:18:22.528862 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHKeyPath
	I0729 18:18:22.529156 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHUsername
	I0729 18:18:22.529309 1063162 sshutil.go:53] new ssh client: &{IP:192.168.39.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/addons-685520/id_rsa Username:docker}
	I0729 18:18:22.944375 1063162 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0729 18:18:22.963063 1063162 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0729 18:18:22.963093 1063162 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0729 18:18:22.981106 1063162 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0729 18:18:22.981126 1063162 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0729 18:18:22.990559 1063162 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0729 18:18:23.066003 1063162 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0729 18:18:23.066029 1063162 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0729 18:18:23.075865 1063162 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0729 18:18:23.095979 1063162 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0729 18:18:23.113734 1063162 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0729 18:18:23.113759 1063162 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0729 18:18:23.113766 1063162 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0729 18:18:23.113786 1063162 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0729 18:18:23.118367 1063162 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0729 18:18:23.118386 1063162 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0729 18:18:23.119623 1063162 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 18:18:23.119705 1063162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0729 18:18:23.150916 1063162 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0729 18:18:23.150939 1063162 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0729 18:18:23.155300 1063162 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0729 18:18:23.155317 1063162 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0729 18:18:23.195270 1063162 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0729 18:18:23.200443 1063162 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0729 18:18:23.206235 1063162 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0729 18:18:23.206251 1063162 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0729 18:18:23.224064 1063162 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 18:18:23.270943 1063162 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0729 18:18:23.270976 1063162 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0729 18:18:23.296250 1063162 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0729 18:18:23.296277 1063162 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0729 18:18:23.317006 1063162 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0729 18:18:23.317029 1063162 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0729 18:18:23.330777 1063162 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 18:18:23.330798 1063162 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0729 18:18:23.359688 1063162 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0729 18:18:23.359709 1063162 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0729 18:18:23.381594 1063162 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0729 18:18:23.381615 1063162 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0729 18:18:23.398315 1063162 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 18:18:23.444635 1063162 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0729 18:18:23.444671 1063162 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0729 18:18:23.495974 1063162 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0729 18:18:23.496003 1063162 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0729 18:18:23.594683 1063162 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0729 18:18:23.603558 1063162 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0729 18:18:23.603590 1063162 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0729 18:18:23.613263 1063162 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0729 18:18:23.613288 1063162 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0729 18:18:23.639271 1063162 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0729 18:18:23.639295 1063162 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0729 18:18:23.761845 1063162 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0729 18:18:23.761869 1063162 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0729 18:18:23.778256 1063162 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0729 18:18:23.778286 1063162 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0729 18:18:23.818019 1063162 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0729 18:18:23.860333 1063162 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0729 18:18:23.860375 1063162 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0729 18:18:23.892807 1063162 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0729 18:18:23.892836 1063162 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0729 18:18:23.971529 1063162 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0729 18:18:23.971557 1063162 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0729 18:18:24.049351 1063162 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0729 18:18:24.049381 1063162 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0729 18:18:24.086003 1063162 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0729 18:18:24.086031 1063162 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0729 18:18:24.090441 1063162 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0729 18:18:24.126601 1063162 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0729 18:18:24.126627 1063162 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0729 18:18:24.236682 1063162 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0729 18:18:24.236706 1063162 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0729 18:18:24.402644 1063162 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0729 18:18:24.402672 1063162 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0729 18:18:24.410730 1063162 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0729 18:18:24.410754 1063162 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0729 18:18:24.567737 1063162 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0729 18:18:24.671744 1063162 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0729 18:18:24.671771 1063162 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0729 18:18:24.758100 1063162 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0729 18:18:24.878770 1063162 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0729 18:18:24.878805 1063162 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0729 18:18:25.137378 1063162 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0729 18:18:25.466020 1063162 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (2.521599565s)
	I0729 18:18:25.466089 1063162 main.go:141] libmachine: Making call to close driver server
	I0729 18:18:25.466103 1063162 main.go:141] libmachine: (addons-685520) Calling .Close
	I0729 18:18:25.466127 1063162 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (2.475543607s)
	I0729 18:18:25.466152 1063162 main.go:141] libmachine: Making call to close driver server
	I0729 18:18:25.466170 1063162 main.go:141] libmachine: (addons-685520) Calling .Close
	I0729 18:18:25.466188 1063162 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.390300928s)
	I0729 18:18:25.466212 1063162 main.go:141] libmachine: Making call to close driver server
	I0729 18:18:25.466223 1063162 main.go:141] libmachine: (addons-685520) Calling .Close
	I0729 18:18:25.466538 1063162 main.go:141] libmachine: (addons-685520) DBG | Closing plugin on server side
	I0729 18:18:25.466578 1063162 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:18:25.466592 1063162 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:18:25.466602 1063162 main.go:141] libmachine: Making call to close driver server
	I0729 18:18:25.466614 1063162 main.go:141] libmachine: (addons-685520) Calling .Close
	I0729 18:18:25.466664 1063162 main.go:141] libmachine: (addons-685520) DBG | Closing plugin on server side
	I0729 18:18:25.466695 1063162 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:18:25.466700 1063162 main.go:141] libmachine: (addons-685520) DBG | Closing plugin on server side
	I0729 18:18:25.466715 1063162 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:18:25.466746 1063162 main.go:141] libmachine: Making call to close driver server
	I0729 18:18:25.466763 1063162 main.go:141] libmachine: (addons-685520) Calling .Close
	I0729 18:18:25.466973 1063162 main.go:141] libmachine: (addons-685520) DBG | Closing plugin on server side
	I0729 18:18:25.467012 1063162 main.go:141] libmachine: (addons-685520) DBG | Closing plugin on server side
	I0729 18:18:25.467073 1063162 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:18:25.467093 1063162 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:18:25.467078 1063162 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:18:25.467140 1063162 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:18:25.466583 1063162 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:18:25.468075 1063162 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:18:25.468099 1063162 main.go:141] libmachine: Making call to close driver server
	I0729 18:18:25.468122 1063162 main.go:141] libmachine: (addons-685520) Calling .Close
	I0729 18:18:25.468822 1063162 main.go:141] libmachine: (addons-685520) DBG | Closing plugin on server side
	I0729 18:18:25.468841 1063162 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:18:25.468854 1063162 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:18:25.489651 1063162 main.go:141] libmachine: Making call to close driver server
	I0729 18:18:25.489677 1063162 main.go:141] libmachine: (addons-685520) Calling .Close
	I0729 18:18:25.489962 1063162 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:18:25.489991 1063162 main.go:141] libmachine: (addons-685520) DBG | Closing plugin on server side
	I0729 18:18:25.489978 1063162 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:18:27.692958 1063162 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.596941662s)
	I0729 18:18:27.693012 1063162 main.go:141] libmachine: Making call to close driver server
	I0729 18:18:27.693027 1063162 main.go:141] libmachine: (addons-685520) Calling .Close
	I0729 18:18:27.693046 1063162 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (4.573391477s)
	I0729 18:18:27.693112 1063162 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (4.573381774s)
	I0729 18:18:27.693189 1063162 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0729 18:18:27.693315 1063162 main.go:141] libmachine: (addons-685520) DBG | Closing plugin on server side
	I0729 18:18:27.693359 1063162 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:18:27.693367 1063162 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:18:27.693380 1063162 main.go:141] libmachine: Making call to close driver server
	I0729 18:18:27.693387 1063162 main.go:141] libmachine: (addons-685520) Calling .Close
	I0729 18:18:27.694261 1063162 node_ready.go:35] waiting up to 6m0s for node "addons-685520" to be "Ready" ...
	I0729 18:18:27.694407 1063162 main.go:141] libmachine: (addons-685520) DBG | Closing plugin on server side
	I0729 18:18:27.694426 1063162 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:18:27.694451 1063162 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:18:27.715696 1063162 node_ready.go:49] node "addons-685520" has status "Ready":"True"
	I0729 18:18:27.715724 1063162 node_ready.go:38] duration metric: took 21.432494ms for node "addons-685520" to be "Ready" ...
	I0729 18:18:27.715736 1063162 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 18:18:27.762161 1063162 main.go:141] libmachine: Making call to close driver server
	I0729 18:18:27.762184 1063162 main.go:141] libmachine: (addons-685520) Calling .Close
	I0729 18:18:27.762601 1063162 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:18:27.762651 1063162 main.go:141] libmachine: (addons-685520) DBG | Closing plugin on server side
	I0729 18:18:27.762665 1063162 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:18:27.770000 1063162 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-75qr8" in "kube-system" namespace to be "Ready" ...
	I0729 18:18:28.226346 1063162 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-685520" context rescaled to 1 replicas
	I0729 18:18:29.571799 1063162 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0729 18:18:29.571868 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHHostname
	I0729 18:18:29.575097 1063162 main.go:141] libmachine: (addons-685520) DBG | domain addons-685520 has defined MAC address 52:54:00:5a:98:d7 in network mk-addons-685520
	I0729 18:18:29.575609 1063162 main.go:141] libmachine: (addons-685520) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:98:d7", ip: ""} in network mk-addons-685520: {Iface:virbr1 ExpiryTime:2024-07-29 19:17:41 +0000 UTC Type:0 Mac:52:54:00:5a:98:d7 Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:addons-685520 Clientid:01:52:54:00:5a:98:d7}
	I0729 18:18:29.575642 1063162 main.go:141] libmachine: (addons-685520) DBG | domain addons-685520 has defined IP address 192.168.39.137 and MAC address 52:54:00:5a:98:d7 in network mk-addons-685520
	I0729 18:18:29.575809 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHPort
	I0729 18:18:29.576019 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHKeyPath
	I0729 18:18:29.576221 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHUsername
	I0729 18:18:29.576390 1063162 sshutil.go:53] new ssh client: &{IP:192.168.39.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/addons-685520/id_rsa Username:docker}
	I0729 18:18:29.794883 1063162 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0729 18:18:29.850291 1063162 addons.go:234] Setting addon gcp-auth=true in "addons-685520"
	I0729 18:18:29.850366 1063162 host.go:66] Checking if "addons-685520" exists ...
	I0729 18:18:29.850717 1063162 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:18:29.850758 1063162 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:18:29.866878 1063162 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42599
	I0729 18:18:29.867367 1063162 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:18:29.867975 1063162 main.go:141] libmachine: Using API Version  1
	I0729 18:18:29.868008 1063162 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:18:29.868418 1063162 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:18:29.869023 1063162 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:18:29.869051 1063162 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:18:29.884975 1063162 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41383
	I0729 18:18:29.885399 1063162 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:18:29.885864 1063162 main.go:141] libmachine: Using API Version  1
	I0729 18:18:29.885881 1063162 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:18:29.886288 1063162 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:18:29.886493 1063162 main.go:141] libmachine: (addons-685520) Calling .GetState
	I0729 18:18:29.888360 1063162 main.go:141] libmachine: (addons-685520) Calling .DriverName
	I0729 18:18:29.888598 1063162 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0729 18:18:29.888618 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHHostname
	I0729 18:18:29.891425 1063162 main.go:141] libmachine: (addons-685520) DBG | domain addons-685520 has defined MAC address 52:54:00:5a:98:d7 in network mk-addons-685520
	I0729 18:18:29.891888 1063162 main.go:141] libmachine: (addons-685520) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:98:d7", ip: ""} in network mk-addons-685520: {Iface:virbr1 ExpiryTime:2024-07-29 19:17:41 +0000 UTC Type:0 Mac:52:54:00:5a:98:d7 Iaid: IPaddr:192.168.39.137 Prefix:24 Hostname:addons-685520 Clientid:01:52:54:00:5a:98:d7}
	I0729 18:18:29.891917 1063162 main.go:141] libmachine: (addons-685520) DBG | domain addons-685520 has defined IP address 192.168.39.137 and MAC address 52:54:00:5a:98:d7 in network mk-addons-685520
	I0729 18:18:29.892031 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHPort
	I0729 18:18:29.892189 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHKeyPath
	I0729 18:18:29.892300 1063162 main.go:141] libmachine: (addons-685520) Calling .GetSSHUsername
	I0729 18:18:29.892422 1063162 sshutil.go:53] new ssh client: &{IP:192.168.39.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/addons-685520/id_rsa Username:docker}
	I0729 18:18:29.901878 1063162 pod_ready.go:102] pod "coredns-7db6d8ff4d-75qr8" in "kube-system" namespace has status "Ready":"False"
	I0729 18:18:30.425683 1063162 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.23037748s)
	I0729 18:18:30.425742 1063162 main.go:141] libmachine: Making call to close driver server
	I0729 18:18:30.425753 1063162 main.go:141] libmachine: (addons-685520) Calling .Close
	I0729 18:18:30.425765 1063162 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (7.225294136s)
	I0729 18:18:30.425803 1063162 main.go:141] libmachine: Making call to close driver server
	I0729 18:18:30.425817 1063162 main.go:141] libmachine: (addons-685520) Calling .Close
	I0729 18:18:30.425874 1063162 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.201790205s)
	I0729 18:18:30.425899 1063162 main.go:141] libmachine: Making call to close driver server
	I0729 18:18:30.425908 1063162 main.go:141] libmachine: (addons-685520) Calling .Close
	I0729 18:18:30.425938 1063162 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.027587455s)
	I0729 18:18:30.425960 1063162 main.go:141] libmachine: Making call to close driver server
	I0729 18:18:30.425975 1063162 main.go:141] libmachine: (addons-685520) Calling .Close
	I0729 18:18:30.426061 1063162 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (6.831340202s)
	I0729 18:18:30.426083 1063162 main.go:141] libmachine: Making call to close driver server
	I0729 18:18:30.426093 1063162 main.go:141] libmachine: (addons-685520) Calling .Close
	I0729 18:18:30.426105 1063162 main.go:141] libmachine: (addons-685520) DBG | Closing plugin on server side
	I0729 18:18:30.426134 1063162 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:18:30.426141 1063162 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:18:30.426149 1063162 main.go:141] libmachine: Making call to close driver server
	I0729 18:18:30.426156 1063162 main.go:141] libmachine: (addons-685520) Calling .Close
	I0729 18:18:30.426159 1063162 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (6.608115824s)
	I0729 18:18:30.426060 1063162 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:18:30.426176 1063162 main.go:141] libmachine: Making call to close driver server
	I0729 18:18:30.426185 1063162 main.go:141] libmachine: (addons-685520) Calling .Close
	I0729 18:18:30.426184 1063162 main.go:141] libmachine: (addons-685520) DBG | Closing plugin on server side
	I0729 18:18:30.426187 1063162 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:18:30.426189 1063162 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:18:30.426203 1063162 main.go:141] libmachine: Making call to close driver server
	I0729 18:18:30.426211 1063162 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:18:30.426222 1063162 main.go:141] libmachine: Making call to close driver server
	I0729 18:18:30.426212 1063162 main.go:141] libmachine: (addons-685520) Calling .Close
	I0729 18:18:30.426231 1063162 main.go:141] libmachine: (addons-685520) Calling .Close
	I0729 18:18:30.426254 1063162 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.335791379s)
	I0729 18:18:30.426269 1063162 main.go:141] libmachine: Making call to close driver server
	I0729 18:18:30.426277 1063162 main.go:141] libmachine: (addons-685520) Calling .Close
	I0729 18:18:30.426409 1063162 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (5.858630332s)
	W0729 18:18:30.426451 1063162 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0729 18:18:30.426479 1063162 retry.go:31] will retry after 372.084847ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0729 18:18:30.426563 1063162 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (5.66843055s)
	I0729 18:18:30.426585 1063162 main.go:141] libmachine: Making call to close driver server
	I0729 18:18:30.426594 1063162 main.go:141] libmachine: (addons-685520) Calling .Close
	I0729 18:18:30.430989 1063162 main.go:141] libmachine: (addons-685520) DBG | Closing plugin on server side
	I0729 18:18:30.430996 1063162 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:18:30.431020 1063162 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:18:30.431019 1063162 main.go:141] libmachine: (addons-685520) DBG | Closing plugin on server side
	I0729 18:18:30.431029 1063162 main.go:141] libmachine: Making call to close driver server
	I0729 18:18:30.431039 1063162 main.go:141] libmachine: (addons-685520) Calling .Close
	I0729 18:18:30.431046 1063162 main.go:141] libmachine: (addons-685520) DBG | Closing plugin on server side
	I0729 18:18:30.431047 1063162 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:18:30.431068 1063162 main.go:141] libmachine: (addons-685520) DBG | Closing plugin on server side
	I0729 18:18:30.431070 1063162 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:18:30.431087 1063162 addons.go:475] Verifying addon ingress=true in "addons-685520"
	I0729 18:18:30.431100 1063162 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:18:30.431106 1063162 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:18:30.431111 1063162 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:18:30.431115 1063162 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:18:30.431123 1063162 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:18:30.431134 1063162 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:18:30.431143 1063162 main.go:141] libmachine: Making call to close driver server
	I0729 18:18:30.431150 1063162 main.go:141] libmachine: (addons-685520) Calling .Close
	I0729 18:18:30.431150 1063162 main.go:141] libmachine: (addons-685520) DBG | Closing plugin on server side
	I0729 18:18:30.431188 1063162 main.go:141] libmachine: (addons-685520) DBG | Closing plugin on server side
	I0729 18:18:30.431173 1063162 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:18:30.431202 1063162 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:18:30.431209 1063162 main.go:141] libmachine: Making call to close driver server
	I0729 18:18:30.431215 1063162 main.go:141] libmachine: (addons-685520) Calling .Close
	I0729 18:18:30.431221 1063162 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:18:30.431192 1063162 main.go:141] libmachine: (addons-685520) DBG | Closing plugin on server side
	I0729 18:18:30.431028 1063162 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:18:30.431245 1063162 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:18:30.431255 1063162 main.go:141] libmachine: Making call to close driver server
	I0729 18:18:30.431231 1063162 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:18:30.431271 1063162 main.go:141] libmachine: (addons-685520) Calling .Close
	I0729 18:18:30.431284 1063162 main.go:141] libmachine: Making call to close driver server
	I0729 18:18:30.431292 1063162 main.go:141] libmachine: (addons-685520) Calling .Close
	I0729 18:18:30.431501 1063162 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:18:30.431516 1063162 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:18:30.431491 1063162 main.go:141] libmachine: (addons-685520) DBG | Closing plugin on server side
	I0729 18:18:30.431538 1063162 main.go:141] libmachine: (addons-685520) DBG | Closing plugin on server side
	I0729 18:18:30.431568 1063162 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:18:30.431576 1063162 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:18:30.431578 1063162 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:18:30.431589 1063162 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:18:30.431599 1063162 addons.go:475] Verifying addon metrics-server=true in "addons-685520"
	I0729 18:18:30.431599 1063162 addons.go:475] Verifying addon registry=true in "addons-685520"
	I0729 18:18:30.431651 1063162 main.go:141] libmachine: (addons-685520) DBG | Closing plugin on server side
	I0729 18:18:30.431902 1063162 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:18:30.431929 1063162 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:18:30.433016 1063162 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-685520 service yakd-dashboard -n yakd-dashboard
	
	I0729 18:18:30.433028 1063162 out.go:177] * Verifying registry addon...
	I0729 18:18:30.433088 1063162 out.go:177] * Verifying ingress addon...
	I0729 18:18:30.431683 1063162 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:18:30.433142 1063162 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:18:30.434765 1063162 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0729 18:18:30.434833 1063162 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0729 18:18:30.473814 1063162 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0729 18:18:30.473838 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 18:18:30.479830 1063162 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0729 18:18:30.479863 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:18:30.798763 1063162 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0729 18:18:30.941792 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 18:18:30.941870 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:18:31.481787 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 18:18:31.482256 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:18:31.544680 1063162 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.407225434s)
	I0729 18:18:31.544708 1063162 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.656085901s)
	I0729 18:18:31.544750 1063162 main.go:141] libmachine: Making call to close driver server
	I0729 18:18:31.544765 1063162 main.go:141] libmachine: (addons-685520) Calling .Close
	I0729 18:18:31.545088 1063162 main.go:141] libmachine: (addons-685520) DBG | Closing plugin on server side
	I0729 18:18:31.545109 1063162 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:18:31.545123 1063162 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:18:31.545135 1063162 main.go:141] libmachine: Making call to close driver server
	I0729 18:18:31.545149 1063162 main.go:141] libmachine: (addons-685520) Calling .Close
	I0729 18:18:31.545387 1063162 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:18:31.545404 1063162 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:18:31.545414 1063162 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-685520"
	I0729 18:18:31.545389 1063162 main.go:141] libmachine: (addons-685520) DBG | Closing plugin on server side
	I0729 18:18:31.546272 1063162 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0729 18:18:31.546988 1063162 out.go:177] * Verifying csi-hostpath-driver addon...
	I0729 18:18:31.548140 1063162 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0729 18:18:31.548907 1063162 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0729 18:18:31.549091 1063162 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0729 18:18:31.549105 1063162 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0729 18:18:31.588944 1063162 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0729 18:18:31.588968 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:18:31.667966 1063162 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0729 18:18:31.667990 1063162 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0729 18:18:31.747307 1063162 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0729 18:18:31.747329 1063162 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0729 18:18:31.824265 1063162 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0729 18:18:31.941309 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 18:18:31.945324 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:18:32.056231 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:18:32.285629 1063162 pod_ready.go:102] pod "coredns-7db6d8ff4d-75qr8" in "kube-system" namespace has status "Ready":"False"
	I0729 18:18:32.440690 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:18:32.441659 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 18:18:32.555831 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:18:32.869259 1063162 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.070436089s)
	I0729 18:18:32.869315 1063162 main.go:141] libmachine: Making call to close driver server
	I0729 18:18:32.869330 1063162 main.go:141] libmachine: (addons-685520) Calling .Close
	I0729 18:18:32.869793 1063162 main.go:141] libmachine: (addons-685520) DBG | Closing plugin on server side
	I0729 18:18:32.869813 1063162 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:18:32.869830 1063162 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:18:32.869847 1063162 main.go:141] libmachine: Making call to close driver server
	I0729 18:18:32.869858 1063162 main.go:141] libmachine: (addons-685520) Calling .Close
	I0729 18:18:32.870153 1063162 main.go:141] libmachine: (addons-685520) DBG | Closing plugin on server side
	I0729 18:18:32.870216 1063162 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:18:32.870242 1063162 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:18:32.945050 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:18:32.945684 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 18:18:33.054807 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:18:33.378077 1063162 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.553773854s)
	I0729 18:18:33.378144 1063162 main.go:141] libmachine: Making call to close driver server
	I0729 18:18:33.378162 1063162 main.go:141] libmachine: (addons-685520) Calling .Close
	I0729 18:18:33.378436 1063162 main.go:141] libmachine: (addons-685520) DBG | Closing plugin on server side
	I0729 18:18:33.378474 1063162 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:18:33.378481 1063162 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:18:33.378488 1063162 main.go:141] libmachine: Making call to close driver server
	I0729 18:18:33.378498 1063162 main.go:141] libmachine: (addons-685520) Calling .Close
	I0729 18:18:33.378723 1063162 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:18:33.378735 1063162 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:18:33.378762 1063162 main.go:141] libmachine: (addons-685520) DBG | Closing plugin on server side
	I0729 18:18:33.380845 1063162 addons.go:475] Verifying addon gcp-auth=true in "addons-685520"
	I0729 18:18:33.382093 1063162 out.go:177] * Verifying gcp-auth addon...
	I0729 18:18:33.383822 1063162 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0729 18:18:33.406186 1063162 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0729 18:18:33.406205 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:18:33.447822 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 18:18:33.452533 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:18:33.585415 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:18:33.888376 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:18:33.939942 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:18:33.940234 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 18:18:34.072640 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:18:34.388208 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:18:34.446422 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:18:34.449929 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 18:18:34.555081 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:18:34.777043 1063162 pod_ready.go:102] pod "coredns-7db6d8ff4d-75qr8" in "kube-system" namespace has status "Ready":"False"
	I0729 18:18:34.889448 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:18:34.944666 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:18:34.947877 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 18:18:35.057628 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:18:35.276074 1063162 pod_ready.go:97] pod "coredns-7db6d8ff4d-75qr8" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-07-29 18:18:34 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-07-29 18:18:21 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-07-29 18:18:21 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-07-29 18:18:21 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-07-29 18:18:21 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.137 HostIPs:[{IP:192.168.39.137}] PodIP: PodIPs:[] StartTime:2024-07-29 18:18:21 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-07-29 18:18:23 +0000 UTC,FinishedAt:2024-07-29 18:18:33 +0000 UTC,ContainerID:cri-o://fe394149a1657f103574c3162756ef412b0fa90ae7d63d6d0c80b12ac296e923,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.1 ImageID:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4 ContainerID:cri-o://fe394149a1657f103574c3162756ef412b0fa90ae7d63d6d0c80b12ac296e923 Started:0xc0021801c0 AllocatedResources:map[] Resources:nil VolumeMounts:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0729 18:18:35.276114 1063162 pod_ready.go:81] duration metric: took 7.506080732s for pod "coredns-7db6d8ff4d-75qr8" in "kube-system" namespace to be "Ready" ...
	E0729 18:18:35.276131 1063162 pod_ready.go:66] WaitExtra: waitPodCondition: pod "coredns-7db6d8ff4d-75qr8" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-07-29 18:18:34 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-07-29 18:18:21 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-07-29 18:18:21 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-07-29 18:18:21 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-07-29 18:18:21 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.137 HostIPs:[{IP:192.168.39.137}] PodIP: PodIPs:[] StartTime:2024-07-29 18:18:21 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-07-29 18:18:23 +0000 UTC,FinishedAt:2024-07-29 18:18:33 +0000 UTC,ContainerID:cri-o://fe394149a1657f103574c3162756ef412b0fa90ae7d63d6d0c80b12ac296e923,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.1 ImageID:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4 ContainerID:cri-o://fe394149a1657f103574c3162756ef412b0fa90ae7d63d6d0c80b12ac296e923 Started:0xc0021801c0 AllocatedResources:map[] Resources:nil VolumeMounts:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0729 18:18:35.276142 1063162 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-zrfkz" in "kube-system" namespace to be "Ready" ...
	I0729 18:18:35.387735 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:18:35.440649 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:18:35.442153 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 18:18:35.555031 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:18:35.888331 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:18:35.939619 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 18:18:35.941246 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:18:36.054564 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:18:36.387668 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:18:36.440783 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:18:36.440869 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 18:18:36.554489 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:18:36.887590 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:18:36.940813 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:18:36.943550 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 18:18:37.054187 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:18:37.282460 1063162 pod_ready.go:102] pod "coredns-7db6d8ff4d-zrfkz" in "kube-system" namespace has status "Ready":"False"
	I0729 18:18:37.387780 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:18:37.652350 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:18:37.652441 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 18:18:37.654774 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:18:37.887889 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:18:37.940548 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:18:37.942298 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 18:18:38.055160 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:18:38.388086 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:18:38.440372 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:18:38.440599 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 18:18:38.554825 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:18:38.889004 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:18:38.942092 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 18:18:38.942272 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:18:39.054825 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:18:39.282981 1063162 pod_ready.go:102] pod "coredns-7db6d8ff4d-zrfkz" in "kube-system" namespace has status "Ready":"False"
	I0729 18:18:39.386817 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:18:39.446437 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:18:39.447685 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 18:18:39.554890 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:18:39.887723 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:18:39.940508 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 18:18:39.941874 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:18:40.054425 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:18:40.388065 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:18:40.441762 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:18:40.448420 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 18:18:40.567120 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:18:40.887070 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:18:40.942718 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 18:18:40.949257 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:18:41.054220 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:18:41.387440 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:18:41.439804 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:18:41.440125 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 18:18:41.554289 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:18:41.782015 1063162 pod_ready.go:102] pod "coredns-7db6d8ff4d-zrfkz" in "kube-system" namespace has status "Ready":"False"
	I0729 18:18:41.886961 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:18:41.941540 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 18:18:41.941887 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:18:42.057266 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:18:42.664989 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:18:42.665388 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 18:18:42.665563 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:18:42.667438 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:18:42.888017 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:18:42.940423 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:18:42.942180 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 18:18:43.054559 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:18:43.388423 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:18:43.439585 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:18:43.440050 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 18:18:43.554868 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:18:43.782303 1063162 pod_ready.go:102] pod "coredns-7db6d8ff4d-zrfkz" in "kube-system" namespace has status "Ready":"False"
	I0729 18:18:43.887661 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:18:43.941109 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:18:43.943928 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 18:18:44.055192 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:18:44.388092 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:18:44.440884 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 18:18:44.441025 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:18:44.554629 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:18:44.887386 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:18:44.940001 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:18:44.941512 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 18:18:45.054356 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:18:45.387832 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:18:45.439790 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:18:45.440150 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 18:18:45.554657 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:18:45.963746 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 18:18:45.965252 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:18:45.972406 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:18:45.973237 1063162 pod_ready.go:102] pod "coredns-7db6d8ff4d-zrfkz" in "kube-system" namespace has status "Ready":"False"
	I0729 18:18:46.054973 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:18:46.388044 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:18:46.439750 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 18:18:46.441004 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:18:46.557509 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:18:46.888220 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:18:46.941202 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:18:46.941376 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 18:18:47.057107 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:18:47.387693 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:18:47.440965 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:18:47.442072 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 18:18:47.557125 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:18:47.887589 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:18:47.940838 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:18:47.941570 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 18:18:48.054447 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:18:48.283469 1063162 pod_ready.go:102] pod "coredns-7db6d8ff4d-zrfkz" in "kube-system" namespace has status "Ready":"False"
	I0729 18:18:48.387608 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:18:48.441061 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 18:18:48.441804 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:18:48.555173 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:18:48.887230 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:18:48.939240 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:18:48.940716 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 18:18:49.054976 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:18:49.388304 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:18:49.442644 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 18:18:49.442821 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:18:49.555185 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:18:49.887788 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:18:49.941386 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 18:18:49.941913 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:18:50.054641 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:18:50.388513 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:18:50.441616 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 18:18:50.441693 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:18:50.554450 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:18:50.782157 1063162 pod_ready.go:102] pod "coredns-7db6d8ff4d-zrfkz" in "kube-system" namespace has status "Ready":"False"
	I0729 18:18:50.887896 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:18:50.940144 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:18:50.941384 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 18:18:51.055331 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:18:51.390537 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:18:51.456887 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:18:51.457380 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 18:18:51.554289 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:18:51.888020 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:18:51.941718 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 18:18:51.942102 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:18:52.055000 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:18:52.387247 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:18:52.442199 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:18:52.444451 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 18:18:52.553888 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:18:52.887966 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:18:52.942545 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 18:18:52.942767 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:18:53.054365 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:18:53.656646 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:18:53.659090 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:18:53.660684 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 18:18:53.662712 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:18:53.663897 1063162 pod_ready.go:102] pod "coredns-7db6d8ff4d-zrfkz" in "kube-system" namespace has status "Ready":"False"
	I0729 18:18:53.887479 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:18:53.941838 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:18:53.942315 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 18:18:54.062331 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:18:54.387656 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:18:54.446690 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 18:18:54.447311 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:18:54.554311 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:18:54.892053 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:18:54.940708 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 18:18:54.946804 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:18:55.053827 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:18:55.388093 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:18:55.438400 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:18:55.441171 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 18:18:55.554994 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:18:55.781885 1063162 pod_ready.go:102] pod "coredns-7db6d8ff4d-zrfkz" in "kube-system" namespace has status "Ready":"False"
	I0729 18:18:55.921933 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:18:55.941088 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 18:18:55.943298 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:18:56.055534 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:18:56.388931 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:18:56.440192 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:18:56.441007 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 18:18:56.555396 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:18:56.887781 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:18:56.940252 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:18:56.941685 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 18:18:57.054411 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:18:57.387676 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:18:57.441291 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 18:18:57.441447 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:18:57.555104 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:18:58.035985 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:18:58.036677 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:18:58.037201 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 18:18:58.038234 1063162 pod_ready.go:102] pod "coredns-7db6d8ff4d-zrfkz" in "kube-system" namespace has status "Ready":"False"
	I0729 18:18:58.053160 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:18:58.387424 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:18:58.440987 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 18:18:58.441029 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:18:58.557087 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:18:58.886830 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:18:58.939748 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:18:58.940003 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 18:18:59.054866 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:18:59.387323 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:18:59.438833 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:18:59.440095 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 18:18:59.555228 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:18:59.889105 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:18:59.943980 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 18:18:59.945096 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:19:00.054526 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:19:00.282904 1063162 pod_ready.go:102] pod "coredns-7db6d8ff4d-zrfkz" in "kube-system" namespace has status "Ready":"False"
	I0729 18:19:00.388284 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:19:00.439314 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:19:00.442329 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 18:19:00.554114 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:19:00.892796 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:19:00.940600 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 18:19:00.940636 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:19:01.053539 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:19:01.387165 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:19:01.440458 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 18:19:01.440566 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:19:01.554387 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:19:02.016328 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:19:02.016358 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:19:02.017818 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 18:19:02.054377 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:19:02.281178 1063162 pod_ready.go:92] pod "coredns-7db6d8ff4d-zrfkz" in "kube-system" namespace has status "Ready":"True"
	I0729 18:19:02.281201 1063162 pod_ready.go:81] duration metric: took 27.005051176s for pod "coredns-7db6d8ff4d-zrfkz" in "kube-system" namespace to be "Ready" ...
	I0729 18:19:02.281211 1063162 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-685520" in "kube-system" namespace to be "Ready" ...
	I0729 18:19:02.285247 1063162 pod_ready.go:92] pod "etcd-addons-685520" in "kube-system" namespace has status "Ready":"True"
	I0729 18:19:02.285264 1063162 pod_ready.go:81] duration metric: took 4.047747ms for pod "etcd-addons-685520" in "kube-system" namespace to be "Ready" ...
	I0729 18:19:02.285273 1063162 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-685520" in "kube-system" namespace to be "Ready" ...
	I0729 18:19:02.289340 1063162 pod_ready.go:92] pod "kube-apiserver-addons-685520" in "kube-system" namespace has status "Ready":"True"
	I0729 18:19:02.289355 1063162 pod_ready.go:81] duration metric: took 4.076496ms for pod "kube-apiserver-addons-685520" in "kube-system" namespace to be "Ready" ...
	I0729 18:19:02.289362 1063162 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-685520" in "kube-system" namespace to be "Ready" ...
	I0729 18:19:02.293640 1063162 pod_ready.go:92] pod "kube-controller-manager-addons-685520" in "kube-system" namespace has status "Ready":"True"
	I0729 18:19:02.293653 1063162 pod_ready.go:81] duration metric: took 4.285177ms for pod "kube-controller-manager-addons-685520" in "kube-system" namespace to be "Ready" ...
	I0729 18:19:02.293662 1063162 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-bnslr" in "kube-system" namespace to be "Ready" ...
	I0729 18:19:02.297070 1063162 pod_ready.go:92] pod "kube-proxy-bnslr" in "kube-system" namespace has status "Ready":"True"
	I0729 18:19:02.297082 1063162 pod_ready.go:81] duration metric: took 3.415233ms for pod "kube-proxy-bnslr" in "kube-system" namespace to be "Ready" ...
	I0729 18:19:02.297088 1063162 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-685520" in "kube-system" namespace to be "Ready" ...
	I0729 18:19:02.387067 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:19:02.438720 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:19:02.439990 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 18:19:02.553866 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:19:02.679836 1063162 pod_ready.go:92] pod "kube-scheduler-addons-685520" in "kube-system" namespace has status "Ready":"True"
	I0729 18:19:02.679859 1063162 pod_ready.go:81] duration metric: took 382.764517ms for pod "kube-scheduler-addons-685520" in "kube-system" namespace to be "Ready" ...
	I0729 18:19:02.679868 1063162 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-c59844bb4-qt4qg" in "kube-system" namespace to be "Ready" ...
	I0729 18:19:02.898928 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:19:02.951105 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 18:19:02.951549 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:19:03.054043 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:19:03.387546 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:19:03.439204 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:19:03.439720 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 18:19:03.554334 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:19:03.887221 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:19:03.940442 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 18:19:03.940801 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:19:04.053805 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:19:04.387545 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:19:04.440456 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 18:19:04.440813 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:19:04.554525 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:19:04.686029 1063162 pod_ready.go:102] pod "metrics-server-c59844bb4-qt4qg" in "kube-system" namespace has status "Ready":"False"
	I0729 18:19:04.887181 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:19:04.941196 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 18:19:04.941612 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:19:05.054202 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:19:05.388721 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:19:05.441037 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 18:19:05.441538 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:19:05.554996 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:19:05.887901 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:19:05.941775 1063162 kapi.go:107] duration metric: took 35.506938989s to wait for kubernetes.io/minikube-addons=registry ...
	I0729 18:19:05.943046 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:19:06.054069 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:19:06.387559 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:19:06.439228 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:19:06.554781 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:19:06.687322 1063162 pod_ready.go:102] pod "metrics-server-c59844bb4-qt4qg" in "kube-system" namespace has status "Ready":"False"
	I0729 18:19:06.887950 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:19:06.950969 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:19:07.054223 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:19:07.387306 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:19:07.561092 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:19:07.562224 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:19:07.887809 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:19:07.938961 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:19:08.054398 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:19:08.387390 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:19:08.439509 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:19:08.554380 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:19:08.887244 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:19:08.938745 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:19:09.388792 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:19:09.391383 1063162 pod_ready.go:102] pod "metrics-server-c59844bb4-qt4qg" in "kube-system" namespace has status "Ready":"False"
	I0729 18:19:09.392780 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:19:09.439904 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:19:09.554018 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:19:09.887306 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:19:09.939310 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:19:10.055790 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:19:10.388435 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:19:10.439811 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:19:10.554655 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:19:10.887506 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:19:10.939038 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:19:11.054749 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:19:11.387349 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:19:11.438834 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:19:11.555053 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:19:11.685797 1063162 pod_ready.go:102] pod "metrics-server-c59844bb4-qt4qg" in "kube-system" namespace has status "Ready":"False"
	I0729 18:19:11.891182 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:19:11.940568 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:19:12.055875 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:19:12.387787 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:19:12.442204 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:19:12.556026 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:19:12.887815 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:19:12.939800 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:19:13.054158 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:19:13.387913 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:19:13.439631 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:19:13.554666 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:19:13.687186 1063162 pod_ready.go:102] pod "metrics-server-c59844bb4-qt4qg" in "kube-system" namespace has status "Ready":"False"
	I0729 18:19:13.888677 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:19:13.938960 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:19:14.056286 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:19:14.388008 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:19:14.439735 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:19:14.555083 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:19:14.887904 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:19:14.939910 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:19:15.064703 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:19:15.387659 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:19:15.447046 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:19:15.561675 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:19:15.887021 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:19:15.939193 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:19:16.053957 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:19:16.185600 1063162 pod_ready.go:102] pod "metrics-server-c59844bb4-qt4qg" in "kube-system" namespace has status "Ready":"False"
	I0729 18:19:16.388027 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:19:16.439800 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:19:16.555210 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:19:16.886994 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:19:16.940065 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:19:17.053475 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:19:17.387849 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:19:17.438728 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:19:17.554826 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:19:17.889045 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:19:17.939519 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:19:18.054548 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:19:18.547102 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:19:18.551175 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:19:18.553503 1063162 pod_ready.go:102] pod "metrics-server-c59844bb4-qt4qg" in "kube-system" namespace has status "Ready":"False"
	I0729 18:19:18.556009 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:19:18.887339 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:19:18.940563 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:19:19.063735 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:19:19.387289 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:19:19.450671 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:19:19.555719 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:19:19.888392 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:19:19.940047 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:19:20.054589 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:19:20.387308 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:19:20.442668 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:19:20.555341 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:19:20.685162 1063162 pod_ready.go:102] pod "metrics-server-c59844bb4-qt4qg" in "kube-system" namespace has status "Ready":"False"
	I0729 18:19:20.887241 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:19:20.938815 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:19:21.059680 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:19:21.386983 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:19:21.439564 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:19:21.555108 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:19:21.887325 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:19:21.938705 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:19:22.054704 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:19:22.387861 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:19:22.439524 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:19:22.557464 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:19:22.687127 1063162 pod_ready.go:102] pod "metrics-server-c59844bb4-qt4qg" in "kube-system" namespace has status "Ready":"False"
	I0729 18:19:22.887067 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:19:22.940135 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:19:23.054045 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:19:23.387908 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:19:23.441177 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:19:23.554783 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:19:23.887666 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:19:23.939430 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:19:24.054185 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:19:24.387149 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:19:24.442100 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:19:24.554549 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:19:24.887445 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:19:24.939617 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:19:25.057306 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:19:25.187311 1063162 pod_ready.go:102] pod "metrics-server-c59844bb4-qt4qg" in "kube-system" namespace has status "Ready":"False"
	I0729 18:19:25.387293 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:19:25.438460 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:19:25.555709 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:19:25.888849 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:19:25.939278 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:19:26.054706 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:19:26.387929 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:19:26.439326 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:19:26.555025 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 18:19:26.887797 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:19:26.939251 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:19:27.055884 1063162 kapi.go:107] duration metric: took 55.506971084s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0729 18:19:27.387935 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:19:27.439640 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:19:27.688009 1063162 pod_ready.go:102] pod "metrics-server-c59844bb4-qt4qg" in "kube-system" namespace has status "Ready":"False"
	I0729 18:19:27.887318 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:19:27.938920 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:19:28.387968 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:19:28.439424 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:19:28.888139 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:19:28.938248 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:19:29.386946 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:19:29.439503 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:19:29.887876 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:19:29.939403 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:19:30.186416 1063162 pod_ready.go:102] pod "metrics-server-c59844bb4-qt4qg" in "kube-system" namespace has status "Ready":"False"
	I0729 18:19:30.387331 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:19:30.438756 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:19:30.888362 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:19:30.938958 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:19:31.387900 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:19:31.439792 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:19:31.887862 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:19:31.939407 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:19:32.387675 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:19:32.439038 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:19:32.685657 1063162 pod_ready.go:102] pod "metrics-server-c59844bb4-qt4qg" in "kube-system" namespace has status "Ready":"False"
	I0729 18:19:32.887749 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:19:32.939262 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:19:33.387817 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:19:33.438970 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:19:33.888007 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:19:33.939512 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:19:34.387641 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:19:34.439265 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:19:34.686864 1063162 pod_ready.go:102] pod "metrics-server-c59844bb4-qt4qg" in "kube-system" namespace has status "Ready":"False"
	I0729 18:19:34.887841 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:19:34.940783 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:19:35.388805 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:19:35.440644 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:19:35.888572 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:19:35.940416 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:19:36.394173 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:19:36.440668 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:19:36.888755 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:19:36.939727 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:19:37.186576 1063162 pod_ready.go:102] pod "metrics-server-c59844bb4-qt4qg" in "kube-system" namespace has status "Ready":"False"
	I0729 18:19:37.388483 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:19:37.439463 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:19:38.358600 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:19:38.359024 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:19:38.387546 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:19:38.439020 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:19:38.887289 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:19:38.939278 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:19:39.387770 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:19:39.440494 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:19:39.688774 1063162 pod_ready.go:102] pod "metrics-server-c59844bb4-qt4qg" in "kube-system" namespace has status "Ready":"False"
	I0729 18:19:39.887811 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:19:39.940057 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:19:40.388592 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:19:40.439676 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:19:40.887130 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:19:40.940565 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:19:41.391445 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:19:41.439617 1063162 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 18:19:41.887528 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:19:41.940550 1063162 kapi.go:107] duration metric: took 1m11.505776888s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0729 18:19:42.186458 1063162 pod_ready.go:102] pod "metrics-server-c59844bb4-qt4qg" in "kube-system" namespace has status "Ready":"False"
	I0729 18:19:42.387429 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:19:42.889238 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:19:43.388329 1063162 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 18:19:43.891131 1063162 kapi.go:107] duration metric: took 1m10.507305023s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0729 18:19:43.892454 1063162 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-685520 cluster.
	I0729 18:19:43.893796 1063162 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0729 18:19:43.895367 1063162 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0729 18:19:43.896948 1063162 out.go:177] * Enabled addons: cloud-spanner, nvidia-device-plugin, default-storageclass, storage-provisioner-rancher, storage-provisioner, ingress-dns, inspektor-gadget, metrics-server, helm-tiller, yakd, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0729 18:19:43.898042 1063162 addons.go:510] duration metric: took 1m21.566659903s for enable addons: enabled=[cloud-spanner nvidia-device-plugin default-storageclass storage-provisioner-rancher storage-provisioner ingress-dns inspektor-gadget metrics-server helm-tiller yakd volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0729 18:19:44.686139 1063162 pod_ready.go:102] pod "metrics-server-c59844bb4-qt4qg" in "kube-system" namespace has status "Ready":"False"
	I0729 18:19:47.186932 1063162 pod_ready.go:102] pod "metrics-server-c59844bb4-qt4qg" in "kube-system" namespace has status "Ready":"False"
	I0729 18:19:49.685648 1063162 pod_ready.go:102] pod "metrics-server-c59844bb4-qt4qg" in "kube-system" namespace has status "Ready":"False"
	I0729 18:19:51.686038 1063162 pod_ready.go:102] pod "metrics-server-c59844bb4-qt4qg" in "kube-system" namespace has status "Ready":"False"
	I0729 18:19:54.187333 1063162 pod_ready.go:102] pod "metrics-server-c59844bb4-qt4qg" in "kube-system" namespace has status "Ready":"False"
	I0729 18:19:56.687987 1063162 pod_ready.go:102] pod "metrics-server-c59844bb4-qt4qg" in "kube-system" namespace has status "Ready":"False"
	I0729 18:19:58.185576 1063162 pod_ready.go:92] pod "metrics-server-c59844bb4-qt4qg" in "kube-system" namespace has status "Ready":"True"
	I0729 18:19:58.185608 1063162 pod_ready.go:81] duration metric: took 55.50573426s for pod "metrics-server-c59844bb4-qt4qg" in "kube-system" namespace to be "Ready" ...
	I0729 18:19:58.185619 1063162 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-4bzd5" in "kube-system" namespace to be "Ready" ...
	I0729 18:19:58.190007 1063162 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-4bzd5" in "kube-system" namespace has status "Ready":"True"
	I0729 18:19:58.190024 1063162 pod_ready.go:81] duration metric: took 4.398682ms for pod "nvidia-device-plugin-daemonset-4bzd5" in "kube-system" namespace to be "Ready" ...
	I0729 18:19:58.190039 1063162 pod_ready.go:38] duration metric: took 1m30.474290108s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 18:19:58.190070 1063162 api_server.go:52] waiting for apiserver process to appear ...
	I0729 18:19:58.190100 1063162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:19:58.190149 1063162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:19:58.243586 1063162 cri.go:89] found id: "2bdbc0aba106d0a990794004e16fc961b45b6457011649bfa631942df4131828"
	I0729 18:19:58.243616 1063162 cri.go:89] found id: ""
	I0729 18:19:58.243628 1063162 logs.go:276] 1 containers: [2bdbc0aba106d0a990794004e16fc961b45b6457011649bfa631942df4131828]
	I0729 18:19:58.243696 1063162 ssh_runner.go:195] Run: which crictl
	I0729 18:19:58.250303 1063162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:19:58.250369 1063162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:19:58.288964 1063162 cri.go:89] found id: "793fd521a6ea14fb86b4264bd92de2b14aaf7a97303a8d5b6772e91540985c36"
	I0729 18:19:58.288992 1063162 cri.go:89] found id: ""
	I0729 18:19:58.289001 1063162 logs.go:276] 1 containers: [793fd521a6ea14fb86b4264bd92de2b14aaf7a97303a8d5b6772e91540985c36]
	I0729 18:19:58.289051 1063162 ssh_runner.go:195] Run: which crictl
	I0729 18:19:58.293025 1063162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:19:58.293094 1063162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:19:58.333426 1063162 cri.go:89] found id: "0159416a2ffac2dd9631cfc5f2b67fa1f6485c8ec1207fc9cf2cce2639054ffa"
	I0729 18:19:58.333464 1063162 cri.go:89] found id: ""
	I0729 18:19:58.333474 1063162 logs.go:276] 1 containers: [0159416a2ffac2dd9631cfc5f2b67fa1f6485c8ec1207fc9cf2cce2639054ffa]
	I0729 18:19:58.333542 1063162 ssh_runner.go:195] Run: which crictl
	I0729 18:19:58.337610 1063162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:19:58.337691 1063162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:19:58.376345 1063162 cri.go:89] found id: "49bf3e5a91fe303d566b5efd6ac023629a37b3b8219cfbde451cccfcb2606a30"
	I0729 18:19:58.376385 1063162 cri.go:89] found id: ""
	I0729 18:19:58.376396 1063162 logs.go:276] 1 containers: [49bf3e5a91fe303d566b5efd6ac023629a37b3b8219cfbde451cccfcb2606a30]
	I0729 18:19:58.376462 1063162 ssh_runner.go:195] Run: which crictl
	I0729 18:19:58.380677 1063162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:19:58.380735 1063162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:19:58.424109 1063162 cri.go:89] found id: "56985357a76b9e9abb7aa60d08ffac9e1a728c47e6cbd48dfeef0b7068d90540"
	I0729 18:19:58.424134 1063162 cri.go:89] found id: ""
	I0729 18:19:58.424142 1063162 logs.go:276] 1 containers: [56985357a76b9e9abb7aa60d08ffac9e1a728c47e6cbd48dfeef0b7068d90540]
	I0729 18:19:58.424195 1063162 ssh_runner.go:195] Run: which crictl
	I0729 18:19:58.428267 1063162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:19:58.428339 1063162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:19:58.464573 1063162 cri.go:89] found id: "b87f1d7ad226dbe3077b779dd4782c91b43154f698936d97b2b7b66fe5e00732"
	I0729 18:19:58.464592 1063162 cri.go:89] found id: ""
	I0729 18:19:58.464603 1063162 logs.go:276] 1 containers: [b87f1d7ad226dbe3077b779dd4782c91b43154f698936d97b2b7b66fe5e00732]
	I0729 18:19:58.464666 1063162 ssh_runner.go:195] Run: which crictl
	I0729 18:19:58.469665 1063162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:19:58.469731 1063162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:19:58.527618 1063162 cri.go:89] found id: ""
	I0729 18:19:58.527654 1063162 logs.go:276] 0 containers: []
	W0729 18:19:58.527667 1063162 logs.go:278] No container was found matching "kindnet"
	I0729 18:19:58.527680 1063162 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:19:58.527703 1063162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 18:19:58.717570 1063162 logs.go:123] Gathering logs for kube-scheduler [49bf3e5a91fe303d566b5efd6ac023629a37b3b8219cfbde451cccfcb2606a30] ...
	I0729 18:19:58.717600 1063162 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 49bf3e5a91fe303d566b5efd6ac023629a37b3b8219cfbde451cccfcb2606a30"
	I0729 18:19:58.761822 1063162 logs.go:123] Gathering logs for kube-controller-manager [b87f1d7ad226dbe3077b779dd4782c91b43154f698936d97b2b7b66fe5e00732] ...
	I0729 18:19:58.761855 1063162 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b87f1d7ad226dbe3077b779dd4782c91b43154f698936d97b2b7b66fe5e00732"
	I0729 18:19:58.825529 1063162 logs.go:123] Gathering logs for kubelet ...
	I0729 18:19:58.825567 1063162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0729 18:19:58.878700 1063162 logs.go:138] Found kubelet problem: Jul 29 18:18:27 addons-685520 kubelet[1275]: W0729 18:18:27.577757    1275 reflector.go:547] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-685520" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-685520' and this object
	W0729 18:19:58.878882 1063162 logs.go:138] Found kubelet problem: Jul 29 18:18:27 addons-685520 kubelet[1275]: E0729 18:18:27.577812    1275 reflector.go:150] object-"local-path-storage"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-685520" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-685520' and this object
	W0729 18:19:58.879025 1063162 logs.go:138] Found kubelet problem: Jul 29 18:18:27 addons-685520 kubelet[1275]: W0729 18:18:27.577861    1275 reflector.go:547] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-685520" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-685520' and this object
	W0729 18:19:58.879181 1063162 logs.go:138] Found kubelet problem: Jul 29 18:18:27 addons-685520 kubelet[1275]: E0729 18:18:27.577873    1275 reflector.go:150] object-"local-path-storage"/"local-path-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-685520" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-685520' and this object
	W0729 18:19:58.880067 1063162 logs.go:138] Found kubelet problem: Jul 29 18:18:28 addons-685520 kubelet[1275]: W0729 18:18:28.391954    1275 reflector.go:547] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-685520" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-685520' and this object
	W0729 18:19:58.880220 1063162 logs.go:138] Found kubelet problem: Jul 29 18:18:28 addons-685520 kubelet[1275]: E0729 18:18:28.391984    1275 reflector.go:150] object-"yakd-dashboard"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-685520" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-685520' and this object
	I0729 18:19:58.908031 1063162 logs.go:123] Gathering logs for dmesg ...
	I0729 18:19:58.908063 1063162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:19:58.922757 1063162 logs.go:123] Gathering logs for kube-apiserver [2bdbc0aba106d0a990794004e16fc961b45b6457011649bfa631942df4131828] ...
	I0729 18:19:58.922784 1063162 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2bdbc0aba106d0a990794004e16fc961b45b6457011649bfa631942df4131828"
	I0729 18:19:58.981605 1063162 logs.go:123] Gathering logs for etcd [793fd521a6ea14fb86b4264bd92de2b14aaf7a97303a8d5b6772e91540985c36] ...
	I0729 18:19:58.981639 1063162 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 793fd521a6ea14fb86b4264bd92de2b14aaf7a97303a8d5b6772e91540985c36"
	I0729 18:19:59.055315 1063162 logs.go:123] Gathering logs for coredns [0159416a2ffac2dd9631cfc5f2b67fa1f6485c8ec1207fc9cf2cce2639054ffa] ...
	I0729 18:19:59.055348 1063162 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0159416a2ffac2dd9631cfc5f2b67fa1f6485c8ec1207fc9cf2cce2639054ffa"
	I0729 18:19:59.110816 1063162 logs.go:123] Gathering logs for kube-proxy [56985357a76b9e9abb7aa60d08ffac9e1a728c47e6cbd48dfeef0b7068d90540] ...
	I0729 18:19:59.110858 1063162 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 56985357a76b9e9abb7aa60d08ffac9e1a728c47e6cbd48dfeef0b7068d90540"
	I0729 18:19:59.151952 1063162 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:19:59.151982 1063162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:20:00.063422 1063162 logs.go:123] Gathering logs for container status ...
	I0729 18:20:00.063494 1063162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:20:00.123881 1063162 out.go:304] Setting ErrFile to fd 2...
	I0729 18:20:00.123920 1063162 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0729 18:20:00.124000 1063162 out.go:239] X Problems detected in kubelet:
	W0729 18:20:00.124018 1063162 out.go:239]   Jul 29 18:18:27 addons-685520 kubelet[1275]: E0729 18:18:27.577812    1275 reflector.go:150] object-"local-path-storage"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-685520" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-685520' and this object
	W0729 18:20:00.124035 1063162 out.go:239]   Jul 29 18:18:27 addons-685520 kubelet[1275]: W0729 18:18:27.577861    1275 reflector.go:547] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-685520" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-685520' and this object
	W0729 18:20:00.124047 1063162 out.go:239]   Jul 29 18:18:27 addons-685520 kubelet[1275]: E0729 18:18:27.577873    1275 reflector.go:150] object-"local-path-storage"/"local-path-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-685520" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-685520' and this object
	W0729 18:20:00.124055 1063162 out.go:239]   Jul 29 18:18:28 addons-685520 kubelet[1275]: W0729 18:18:28.391954    1275 reflector.go:547] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-685520" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-685520' and this object
	W0729 18:20:00.124064 1063162 out.go:239]   Jul 29 18:18:28 addons-685520 kubelet[1275]: E0729 18:18:28.391984    1275 reflector.go:150] object-"yakd-dashboard"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-685520" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-685520' and this object
	I0729 18:20:00.124072 1063162 out.go:304] Setting ErrFile to fd 2...
	I0729 18:20:00.124083 1063162 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 18:20:10.124592 1063162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:20:10.158696 1063162 api_server.go:72] duration metric: took 1m47.82716557s to wait for apiserver process to appear ...
	I0729 18:20:10.158731 1063162 api_server.go:88] waiting for apiserver healthz status ...
	I0729 18:20:10.158774 1063162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:20:10.158834 1063162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:20:10.233393 1063162 cri.go:89] found id: "2bdbc0aba106d0a990794004e16fc961b45b6457011649bfa631942df4131828"
	I0729 18:20:10.233422 1063162 cri.go:89] found id: ""
	I0729 18:20:10.233433 1063162 logs.go:276] 1 containers: [2bdbc0aba106d0a990794004e16fc961b45b6457011649bfa631942df4131828]
	I0729 18:20:10.233502 1063162 ssh_runner.go:195] Run: which crictl
	I0729 18:20:10.238607 1063162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:20:10.238679 1063162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:20:10.311518 1063162 cri.go:89] found id: "793fd521a6ea14fb86b4264bd92de2b14aaf7a97303a8d5b6772e91540985c36"
	I0729 18:20:10.311541 1063162 cri.go:89] found id: ""
	I0729 18:20:10.311553 1063162 logs.go:276] 1 containers: [793fd521a6ea14fb86b4264bd92de2b14aaf7a97303a8d5b6772e91540985c36]
	I0729 18:20:10.311610 1063162 ssh_runner.go:195] Run: which crictl
	I0729 18:20:10.317247 1063162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:20:10.317307 1063162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:20:10.416836 1063162 cri.go:89] found id: "0159416a2ffac2dd9631cfc5f2b67fa1f6485c8ec1207fc9cf2cce2639054ffa"
	I0729 18:20:10.416868 1063162 cri.go:89] found id: ""
	I0729 18:20:10.416878 1063162 logs.go:276] 1 containers: [0159416a2ffac2dd9631cfc5f2b67fa1f6485c8ec1207fc9cf2cce2639054ffa]
	I0729 18:20:10.416952 1063162 ssh_runner.go:195] Run: which crictl
	I0729 18:20:10.425550 1063162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:20:10.425624 1063162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:20:10.490746 1063162 cri.go:89] found id: "49bf3e5a91fe303d566b5efd6ac023629a37b3b8219cfbde451cccfcb2606a30"
	I0729 18:20:10.490768 1063162 cri.go:89] found id: ""
	I0729 18:20:10.490777 1063162 logs.go:276] 1 containers: [49bf3e5a91fe303d566b5efd6ac023629a37b3b8219cfbde451cccfcb2606a30]
	I0729 18:20:10.490840 1063162 ssh_runner.go:195] Run: which crictl
	I0729 18:20:10.497973 1063162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:20:10.498036 1063162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:20:10.551296 1063162 cri.go:89] found id: "56985357a76b9e9abb7aa60d08ffac9e1a728c47e6cbd48dfeef0b7068d90540"
	I0729 18:20:10.551318 1063162 cri.go:89] found id: ""
	I0729 18:20:10.551326 1063162 logs.go:276] 1 containers: [56985357a76b9e9abb7aa60d08ffac9e1a728c47e6cbd48dfeef0b7068d90540]
	I0729 18:20:10.551381 1063162 ssh_runner.go:195] Run: which crictl
	I0729 18:20:10.562105 1063162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:20:10.562170 1063162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:20:10.600175 1063162 cri.go:89] found id: "b87f1d7ad226dbe3077b779dd4782c91b43154f698936d97b2b7b66fe5e00732"
	I0729 18:20:10.600198 1063162 cri.go:89] found id: ""
	I0729 18:20:10.600207 1063162 logs.go:276] 1 containers: [b87f1d7ad226dbe3077b779dd4782c91b43154f698936d97b2b7b66fe5e00732]
	I0729 18:20:10.600261 1063162 ssh_runner.go:195] Run: which crictl
	I0729 18:20:10.604568 1063162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:20:10.604646 1063162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:20:10.645461 1063162 cri.go:89] found id: ""
	I0729 18:20:10.645494 1063162 logs.go:276] 0 containers: []
	W0729 18:20:10.645506 1063162 logs.go:278] No container was found matching "kindnet"
	I0729 18:20:10.645518 1063162 logs.go:123] Gathering logs for kube-scheduler [49bf3e5a91fe303d566b5efd6ac023629a37b3b8219cfbde451cccfcb2606a30] ...
	I0729 18:20:10.645532 1063162 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 49bf3e5a91fe303d566b5efd6ac023629a37b3b8219cfbde451cccfcb2606a30"
	I0729 18:20:10.687274 1063162 logs.go:123] Gathering logs for kube-proxy [56985357a76b9e9abb7aa60d08ffac9e1a728c47e6cbd48dfeef0b7068d90540] ...
	I0729 18:20:10.687304 1063162 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 56985357a76b9e9abb7aa60d08ffac9e1a728c47e6cbd48dfeef0b7068d90540"
	I0729 18:20:10.724258 1063162 logs.go:123] Gathering logs for kube-controller-manager [b87f1d7ad226dbe3077b779dd4782c91b43154f698936d97b2b7b66fe5e00732] ...
	I0729 18:20:10.724288 1063162 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b87f1d7ad226dbe3077b779dd4782c91b43154f698936d97b2b7b66fe5e00732"
	I0729 18:20:10.782014 1063162 logs.go:123] Gathering logs for container status ...
	I0729 18:20:10.782053 1063162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:20:10.834133 1063162 logs.go:123] Gathering logs for kubelet ...
	I0729 18:20:10.834168 1063162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0729 18:20:10.892131 1063162 logs.go:138] Found kubelet problem: Jul 29 18:18:27 addons-685520 kubelet[1275]: W0729 18:18:27.577757    1275 reflector.go:547] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-685520" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-685520' and this object
	W0729 18:20:10.892318 1063162 logs.go:138] Found kubelet problem: Jul 29 18:18:27 addons-685520 kubelet[1275]: E0729 18:18:27.577812    1275 reflector.go:150] object-"local-path-storage"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-685520" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-685520' and this object
	W0729 18:20:10.892478 1063162 logs.go:138] Found kubelet problem: Jul 29 18:18:27 addons-685520 kubelet[1275]: W0729 18:18:27.577861    1275 reflector.go:547] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-685520" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-685520' and this object
	W0729 18:20:10.892657 1063162 logs.go:138] Found kubelet problem: Jul 29 18:18:27 addons-685520 kubelet[1275]: E0729 18:18:27.577873    1275 reflector.go:150] object-"local-path-storage"/"local-path-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-685520" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-685520' and this object
	W0729 18:20:10.893536 1063162 logs.go:138] Found kubelet problem: Jul 29 18:18:28 addons-685520 kubelet[1275]: W0729 18:18:28.391954    1275 reflector.go:547] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-685520" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-685520' and this object
	W0729 18:20:10.893703 1063162 logs.go:138] Found kubelet problem: Jul 29 18:18:28 addons-685520 kubelet[1275]: E0729 18:18:28.391984    1275 reflector.go:150] object-"yakd-dashboard"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-685520" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-685520' and this object
	I0729 18:20:10.921132 1063162 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:20:10.921157 1063162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 18:20:11.059580 1063162 logs.go:123] Gathering logs for etcd [793fd521a6ea14fb86b4264bd92de2b14aaf7a97303a8d5b6772e91540985c36] ...
	I0729 18:20:11.059610 1063162 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 793fd521a6ea14fb86b4264bd92de2b14aaf7a97303a8d5b6772e91540985c36"
	I0729 18:20:11.175756 1063162 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:20:11.175791 1063162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:20:11.828336 1063162 logs.go:123] Gathering logs for dmesg ...
	I0729 18:20:11.828445 1063162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:20:11.844887 1063162 logs.go:123] Gathering logs for kube-apiserver [2bdbc0aba106d0a990794004e16fc961b45b6457011649bfa631942df4131828] ...
	I0729 18:20:11.844922 1063162 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2bdbc0aba106d0a990794004e16fc961b45b6457011649bfa631942df4131828"
	I0729 18:20:11.887472 1063162 logs.go:123] Gathering logs for coredns [0159416a2ffac2dd9631cfc5f2b67fa1f6485c8ec1207fc9cf2cce2639054ffa] ...
	I0729 18:20:11.887505 1063162 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0159416a2ffac2dd9631cfc5f2b67fa1f6485c8ec1207fc9cf2cce2639054ffa"
	I0729 18:20:11.928316 1063162 out.go:304] Setting ErrFile to fd 2...
	I0729 18:20:11.928344 1063162 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0729 18:20:11.928401 1063162 out.go:239] X Problems detected in kubelet:
	W0729 18:20:11.928412 1063162 out.go:239]   Jul 29 18:18:27 addons-685520 kubelet[1275]: E0729 18:18:27.577812    1275 reflector.go:150] object-"local-path-storage"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-685520" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-685520' and this object
	W0729 18:20:11.928419 1063162 out.go:239]   Jul 29 18:18:27 addons-685520 kubelet[1275]: W0729 18:18:27.577861    1275 reflector.go:547] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-685520" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-685520' and this object
	W0729 18:20:11.928434 1063162 out.go:239]   Jul 29 18:18:27 addons-685520 kubelet[1275]: E0729 18:18:27.577873    1275 reflector.go:150] object-"local-path-storage"/"local-path-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-685520" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-685520' and this object
	W0729 18:20:11.928447 1063162 out.go:239]   Jul 29 18:18:28 addons-685520 kubelet[1275]: W0729 18:18:28.391954    1275 reflector.go:547] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-685520" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-685520' and this object
	W0729 18:20:11.928460 1063162 out.go:239]   Jul 29 18:18:28 addons-685520 kubelet[1275]: E0729 18:18:28.391984    1275 reflector.go:150] object-"yakd-dashboard"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-685520" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-685520' and this object
	I0729 18:20:11.928471 1063162 out.go:304] Setting ErrFile to fd 2...
	I0729 18:20:11.928480 1063162 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 18:20:21.929595 1063162 api_server.go:253] Checking apiserver healthz at https://192.168.39.137:8443/healthz ...
	I0729 18:20:21.935957 1063162 api_server.go:279] https://192.168.39.137:8443/healthz returned 200:
	ok
	I0729 18:20:21.938368 1063162 api_server.go:141] control plane version: v1.30.3
	I0729 18:20:21.938388 1063162 api_server.go:131] duration metric: took 11.779651063s to wait for apiserver health ...
	I0729 18:20:21.938397 1063162 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 18:20:21.938427 1063162 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 18:20:21.938482 1063162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 18:20:21.999694 1063162 cri.go:89] found id: "2bdbc0aba106d0a990794004e16fc961b45b6457011649bfa631942df4131828"
	I0729 18:20:21.999721 1063162 cri.go:89] found id: ""
	I0729 18:20:21.999732 1063162 logs.go:276] 1 containers: [2bdbc0aba106d0a990794004e16fc961b45b6457011649bfa631942df4131828]
	I0729 18:20:21.999803 1063162 ssh_runner.go:195] Run: which crictl
	I0729 18:20:22.004054 1063162 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 18:20:22.004104 1063162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 18:20:22.042177 1063162 cri.go:89] found id: "793fd521a6ea14fb86b4264bd92de2b14aaf7a97303a8d5b6772e91540985c36"
	I0729 18:20:22.042206 1063162 cri.go:89] found id: ""
	I0729 18:20:22.042217 1063162 logs.go:276] 1 containers: [793fd521a6ea14fb86b4264bd92de2b14aaf7a97303a8d5b6772e91540985c36]
	I0729 18:20:22.042275 1063162 ssh_runner.go:195] Run: which crictl
	I0729 18:20:22.046502 1063162 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 18:20:22.046578 1063162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 18:20:22.084443 1063162 cri.go:89] found id: "0159416a2ffac2dd9631cfc5f2b67fa1f6485c8ec1207fc9cf2cce2639054ffa"
	I0729 18:20:22.084471 1063162 cri.go:89] found id: ""
	I0729 18:20:22.084480 1063162 logs.go:276] 1 containers: [0159416a2ffac2dd9631cfc5f2b67fa1f6485c8ec1207fc9cf2cce2639054ffa]
	I0729 18:20:22.084543 1063162 ssh_runner.go:195] Run: which crictl
	I0729 18:20:22.088882 1063162 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 18:20:22.088962 1063162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 18:20:22.126414 1063162 cri.go:89] found id: "49bf3e5a91fe303d566b5efd6ac023629a37b3b8219cfbde451cccfcb2606a30"
	I0729 18:20:22.126436 1063162 cri.go:89] found id: ""
	I0729 18:20:22.126447 1063162 logs.go:276] 1 containers: [49bf3e5a91fe303d566b5efd6ac023629a37b3b8219cfbde451cccfcb2606a30]
	I0729 18:20:22.126512 1063162 ssh_runner.go:195] Run: which crictl
	I0729 18:20:22.131166 1063162 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 18:20:22.131245 1063162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 18:20:22.181997 1063162 cri.go:89] found id: "56985357a76b9e9abb7aa60d08ffac9e1a728c47e6cbd48dfeef0b7068d90540"
	I0729 18:20:22.182019 1063162 cri.go:89] found id: ""
	I0729 18:20:22.182027 1063162 logs.go:276] 1 containers: [56985357a76b9e9abb7aa60d08ffac9e1a728c47e6cbd48dfeef0b7068d90540]
	I0729 18:20:22.182080 1063162 ssh_runner.go:195] Run: which crictl
	I0729 18:20:22.186268 1063162 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 18:20:22.186322 1063162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 18:20:22.229450 1063162 cri.go:89] found id: "b87f1d7ad226dbe3077b779dd4782c91b43154f698936d97b2b7b66fe5e00732"
	I0729 18:20:22.229471 1063162 cri.go:89] found id: ""
	I0729 18:20:22.229480 1063162 logs.go:276] 1 containers: [b87f1d7ad226dbe3077b779dd4782c91b43154f698936d97b2b7b66fe5e00732]
	I0729 18:20:22.229532 1063162 ssh_runner.go:195] Run: which crictl
	I0729 18:20:22.233827 1063162 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 18:20:22.233891 1063162 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 18:20:22.274011 1063162 cri.go:89] found id: ""
	I0729 18:20:22.274040 1063162 logs.go:276] 0 containers: []
	W0729 18:20:22.274048 1063162 logs.go:278] No container was found matching "kindnet"
	I0729 18:20:22.274058 1063162 logs.go:123] Gathering logs for describe nodes ...
	I0729 18:20:22.274072 1063162 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 18:20:22.394236 1063162 logs.go:123] Gathering logs for etcd [793fd521a6ea14fb86b4264bd92de2b14aaf7a97303a8d5b6772e91540985c36] ...
	I0729 18:20:22.394269 1063162 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 793fd521a6ea14fb86b4264bd92de2b14aaf7a97303a8d5b6772e91540985c36"
	I0729 18:20:22.452095 1063162 logs.go:123] Gathering logs for CRI-O ...
	I0729 18:20:22.452136 1063162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 18:20:23.327908 1063162 logs.go:123] Gathering logs for container status ...
	I0729 18:20:23.327956 1063162 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 18:20:23.384187 1063162 logs.go:123] Gathering logs for kube-controller-manager [b87f1d7ad226dbe3077b779dd4782c91b43154f698936d97b2b7b66fe5e00732] ...
	I0729 18:20:23.384220 1063162 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b87f1d7ad226dbe3077b779dd4782c91b43154f698936d97b2b7b66fe5e00732"
	I0729 18:20:23.445115 1063162 logs.go:123] Gathering logs for kubelet ...
	I0729 18:20:23.445157 1063162 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0729 18:20:23.496324 1063162 logs.go:138] Found kubelet problem: Jul 29 18:18:27 addons-685520 kubelet[1275]: W0729 18:18:27.577757    1275 reflector.go:547] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-685520" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-685520' and this object
	W0729 18:20:23.496498 1063162 logs.go:138] Found kubelet problem: Jul 29 18:18:27 addons-685520 kubelet[1275]: E0729 18:18:27.577812    1275 reflector.go:150] object-"local-path-storage"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-685520" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-685520' and this object
	W0729 18:20:23.496637 1063162 logs.go:138] Found kubelet problem: Jul 29 18:18:27 addons-685520 kubelet[1275]: W0729 18:18:27.577861    1275 reflector.go:547] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-685520" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-685520' and this object
	W0729 18:20:23.496787 1063162 logs.go:138] Found kubelet problem: Jul 29 18:18:27 addons-685520 kubelet[1275]: E0729 18:18:27.577873    1275 reflector.go:150] object-"local-path-storage"/"local-path-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-685520" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-685520' and this object
	W0729 18:20:23.497642 1063162 logs.go:138] Found kubelet problem: Jul 29 18:18:28 addons-685520 kubelet[1275]: W0729 18:18:28.391954    1275 reflector.go:547] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-685520" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-685520' and this object
	W0729 18:20:23.497792 1063162 logs.go:138] Found kubelet problem: Jul 29 18:18:28 addons-685520 kubelet[1275]: E0729 18:18:28.391984    1275 reflector.go:150] object-"yakd-dashboard"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-685520" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-685520' and this object
	I0729 18:20:23.526388 1063162 logs.go:123] Gathering logs for dmesg ...
	I0729 18:20:23.526417 1063162 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 18:20:23.542197 1063162 logs.go:123] Gathering logs for kube-apiserver [2bdbc0aba106d0a990794004e16fc961b45b6457011649bfa631942df4131828] ...
	I0729 18:20:23.542233 1063162 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2bdbc0aba106d0a990794004e16fc961b45b6457011649bfa631942df4131828"
	I0729 18:20:23.588900 1063162 logs.go:123] Gathering logs for coredns [0159416a2ffac2dd9631cfc5f2b67fa1f6485c8ec1207fc9cf2cce2639054ffa] ...
	I0729 18:20:23.588932 1063162 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0159416a2ffac2dd9631cfc5f2b67fa1f6485c8ec1207fc9cf2cce2639054ffa"
	I0729 18:20:23.627768 1063162 logs.go:123] Gathering logs for kube-scheduler [49bf3e5a91fe303d566b5efd6ac023629a37b3b8219cfbde451cccfcb2606a30] ...
	I0729 18:20:23.627802 1063162 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 49bf3e5a91fe303d566b5efd6ac023629a37b3b8219cfbde451cccfcb2606a30"
	I0729 18:20:23.669642 1063162 logs.go:123] Gathering logs for kube-proxy [56985357a76b9e9abb7aa60d08ffac9e1a728c47e6cbd48dfeef0b7068d90540] ...
	I0729 18:20:23.669678 1063162 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 56985357a76b9e9abb7aa60d08ffac9e1a728c47e6cbd48dfeef0b7068d90540"
	I0729 18:20:23.706702 1063162 out.go:304] Setting ErrFile to fd 2...
	I0729 18:20:23.706731 1063162 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0729 18:20:23.706797 1063162 out.go:239] X Problems detected in kubelet:
	W0729 18:20:23.706811 1063162 out.go:239]   Jul 29 18:18:27 addons-685520 kubelet[1275]: E0729 18:18:27.577812    1275 reflector.go:150] object-"local-path-storage"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-685520" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-685520' and this object
	W0729 18:20:23.706825 1063162 out.go:239]   Jul 29 18:18:27 addons-685520 kubelet[1275]: W0729 18:18:27.577861    1275 reflector.go:547] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-685520" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-685520' and this object
	W0729 18:20:23.706834 1063162 out.go:239]   Jul 29 18:18:27 addons-685520 kubelet[1275]: E0729 18:18:27.577873    1275 reflector.go:150] object-"local-path-storage"/"local-path-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-685520" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-685520' and this object
	W0729 18:20:23.706842 1063162 out.go:239]   Jul 29 18:18:28 addons-685520 kubelet[1275]: W0729 18:18:28.391954    1275 reflector.go:547] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-685520" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-685520' and this object
	W0729 18:20:23.706883 1063162 out.go:239]   Jul 29 18:18:28 addons-685520 kubelet[1275]: E0729 18:18:28.391984    1275 reflector.go:150] object-"yakd-dashboard"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-685520" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-685520' and this object
	I0729 18:20:23.706891 1063162 out.go:304] Setting ErrFile to fd 2...
	I0729 18:20:23.706902 1063162 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 18:20:33.718719 1063162 system_pods.go:59] 18 kube-system pods found
	I0729 18:20:33.718753 1063162 system_pods.go:61] "coredns-7db6d8ff4d-zrfkz" [8f1412dd-5eec-49c8-88ea-9725e2ecc017] Running
	I0729 18:20:33.718758 1063162 system_pods.go:61] "csi-hostpath-attacher-0" [2c53773b-3b70-4b61-a9fa-242a1091f327] Running
	I0729 18:20:33.718762 1063162 system_pods.go:61] "csi-hostpath-resizer-0" [612cf202-3d1a-4859-ad72-0b5bfc16aec6] Running
	I0729 18:20:33.718765 1063162 system_pods.go:61] "csi-hostpathplugin-sfz6c" [19694c0f-aaad-4ada-be53-34f11202d797] Running
	I0729 18:20:33.718769 1063162 system_pods.go:61] "etcd-addons-685520" [2ad20938-ce5a-499d-a013-72d8b49e61fb] Running
	I0729 18:20:33.718772 1063162 system_pods.go:61] "kube-apiserver-addons-685520" [3559744b-f0ab-4459-a201-ce4e37003789] Running
	I0729 18:20:33.718775 1063162 system_pods.go:61] "kube-controller-manager-addons-685520" [66f49f54-d749-452d-8f01-675f6f16e53c] Running
	I0729 18:20:33.718778 1063162 system_pods.go:61] "kube-ingress-dns-minikube" [a22a2df5-68df-492e-8478-b1fa2ed6d45a] Running
	I0729 18:20:33.718781 1063162 system_pods.go:61] "kube-proxy-bnslr" [dea08c83-eebf-47be-ba32-65ae4fd51a9b] Running
	I0729 18:20:33.718784 1063162 system_pods.go:61] "kube-scheduler-addons-685520" [d88158ca-7d50-455d-aa7b-9fc2ae7883d0] Running
	I0729 18:20:33.718789 1063162 system_pods.go:61] "metrics-server-c59844bb4-qt4qg" [46b5fee1-ed94-4adc-a131-a0d90438dbaf] Running
	I0729 18:20:33.718794 1063162 system_pods.go:61] "nvidia-device-plugin-daemonset-4bzd5" [0edbc902-4717-462e-8c98-1e0af3da0c72] Running
	I0729 18:20:33.718798 1063162 system_pods.go:61] "registry-698f998955-grn4f" [ae9be054-2ae9-4bb2-91af-3a601d969805] Running
	I0729 18:20:33.718803 1063162 system_pods.go:61] "registry-proxy-sxvm2" [07822b9d-56b6-4aab-bce3-512310b7497f] Running
	I0729 18:20:33.718807 1063162 system_pods.go:61] "snapshot-controller-745499f584-4x8xg" [5218abd3-b463-4a1f-9f77-df15193cea8f] Running
	I0729 18:20:33.718811 1063162 system_pods.go:61] "snapshot-controller-745499f584-8wwkm" [da494473-4096-4488-ace0-8361335052a0] Running
	I0729 18:20:33.718821 1063162 system_pods.go:61] "storage-provisioner" [6b5d2240-cf56-4fdd-b28f-4c1ca6f5c6ea] Running
	I0729 18:20:33.718829 1063162 system_pods.go:61] "tiller-deploy-6677d64bcd-nl6s4" [018ede57-0c16-4231-aab9-8a15f104da71] Running
	I0729 18:20:33.718838 1063162 system_pods.go:74] duration metric: took 11.780431776s to wait for pod list to return data ...
	I0729 18:20:33.718860 1063162 default_sa.go:34] waiting for default service account to be created ...
	I0729 18:20:33.720733 1063162 default_sa.go:45] found service account: "default"
	I0729 18:20:33.720750 1063162 default_sa.go:55] duration metric: took 1.882401ms for default service account to be created ...
	I0729 18:20:33.720757 1063162 system_pods.go:116] waiting for k8s-apps to be running ...
	I0729 18:20:33.728981 1063162 system_pods.go:86] 18 kube-system pods found
	I0729 18:20:33.729003 1063162 system_pods.go:89] "coredns-7db6d8ff4d-zrfkz" [8f1412dd-5eec-49c8-88ea-9725e2ecc017] Running
	I0729 18:20:33.729008 1063162 system_pods.go:89] "csi-hostpath-attacher-0" [2c53773b-3b70-4b61-a9fa-242a1091f327] Running
	I0729 18:20:33.729014 1063162 system_pods.go:89] "csi-hostpath-resizer-0" [612cf202-3d1a-4859-ad72-0b5bfc16aec6] Running
	I0729 18:20:33.729019 1063162 system_pods.go:89] "csi-hostpathplugin-sfz6c" [19694c0f-aaad-4ada-be53-34f11202d797] Running
	I0729 18:20:33.729023 1063162 system_pods.go:89] "etcd-addons-685520" [2ad20938-ce5a-499d-a013-72d8b49e61fb] Running
	I0729 18:20:33.729027 1063162 system_pods.go:89] "kube-apiserver-addons-685520" [3559744b-f0ab-4459-a201-ce4e37003789] Running
	I0729 18:20:33.729031 1063162 system_pods.go:89] "kube-controller-manager-addons-685520" [66f49f54-d749-452d-8f01-675f6f16e53c] Running
	I0729 18:20:33.729035 1063162 system_pods.go:89] "kube-ingress-dns-minikube" [a22a2df5-68df-492e-8478-b1fa2ed6d45a] Running
	I0729 18:20:33.729039 1063162 system_pods.go:89] "kube-proxy-bnslr" [dea08c83-eebf-47be-ba32-65ae4fd51a9b] Running
	I0729 18:20:33.729044 1063162 system_pods.go:89] "kube-scheduler-addons-685520" [d88158ca-7d50-455d-aa7b-9fc2ae7883d0] Running
	I0729 18:20:33.729047 1063162 system_pods.go:89] "metrics-server-c59844bb4-qt4qg" [46b5fee1-ed94-4adc-a131-a0d90438dbaf] Running
	I0729 18:20:33.729052 1063162 system_pods.go:89] "nvidia-device-plugin-daemonset-4bzd5" [0edbc902-4717-462e-8c98-1e0af3da0c72] Running
	I0729 18:20:33.729055 1063162 system_pods.go:89] "registry-698f998955-grn4f" [ae9be054-2ae9-4bb2-91af-3a601d969805] Running
	I0729 18:20:33.729059 1063162 system_pods.go:89] "registry-proxy-sxvm2" [07822b9d-56b6-4aab-bce3-512310b7497f] Running
	I0729 18:20:33.729063 1063162 system_pods.go:89] "snapshot-controller-745499f584-4x8xg" [5218abd3-b463-4a1f-9f77-df15193cea8f] Running
	I0729 18:20:33.729068 1063162 system_pods.go:89] "snapshot-controller-745499f584-8wwkm" [da494473-4096-4488-ace0-8361335052a0] Running
	I0729 18:20:33.729071 1063162 system_pods.go:89] "storage-provisioner" [6b5d2240-cf56-4fdd-b28f-4c1ca6f5c6ea] Running
	I0729 18:20:33.729077 1063162 system_pods.go:89] "tiller-deploy-6677d64bcd-nl6s4" [018ede57-0c16-4231-aab9-8a15f104da71] Running
	I0729 18:20:33.729082 1063162 system_pods.go:126] duration metric: took 8.320881ms to wait for k8s-apps to be running ...
	I0729 18:20:33.729090 1063162 system_svc.go:44] waiting for kubelet service to be running ....
	I0729 18:20:33.729136 1063162 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 18:20:33.745304 1063162 system_svc.go:56] duration metric: took 16.208296ms WaitForService to wait for kubelet
	I0729 18:20:33.745332 1063162 kubeadm.go:582] duration metric: took 2m11.413807992s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 18:20:33.745360 1063162 node_conditions.go:102] verifying NodePressure condition ...
	I0729 18:20:33.748290 1063162 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 18:20:33.748329 1063162 node_conditions.go:123] node cpu capacity is 2
	I0729 18:20:33.748357 1063162 node_conditions.go:105] duration metric: took 2.9898ms to run NodePressure ...
	I0729 18:20:33.748373 1063162 start.go:241] waiting for startup goroutines ...
	I0729 18:20:33.748397 1063162 start.go:246] waiting for cluster config update ...
	I0729 18:20:33.748424 1063162 start.go:255] writing updated cluster config ...
	I0729 18:20:33.748791 1063162 ssh_runner.go:195] Run: rm -f paused
	I0729 18:20:33.798446 1063162 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0729 18:20:33.801078 1063162 out.go:177] * Done! kubectl is now configured to use "addons-685520" cluster and "default" namespace by default
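The wait loop recorded above (api_server.go) repeatedly polls the apiserver's /healthz endpoint until it returns 200 before declaring the control plane healthy. As a rough illustration only, not minikube's actual implementation, the check could be approximated with a small Go program like the sketch below; the address 192.168.39.137:8443 is taken from the log above, and TLS verification is skipped purely for the sketch (minikube's real client authenticates against the cluster CA).

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    func main() {
    	// Endpoint address taken from the log above; adjust for your own cluster.
    	url := "https://192.168.39.137:8443/healthz"

    	// Skip certificate verification only for this illustration; the real
    	// health check uses the cluster CA and client credentials instead.
    	client := &http.Client{
    		Transport: &http.Transport{
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    		Timeout: 5 * time.Second,
    	}

    	// Poll until the apiserver reports healthy (HTTP 200, body "ok").
    	for {
    		resp, err := client.Get(url)
    		if err == nil && resp.StatusCode == http.StatusOK {
    			resp.Body.Close()
    			fmt.Println("apiserver healthz returned 200: ok")
    			return
    		}
    		if resp != nil {
    			resp.Body.Close()
    		}
    		time.Sleep(2 * time.Second)
    	}
    }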
	
	
	==> CRI-O <==
	Jul 29 18:27:03 addons-685520 crio[681]: time="2024-07-29 18:27:03.788412102Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0531d593-1302-405e-8cf3-707903a1f3e8 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:27:03 addons-685520 crio[681]: time="2024-07-29 18:27:03.788728426Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5a8934f2fd17eb7aeed1eebd0393380a0b3a46e187fd4a5d8178f954b53a59e8,PodSandboxId:e5b59743c51ca97221b9e1237ec95c9b571a53c76a932348ad860480d564cbce,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1722277446510671601,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-6778b5fc9f-tp7mw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f668e519-eafa-4af0-91c7-bc71c008c159,},Annotations:map[string]string{io.kubernetes.container.hash: 171ce219,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4970f8a9ad04f54400c017cb45f6fdf4136f1ef1f0cc1419a0bf5845ae97e53,PodSandboxId:73e5373798b82861828370f0aae8dcb06688d30284269259979ce39f37a55941,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ae23480369fa4139f6dec668d7a5a941b56ea174e9cf75e09771988fe621c95,State:CONTAINER_RUNNING,CreatedAt:1722277306277574828,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2cde6bfb-dbfb-436d-b105-79bd0f65c822,},Annotations:map[string]string{io.kubernet
es.container.hash: 9940d6d8,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55f968b97ba4fd5509ae0ac2af93bad127a82d537cef315f0f72527ad6afc60e,PodSandboxId:ff677a118669926b7e64fabd470753497627d2abc882e76f4a0972a88da1804a,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722277235494938408,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d29216ef-0904-4580-b
03b-d6f4c55f78b7,},Annotations:map[string]string{io.kubernetes.container.hash: 95b36df2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8a4c8627baffaff64062f07a71f2b908d397d3c7b74ddfc6fa7037b306112f2,PodSandboxId:58e6a7d9552550996310f09299427f4d5c890743b4ce8eba3d52c71584347b38,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1722277133754035144,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-qt4qg,io.kubernetes.pod.namespace: kube-system,i
o.kubernetes.pod.uid: 46b5fee1-ed94-4adc-a131-a0d90438dbaf,},Annotations:map[string]string{io.kubernetes.container.hash: dd7246b5,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8128f8da90970f1f83aa7828eaad9cf5165a283fc93854b8c4d0658039c5477,PodSandboxId:b9c555c1f6c67d5677fb34bc9ad478705570a75c1444ca7435a1c92cf40f78e5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722277107744781570,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,
io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b5d2240-cf56-4fdd-b28f-4c1ca6f5c6ea,},Annotations:map[string]string{io.kubernetes.container.hash: 3819d528,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56985357a76b9e9abb7aa60d08ffac9e1a728c47e6cbd48dfeef0b7068d90540,PodSandboxId:5600374c0d1453df209a935b7ed098e7b08ddf5a7baa11eecaa0db3e8558e086,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722277103144210889,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-p
roxy-bnslr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dea08c83-eebf-47be-ba32-65ae4fd51a9b,},Annotations:map[string]string{io.kubernetes.container.hash: 9b304237,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0159416a2ffac2dd9631cfc5f2b67fa1f6485c8ec1207fc9cf2cce2639054ffa,PodSandboxId:126407fabe6d2bfc6cd7a2510d065002abad2282d494f921ba67a3588e510287,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722277102823901528,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-zrfkz,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: 8f1412dd-5eec-49c8-88ea-9725e2ecc017,},Annotations:map[string]string{io.kubernetes.container.hash: 9411a72c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:793fd521a6ea14fb86b4264bd92de2b14aaf7a97303a8d5b6772e91540985c36,PodSandboxId:97f9cc8513240b3d26c6bf2c62bc27f9693592dd308269d937cbf76bf5b54a8e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a7
5a899,State:CONTAINER_RUNNING,CreatedAt:1722277083103421677,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-685520,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67a9280b99b456ca990083164e350b9e,},Annotations:map[string]string{io.kubernetes.container.hash: a60ee63c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49bf3e5a91fe303d566b5efd6ac023629a37b3b8219cfbde451cccfcb2606a30,PodSandboxId:93f73f83097ee38087613c5f687d7b4055a6c520a469348c1365288aa26b1465,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,Created
At:1722277083174249369,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-685520,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0288ad41c0ee8d9ea0ff1636b97bd48,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b87f1d7ad226dbe3077b779dd4782c91b43154f698936d97b2b7b66fe5e00732,PodSandboxId:086ec296b7437ab007d75393a0b6aac607ddb5bf34e2992ad95a782683406f5e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:17222
77083092451627,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-685520,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1de74c0a20b3468eeb23ba96e48abfd5,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2bdbc0aba106d0a990794004e16fc961b45b6457011649bfa631942df4131828,PodSandboxId:83777ac8e6d35d378209adefd0b7ee4677d83e5f9f3c83510afce6969bc93f91,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722
277083040704692,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-685520,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8b8a94201da8594e6707eda1d6d8252,},Annotations:map[string]string{io.kubernetes.container.hash: 923d5c6f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0531d593-1302-405e-8cf3-707903a1f3e8 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:27:03 addons-685520 crio[681]: time="2024-07-29 18:27:03.822339190Z" level=debug msg="Request: &StatusRequest{Verbose:false,}" file="otel-collector/interceptors.go:62" id=183b6230-5c9d-4204-817c-f9d4b1ebf262 name=/runtime.v1.RuntimeService/Status
	Jul 29 18:27:03 addons-685520 crio[681]: time="2024-07-29 18:27:03.822423302Z" level=debug msg="Response: &StatusResponse{Status:&RuntimeStatus{Conditions:[]*RuntimeCondition{&RuntimeCondition{Type:RuntimeReady,Status:true,Reason:,Message:,},&RuntimeCondition{Type:NetworkReady,Status:true,Reason:,Message:,},},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=183b6230-5c9d-4204-817c-f9d4b1ebf262 name=/runtime.v1.RuntimeService/Status
	Jul 29 18:27:03 addons-685520 crio[681]: time="2024-07-29 18:27:03.825782494Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e383b27c-053e-4ee6-aaec-41573ac8cde8 name=/runtime.v1.RuntimeService/Version
	Jul 29 18:27:03 addons-685520 crio[681]: time="2024-07-29 18:27:03.825851617Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e383b27c-053e-4ee6-aaec-41573ac8cde8 name=/runtime.v1.RuntimeService/Version
	Jul 29 18:27:03 addons-685520 crio[681]: time="2024-07-29 18:27:03.827174078Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=019181a2-7cf2-4ade-9db0-ed043eb28e11 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 18:27:03 addons-685520 crio[681]: time="2024-07-29 18:27:03.828733459Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722277623828708628,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:589581,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=019181a2-7cf2-4ade-9db0-ed043eb28e11 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 18:27:03 addons-685520 crio[681]: time="2024-07-29 18:27:03.829190292Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c2bd0935-8097-4bae-aa1a-2123da1a5864 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:27:03 addons-685520 crio[681]: time="2024-07-29 18:27:03.829267719Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c2bd0935-8097-4bae-aa1a-2123da1a5864 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:27:03 addons-685520 crio[681]: time="2024-07-29 18:27:03.829591352Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5a8934f2fd17eb7aeed1eebd0393380a0b3a46e187fd4a5d8178f954b53a59e8,PodSandboxId:e5b59743c51ca97221b9e1237ec95c9b571a53c76a932348ad860480d564cbce,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1722277446510671601,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-6778b5fc9f-tp7mw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f668e519-eafa-4af0-91c7-bc71c008c159,},Annotations:map[string]string{io.kubernetes.container.hash: 171ce219,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4970f8a9ad04f54400c017cb45f6fdf4136f1ef1f0cc1419a0bf5845ae97e53,PodSandboxId:73e5373798b82861828370f0aae8dcb06688d30284269259979ce39f37a55941,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ae23480369fa4139f6dec668d7a5a941b56ea174e9cf75e09771988fe621c95,State:CONTAINER_RUNNING,CreatedAt:1722277306277574828,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2cde6bfb-dbfb-436d-b105-79bd0f65c822,},Annotations:map[string]string{io.kubernet
es.container.hash: 9940d6d8,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55f968b97ba4fd5509ae0ac2af93bad127a82d537cef315f0f72527ad6afc60e,PodSandboxId:ff677a118669926b7e64fabd470753497627d2abc882e76f4a0972a88da1804a,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722277235494938408,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d29216ef-0904-4580-b
03b-d6f4c55f78b7,},Annotations:map[string]string{io.kubernetes.container.hash: 95b36df2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8a4c8627baffaff64062f07a71f2b908d397d3c7b74ddfc6fa7037b306112f2,PodSandboxId:58e6a7d9552550996310f09299427f4d5c890743b4ce8eba3d52c71584347b38,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1722277133754035144,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-qt4qg,io.kubernetes.pod.namespace: kube-system,i
o.kubernetes.pod.uid: 46b5fee1-ed94-4adc-a131-a0d90438dbaf,},Annotations:map[string]string{io.kubernetes.container.hash: dd7246b5,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8128f8da90970f1f83aa7828eaad9cf5165a283fc93854b8c4d0658039c5477,PodSandboxId:b9c555c1f6c67d5677fb34bc9ad478705570a75c1444ca7435a1c92cf40f78e5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722277107744781570,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,
io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b5d2240-cf56-4fdd-b28f-4c1ca6f5c6ea,},Annotations:map[string]string{io.kubernetes.container.hash: 3819d528,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56985357a76b9e9abb7aa60d08ffac9e1a728c47e6cbd48dfeef0b7068d90540,PodSandboxId:5600374c0d1453df209a935b7ed098e7b08ddf5a7baa11eecaa0db3e8558e086,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722277103144210889,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-p
roxy-bnslr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dea08c83-eebf-47be-ba32-65ae4fd51a9b,},Annotations:map[string]string{io.kubernetes.container.hash: 9b304237,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0159416a2ffac2dd9631cfc5f2b67fa1f6485c8ec1207fc9cf2cce2639054ffa,PodSandboxId:126407fabe6d2bfc6cd7a2510d065002abad2282d494f921ba67a3588e510287,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722277102823901528,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-zrfkz,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: 8f1412dd-5eec-49c8-88ea-9725e2ecc017,},Annotations:map[string]string{io.kubernetes.container.hash: 9411a72c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:793fd521a6ea14fb86b4264bd92de2b14aaf7a97303a8d5b6772e91540985c36,PodSandboxId:97f9cc8513240b3d26c6bf2c62bc27f9693592dd308269d937cbf76bf5b54a8e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a7
5a899,State:CONTAINER_RUNNING,CreatedAt:1722277083103421677,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-685520,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67a9280b99b456ca990083164e350b9e,},Annotations:map[string]string{io.kubernetes.container.hash: a60ee63c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49bf3e5a91fe303d566b5efd6ac023629a37b3b8219cfbde451cccfcb2606a30,PodSandboxId:93f73f83097ee38087613c5f687d7b4055a6c520a469348c1365288aa26b1465,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,Created
At:1722277083174249369,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-685520,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0288ad41c0ee8d9ea0ff1636b97bd48,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b87f1d7ad226dbe3077b779dd4782c91b43154f698936d97b2b7b66fe5e00732,PodSandboxId:086ec296b7437ab007d75393a0b6aac607ddb5bf34e2992ad95a782683406f5e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:17222
77083092451627,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-685520,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1de74c0a20b3468eeb23ba96e48abfd5,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2bdbc0aba106d0a990794004e16fc961b45b6457011649bfa631942df4131828,PodSandboxId:83777ac8e6d35d378209adefd0b7ee4677d83e5f9f3c83510afce6969bc93f91,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722
277083040704692,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-685520,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8b8a94201da8594e6707eda1d6d8252,},Annotations:map[string]string{io.kubernetes.container.hash: 923d5c6f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c2bd0935-8097-4bae-aa1a-2123da1a5864 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:27:03 addons-685520 crio[681]: time="2024-07-29 18:27:03.868583680Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=60d08d54-d50f-4f77-a843-63f539a00406 name=/runtime.v1.RuntimeService/Version
	Jul 29 18:27:03 addons-685520 crio[681]: time="2024-07-29 18:27:03.868656291Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=60d08d54-d50f-4f77-a843-63f539a00406 name=/runtime.v1.RuntimeService/Version
	Jul 29 18:27:03 addons-685520 crio[681]: time="2024-07-29 18:27:03.873257459Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2a4d6df4-c94a-416f-9bd7-ad85cb05aa71 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 18:27:03 addons-685520 crio[681]: time="2024-07-29 18:27:03.880151500Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722277623880124384,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:589581,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2a4d6df4-c94a-416f-9bd7-ad85cb05aa71 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 18:27:03 addons-685520 crio[681]: time="2024-07-29 18:27:03.880975165Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e2df9de9-47e2-4cf8-87e4-747fa584902f name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:27:03 addons-685520 crio[681]: time="2024-07-29 18:27:03.881159995Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e2df9de9-47e2-4cf8-87e4-747fa584902f name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:27:03 addons-685520 crio[681]: time="2024-07-29 18:27:03.881551818Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5a8934f2fd17eb7aeed1eebd0393380a0b3a46e187fd4a5d8178f954b53a59e8,PodSandboxId:e5b59743c51ca97221b9e1237ec95c9b571a53c76a932348ad860480d564cbce,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1722277446510671601,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-6778b5fc9f-tp7mw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f668e519-eafa-4af0-91c7-bc71c008c159,},Annotations:map[string]string{io.kubernetes.container.hash: 171ce219,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4970f8a9ad04f54400c017cb45f6fdf4136f1ef1f0cc1419a0bf5845ae97e53,PodSandboxId:73e5373798b82861828370f0aae8dcb06688d30284269259979ce39f37a55941,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ae23480369fa4139f6dec668d7a5a941b56ea174e9cf75e09771988fe621c95,State:CONTAINER_RUNNING,CreatedAt:1722277306277574828,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2cde6bfb-dbfb-436d-b105-79bd0f65c822,},Annotations:map[string]string{io.kubernet
es.container.hash: 9940d6d8,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55f968b97ba4fd5509ae0ac2af93bad127a82d537cef315f0f72527ad6afc60e,PodSandboxId:ff677a118669926b7e64fabd470753497627d2abc882e76f4a0972a88da1804a,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722277235494938408,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d29216ef-0904-4580-b
03b-d6f4c55f78b7,},Annotations:map[string]string{io.kubernetes.container.hash: 95b36df2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8a4c8627baffaff64062f07a71f2b908d397d3c7b74ddfc6fa7037b306112f2,PodSandboxId:58e6a7d9552550996310f09299427f4d5c890743b4ce8eba3d52c71584347b38,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1722277133754035144,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-qt4qg,io.kubernetes.pod.namespace: kube-system,i
o.kubernetes.pod.uid: 46b5fee1-ed94-4adc-a131-a0d90438dbaf,},Annotations:map[string]string{io.kubernetes.container.hash: dd7246b5,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8128f8da90970f1f83aa7828eaad9cf5165a283fc93854b8c4d0658039c5477,PodSandboxId:b9c555c1f6c67d5677fb34bc9ad478705570a75c1444ca7435a1c92cf40f78e5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722277107744781570,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,
io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b5d2240-cf56-4fdd-b28f-4c1ca6f5c6ea,},Annotations:map[string]string{io.kubernetes.container.hash: 3819d528,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56985357a76b9e9abb7aa60d08ffac9e1a728c47e6cbd48dfeef0b7068d90540,PodSandboxId:5600374c0d1453df209a935b7ed098e7b08ddf5a7baa11eecaa0db3e8558e086,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722277103144210889,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-p
roxy-bnslr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dea08c83-eebf-47be-ba32-65ae4fd51a9b,},Annotations:map[string]string{io.kubernetes.container.hash: 9b304237,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0159416a2ffac2dd9631cfc5f2b67fa1f6485c8ec1207fc9cf2cce2639054ffa,PodSandboxId:126407fabe6d2bfc6cd7a2510d065002abad2282d494f921ba67a3588e510287,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722277102823901528,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-zrfkz,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: 8f1412dd-5eec-49c8-88ea-9725e2ecc017,},Annotations:map[string]string{io.kubernetes.container.hash: 9411a72c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:793fd521a6ea14fb86b4264bd92de2b14aaf7a97303a8d5b6772e91540985c36,PodSandboxId:97f9cc8513240b3d26c6bf2c62bc27f9693592dd308269d937cbf76bf5b54a8e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a7
5a899,State:CONTAINER_RUNNING,CreatedAt:1722277083103421677,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-685520,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67a9280b99b456ca990083164e350b9e,},Annotations:map[string]string{io.kubernetes.container.hash: a60ee63c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49bf3e5a91fe303d566b5efd6ac023629a37b3b8219cfbde451cccfcb2606a30,PodSandboxId:93f73f83097ee38087613c5f687d7b4055a6c520a469348c1365288aa26b1465,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,Created
At:1722277083174249369,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-685520,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0288ad41c0ee8d9ea0ff1636b97bd48,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b87f1d7ad226dbe3077b779dd4782c91b43154f698936d97b2b7b66fe5e00732,PodSandboxId:086ec296b7437ab007d75393a0b6aac607ddb5bf34e2992ad95a782683406f5e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:17222
77083092451627,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-685520,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1de74c0a20b3468eeb23ba96e48abfd5,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2bdbc0aba106d0a990794004e16fc961b45b6457011649bfa631942df4131828,PodSandboxId:83777ac8e6d35d378209adefd0b7ee4677d83e5f9f3c83510afce6969bc93f91,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722
277083040704692,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-685520,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8b8a94201da8594e6707eda1d6d8252,},Annotations:map[string]string{io.kubernetes.container.hash: 923d5c6f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e2df9de9-47e2-4cf8-87e4-747fa584902f name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:27:03 addons-685520 crio[681]: time="2024-07-29 18:27:03.913523808Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7dc54955-f45d-4cab-bcc8-6c8d1cf1c887 name=/runtime.v1.RuntimeService/Version
	Jul 29 18:27:03 addons-685520 crio[681]: time="2024-07-29 18:27:03.913608457Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7dc54955-f45d-4cab-bcc8-6c8d1cf1c887 name=/runtime.v1.RuntimeService/Version
	Jul 29 18:27:03 addons-685520 crio[681]: time="2024-07-29 18:27:03.917672528Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=48adc847-b6b9-4e1a-baf2-c8f6647ef353 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 18:27:03 addons-685520 crio[681]: time="2024-07-29 18:27:03.919389955Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722277623919366899,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:589581,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=48adc847-b6b9-4e1a-baf2-c8f6647ef353 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 18:27:03 addons-685520 crio[681]: time="2024-07-29 18:27:03.919865796Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ccf3c3e2-84f6-40f3-b24c-9712e3e85c18 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:27:03 addons-685520 crio[681]: time="2024-07-29 18:27:03.919913598Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ccf3c3e2-84f6-40f3-b24c-9712e3e85c18 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:27:03 addons-685520 crio[681]: time="2024-07-29 18:27:03.920149982Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5a8934f2fd17eb7aeed1eebd0393380a0b3a46e187fd4a5d8178f954b53a59e8,PodSandboxId:e5b59743c51ca97221b9e1237ec95c9b571a53c76a932348ad860480d564cbce,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1722277446510671601,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-6778b5fc9f-tp7mw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f668e519-eafa-4af0-91c7-bc71c008c159,},Annotations:map[string]string{io.kubernetes.container.hash: 171ce219,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4970f8a9ad04f54400c017cb45f6fdf4136f1ef1f0cc1419a0bf5845ae97e53,PodSandboxId:73e5373798b82861828370f0aae8dcb06688d30284269259979ce39f37a55941,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ae23480369fa4139f6dec668d7a5a941b56ea174e9cf75e09771988fe621c95,State:CONTAINER_RUNNING,CreatedAt:1722277306277574828,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2cde6bfb-dbfb-436d-b105-79bd0f65c822,},Annotations:map[string]string{io.kubernet
es.container.hash: 9940d6d8,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55f968b97ba4fd5509ae0ac2af93bad127a82d537cef315f0f72527ad6afc60e,PodSandboxId:ff677a118669926b7e64fabd470753497627d2abc882e76f4a0972a88da1804a,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722277235494938408,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d29216ef-0904-4580-b
03b-d6f4c55f78b7,},Annotations:map[string]string{io.kubernetes.container.hash: 95b36df2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8a4c8627baffaff64062f07a71f2b908d397d3c7b74ddfc6fa7037b306112f2,PodSandboxId:58e6a7d9552550996310f09299427f4d5c890743b4ce8eba3d52c71584347b38,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1722277133754035144,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-qt4qg,io.kubernetes.pod.namespace: kube-system,i
o.kubernetes.pod.uid: 46b5fee1-ed94-4adc-a131-a0d90438dbaf,},Annotations:map[string]string{io.kubernetes.container.hash: dd7246b5,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8128f8da90970f1f83aa7828eaad9cf5165a283fc93854b8c4d0658039c5477,PodSandboxId:b9c555c1f6c67d5677fb34bc9ad478705570a75c1444ca7435a1c92cf40f78e5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722277107744781570,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,
io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b5d2240-cf56-4fdd-b28f-4c1ca6f5c6ea,},Annotations:map[string]string{io.kubernetes.container.hash: 3819d528,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56985357a76b9e9abb7aa60d08ffac9e1a728c47e6cbd48dfeef0b7068d90540,PodSandboxId:5600374c0d1453df209a935b7ed098e7b08ddf5a7baa11eecaa0db3e8558e086,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722277103144210889,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-p
roxy-bnslr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dea08c83-eebf-47be-ba32-65ae4fd51a9b,},Annotations:map[string]string{io.kubernetes.container.hash: 9b304237,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0159416a2ffac2dd9631cfc5f2b67fa1f6485c8ec1207fc9cf2cce2639054ffa,PodSandboxId:126407fabe6d2bfc6cd7a2510d065002abad2282d494f921ba67a3588e510287,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722277102823901528,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-zrfkz,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: 8f1412dd-5eec-49c8-88ea-9725e2ecc017,},Annotations:map[string]string{io.kubernetes.container.hash: 9411a72c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:793fd521a6ea14fb86b4264bd92de2b14aaf7a97303a8d5b6772e91540985c36,PodSandboxId:97f9cc8513240b3d26c6bf2c62bc27f9693592dd308269d937cbf76bf5b54a8e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a7
5a899,State:CONTAINER_RUNNING,CreatedAt:1722277083103421677,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-685520,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67a9280b99b456ca990083164e350b9e,},Annotations:map[string]string{io.kubernetes.container.hash: a60ee63c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49bf3e5a91fe303d566b5efd6ac023629a37b3b8219cfbde451cccfcb2606a30,PodSandboxId:93f73f83097ee38087613c5f687d7b4055a6c520a469348c1365288aa26b1465,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,Created
At:1722277083174249369,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-685520,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0288ad41c0ee8d9ea0ff1636b97bd48,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b87f1d7ad226dbe3077b779dd4782c91b43154f698936d97b2b7b66fe5e00732,PodSandboxId:086ec296b7437ab007d75393a0b6aac607ddb5bf34e2992ad95a782683406f5e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:17222
77083092451627,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-685520,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1de74c0a20b3468eeb23ba96e48abfd5,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2bdbc0aba106d0a990794004e16fc961b45b6457011649bfa631942df4131828,PodSandboxId:83777ac8e6d35d378209adefd0b7ee4677d83e5f9f3c83510afce6969bc93f91,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722
277083040704692,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-685520,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8b8a94201da8594e6707eda1d6d8252,},Annotations:map[string]string{io.kubernetes.container.hash: 923d5c6f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ccf3c3e2-84f6-40f3-b24c-9712e3e85c18 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                   CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	5a8934f2fd17e       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                   2 minutes ago       Running             hello-world-app           0                   e5b59743c51ca       hello-world-app-6778b5fc9f-tp7mw
	e4970f8a9ad04       docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9                         5 minutes ago       Running             nginx                     0                   73e5373798b82       nginx
	55f968b97ba4f       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                     6 minutes ago       Running             busybox                   0                   ff677a1186699       busybox
	a8a4c8627baff       registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872   8 minutes ago       Running             metrics-server            0                   58e6a7d955255       metrics-server-c59844bb4-qt4qg
	e8128f8da9097       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                        8 minutes ago       Running             storage-provisioner       0                   b9c555c1f6c67       storage-provisioner
	56985357a76b9       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                                        8 minutes ago       Running             kube-proxy                0                   5600374c0d145       kube-proxy-bnslr
	0159416a2ffac       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                                        8 minutes ago       Running             coredns                   0                   126407fabe6d2       coredns-7db6d8ff4d-zrfkz
	49bf3e5a91fe3       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                                        9 minutes ago       Running             kube-scheduler            0                   93f73f83097ee       kube-scheduler-addons-685520
	793fd521a6ea1       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                                        9 minutes ago       Running             etcd                      0                   97f9cc8513240       etcd-addons-685520
	b87f1d7ad226d       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                                        9 minutes ago       Running             kube-controller-manager   0                   086ec296b7437       kube-controller-manager-addons-685520
	2bdbc0aba106d       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                                        9 minutes ago       Running             kube-apiserver            0                   83777ac8e6d35       kube-apiserver-addons-685520
	
	
	==> coredns [0159416a2ffac2dd9631cfc5f2b67fa1f6485c8ec1207fc9cf2cce2639054ffa] <==
	[INFO] 10.244.0.7:39281 - 7376 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000079172s
	[INFO] 10.244.0.7:57008 - 62999 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000125036s
	[INFO] 10.244.0.7:57008 - 31509 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00015527s
	[INFO] 10.244.0.7:36784 - 53223 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000088208s
	[INFO] 10.244.0.7:36784 - 5609 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000094027s
	[INFO] 10.244.0.7:56731 - 3156 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000347373s
	[INFO] 10.244.0.7:56731 - 64853 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000258199s
	[INFO] 10.244.0.7:51827 - 65463 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000083192s
	[INFO] 10.244.0.7:51827 - 24746 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000108735s
	[INFO] 10.244.0.7:56101 - 602 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000091257s
	[INFO] 10.244.0.7:56101 - 9300 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000116538s
	[INFO] 10.244.0.7:48396 - 22087 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000053048s
	[INFO] 10.244.0.7:48396 - 47174 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000142858s
	[INFO] 10.244.0.7:52967 - 28914 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000055137s
	[INFO] 10.244.0.7:52967 - 59632 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000124939s
	[INFO] 10.244.0.22:40405 - 57447 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000393952s
	[INFO] 10.244.0.22:37578 - 19919 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00014584s
	[INFO] 10.244.0.22:35083 - 37305 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000128154s
	[INFO] 10.244.0.22:60865 - 40006 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000201672s
	[INFO] 10.244.0.22:42794 - 5524 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000111693s
	[INFO] 10.244.0.22:48192 - 9971 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00013231s
	[INFO] 10.244.0.22:43797 - 56541 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 458 0.000789147s
	[INFO] 10.244.0.22:53124 - 34367 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000764487s
	[INFO] 10.244.0.27:51107 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000509229s
	[INFO] 10.244.0.27:45153 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000113603s
	
	
	==> describe nodes <==
	Name:               addons-685520
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-685520
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ff26f50543a99741e8a988a34136ffea2b41dbf0
	                    minikube.k8s.io/name=addons-685520
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_29T18_18_09_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-685520
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 18:18:05 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-685520
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 18:26:59 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 18:24:17 +0000   Mon, 29 Jul 2024 18:18:03 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 18:24:17 +0000   Mon, 29 Jul 2024 18:18:03 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 18:24:17 +0000   Mon, 29 Jul 2024 18:18:03 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 18:24:17 +0000   Mon, 29 Jul 2024 18:18:09 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.137
	  Hostname:    addons-685520
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912788Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912788Ki
	  pods:               110
	System Info:
	  Machine ID:                 7c4826ffa24b4f319f34facf10037875
	  System UUID:                7c4826ff-a24b-4f31-9f34-facf10037875
	  Boot ID:                    c1f46fab-e4b8-441b-80ae-779aec887efb
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m30s
	  default                     hello-world-app-6778b5fc9f-tp7mw         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m59s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m21s
	  kube-system                 coredns-7db6d8ff4d-zrfkz                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     8m43s
	  kube-system                 etcd-addons-685520                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         8m56s
	  kube-system                 kube-apiserver-addons-685520             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m56s
	  kube-system                 kube-controller-manager-addons-685520    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m56s
	  kube-system                 kube-proxy-bnslr                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m43s
	  kube-system                 kube-scheduler-addons-685520             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m56s
	  kube-system                 metrics-server-c59844bb4-qt4qg           100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         8m37s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m38s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             370Mi (9%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 8m40s  kube-proxy       
	  Normal  Starting                 8m56s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  8m56s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  8m56s  kubelet          Node addons-685520 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m56s  kubelet          Node addons-685520 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m56s  kubelet          Node addons-685520 status is now: NodeHasSufficientPID
	  Normal  NodeReady                8m55s  kubelet          Node addons-685520 status is now: NodeReady
	  Normal  RegisteredNode           8m43s  node-controller  Node addons-685520 event: Registered Node addons-685520 in Controller
	
	
	==> dmesg <==
	[  +0.306093] systemd-fstab-generator[1656]: Ignoring "noauto" option for root device
	[  +4.840312] kauditd_printk_skb: 109 callbacks suppressed
	[  +5.019022] kauditd_printk_skb: 142 callbacks suppressed
	[  +7.567498] kauditd_printk_skb: 73 callbacks suppressed
	[ +17.303906] kauditd_printk_skb: 11 callbacks suppressed
	[Jul29 18:19] kauditd_printk_skb: 27 callbacks suppressed
	[  +7.712660] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.446440] kauditd_printk_skb: 20 callbacks suppressed
	[  +5.032827] kauditd_printk_skb: 61 callbacks suppressed
	[  +6.078661] kauditd_printk_skb: 31 callbacks suppressed
	[ +12.899398] kauditd_printk_skb: 3 callbacks suppressed
	[ +14.989518] kauditd_printk_skb: 52 callbacks suppressed
	[Jul29 18:20] kauditd_printk_skb: 24 callbacks suppressed
	[ +12.101362] kauditd_printk_skb: 7 callbacks suppressed
	[  +5.873329] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.907354] kauditd_printk_skb: 66 callbacks suppressed
	[Jul29 18:21] kauditd_printk_skb: 36 callbacks suppressed
	[  +5.369384] kauditd_printk_skb: 30 callbacks suppressed
	[  +5.593467] kauditd_printk_skb: 36 callbacks suppressed
	[  +9.080625] kauditd_printk_skb: 17 callbacks suppressed
	[  +5.293047] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.421398] kauditd_printk_skb: 15 callbacks suppressed
	[  +5.028511] kauditd_printk_skb: 66 callbacks suppressed
	[Jul29 18:24] kauditd_printk_skb: 27 callbacks suppressed
	[  +5.318772] kauditd_printk_skb: 19 callbacks suppressed
	
	
	==> etcd [793fd521a6ea14fb86b4264bd92de2b14aaf7a97303a8d5b6772e91540985c36] <==
	{"level":"warn","ts":"2024-07-29T18:19:18.532396Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-29T18:19:18.133912Z","time spent":"398.459745ms","remote":"127.0.0.1:53408","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":677,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/apiserver-mnvtp4wy23smf56sqghgjopwaq\" mod_revision:973 > success:<request_put:<key:\"/registry/leases/kube-system/apiserver-mnvtp4wy23smf56sqghgjopwaq\" value_size:604 >> failure:<request_range:<key:\"/registry/leases/kube-system/apiserver-mnvtp4wy23smf56sqghgjopwaq\" > >"}
	{"level":"warn","ts":"2024-07-29T18:19:18.532459Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"155.432197ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:3 size:11167"}
	{"level":"info","ts":"2024-07-29T18:19:18.532479Z","caller":"traceutil/trace.go:171","msg":"trace[2025520249] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:3; response_revision:1032; }","duration":"155.47173ms","start":"2024-07-29T18:19:18.377002Z","end":"2024-07-29T18:19:18.532473Z","steps":["trace[2025520249] 'agreement among raft nodes before linearized reading'  (duration: 155.403444ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-29T18:19:18.532665Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"104.419687ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" ","response":"range_response_count:3 size:14465"}
	{"level":"info","ts":"2024-07-29T18:19:18.532681Z","caller":"traceutil/trace.go:171","msg":"trace[1053654952] range","detail":"{range_begin:/registry/pods/ingress-nginx/; range_end:/registry/pods/ingress-nginx0; response_count:3; response_revision:1032; }","duration":"104.460347ms","start":"2024-07-29T18:19:18.428216Z","end":"2024-07-29T18:19:18.532676Z","steps":["trace[1053654952] 'agreement among raft nodes before linearized reading'  (duration: 104.402746ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-29T18:19:38.343748Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"466.735214ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:3 size:11453"}
	{"level":"info","ts":"2024-07-29T18:19:38.343829Z","caller":"traceutil/trace.go:171","msg":"trace[59053337] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:3; response_revision:1133; }","duration":"466.876332ms","start":"2024-07-29T18:19:37.876934Z","end":"2024-07-29T18:19:38.34381Z","steps":["trace[59053337] 'range keys from in-memory index tree'  (duration: 466.629382ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-29T18:19:38.343862Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-29T18:19:37.876921Z","time spent":"466.929983ms","remote":"127.0.0.1:53338","response type":"/etcdserverpb.KV/Range","request count":0,"request size":52,"response count":3,"response size":11476,"request content":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" "}
	{"level":"warn","ts":"2024-07-29T18:19:38.344061Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"416.565605ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" ","response":"range_response_count:3 size:14363"}
	{"level":"info","ts":"2024-07-29T18:19:38.344082Z","caller":"traceutil/trace.go:171","msg":"trace[1714875697] range","detail":"{range_begin:/registry/pods/ingress-nginx/; range_end:/registry/pods/ingress-nginx0; response_count:3; response_revision:1133; }","duration":"416.604745ms","start":"2024-07-29T18:19:37.927469Z","end":"2024-07-29T18:19:38.344074Z","steps":["trace[1714875697] 'range keys from in-memory index tree'  (duration: 416.475899ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-29T18:19:38.344097Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-29T18:19:37.927457Z","time spent":"416.636919ms","remote":"127.0.0.1:53338","response type":"/etcdserverpb.KV/Range","request count":0,"request size":62,"response count":3,"response size":14386,"request content":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" "}
	{"level":"warn","ts":"2024-07-29T18:19:38.34421Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"297.360619ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/volumeattachments/\" range_end:\"/registry/volumeattachments0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-29T18:19:38.344227Z","caller":"traceutil/trace.go:171","msg":"trace[1584107679] range","detail":"{range_begin:/registry/volumeattachments/; range_end:/registry/volumeattachments0; response_count:0; response_revision:1133; }","duration":"297.402774ms","start":"2024-07-29T18:19:38.046819Z","end":"2024-07-29T18:19:38.344221Z","steps":["trace[1584107679] 'count revisions from in-memory index tree'  (duration: 297.32246ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-29T18:19:38.344468Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"172.630555ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/metrics-server-c59844bb4-qt4qg\" ","response":"range_response_count:1 size:4458"}
	{"level":"info","ts":"2024-07-29T18:19:38.344511Z","caller":"traceutil/trace.go:171","msg":"trace[497087657] range","detail":"{range_begin:/registry/pods/kube-system/metrics-server-c59844bb4-qt4qg; range_end:; response_count:1; response_revision:1133; }","duration":"172.69272ms","start":"2024-07-29T18:19:38.171812Z","end":"2024-07-29T18:19:38.344505Z","steps":["trace[497087657] 'range keys from in-memory index tree'  (duration: 172.559159ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-29T18:19:38.344635Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"161.781215ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" ","response":"range_response_count:1 size:552"}
	{"level":"info","ts":"2024-07-29T18:19:38.344652Z","caller":"traceutil/trace.go:171","msg":"trace[409822624] range","detail":"{range_begin:/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io; range_end:; response_count:1; response_revision:1133; }","duration":"161.816262ms","start":"2024-07-29T18:19:38.182829Z","end":"2024-07-29T18:19:38.344645Z","steps":["trace[409822624] 'range keys from in-memory index tree'  (duration: 161.714776ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-29T18:20:04.288606Z","caller":"traceutil/trace.go:171","msg":"trace[423974404] transaction","detail":"{read_only:false; response_revision:1234; number_of_response:1; }","duration":"118.893405ms","start":"2024-07-29T18:20:04.169688Z","end":"2024-07-29T18:20:04.288582Z","steps":["trace[423974404] 'process raft request'  (duration: 118.797098ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-29T18:21:10.513738Z","caller":"traceutil/trace.go:171","msg":"trace[2100858930] linearizableReadLoop","detail":"{readStateIndex:1603; appliedIndex:1602; }","duration":"354.987839ms","start":"2024-07-29T18:21:10.158726Z","end":"2024-07-29T18:21:10.513713Z","steps":["trace[2100858930] 'read index received'  (duration: 354.830313ms)","trace[2100858930] 'applied index is now lower than readState.Index'  (duration: 157.066µs)"],"step_count":2}
	{"level":"info","ts":"2024-07-29T18:21:10.513827Z","caller":"traceutil/trace.go:171","msg":"trace[1666936031] transaction","detail":"{read_only:false; response_revision:1546; number_of_response:1; }","duration":"432.23568ms","start":"2024-07-29T18:21:10.081584Z","end":"2024-07-29T18:21:10.513819Z","steps":["trace[1666936031] 'process raft request'  (duration: 432.018871ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-29T18:21:10.513948Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-29T18:21:10.08157Z","time spent":"432.276269ms","remote":"127.0.0.1:53310","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1539 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2024-07-29T18:21:10.514155Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"355.436333ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" ","response":"range_response_count:3 size:8910"}
	{"level":"info","ts":"2024-07-29T18:21:10.514195Z","caller":"traceutil/trace.go:171","msg":"trace[42663486] range","detail":"{range_begin:/registry/pods/default/; range_end:/registry/pods/default0; response_count:3; response_revision:1546; }","duration":"355.503247ms","start":"2024-07-29T18:21:10.158683Z","end":"2024-07-29T18:21:10.514187Z","steps":["trace[42663486] 'agreement among raft nodes before linearized reading'  (duration: 355.335226ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-29T18:21:10.514216Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-29T18:21:10.15867Z","time spent":"355.542216ms","remote":"127.0.0.1:53338","response type":"/etcdserverpb.KV/Range","request count":0,"request size":50,"response count":3,"response size":8933,"request content":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" "}
	{"level":"info","ts":"2024-07-29T18:21:14.210035Z","caller":"traceutil/trace.go:171","msg":"trace[75383448] transaction","detail":"{read_only:false; number_of_response:1; response_revision:1593; }","duration":"134.447971ms","start":"2024-07-29T18:21:14.075572Z","end":"2024-07-29T18:21:14.21002Z","steps":["trace[75383448] 'process raft request'  (duration: 134.324863ms)"],"step_count":1}
	
	
	==> kernel <==
	 18:27:04 up 9 min,  0 users,  load average: 0.08, 0.38, 0.31
	Linux addons-685520 5.10.207 #1 SMP Tue Jul 23 04:25:44 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [2bdbc0aba106d0a990794004e16fc961b45b6457011649bfa631942df4131828] <==
	E0729 18:19:57.757028       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.104.101.170:443/apis/metrics.k8s.io/v1beta1: Get "https://10.104.101.170:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.104.101.170:443: connect: connection refused
	I0729 18:19:57.822448       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E0729 18:20:43.260846       1 conn.go:339] Error on socket receive: read tcp 192.168.39.137:8443->192.168.39.1:57106: use of closed network connection
	E0729 18:20:43.475955       1 conn.go:339] Error on socket receive: read tcp 192.168.39.137:8443->192.168.39.1:57132: use of closed network connection
	I0729 18:21:10.613935       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.107.148.30"}
	E0729 18:21:15.558203       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0729 18:21:17.136578       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0729 18:21:43.635216       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0729 18:21:43.649147       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0729 18:21:43.649188       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0729 18:21:43.713247       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0729 18:21:43.713358       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0729 18:21:43.717232       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0729 18:21:43.717825       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0729 18:21:43.745560       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0729 18:21:43.745648       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0729 18:21:43.766704       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0729 18:21:43.766751       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0729 18:21:43.822024       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	I0729 18:21:43.873986       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.97.104.252"}
	W0729 18:21:44.718625       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0729 18:21:44.767289       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0729 18:21:44.790446       1 cacher.go:168] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	W0729 18:21:44.885518       1 cacher.go:168] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0729 18:24:05.370994       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.97.86.176"}
	
	
	==> kube-controller-manager [b87f1d7ad226dbe3077b779dd4782c91b43154f698936d97b2b7b66fe5e00732] <==
	W0729 18:24:48.771982       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0729 18:24:48.772086       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0729 18:24:54.333202       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0729 18:24:54.333367       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0729 18:25:04.822913       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0729 18:25:04.822988       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0729 18:25:30.575235       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0729 18:25:30.575555       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0729 18:25:35.319790       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0729 18:25:35.319863       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0729 18:25:48.568610       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0729 18:25:48.568847       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0729 18:26:04.251800       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0729 18:26:04.251894       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0729 18:26:05.530521       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0729 18:26:05.530566       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0729 18:26:20.422144       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0729 18:26:20.422235       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0729 18:26:35.039468       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0729 18:26:35.039557       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0729 18:26:53.052501       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0729 18:26:53.052607       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0729 18:26:57.245167       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0729 18:26:57.245198       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0729 18:27:02.940716       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-c59844bb4" duration="10.996µs"
	
	
	==> kube-proxy [56985357a76b9e9abb7aa60d08ffac9e1a728c47e6cbd48dfeef0b7068d90540] <==
	I0729 18:18:24.042740       1 server_linux.go:69] "Using iptables proxy"
	I0729 18:18:24.057503       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.137"]
	I0729 18:18:24.139959       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0729 18:18:24.140009       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0729 18:18:24.140029       1 server_linux.go:165] "Using iptables Proxier"
	I0729 18:18:24.147563       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0729 18:18:24.147798       1 server.go:872] "Version info" version="v1.30.3"
	I0729 18:18:24.147811       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 18:18:24.151343       1 config.go:192] "Starting service config controller"
	I0729 18:18:24.151352       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0729 18:18:24.154417       1 config.go:319] "Starting node config controller"
	I0729 18:18:24.154473       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0729 18:18:24.155646       1 config.go:101] "Starting endpoint slice config controller"
	I0729 18:18:24.155671       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0729 18:18:24.155678       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0729 18:18:24.251566       1 shared_informer.go:320] Caches are synced for service config
	I0729 18:18:24.254751       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [49bf3e5a91fe303d566b5efd6ac023629a37b3b8219cfbde451cccfcb2606a30] <==
	W0729 18:18:05.776890       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0729 18:18:05.777694       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0729 18:18:05.776923       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0729 18:18:05.777791       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0729 18:18:05.776929       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0729 18:18:05.777889       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0729 18:18:05.776943       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0729 18:18:05.777937       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0729 18:18:06.585850       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0729 18:18:06.585901       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0729 18:18:06.587841       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0729 18:18:06.587885       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0729 18:18:06.810793       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0729 18:18:06.810917       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0729 18:18:06.836044       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0729 18:18:06.836097       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0729 18:18:06.870997       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0729 18:18:06.871047       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0729 18:18:06.907395       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0729 18:18:06.907442       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0729 18:18:06.979760       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0729 18:18:06.979814       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0729 18:18:07.026632       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0729 18:18:07.027353       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0729 18:18:07.368670       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 29 18:24:11 addons-685520 kubelet[1275]: I0729 18:24:11.044203    1275 scope.go:117] "RemoveContainer" containerID="15f56c52c48ac8551543a22231035e790558a246beaed433283bea5f08700b6e"
	Jul 29 18:24:11 addons-685520 kubelet[1275]: E0729 18:24:11.044649    1275 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"15f56c52c48ac8551543a22231035e790558a246beaed433283bea5f08700b6e\": container with ID starting with 15f56c52c48ac8551543a22231035e790558a246beaed433283bea5f08700b6e not found: ID does not exist" containerID="15f56c52c48ac8551543a22231035e790558a246beaed433283bea5f08700b6e"
	Jul 29 18:24:11 addons-685520 kubelet[1275]: I0729 18:24:11.044674    1275 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"15f56c52c48ac8551543a22231035e790558a246beaed433283bea5f08700b6e"} err="failed to get container status \"15f56c52c48ac8551543a22231035e790558a246beaed433283bea5f08700b6e\": rpc error: code = NotFound desc = could not find container \"15f56c52c48ac8551543a22231035e790558a246beaed433283bea5f08700b6e\": container with ID starting with 15f56c52c48ac8551543a22231035e790558a246beaed433283bea5f08700b6e not found: ID does not exist"
	Jul 29 18:24:11 addons-685520 kubelet[1275]: I0729 18:24:11.722920    1275 scope.go:117] "RemoveContainer" containerID="21b0fa7df727282e3ec85149e1cae6bee4d43497d9cb289863a35bf0a37df5b3"
	Jul 29 18:24:11 addons-685520 kubelet[1275]: I0729 18:24:11.743592    1275 scope.go:117] "RemoveContainer" containerID="e525f0c011f56e33f137d81ef557697b5f948b96b56aed44c33c6575d844ac70"
	Jul 29 18:24:12 addons-685520 kubelet[1275]: I0729 18:24:12.346871    1275 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fad08fc5-b102-45b7-8f82-4cd1aaf999bb" path="/var/lib/kubelet/pods/fad08fc5-b102-45b7-8f82-4cd1aaf999bb/volumes"
	Jul 29 18:24:15 addons-685520 kubelet[1275]: I0729 18:24:15.342170    1275 kubelet_pods.go:988] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Jul 29 18:25:08 addons-685520 kubelet[1275]: E0729 18:25:08.360171    1275 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 18:25:08 addons-685520 kubelet[1275]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 18:25:08 addons-685520 kubelet[1275]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 18:25:08 addons-685520 kubelet[1275]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 18:25:08 addons-685520 kubelet[1275]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 18:25:42 addons-685520 kubelet[1275]: I0729 18:25:42.342413    1275 kubelet_pods.go:988] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Jul 29 18:26:08 addons-685520 kubelet[1275]: E0729 18:26:08.359771    1275 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 18:26:08 addons-685520 kubelet[1275]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 18:26:08 addons-685520 kubelet[1275]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 18:26:08 addons-685520 kubelet[1275]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 18:26:08 addons-685520 kubelet[1275]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 18:26:56 addons-685520 kubelet[1275]: I0729 18:26:56.341627    1275 kubelet_pods.go:988] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Jul 29 18:27:04 addons-685520 kubelet[1275]: I0729 18:27:04.392665    1275 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-828nx\" (UniqueName: \"kubernetes.io/projected/46b5fee1-ed94-4adc-a131-a0d90438dbaf-kube-api-access-828nx\") pod \"46b5fee1-ed94-4adc-a131-a0d90438dbaf\" (UID: \"46b5fee1-ed94-4adc-a131-a0d90438dbaf\") "
	Jul 29 18:27:04 addons-685520 kubelet[1275]: I0729 18:27:04.392712    1275 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/46b5fee1-ed94-4adc-a131-a0d90438dbaf-tmp-dir\") pod \"46b5fee1-ed94-4adc-a131-a0d90438dbaf\" (UID: \"46b5fee1-ed94-4adc-a131-a0d90438dbaf\") "
	Jul 29 18:27:04 addons-685520 kubelet[1275]: I0729 18:27:04.393063    1275 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/46b5fee1-ed94-4adc-a131-a0d90438dbaf-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "46b5fee1-ed94-4adc-a131-a0d90438dbaf" (UID: "46b5fee1-ed94-4adc-a131-a0d90438dbaf"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
	Jul 29 18:27:04 addons-685520 kubelet[1275]: I0729 18:27:04.397127    1275 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46b5fee1-ed94-4adc-a131-a0d90438dbaf-kube-api-access-828nx" (OuterVolumeSpecName: "kube-api-access-828nx") pod "46b5fee1-ed94-4adc-a131-a0d90438dbaf" (UID: "46b5fee1-ed94-4adc-a131-a0d90438dbaf"). InnerVolumeSpecName "kube-api-access-828nx". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jul 29 18:27:04 addons-685520 kubelet[1275]: I0729 18:27:04.493108    1275 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-828nx\" (UniqueName: \"kubernetes.io/projected/46b5fee1-ed94-4adc-a131-a0d90438dbaf-kube-api-access-828nx\") on node \"addons-685520\" DevicePath \"\""
	Jul 29 18:27:04 addons-685520 kubelet[1275]: I0729 18:27:04.493130    1275 reconciler_common.go:289] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/46b5fee1-ed94-4adc-a131-a0d90438dbaf-tmp-dir\") on node \"addons-685520\" DevicePath \"\""
	
	
	==> storage-provisioner [e8128f8da90970f1f83aa7828eaad9cf5165a283fc93854b8c4d0658039c5477] <==
	I0729 18:18:28.552454       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0729 18:18:28.576116       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0729 18:18:28.576241       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0729 18:18:28.593261       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0729 18:18:28.593732       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"017dbfb7-331e-4c81-9d3c-fe968cce6ad0", APIVersion:"v1", ResourceVersion:"598", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-685520_75fd24fa-48da-4857-b54d-b7c09c3a14d8 became leader
	I0729 18:18:28.593766       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-685520_75fd24fa-48da-4857-b54d-b7c09c3a14d8!
	I0729 18:18:28.696379       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-685520_75fd24fa-48da-4857-b54d-b7c09c3a14d8!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-685520 -n addons-685520
helpers_test.go:261: (dbg) Run:  kubectl --context addons-685520 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/MetricsServer FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/MetricsServer (350.67s)

x
+
TestAddons/StoppedEnableDisable (154.41s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-685520
addons_test.go:174: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p addons-685520: exit status 82 (2m0.459302091s)

-- stdout --
	* Stopping node "addons-685520"  ...
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
addons_test.go:176: failed to stop minikube. args "out/minikube-linux-amd64 stop -p addons-685520" : exit status 82
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-685520
addons_test.go:178: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-685520: exit status 11 (21.661579557s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.137:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
addons_test.go:180: failed to enable dashboard addon: args "out/minikube-linux-amd64 addons enable dashboard -p addons-685520" : exit status 11
addons_test.go:182: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-685520
addons_test.go:182: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-685520: exit status 11 (6.144607171s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.137:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_7b2045b3edf32de99b3c34afdc43bfaabe8aa3c2_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
addons_test.go:184: failed to disable dashboard addon: args "out/minikube-linux-amd64 addons disable dashboard -p addons-685520" : exit status 11
addons_test.go:187: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-685520
addons_test.go:187: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable gvisor -p addons-685520: exit status 11 (6.142882699s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.137:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_8dd43b2cee45a94e37dbac1dd983966d1c97e7d4_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
addons_test.go:189: failed to disable non-enabled addon: args "out/minikube-linux-amd64 addons disable gvisor -p addons-685520" : exit status 11
--- FAIL: TestAddons/StoppedEnableDisable (154.41s)

x
+
TestFunctional/parallel/ImageCommands/ImageBuild (7.46s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild


=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p functional-728029 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-728029 ssh pgrep buildkitd: exit status 1 (236.678554ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-amd64 -p functional-728029 image build -t localhost/my-image:functional-728029 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-amd64 -p functional-728029 image build -t localhost/my-image:functional-728029 testdata/build --alsologtostderr: (5.003946571s)
functional_test.go:320: (dbg) Stdout: out/minikube-linux-amd64 -p functional-728029 image build -t localhost/my-image:functional-728029 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 4c4e1a00e52
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-728029
--> 55e767711a8
Successfully tagged localhost/my-image:functional-728029
55e767711a86b40ff3198a9eeb5275fd16b520a9280ac92a560b954719e94d6c
functional_test.go:323: (dbg) Stderr: out/minikube-linux-amd64 -p functional-728029 image build -t localhost/my-image:functional-728029 testdata/build --alsologtostderr:
I0729 18:33:24.800871 1072904 out.go:291] Setting OutFile to fd 1 ...
I0729 18:33:24.801021 1072904 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 18:33:24.801041 1072904 out.go:304] Setting ErrFile to fd 2...
I0729 18:33:24.801054 1072904 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 18:33:24.801322 1072904 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19312-1055011/.minikube/bin
I0729 18:33:24.802380 1072904 config.go:182] Loaded profile config "functional-728029": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0729 18:33:24.803156 1072904 config.go:182] Loaded profile config "functional-728029": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0729 18:33:24.803717 1072904 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0729 18:33:24.803771 1072904 main.go:141] libmachine: Launching plugin server for driver kvm2
I0729 18:33:24.819606 1072904 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34993
I0729 18:33:24.820122 1072904 main.go:141] libmachine: () Calling .GetVersion
I0729 18:33:24.820732 1072904 main.go:141] libmachine: Using API Version  1
I0729 18:33:24.820755 1072904 main.go:141] libmachine: () Calling .SetConfigRaw
I0729 18:33:24.821232 1072904 main.go:141] libmachine: () Calling .GetMachineName
I0729 18:33:24.821404 1072904 main.go:141] libmachine: (functional-728029) Calling .GetState
I0729 18:33:24.823181 1072904 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0729 18:33:24.823224 1072904 main.go:141] libmachine: Launching plugin server for driver kvm2
I0729 18:33:24.839180 1072904 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41631
I0729 18:33:24.839594 1072904 main.go:141] libmachine: () Calling .GetVersion
I0729 18:33:24.840148 1072904 main.go:141] libmachine: Using API Version  1
I0729 18:33:24.840164 1072904 main.go:141] libmachine: () Calling .SetConfigRaw
I0729 18:33:24.840648 1072904 main.go:141] libmachine: () Calling .GetMachineName
I0729 18:33:24.840914 1072904 main.go:141] libmachine: (functional-728029) Calling .DriverName
I0729 18:33:24.841192 1072904 ssh_runner.go:195] Run: systemctl --version
I0729 18:33:24.841221 1072904 main.go:141] libmachine: (functional-728029) Calling .GetSSHHostname
I0729 18:33:24.843706 1072904 main.go:141] libmachine: (functional-728029) DBG | domain functional-728029 has defined MAC address 52:54:00:de:13:09 in network mk-functional-728029
I0729 18:33:24.844124 1072904 main.go:141] libmachine: (functional-728029) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:13:09", ip: ""} in network mk-functional-728029: {Iface:virbr1 ExpiryTime:2024-07-29 19:30:48 +0000 UTC Type:0 Mac:52:54:00:de:13:09 Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:functional-728029 Clientid:01:52:54:00:de:13:09}
I0729 18:33:24.844146 1072904 main.go:141] libmachine: (functional-728029) DBG | domain functional-728029 has defined IP address 192.168.39.8 and MAC address 52:54:00:de:13:09 in network mk-functional-728029
I0729 18:33:24.844259 1072904 main.go:141] libmachine: (functional-728029) Calling .GetSSHPort
I0729 18:33:24.844448 1072904 main.go:141] libmachine: (functional-728029) Calling .GetSSHKeyPath
I0729 18:33:24.844592 1072904 main.go:141] libmachine: (functional-728029) Calling .GetSSHUsername
I0729 18:33:24.844725 1072904 sshutil.go:53] new ssh client: &{IP:192.168.39.8 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/functional-728029/id_rsa Username:docker}
I0729 18:33:24.960961 1072904 build_images.go:161] Building image from path: /tmp/build.1486636657.tar
I0729 18:33:24.961022 1072904 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0729 18:33:24.984425 1072904 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1486636657.tar
I0729 18:33:24.996782 1072904 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1486636657.tar: stat -c "%s %y" /var/lib/minikube/build/build.1486636657.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.1486636657.tar': No such file or directory
I0729 18:33:24.996815 1072904 ssh_runner.go:362] scp /tmp/build.1486636657.tar --> /var/lib/minikube/build/build.1486636657.tar (3072 bytes)
I0729 18:33:25.063072 1072904 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1486636657
I0729 18:33:25.086067 1072904 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1486636657 -xf /var/lib/minikube/build/build.1486636657.tar
I0729 18:33:25.097053 1072904 crio.go:315] Building image: /var/lib/minikube/build/build.1486636657
I0729 18:33:25.097132 1072904 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-728029 /var/lib/minikube/build/build.1486636657 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0729 18:33:29.716568 1072904 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-728029 /var/lib/minikube/build/build.1486636657 --cgroup-manager=cgroupfs: (4.619403581s)
I0729 18:33:29.716670 1072904 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1486636657
I0729 18:33:29.727329 1072904 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1486636657.tar
I0729 18:33:29.749505 1072904 build_images.go:217] Built localhost/my-image:functional-728029 from /tmp/build.1486636657.tar
I0729 18:33:29.749553 1072904 build_images.go:133] succeeded building to: functional-728029
I0729 18:33:29.749561 1072904 build_images.go:134] failed building to: 
I0729 18:33:29.749616 1072904 main.go:141] libmachine: Making call to close driver server
I0729 18:33:29.749632 1072904 main.go:141] libmachine: (functional-728029) Calling .Close
I0729 18:33:29.749929 1072904 main.go:141] libmachine: (functional-728029) DBG | Closing plugin on server side
I0729 18:33:29.749935 1072904 main.go:141] libmachine: Successfully made call to close driver server
I0729 18:33:29.749953 1072904 main.go:141] libmachine: Making call to close connection to plugin binary
I0729 18:33:29.749969 1072904 main.go:141] libmachine: Making call to close driver server
I0729 18:33:29.749978 1072904 main.go:141] libmachine: (functional-728029) Calling .Close
I0729 18:33:29.750240 1072904 main.go:141] libmachine: Successfully made call to close driver server
I0729 18:33:29.750255 1072904 main.go:141] libmachine: Making call to close connection to plugin binary
I0729 18:33:29.750267 1072904 main.go:141] libmachine: (functional-728029) DBG | Closing plugin on server side
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-728029 image ls
functional_test.go:451: (dbg) Done: out/minikube-linux-amd64 -p functional-728029 image ls: (2.220086257s)
functional_test.go:446: expected "localhost/my-image:functional-728029" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageBuild (7.46s)

x
+
TestMultiControlPlane/serial/StopSecondaryNode (141.79s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-amd64 -p ha-344156 node stop m02 -v=7 --alsologtostderr
E0729 18:39:22.889727 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/functional-728029/client.crt: no such file or directory
E0729 18:40:34.135344 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/addons-685520/client.crt: no such file or directory
ha_test.go:363: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-344156 node stop m02 -v=7 --alsologtostderr: exit status 30 (2m0.473726463s)

-- stdout --
	* Stopping node "ha-344156-m02"  ...

-- /stdout --
** stderr ** 
	I0729 18:38:43.323098 1077290 out.go:291] Setting OutFile to fd 1 ...
	I0729 18:38:43.323386 1077290 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 18:38:43.323398 1077290 out.go:304] Setting ErrFile to fd 2...
	I0729 18:38:43.323404 1077290 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 18:38:43.323611 1077290 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19312-1055011/.minikube/bin
	I0729 18:38:43.323882 1077290 mustload.go:65] Loading cluster: ha-344156
	I0729 18:38:43.324278 1077290 config.go:182] Loaded profile config "ha-344156": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 18:38:43.324296 1077290 stop.go:39] StopHost: ha-344156-m02
	I0729 18:38:43.324677 1077290 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:38:43.324734 1077290 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:38:43.340637 1077290 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44179
	I0729 18:38:43.341130 1077290 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:38:43.341678 1077290 main.go:141] libmachine: Using API Version  1
	I0729 18:38:43.341714 1077290 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:38:43.342101 1077290 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:38:43.344421 1077290 out.go:177] * Stopping node "ha-344156-m02"  ...
	I0729 18:38:43.345657 1077290 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0729 18:38:43.345682 1077290 main.go:141] libmachine: (ha-344156-m02) Calling .DriverName
	I0729 18:38:43.345923 1077290 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0729 18:38:43.345952 1077290 main.go:141] libmachine: (ha-344156-m02) Calling .GetSSHHostname
	I0729 18:38:43.348809 1077290 main.go:141] libmachine: (ha-344156-m02) DBG | domain ha-344156-m02 has defined MAC address 52:54:00:99:a3:97 in network mk-ha-344156
	I0729 18:38:43.349266 1077290 main.go:141] libmachine: (ha-344156-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:a3:97", ip: ""} in network mk-ha-344156: {Iface:virbr1 ExpiryTime:2024-07-29 19:34:51 +0000 UTC Type:0 Mac:52:54:00:99:a3:97 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-344156-m02 Clientid:01:52:54:00:99:a3:97}
	I0729 18:38:43.349300 1077290 main.go:141] libmachine: (ha-344156-m02) DBG | domain ha-344156-m02 has defined IP address 192.168.39.249 and MAC address 52:54:00:99:a3:97 in network mk-ha-344156
	I0729 18:38:43.349518 1077290 main.go:141] libmachine: (ha-344156-m02) Calling .GetSSHPort
	I0729 18:38:43.349719 1077290 main.go:141] libmachine: (ha-344156-m02) Calling .GetSSHKeyPath
	I0729 18:38:43.349908 1077290 main.go:141] libmachine: (ha-344156-m02) Calling .GetSSHUsername
	I0729 18:38:43.350073 1077290 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/ha-344156-m02/id_rsa Username:docker}
	I0729 18:38:43.434574 1077290 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0729 18:38:43.488238 1077290 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0729 18:38:43.542972 1077290 main.go:141] libmachine: Stopping "ha-344156-m02"...
	I0729 18:38:43.543021 1077290 main.go:141] libmachine: (ha-344156-m02) Calling .GetState
	I0729 18:38:43.544495 1077290 main.go:141] libmachine: (ha-344156-m02) Calling .Stop
	I0729 18:38:43.548588 1077290 main.go:141] libmachine: (ha-344156-m02) Waiting for machine to stop 0/120
	I0729 18:38:44.549985 1077290 main.go:141] libmachine: (ha-344156-m02) Waiting for machine to stop 1/120
	I0729 18:38:45.551529 1077290 main.go:141] libmachine: (ha-344156-m02) Waiting for machine to stop 2/120
	I0729 18:38:46.553116 1077290 main.go:141] libmachine: (ha-344156-m02) Waiting for machine to stop 3/120
	I0729 18:38:47.554372 1077290 main.go:141] libmachine: (ha-344156-m02) Waiting for machine to stop 4/120
	I0729 18:38:48.556313 1077290 main.go:141] libmachine: (ha-344156-m02) Waiting for machine to stop 5/120
	I0729 18:38:49.557875 1077290 main.go:141] libmachine: (ha-344156-m02) Waiting for machine to stop 6/120
	I0729 18:38:50.559181 1077290 main.go:141] libmachine: (ha-344156-m02) Waiting for machine to stop 7/120
	I0729 18:38:51.561273 1077290 main.go:141] libmachine: (ha-344156-m02) Waiting for machine to stop 8/120
	I0729 18:38:52.563180 1077290 main.go:141] libmachine: (ha-344156-m02) Waiting for machine to stop 9/120
	I0729 18:38:53.565254 1077290 main.go:141] libmachine: (ha-344156-m02) Waiting for machine to stop 10/120
	I0729 18:38:54.566910 1077290 main.go:141] libmachine: (ha-344156-m02) Waiting for machine to stop 11/120
	I0729 18:38:55.569036 1077290 main.go:141] libmachine: (ha-344156-m02) Waiting for machine to stop 12/120
	I0729 18:38:56.571025 1077290 main.go:141] libmachine: (ha-344156-m02) Waiting for machine to stop 13/120
	I0729 18:38:57.573363 1077290 main.go:141] libmachine: (ha-344156-m02) Waiting for machine to stop 14/120
	I0729 18:38:58.575172 1077290 main.go:141] libmachine: (ha-344156-m02) Waiting for machine to stop 15/120
	I0729 18:38:59.576486 1077290 main.go:141] libmachine: (ha-344156-m02) Waiting for machine to stop 16/120
	I0729 18:39:00.577963 1077290 main.go:141] libmachine: (ha-344156-m02) Waiting for machine to stop 17/120
	I0729 18:39:01.579692 1077290 main.go:141] libmachine: (ha-344156-m02) Waiting for machine to stop 18/120
	I0729 18:39:02.581873 1077290 main.go:141] libmachine: (ha-344156-m02) Waiting for machine to stop 19/120
	I0729 18:39:03.584209 1077290 main.go:141] libmachine: (ha-344156-m02) Waiting for machine to stop 20/120
	I0729 18:39:04.586441 1077290 main.go:141] libmachine: (ha-344156-m02) Waiting for machine to stop 21/120
	I0729 18:39:05.588383 1077290 main.go:141] libmachine: (ha-344156-m02) Waiting for machine to stop 22/120
	I0729 18:39:06.590668 1077290 main.go:141] libmachine: (ha-344156-m02) Waiting for machine to stop 23/120
	I0729 18:39:07.592431 1077290 main.go:141] libmachine: (ha-344156-m02) Waiting for machine to stop 24/120
	I0729 18:39:08.593938 1077290 main.go:141] libmachine: (ha-344156-m02) Waiting for machine to stop 25/120
	I0729 18:39:09.595109 1077290 main.go:141] libmachine: (ha-344156-m02) Waiting for machine to stop 26/120
	I0729 18:39:10.596490 1077290 main.go:141] libmachine: (ha-344156-m02) Waiting for machine to stop 27/120
	I0729 18:39:11.598530 1077290 main.go:141] libmachine: (ha-344156-m02) Waiting for machine to stop 28/120
	I0729 18:39:12.600352 1077290 main.go:141] libmachine: (ha-344156-m02) Waiting for machine to stop 29/120
	I0729 18:39:13.602756 1077290 main.go:141] libmachine: (ha-344156-m02) Waiting for machine to stop 30/120
	I0729 18:39:14.604050 1077290 main.go:141] libmachine: (ha-344156-m02) Waiting for machine to stop 31/120
	I0729 18:39:15.606067 1077290 main.go:141] libmachine: (ha-344156-m02) Waiting for machine to stop 32/120
	I0729 18:39:16.607438 1077290 main.go:141] libmachine: (ha-344156-m02) Waiting for machine to stop 33/120
	I0729 18:39:17.608904 1077290 main.go:141] libmachine: (ha-344156-m02) Waiting for machine to stop 34/120
	I0729 18:39:18.610934 1077290 main.go:141] libmachine: (ha-344156-m02) Waiting for machine to stop 35/120
	I0729 18:39:19.612337 1077290 main.go:141] libmachine: (ha-344156-m02) Waiting for machine to stop 36/120
	I0729 18:39:20.613882 1077290 main.go:141] libmachine: (ha-344156-m02) Waiting for machine to stop 37/120
	I0729 18:39:21.616153 1077290 main.go:141] libmachine: (ha-344156-m02) Waiting for machine to stop 38/120
	I0729 18:39:22.618178 1077290 main.go:141] libmachine: (ha-344156-m02) Waiting for machine to stop 39/120
	I0729 18:39:23.620409 1077290 main.go:141] libmachine: (ha-344156-m02) Waiting for machine to stop 40/120
	I0729 18:39:24.621916 1077290 main.go:141] libmachine: (ha-344156-m02) Waiting for machine to stop 41/120
	I0729 18:39:25.623250 1077290 main.go:141] libmachine: (ha-344156-m02) Waiting for machine to stop 42/120
	I0729 18:39:26.624477 1077290 main.go:141] libmachine: (ha-344156-m02) Waiting for machine to stop 43/120
	I0729 18:39:27.625829 1077290 main.go:141] libmachine: (ha-344156-m02) Waiting for machine to stop 44/120
	I0729 18:39:28.628151 1077290 main.go:141] libmachine: (ha-344156-m02) Waiting for machine to stop 45/120
	I0729 18:39:29.629512 1077290 main.go:141] libmachine: (ha-344156-m02) Waiting for machine to stop 46/120
	I0729 18:39:30.630886 1077290 main.go:141] libmachine: (ha-344156-m02) Waiting for machine to stop 47/120
	I0729 18:39:31.632156 1077290 main.go:141] libmachine: (ha-344156-m02) Waiting for machine to stop 48/120
	I0729 18:39:32.634528 1077290 main.go:141] libmachine: (ha-344156-m02) Waiting for machine to stop 49/120
	I0729 18:39:33.636388 1077290 main.go:141] libmachine: (ha-344156-m02) Waiting for machine to stop 50/120
	I0729 18:39:34.637681 1077290 main.go:141] libmachine: (ha-344156-m02) Waiting for machine to stop 51/120
	I0729 18:39:35.639383 1077290 main.go:141] libmachine: (ha-344156-m02) Waiting for machine to stop 52/120
	I0729 18:39:36.641577 1077290 main.go:141] libmachine: (ha-344156-m02) Waiting for machine to stop 53/120
	I0729 18:39:37.643120 1077290 main.go:141] libmachine: (ha-344156-m02) Waiting for machine to stop 54/120
	I0729 18:39:38.644954 1077290 main.go:141] libmachine: (ha-344156-m02) Waiting for machine to stop 55/120
	I0729 18:39:39.646536 1077290 main.go:141] libmachine: (ha-344156-m02) Waiting for machine to stop 56/120
	I0729 18:39:40.647860 1077290 main.go:141] libmachine: (ha-344156-m02) Waiting for machine to stop 57/120
	I0729 18:39:41.649577 1077290 main.go:141] libmachine: (ha-344156-m02) Waiting for machine to stop 58/120
	I0729 18:39:42.650891 1077290 main.go:141] libmachine: (ha-344156-m02) Waiting for machine to stop 59/120
	I0729 18:39:43.652691 1077290 main.go:141] libmachine: (ha-344156-m02) Waiting for machine to stop 60/120
	I0729 18:39:44.654533 1077290 main.go:141] libmachine: (ha-344156-m02) Waiting for machine to stop 61/120
	I0729 18:39:45.655833 1077290 main.go:141] libmachine: (ha-344156-m02) Waiting for machine to stop 62/120
	I0729 18:39:46.657292 1077290 main.go:141] libmachine: (ha-344156-m02) Waiting for machine to stop 63/120
	I0729 18:39:47.658664 1077290 main.go:141] libmachine: (ha-344156-m02) Waiting for machine to stop 64/120
	I0729 18:39:48.660116 1077290 main.go:141] libmachine: (ha-344156-m02) Waiting for machine to stop 65/120
	I0729 18:39:49.661467 1077290 main.go:141] libmachine: (ha-344156-m02) Waiting for machine to stop 66/120
	I0729 18:39:50.662809 1077290 main.go:141] libmachine: (ha-344156-m02) Waiting for machine to stop 67/120
	I0729 18:39:51.664653 1077290 main.go:141] libmachine: (ha-344156-m02) Waiting for machine to stop 68/120
	I0729 18:39:52.666953 1077290 main.go:141] libmachine: (ha-344156-m02) Waiting for machine to stop 69/120
	I0729 18:39:53.668920 1077290 main.go:141] libmachine: (ha-344156-m02) Waiting for machine to stop 70/120
	I0729 18:39:54.670227 1077290 main.go:141] libmachine: (ha-344156-m02) Waiting for machine to stop 71/120
	I0729 18:39:55.671820 1077290 main.go:141] libmachine: (ha-344156-m02) Waiting for machine to stop 72/120
	I0729 18:39:56.673462 1077290 main.go:141] libmachine: (ha-344156-m02) Waiting for machine to stop 73/120
	I0729 18:39:57.674760 1077290 main.go:141] libmachine: (ha-344156-m02) Waiting for machine to stop 74/120
	I0729 18:39:58.676092 1077290 main.go:141] libmachine: (ha-344156-m02) Waiting for machine to stop 75/120
	I0729 18:39:59.677431 1077290 main.go:141] libmachine: (ha-344156-m02) Waiting for machine to stop 76/120
	I0729 18:40:00.678954 1077290 main.go:141] libmachine: (ha-344156-m02) Waiting for machine to stop 77/120
	I0729 18:40:01.680623 1077290 main.go:141] libmachine: (ha-344156-m02) Waiting for machine to stop 78/120
	I0729 18:40:02.682056 1077290 main.go:141] libmachine: (ha-344156-m02) Waiting for machine to stop 79/120
	I0729 18:40:03.684235 1077290 main.go:141] libmachine: (ha-344156-m02) Waiting for machine to stop 80/120
	I0729 18:40:04.686577 1077290 main.go:141] libmachine: (ha-344156-m02) Waiting for machine to stop 81/120
	I0729 18:40:05.687952 1077290 main.go:141] libmachine: (ha-344156-m02) Waiting for machine to stop 82/120
	I0729 18:40:06.689484 1077290 main.go:141] libmachine: (ha-344156-m02) Waiting for machine to stop 83/120
	I0729 18:40:07.690818 1077290 main.go:141] libmachine: (ha-344156-m02) Waiting for machine to stop 84/120
	I0729 18:40:08.692312 1077290 main.go:141] libmachine: (ha-344156-m02) Waiting for machine to stop 85/120
	I0729 18:40:09.693656 1077290 main.go:141] libmachine: (ha-344156-m02) Waiting for machine to stop 86/120
	I0729 18:40:10.695063 1077290 main.go:141] libmachine: (ha-344156-m02) Waiting for machine to stop 87/120
	I0729 18:40:11.697391 1077290 main.go:141] libmachine: (ha-344156-m02) Waiting for machine to stop 88/120
	I0729 18:40:12.698823 1077290 main.go:141] libmachine: (ha-344156-m02) Waiting for machine to stop 89/120
	I0729 18:40:13.700494 1077290 main.go:141] libmachine: (ha-344156-m02) Waiting for machine to stop 90/120
	I0729 18:40:14.701941 1077290 main.go:141] libmachine: (ha-344156-m02) Waiting for machine to stop 91/120
	I0729 18:40:15.703831 1077290 main.go:141] libmachine: (ha-344156-m02) Waiting for machine to stop 92/120
	I0729 18:40:16.705279 1077290 main.go:141] libmachine: (ha-344156-m02) Waiting for machine to stop 93/120
	I0729 18:40:17.706637 1077290 main.go:141] libmachine: (ha-344156-m02) Waiting for machine to stop 94/120
	I0729 18:40:18.708294 1077290 main.go:141] libmachine: (ha-344156-m02) Waiting for machine to stop 95/120
	I0729 18:40:19.709602 1077290 main.go:141] libmachine: (ha-344156-m02) Waiting for machine to stop 96/120
	I0729 18:40:20.711015 1077290 main.go:141] libmachine: (ha-344156-m02) Waiting for machine to stop 97/120
	I0729 18:40:21.713576 1077290 main.go:141] libmachine: (ha-344156-m02) Waiting for machine to stop 98/120
	I0729 18:40:22.714909 1077290 main.go:141] libmachine: (ha-344156-m02) Waiting for machine to stop 99/120
	I0729 18:40:23.717013 1077290 main.go:141] libmachine: (ha-344156-m02) Waiting for machine to stop 100/120
	I0729 18:40:24.718419 1077290 main.go:141] libmachine: (ha-344156-m02) Waiting for machine to stop 101/120
	I0729 18:40:25.720112 1077290 main.go:141] libmachine: (ha-344156-m02) Waiting for machine to stop 102/120
	I0729 18:40:26.721516 1077290 main.go:141] libmachine: (ha-344156-m02) Waiting for machine to stop 103/120
	I0729 18:40:27.723091 1077290 main.go:141] libmachine: (ha-344156-m02) Waiting for machine to stop 104/120
	I0729 18:40:28.725135 1077290 main.go:141] libmachine: (ha-344156-m02) Waiting for machine to stop 105/120
	I0729 18:40:29.726544 1077290 main.go:141] libmachine: (ha-344156-m02) Waiting for machine to stop 106/120
	I0729 18:40:30.728084 1077290 main.go:141] libmachine: (ha-344156-m02) Waiting for machine to stop 107/120
	I0729 18:40:31.729806 1077290 main.go:141] libmachine: (ha-344156-m02) Waiting for machine to stop 108/120
	I0729 18:40:32.731296 1077290 main.go:141] libmachine: (ha-344156-m02) Waiting for machine to stop 109/120
	I0729 18:40:33.733593 1077290 main.go:141] libmachine: (ha-344156-m02) Waiting for machine to stop 110/120
	I0729 18:40:34.735179 1077290 main.go:141] libmachine: (ha-344156-m02) Waiting for machine to stop 111/120
	I0729 18:40:35.737477 1077290 main.go:141] libmachine: (ha-344156-m02) Waiting for machine to stop 112/120
	I0729 18:40:36.739656 1077290 main.go:141] libmachine: (ha-344156-m02) Waiting for machine to stop 113/120
	I0729 18:40:37.741218 1077290 main.go:141] libmachine: (ha-344156-m02) Waiting for machine to stop 114/120
	I0729 18:40:38.743112 1077290 main.go:141] libmachine: (ha-344156-m02) Waiting for machine to stop 115/120
	I0729 18:40:39.744534 1077290 main.go:141] libmachine: (ha-344156-m02) Waiting for machine to stop 116/120
	I0729 18:40:40.746406 1077290 main.go:141] libmachine: (ha-344156-m02) Waiting for machine to stop 117/120
	I0729 18:40:41.748030 1077290 main.go:141] libmachine: (ha-344156-m02) Waiting for machine to stop 118/120
	I0729 18:40:42.749451 1077290 main.go:141] libmachine: (ha-344156-m02) Waiting for machine to stop 119/120
	I0729 18:40:43.750750 1077290 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0729 18:40:43.750992 1077290 out.go:239] X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"
	X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"

                                                
                                                
** /stderr **
ha_test.go:365: secondary control-plane node stop returned an error. args "out/minikube-linux-amd64 -p ha-344156 node stop m02 -v=7 --alsologtostderr": exit status 30
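The "Waiting for machine to stop N/120" lines above show the stop path polling the VM state roughly once per second and giving up after 120 attempts, which is why "node stop m02" exits with status 30 when the guest never leaves the "Running" state. Below is a minimal Go sketch of that polling pattern; it is illustrative only, not minikube's actual driver code, and the waitForStop helper, its getState argument, and the 1-second interval are assumptions based on the log lines.

package main

import (
	"fmt"
	"log"
	"time"
)

// waitForStop polls a VM's state once per second for up to maxAttempts tries,
// mirroring the "Waiting for machine to stop N/120" pattern seen in the log.
// getState stands in for the driver's state query (an assumption for this sketch).
func waitForStop(getState func() string, maxAttempts int) error {
	for i := 0; i < maxAttempts; i++ {
		if st := getState(); st == "Stopped" {
			return nil
		}
		log.Printf("Waiting for machine to stop %d/%d", i, maxAttempts)
		time.Sleep(1 * time.Second)
	}
	return fmt.Errorf("unable to stop vm, current state %q", getState())
}

func main() {
	// A guest that never shuts down reproduces the timeout reported above.
	err := waitForStop(func() string { return "Running" }, 120)
	fmt.Println("stop err:", err)
}

Running the sketch with a state function that always returns "Running" ends in the same "unable to stop vm" error after 120 attempts.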
ha_test.go:369: (dbg) Run:  out/minikube-linux-amd64 -p ha-344156 status -v=7 --alsologtostderr
E0729 18:40:44.810062 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/functional-728029/client.crt: no such file or directory
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-344156 status -v=7 --alsologtostderr: exit status 3 (19.069603964s)

                                                
                                                
-- stdout --
	ha-344156
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-344156-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-344156-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-344156-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 18:40:43.798967 1077735 out.go:291] Setting OutFile to fd 1 ...
	I0729 18:40:43.799071 1077735 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 18:40:43.799079 1077735 out.go:304] Setting ErrFile to fd 2...
	I0729 18:40:43.799083 1077735 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 18:40:43.799265 1077735 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19312-1055011/.minikube/bin
	I0729 18:40:43.799469 1077735 out.go:298] Setting JSON to false
	I0729 18:40:43.799504 1077735 mustload.go:65] Loading cluster: ha-344156
	I0729 18:40:43.799615 1077735 notify.go:220] Checking for updates...
	I0729 18:40:43.799971 1077735 config.go:182] Loaded profile config "ha-344156": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 18:40:43.799994 1077735 status.go:255] checking status of ha-344156 ...
	I0729 18:40:43.800438 1077735 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:40:43.800480 1077735 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:40:43.816649 1077735 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44213
	I0729 18:40:43.817066 1077735 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:40:43.817732 1077735 main.go:141] libmachine: Using API Version  1
	I0729 18:40:43.817764 1077735 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:40:43.818234 1077735 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:40:43.818432 1077735 main.go:141] libmachine: (ha-344156) Calling .GetState
	I0729 18:40:43.820085 1077735 status.go:330] ha-344156 host status = "Running" (err=<nil>)
	I0729 18:40:43.820104 1077735 host.go:66] Checking if "ha-344156" exists ...
	I0729 18:40:43.820405 1077735 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:40:43.820451 1077735 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:40:43.834990 1077735 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46481
	I0729 18:40:43.835381 1077735 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:40:43.835913 1077735 main.go:141] libmachine: Using API Version  1
	I0729 18:40:43.835945 1077735 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:40:43.836315 1077735 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:40:43.836514 1077735 main.go:141] libmachine: (ha-344156) Calling .GetIP
	I0729 18:40:43.839311 1077735 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
	I0729 18:40:43.839702 1077735 main.go:141] libmachine: (ha-344156) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:fc:98", ip: ""} in network mk-ha-344156: {Iface:virbr1 ExpiryTime:2024-07-29 19:33:59 +0000 UTC Type:0 Mac:52:54:00:a1:fc:98 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-344156 Clientid:01:52:54:00:a1:fc:98}
	I0729 18:40:43.839728 1077735 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined IP address 192.168.39.225 and MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
	I0729 18:40:43.839826 1077735 host.go:66] Checking if "ha-344156" exists ...
	I0729 18:40:43.840128 1077735 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:40:43.840173 1077735 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:40:43.856125 1077735 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44829
	I0729 18:40:43.856666 1077735 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:40:43.857222 1077735 main.go:141] libmachine: Using API Version  1
	I0729 18:40:43.857248 1077735 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:40:43.857633 1077735 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:40:43.857857 1077735 main.go:141] libmachine: (ha-344156) Calling .DriverName
	I0729 18:40:43.858102 1077735 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 18:40:43.858163 1077735 main.go:141] libmachine: (ha-344156) Calling .GetSSHHostname
	I0729 18:40:43.861009 1077735 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
	I0729 18:40:43.861518 1077735 main.go:141] libmachine: (ha-344156) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:fc:98", ip: ""} in network mk-ha-344156: {Iface:virbr1 ExpiryTime:2024-07-29 19:33:59 +0000 UTC Type:0 Mac:52:54:00:a1:fc:98 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-344156 Clientid:01:52:54:00:a1:fc:98}
	I0729 18:40:43.861552 1077735 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined IP address 192.168.39.225 and MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
	I0729 18:40:43.861694 1077735 main.go:141] libmachine: (ha-344156) Calling .GetSSHPort
	I0729 18:40:43.861867 1077735 main.go:141] libmachine: (ha-344156) Calling .GetSSHKeyPath
	I0729 18:40:43.862021 1077735 main.go:141] libmachine: (ha-344156) Calling .GetSSHUsername
	I0729 18:40:43.862161 1077735 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/ha-344156/id_rsa Username:docker}
	I0729 18:40:43.947888 1077735 ssh_runner.go:195] Run: systemctl --version
	I0729 18:40:43.954459 1077735 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 18:40:43.971552 1077735 kubeconfig.go:125] found "ha-344156" server: "https://192.168.39.254:8443"
	I0729 18:40:43.971583 1077735 api_server.go:166] Checking apiserver status ...
	I0729 18:40:43.971623 1077735 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:40:43.985538 1077735 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1151/cgroup
	W0729 18:40:43.994267 1077735 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1151/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0729 18:40:43.994321 1077735 ssh_runner.go:195] Run: ls
	I0729 18:40:43.998426 1077735 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0729 18:40:44.002670 1077735 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0729 18:40:44.002691 1077735 status.go:422] ha-344156 apiserver status = Running (err=<nil>)
	I0729 18:40:44.002701 1077735 status.go:257] ha-344156 status: &{Name:ha-344156 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 18:40:44.002719 1077735 status.go:255] checking status of ha-344156-m02 ...
	I0729 18:40:44.003063 1077735 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:40:44.003129 1077735 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:40:44.018236 1077735 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38197
	I0729 18:40:44.018699 1077735 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:40:44.019292 1077735 main.go:141] libmachine: Using API Version  1
	I0729 18:40:44.019317 1077735 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:40:44.019705 1077735 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:40:44.019952 1077735 main.go:141] libmachine: (ha-344156-m02) Calling .GetState
	I0729 18:40:44.021582 1077735 status.go:330] ha-344156-m02 host status = "Running" (err=<nil>)
	I0729 18:40:44.021603 1077735 host.go:66] Checking if "ha-344156-m02" exists ...
	I0729 18:40:44.021992 1077735 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:40:44.022037 1077735 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:40:44.036680 1077735 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46077
	I0729 18:40:44.037058 1077735 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:40:44.037502 1077735 main.go:141] libmachine: Using API Version  1
	I0729 18:40:44.037526 1077735 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:40:44.037858 1077735 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:40:44.038036 1077735 main.go:141] libmachine: (ha-344156-m02) Calling .GetIP
	I0729 18:40:44.040546 1077735 main.go:141] libmachine: (ha-344156-m02) DBG | domain ha-344156-m02 has defined MAC address 52:54:00:99:a3:97 in network mk-ha-344156
	I0729 18:40:44.040910 1077735 main.go:141] libmachine: (ha-344156-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:a3:97", ip: ""} in network mk-ha-344156: {Iface:virbr1 ExpiryTime:2024-07-29 19:34:51 +0000 UTC Type:0 Mac:52:54:00:99:a3:97 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-344156-m02 Clientid:01:52:54:00:99:a3:97}
	I0729 18:40:44.040939 1077735 main.go:141] libmachine: (ha-344156-m02) DBG | domain ha-344156-m02 has defined IP address 192.168.39.249 and MAC address 52:54:00:99:a3:97 in network mk-ha-344156
	I0729 18:40:44.041010 1077735 host.go:66] Checking if "ha-344156-m02" exists ...
	I0729 18:40:44.041316 1077735 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:40:44.041352 1077735 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:40:44.055756 1077735 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39179
	I0729 18:40:44.056219 1077735 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:40:44.056733 1077735 main.go:141] libmachine: Using API Version  1
	I0729 18:40:44.056751 1077735 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:40:44.057050 1077735 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:40:44.057224 1077735 main.go:141] libmachine: (ha-344156-m02) Calling .DriverName
	I0729 18:40:44.057418 1077735 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 18:40:44.057441 1077735 main.go:141] libmachine: (ha-344156-m02) Calling .GetSSHHostname
	I0729 18:40:44.059806 1077735 main.go:141] libmachine: (ha-344156-m02) DBG | domain ha-344156-m02 has defined MAC address 52:54:00:99:a3:97 in network mk-ha-344156
	I0729 18:40:44.060291 1077735 main.go:141] libmachine: (ha-344156-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:a3:97", ip: ""} in network mk-ha-344156: {Iface:virbr1 ExpiryTime:2024-07-29 19:34:51 +0000 UTC Type:0 Mac:52:54:00:99:a3:97 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-344156-m02 Clientid:01:52:54:00:99:a3:97}
	I0729 18:40:44.060317 1077735 main.go:141] libmachine: (ha-344156-m02) DBG | domain ha-344156-m02 has defined IP address 192.168.39.249 and MAC address 52:54:00:99:a3:97 in network mk-ha-344156
	I0729 18:40:44.060434 1077735 main.go:141] libmachine: (ha-344156-m02) Calling .GetSSHPort
	I0729 18:40:44.060597 1077735 main.go:141] libmachine: (ha-344156-m02) Calling .GetSSHKeyPath
	I0729 18:40:44.060739 1077735 main.go:141] libmachine: (ha-344156-m02) Calling .GetSSHUsername
	I0729 18:40:44.060891 1077735 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/ha-344156-m02/id_rsa Username:docker}
	W0729 18:41:02.455081 1077735 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.249:22: connect: no route to host
	W0729 18:41:02.455217 1077735 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.249:22: connect: no route to host
	E0729 18:41:02.455239 1077735 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.249:22: connect: no route to host
	I0729 18:41:02.455247 1077735 status.go:257] ha-344156-m02 status: &{Name:ha-344156-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0729 18:41:02.455271 1077735 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.249:22: connect: no route to host
	I0729 18:41:02.455285 1077735 status.go:255] checking status of ha-344156-m03 ...
	I0729 18:41:02.455664 1077735 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:41:02.455729 1077735 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:41:02.471254 1077735 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45443
	I0729 18:41:02.471769 1077735 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:41:02.472229 1077735 main.go:141] libmachine: Using API Version  1
	I0729 18:41:02.472250 1077735 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:41:02.472605 1077735 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:41:02.472822 1077735 main.go:141] libmachine: (ha-344156-m03) Calling .GetState
	I0729 18:41:02.474243 1077735 status.go:330] ha-344156-m03 host status = "Running" (err=<nil>)
	I0729 18:41:02.474258 1077735 host.go:66] Checking if "ha-344156-m03" exists ...
	I0729 18:41:02.474572 1077735 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:41:02.474610 1077735 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:41:02.489028 1077735 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46603
	I0729 18:41:02.489399 1077735 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:41:02.489852 1077735 main.go:141] libmachine: Using API Version  1
	I0729 18:41:02.489877 1077735 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:41:02.490193 1077735 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:41:02.490388 1077735 main.go:141] libmachine: (ha-344156-m03) Calling .GetIP
	I0729 18:41:02.492828 1077735 main.go:141] libmachine: (ha-344156-m03) DBG | domain ha-344156-m03 has defined MAC address 52:54:00:49:5c:73 in network mk-ha-344156
	I0729 18:41:02.493181 1077735 main.go:141] libmachine: (ha-344156-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:5c:73", ip: ""} in network mk-ha-344156: {Iface:virbr1 ExpiryTime:2024-07-29 19:36:00 +0000 UTC Type:0 Mac:52:54:00:49:5c:73 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:ha-344156-m03 Clientid:01:52:54:00:49:5c:73}
	I0729 18:41:02.493242 1077735 main.go:141] libmachine: (ha-344156-m03) DBG | domain ha-344156-m03 has defined IP address 192.168.39.148 and MAC address 52:54:00:49:5c:73 in network mk-ha-344156
	I0729 18:41:02.493353 1077735 host.go:66] Checking if "ha-344156-m03" exists ...
	I0729 18:41:02.493749 1077735 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:41:02.493816 1077735 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:41:02.508560 1077735 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40871
	I0729 18:41:02.508923 1077735 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:41:02.509421 1077735 main.go:141] libmachine: Using API Version  1
	I0729 18:41:02.509443 1077735 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:41:02.509763 1077735 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:41:02.509943 1077735 main.go:141] libmachine: (ha-344156-m03) Calling .DriverName
	I0729 18:41:02.510140 1077735 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 18:41:02.510169 1077735 main.go:141] libmachine: (ha-344156-m03) Calling .GetSSHHostname
	I0729 18:41:02.512691 1077735 main.go:141] libmachine: (ha-344156-m03) DBG | domain ha-344156-m03 has defined MAC address 52:54:00:49:5c:73 in network mk-ha-344156
	I0729 18:41:02.513073 1077735 main.go:141] libmachine: (ha-344156-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:5c:73", ip: ""} in network mk-ha-344156: {Iface:virbr1 ExpiryTime:2024-07-29 19:36:00 +0000 UTC Type:0 Mac:52:54:00:49:5c:73 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:ha-344156-m03 Clientid:01:52:54:00:49:5c:73}
	I0729 18:41:02.513106 1077735 main.go:141] libmachine: (ha-344156-m03) DBG | domain ha-344156-m03 has defined IP address 192.168.39.148 and MAC address 52:54:00:49:5c:73 in network mk-ha-344156
	I0729 18:41:02.513245 1077735 main.go:141] libmachine: (ha-344156-m03) Calling .GetSSHPort
	I0729 18:41:02.513415 1077735 main.go:141] libmachine: (ha-344156-m03) Calling .GetSSHKeyPath
	I0729 18:41:02.513559 1077735 main.go:141] libmachine: (ha-344156-m03) Calling .GetSSHUsername
	I0729 18:41:02.513667 1077735 sshutil.go:53] new ssh client: &{IP:192.168.39.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/ha-344156-m03/id_rsa Username:docker}
	I0729 18:41:02.601218 1077735 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 18:41:02.622010 1077735 kubeconfig.go:125] found "ha-344156" server: "https://192.168.39.254:8443"
	I0729 18:41:02.622049 1077735 api_server.go:166] Checking apiserver status ...
	I0729 18:41:02.622085 1077735 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:41:02.637974 1077735 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1518/cgroup
	W0729 18:41:02.648875 1077735 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1518/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0729 18:41:02.648924 1077735 ssh_runner.go:195] Run: ls
	I0729 18:41:02.653851 1077735 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0729 18:41:02.657986 1077735 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0729 18:41:02.658008 1077735 status.go:422] ha-344156-m03 apiserver status = Running (err=<nil>)
	I0729 18:41:02.658017 1077735 status.go:257] ha-344156-m03 status: &{Name:ha-344156-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 18:41:02.658033 1077735 status.go:255] checking status of ha-344156-m04 ...
	I0729 18:41:02.658325 1077735 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:41:02.658362 1077735 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:41:02.673992 1077735 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44715
	I0729 18:41:02.674388 1077735 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:41:02.674877 1077735 main.go:141] libmachine: Using API Version  1
	I0729 18:41:02.674902 1077735 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:41:02.675216 1077735 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:41:02.675433 1077735 main.go:141] libmachine: (ha-344156-m04) Calling .GetState
	I0729 18:41:02.677001 1077735 status.go:330] ha-344156-m04 host status = "Running" (err=<nil>)
	I0729 18:41:02.677021 1077735 host.go:66] Checking if "ha-344156-m04" exists ...
	I0729 18:41:02.677305 1077735 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:41:02.677339 1077735 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:41:02.692311 1077735 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34229
	I0729 18:41:02.692793 1077735 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:41:02.693290 1077735 main.go:141] libmachine: Using API Version  1
	I0729 18:41:02.693317 1077735 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:41:02.693644 1077735 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:41:02.693830 1077735 main.go:141] libmachine: (ha-344156-m04) Calling .GetIP
	I0729 18:41:02.696659 1077735 main.go:141] libmachine: (ha-344156-m04) DBG | domain ha-344156-m04 has defined MAC address 52:54:00:8a:8a:b9 in network mk-ha-344156
	I0729 18:41:02.697069 1077735 main.go:141] libmachine: (ha-344156-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:8a:b9", ip: ""} in network mk-ha-344156: {Iface:virbr1 ExpiryTime:2024-07-29 19:37:22 +0000 UTC Type:0 Mac:52:54:00:8a:8a:b9 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:ha-344156-m04 Clientid:01:52:54:00:8a:8a:b9}
	I0729 18:41:02.697102 1077735 main.go:141] libmachine: (ha-344156-m04) DBG | domain ha-344156-m04 has defined IP address 192.168.39.9 and MAC address 52:54:00:8a:8a:b9 in network mk-ha-344156
	I0729 18:41:02.697246 1077735 host.go:66] Checking if "ha-344156-m04" exists ...
	I0729 18:41:02.697565 1077735 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:41:02.697606 1077735 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:41:02.712325 1077735 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35025
	I0729 18:41:02.712707 1077735 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:41:02.713201 1077735 main.go:141] libmachine: Using API Version  1
	I0729 18:41:02.713221 1077735 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:41:02.713510 1077735 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:41:02.713734 1077735 main.go:141] libmachine: (ha-344156-m04) Calling .DriverName
	I0729 18:41:02.713946 1077735 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 18:41:02.713973 1077735 main.go:141] libmachine: (ha-344156-m04) Calling .GetSSHHostname
	I0729 18:41:02.716373 1077735 main.go:141] libmachine: (ha-344156-m04) DBG | domain ha-344156-m04 has defined MAC address 52:54:00:8a:8a:b9 in network mk-ha-344156
	I0729 18:41:02.716730 1077735 main.go:141] libmachine: (ha-344156-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:8a:b9", ip: ""} in network mk-ha-344156: {Iface:virbr1 ExpiryTime:2024-07-29 19:37:22 +0000 UTC Type:0 Mac:52:54:00:8a:8a:b9 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:ha-344156-m04 Clientid:01:52:54:00:8a:8a:b9}
	I0729 18:41:02.716754 1077735 main.go:141] libmachine: (ha-344156-m04) DBG | domain ha-344156-m04 has defined IP address 192.168.39.9 and MAC address 52:54:00:8a:8a:b9 in network mk-ha-344156
	I0729 18:41:02.716903 1077735 main.go:141] libmachine: (ha-344156-m04) Calling .GetSSHPort
	I0729 18:41:02.717064 1077735 main.go:141] libmachine: (ha-344156-m04) Calling .GetSSHKeyPath
	I0729 18:41:02.717185 1077735 main.go:141] libmachine: (ha-344156-m04) Calling .GetSSHUsername
	I0729 18:41:02.717301 1077735 sshutil.go:53] new ssh client: &{IP:192.168.39.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/ha-344156-m04/id_rsa Username:docker}
	I0729 18:41:02.803843 1077735 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 18:41:02.820537 1077735 status.go:257] ha-344156-m04 status: &{Name:ha-344156-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:372: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-344156 status -v=7 --alsologtostderr" : exit status 3
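In the status output above, a node is probed further only after an SSH session can be established: the "no route to host" dial failure to 192.168.39.249:22 marks ha-344156-m02 as Host:Error / Kubelet:Nonexistent before any Kubernetes checks run, while healthy control-plane nodes are verified by fetching the apiserver healthz endpoint and treating an HTTP 200 as Running. The Go sketch below shows such a healthz probe; it is illustrative only, the URL is taken from the log, and the insecure TLS client is an assumption made to keep the example short (the real client would use the cluster CA).

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// checkHealthz reports whether a Kubernetes apiserver answered its /healthz
// endpoint with 200 OK, mirroring the "Checking apiserver healthz" lines above.
func checkHealthz(url string) (bool, error) {
	client := &http.Client{
		Timeout:   5 * time.Second,
		// Assumption for this sketch: skip certificate verification.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(url)
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	return resp.StatusCode == http.StatusOK, nil
}

func main() {
	ok, err := checkHealthz("https://192.168.39.254:8443/healthz")
	fmt.Println("healthz ok:", ok, "err:", err)
}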
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-344156 -n ha-344156
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-344156 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-344156 logs -n 25: (1.376960939s)
helpers_test.go:252: TestMultiControlPlane/serial/StopSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                      Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-344156 cp ha-344156-m03:/home/docker/cp-test.txt                             | ha-344156 | jenkins | v1.33.1 | 29 Jul 24 18:38 UTC | 29 Jul 24 18:38 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile289939917/001/cp-test_ha-344156-m03.txt |           |         |         |                     |                     |
	| ssh     | ha-344156 ssh -n                                                                | ha-344156 | jenkins | v1.33.1 | 29 Jul 24 18:38 UTC | 29 Jul 24 18:38 UTC |
	|         | ha-344156-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-344156 cp ha-344156-m03:/home/docker/cp-test.txt                             | ha-344156 | jenkins | v1.33.1 | 29 Jul 24 18:38 UTC | 29 Jul 24 18:38 UTC |
	|         | ha-344156:/home/docker/cp-test_ha-344156-m03_ha-344156.txt                      |           |         |         |                     |                     |
	| ssh     | ha-344156 ssh -n                                                                | ha-344156 | jenkins | v1.33.1 | 29 Jul 24 18:38 UTC | 29 Jul 24 18:38 UTC |
	|         | ha-344156-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-344156 ssh -n ha-344156 sudo cat                                             | ha-344156 | jenkins | v1.33.1 | 29 Jul 24 18:38 UTC | 29 Jul 24 18:38 UTC |
	|         | /home/docker/cp-test_ha-344156-m03_ha-344156.txt                                |           |         |         |                     |                     |
	| cp      | ha-344156 cp ha-344156-m03:/home/docker/cp-test.txt                             | ha-344156 | jenkins | v1.33.1 | 29 Jul 24 18:38 UTC | 29 Jul 24 18:38 UTC |
	|         | ha-344156-m02:/home/docker/cp-test_ha-344156-m03_ha-344156-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-344156 ssh -n                                                                | ha-344156 | jenkins | v1.33.1 | 29 Jul 24 18:38 UTC | 29 Jul 24 18:38 UTC |
	|         | ha-344156-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-344156 ssh -n ha-344156-m02 sudo cat                                         | ha-344156 | jenkins | v1.33.1 | 29 Jul 24 18:38 UTC | 29 Jul 24 18:38 UTC |
	|         | /home/docker/cp-test_ha-344156-m03_ha-344156-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-344156 cp ha-344156-m03:/home/docker/cp-test.txt                             | ha-344156 | jenkins | v1.33.1 | 29 Jul 24 18:38 UTC | 29 Jul 24 18:38 UTC |
	|         | ha-344156-m04:/home/docker/cp-test_ha-344156-m03_ha-344156-m04.txt              |           |         |         |                     |                     |
	| ssh     | ha-344156 ssh -n                                                                | ha-344156 | jenkins | v1.33.1 | 29 Jul 24 18:38 UTC | 29 Jul 24 18:38 UTC |
	|         | ha-344156-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-344156 ssh -n ha-344156-m04 sudo cat                                         | ha-344156 | jenkins | v1.33.1 | 29 Jul 24 18:38 UTC | 29 Jul 24 18:38 UTC |
	|         | /home/docker/cp-test_ha-344156-m03_ha-344156-m04.txt                            |           |         |         |                     |                     |
	| cp      | ha-344156 cp testdata/cp-test.txt                                               | ha-344156 | jenkins | v1.33.1 | 29 Jul 24 18:38 UTC | 29 Jul 24 18:38 UTC |
	|         | ha-344156-m04:/home/docker/cp-test.txt                                          |           |         |         |                     |                     |
	| ssh     | ha-344156 ssh -n                                                                | ha-344156 | jenkins | v1.33.1 | 29 Jul 24 18:38 UTC | 29 Jul 24 18:38 UTC |
	|         | ha-344156-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-344156 cp ha-344156-m04:/home/docker/cp-test.txt                             | ha-344156 | jenkins | v1.33.1 | 29 Jul 24 18:38 UTC | 29 Jul 24 18:38 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile289939917/001/cp-test_ha-344156-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-344156 ssh -n                                                                | ha-344156 | jenkins | v1.33.1 | 29 Jul 24 18:38 UTC | 29 Jul 24 18:38 UTC |
	|         | ha-344156-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-344156 cp ha-344156-m04:/home/docker/cp-test.txt                             | ha-344156 | jenkins | v1.33.1 | 29 Jul 24 18:38 UTC | 29 Jul 24 18:38 UTC |
	|         | ha-344156:/home/docker/cp-test_ha-344156-m04_ha-344156.txt                      |           |         |         |                     |                     |
	| ssh     | ha-344156 ssh -n                                                                | ha-344156 | jenkins | v1.33.1 | 29 Jul 24 18:38 UTC | 29 Jul 24 18:38 UTC |
	|         | ha-344156-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-344156 ssh -n ha-344156 sudo cat                                             | ha-344156 | jenkins | v1.33.1 | 29 Jul 24 18:38 UTC | 29 Jul 24 18:38 UTC |
	|         | /home/docker/cp-test_ha-344156-m04_ha-344156.txt                                |           |         |         |                     |                     |
	| cp      | ha-344156 cp ha-344156-m04:/home/docker/cp-test.txt                             | ha-344156 | jenkins | v1.33.1 | 29 Jul 24 18:38 UTC | 29 Jul 24 18:38 UTC |
	|         | ha-344156-m02:/home/docker/cp-test_ha-344156-m04_ha-344156-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-344156 ssh -n                                                                | ha-344156 | jenkins | v1.33.1 | 29 Jul 24 18:38 UTC | 29 Jul 24 18:38 UTC |
	|         | ha-344156-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-344156 ssh -n ha-344156-m02 sudo cat                                         | ha-344156 | jenkins | v1.33.1 | 29 Jul 24 18:38 UTC | 29 Jul 24 18:38 UTC |
	|         | /home/docker/cp-test_ha-344156-m04_ha-344156-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-344156 cp ha-344156-m04:/home/docker/cp-test.txt                             | ha-344156 | jenkins | v1.33.1 | 29 Jul 24 18:38 UTC | 29 Jul 24 18:38 UTC |
	|         | ha-344156-m03:/home/docker/cp-test_ha-344156-m04_ha-344156-m03.txt              |           |         |         |                     |                     |
	| ssh     | ha-344156 ssh -n                                                                | ha-344156 | jenkins | v1.33.1 | 29 Jul 24 18:38 UTC | 29 Jul 24 18:38 UTC |
	|         | ha-344156-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-344156 ssh -n ha-344156-m03 sudo cat                                         | ha-344156 | jenkins | v1.33.1 | 29 Jul 24 18:38 UTC | 29 Jul 24 18:38 UTC |
	|         | /home/docker/cp-test_ha-344156-m04_ha-344156-m03.txt                            |           |         |         |                     |                     |
	| node    | ha-344156 node stop m02 -v=7                                                    | ha-344156 | jenkins | v1.33.1 | 29 Jul 24 18:38 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 18:33:44
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 18:33:44.956754 1073226 out.go:291] Setting OutFile to fd 1 ...
	I0729 18:33:44.956879 1073226 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 18:33:44.956890 1073226 out.go:304] Setting ErrFile to fd 2...
	I0729 18:33:44.956895 1073226 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 18:33:44.957089 1073226 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19312-1055011/.minikube/bin
	I0729 18:33:44.957689 1073226 out.go:298] Setting JSON to false
	I0729 18:33:44.958601 1073226 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":8177,"bootTime":1722269848,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 18:33:44.958664 1073226 start.go:139] virtualization: kvm guest
	I0729 18:33:44.962858 1073226 out.go:177] * [ha-344156] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 18:33:44.964191 1073226 out.go:177]   - MINIKUBE_LOCATION=19312
	I0729 18:33:44.964274 1073226 notify.go:220] Checking for updates...
	I0729 18:33:44.966653 1073226 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 18:33:44.967966 1073226 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19312-1055011/kubeconfig
	I0729 18:33:44.969178 1073226 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19312-1055011/.minikube
	I0729 18:33:44.970424 1073226 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 18:33:44.971709 1073226 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 18:33:44.973126 1073226 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 18:33:45.008222 1073226 out.go:177] * Using the kvm2 driver based on user configuration
	I0729 18:33:45.009410 1073226 start.go:297] selected driver: kvm2
	I0729 18:33:45.009421 1073226 start.go:901] validating driver "kvm2" against <nil>
	I0729 18:33:45.009431 1073226 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 18:33:45.010317 1073226 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 18:33:45.010430 1073226 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19312-1055011/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 18:33:45.025556 1073226 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0729 18:33:45.025607 1073226 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 18:33:45.025866 1073226 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 18:33:45.025894 1073226 cni.go:84] Creating CNI manager for ""
	I0729 18:33:45.025901 1073226 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0729 18:33:45.025909 1073226 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0729 18:33:45.025962 1073226 start.go:340] cluster config:
	{Name:ha-344156 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-344156 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 18:33:45.026050 1073226 iso.go:125] acquiring lock: {Name:mk0af61c0fec1fd47930e548d03010a532c687b7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 18:33:45.027789 1073226 out.go:177] * Starting "ha-344156" primary control-plane node in "ha-344156" cluster
	I0729 18:33:45.028925 1073226 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 18:33:45.028954 1073226 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0729 18:33:45.028962 1073226 cache.go:56] Caching tarball of preloaded images
	I0729 18:33:45.029048 1073226 preload.go:172] Found /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 18:33:45.029058 1073226 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0729 18:33:45.029409 1073226 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156/config.json ...
	I0729 18:33:45.029433 1073226 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156/config.json: {Name:mkf6d6544dd7aac4d55600f702d47db47308cd22 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 18:33:45.029574 1073226 start.go:360] acquireMachinesLock for ha-344156: {Name:mk0d8d947666df844b5fc2c0e0eebbfed69b4140 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 18:33:45.029602 1073226 start.go:364] duration metric: took 14.977µs to acquireMachinesLock for "ha-344156"
	I0729 18:33:45.029619 1073226 start.go:93] Provisioning new machine with config: &{Name:ha-344156 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-344156 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 18:33:45.029673 1073226 start.go:125] createHost starting for "" (driver="kvm2")
	I0729 18:33:45.031240 1073226 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 18:33:45.031436 1073226 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:33:45.031491 1073226 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:33:45.046106 1073226 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45011
	I0729 18:33:45.046612 1073226 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:33:45.047145 1073226 main.go:141] libmachine: Using API Version  1
	I0729 18:33:45.047186 1073226 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:33:45.047512 1073226 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:33:45.047660 1073226 main.go:141] libmachine: (ha-344156) Calling .GetMachineName
	I0729 18:33:45.047814 1073226 main.go:141] libmachine: (ha-344156) Calling .DriverName
	I0729 18:33:45.047948 1073226 start.go:159] libmachine.API.Create for "ha-344156" (driver="kvm2")
	I0729 18:33:45.047977 1073226 client.go:168] LocalClient.Create starting
	I0729 18:33:45.048010 1073226 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem
	I0729 18:33:45.048044 1073226 main.go:141] libmachine: Decoding PEM data...
	I0729 18:33:45.048059 1073226 main.go:141] libmachine: Parsing certificate...
	I0729 18:33:45.048139 1073226 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/cert.pem
	I0729 18:33:45.048161 1073226 main.go:141] libmachine: Decoding PEM data...
	I0729 18:33:45.048171 1073226 main.go:141] libmachine: Parsing certificate...
	I0729 18:33:45.048193 1073226 main.go:141] libmachine: Running pre-create checks...
	I0729 18:33:45.048206 1073226 main.go:141] libmachine: (ha-344156) Calling .PreCreateCheck
	I0729 18:33:45.048544 1073226 main.go:141] libmachine: (ha-344156) Calling .GetConfigRaw
	I0729 18:33:45.048905 1073226 main.go:141] libmachine: Creating machine...
	I0729 18:33:45.048918 1073226 main.go:141] libmachine: (ha-344156) Calling .Create
	I0729 18:33:45.049032 1073226 main.go:141] libmachine: (ha-344156) Creating KVM machine...
	I0729 18:33:45.050208 1073226 main.go:141] libmachine: (ha-344156) DBG | found existing default KVM network
	I0729 18:33:45.050974 1073226 main.go:141] libmachine: (ha-344156) DBG | I0729 18:33:45.050809 1073248 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00010f1f0}
	I0729 18:33:45.051004 1073226 main.go:141] libmachine: (ha-344156) DBG | created network xml: 
	I0729 18:33:45.051022 1073226 main.go:141] libmachine: (ha-344156) DBG | <network>
	I0729 18:33:45.051032 1073226 main.go:141] libmachine: (ha-344156) DBG |   <name>mk-ha-344156</name>
	I0729 18:33:45.051049 1073226 main.go:141] libmachine: (ha-344156) DBG |   <dns enable='no'/>
	I0729 18:33:45.051057 1073226 main.go:141] libmachine: (ha-344156) DBG |   
	I0729 18:33:45.051062 1073226 main.go:141] libmachine: (ha-344156) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0729 18:33:45.051070 1073226 main.go:141] libmachine: (ha-344156) DBG |     <dhcp>
	I0729 18:33:45.051082 1073226 main.go:141] libmachine: (ha-344156) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0729 18:33:45.051090 1073226 main.go:141] libmachine: (ha-344156) DBG |     </dhcp>
	I0729 18:33:45.051095 1073226 main.go:141] libmachine: (ha-344156) DBG |   </ip>
	I0729 18:33:45.051103 1073226 main.go:141] libmachine: (ha-344156) DBG |   
	I0729 18:33:45.051113 1073226 main.go:141] libmachine: (ha-344156) DBG | </network>
	I0729 18:33:45.051125 1073226 main.go:141] libmachine: (ha-344156) DBG | 
	I0729 18:33:45.055990 1073226 main.go:141] libmachine: (ha-344156) DBG | trying to create private KVM network mk-ha-344156 192.168.39.0/24...
	I0729 18:33:45.121585 1073226 main.go:141] libmachine: (ha-344156) DBG | private KVM network mk-ha-344156 192.168.39.0/24 created
	I0729 18:33:45.121632 1073226 main.go:141] libmachine: (ha-344156) DBG | I0729 18:33:45.121561 1073248 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19312-1055011/.minikube
	I0729 18:33:45.121644 1073226 main.go:141] libmachine: (ha-344156) Setting up store path in /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/ha-344156 ...
	I0729 18:33:45.121665 1073226 main.go:141] libmachine: (ha-344156) Building disk image from file:///home/jenkins/minikube-integration/19312-1055011/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso
	I0729 18:33:45.121741 1073226 main.go:141] libmachine: (ha-344156) Downloading /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19312-1055011/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso...
	I0729 18:33:45.388910 1073226 main.go:141] libmachine: (ha-344156) DBG | I0729 18:33:45.388775 1073248 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/ha-344156/id_rsa...
	I0729 18:33:45.441787 1073226 main.go:141] libmachine: (ha-344156) DBG | I0729 18:33:45.441618 1073248 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/ha-344156/ha-344156.rawdisk...
	I0729 18:33:45.441820 1073226 main.go:141] libmachine: (ha-344156) DBG | Writing magic tar header
	I0729 18:33:45.441868 1073226 main.go:141] libmachine: (ha-344156) DBG | Writing SSH key tar header
	I0729 18:33:45.441930 1073226 main.go:141] libmachine: (ha-344156) DBG | I0729 18:33:45.441754 1073248 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/ha-344156 ...
	I0729 18:33:45.441949 1073226 main.go:141] libmachine: (ha-344156) Setting executable bit set on /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/ha-344156 (perms=drwx------)
	I0729 18:33:45.441967 1073226 main.go:141] libmachine: (ha-344156) Setting executable bit set on /home/jenkins/minikube-integration/19312-1055011/.minikube/machines (perms=drwxr-xr-x)
	I0729 18:33:45.441986 1073226 main.go:141] libmachine: (ha-344156) Setting executable bit set on /home/jenkins/minikube-integration/19312-1055011/.minikube (perms=drwxr-xr-x)
	I0729 18:33:45.442015 1073226 main.go:141] libmachine: (ha-344156) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/ha-344156
	I0729 18:33:45.442034 1073226 main.go:141] libmachine: (ha-344156) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19312-1055011/.minikube/machines
	I0729 18:33:45.442043 1073226 main.go:141] libmachine: (ha-344156) Setting executable bit set on /home/jenkins/minikube-integration/19312-1055011 (perms=drwxrwxr-x)
	I0729 18:33:45.442053 1073226 main.go:141] libmachine: (ha-344156) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0729 18:33:45.442059 1073226 main.go:141] libmachine: (ha-344156) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0729 18:33:45.442068 1073226 main.go:141] libmachine: (ha-344156) Creating domain...
	I0729 18:33:45.442078 1073226 main.go:141] libmachine: (ha-344156) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19312-1055011/.minikube
	I0729 18:33:45.442085 1073226 main.go:141] libmachine: (ha-344156) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19312-1055011
	I0729 18:33:45.442090 1073226 main.go:141] libmachine: (ha-344156) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0729 18:33:45.442099 1073226 main.go:141] libmachine: (ha-344156) DBG | Checking permissions on dir: /home/jenkins
	I0729 18:33:45.442104 1073226 main.go:141] libmachine: (ha-344156) DBG | Checking permissions on dir: /home
	I0729 18:33:45.442111 1073226 main.go:141] libmachine: (ha-344156) DBG | Skipping /home - not owner
	I0729 18:33:45.443198 1073226 main.go:141] libmachine: (ha-344156) define libvirt domain using xml: 
	I0729 18:33:45.443220 1073226 main.go:141] libmachine: (ha-344156) <domain type='kvm'>
	I0729 18:33:45.443228 1073226 main.go:141] libmachine: (ha-344156)   <name>ha-344156</name>
	I0729 18:33:45.443233 1073226 main.go:141] libmachine: (ha-344156)   <memory unit='MiB'>2200</memory>
	I0729 18:33:45.443246 1073226 main.go:141] libmachine: (ha-344156)   <vcpu>2</vcpu>
	I0729 18:33:45.443261 1073226 main.go:141] libmachine: (ha-344156)   <features>
	I0729 18:33:45.443272 1073226 main.go:141] libmachine: (ha-344156)     <acpi/>
	I0729 18:33:45.443278 1073226 main.go:141] libmachine: (ha-344156)     <apic/>
	I0729 18:33:45.443287 1073226 main.go:141] libmachine: (ha-344156)     <pae/>
	I0729 18:33:45.443298 1073226 main.go:141] libmachine: (ha-344156)     
	I0729 18:33:45.443307 1073226 main.go:141] libmachine: (ha-344156)   </features>
	I0729 18:33:45.443318 1073226 main.go:141] libmachine: (ha-344156)   <cpu mode='host-passthrough'>
	I0729 18:33:45.443326 1073226 main.go:141] libmachine: (ha-344156)   
	I0729 18:33:45.443333 1073226 main.go:141] libmachine: (ha-344156)   </cpu>
	I0729 18:33:45.443338 1073226 main.go:141] libmachine: (ha-344156)   <os>
	I0729 18:33:45.443343 1073226 main.go:141] libmachine: (ha-344156)     <type>hvm</type>
	I0729 18:33:45.443348 1073226 main.go:141] libmachine: (ha-344156)     <boot dev='cdrom'/>
	I0729 18:33:45.443355 1073226 main.go:141] libmachine: (ha-344156)     <boot dev='hd'/>
	I0729 18:33:45.443360 1073226 main.go:141] libmachine: (ha-344156)     <bootmenu enable='no'/>
	I0729 18:33:45.443372 1073226 main.go:141] libmachine: (ha-344156)   </os>
	I0729 18:33:45.443449 1073226 main.go:141] libmachine: (ha-344156)   <devices>
	I0729 18:33:45.443474 1073226 main.go:141] libmachine: (ha-344156)     <disk type='file' device='cdrom'>
	I0729 18:33:45.443490 1073226 main.go:141] libmachine: (ha-344156)       <source file='/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/ha-344156/boot2docker.iso'/>
	I0729 18:33:45.443503 1073226 main.go:141] libmachine: (ha-344156)       <target dev='hdc' bus='scsi'/>
	I0729 18:33:45.443513 1073226 main.go:141] libmachine: (ha-344156)       <readonly/>
	I0729 18:33:45.443524 1073226 main.go:141] libmachine: (ha-344156)     </disk>
	I0729 18:33:45.443538 1073226 main.go:141] libmachine: (ha-344156)     <disk type='file' device='disk'>
	I0729 18:33:45.443555 1073226 main.go:141] libmachine: (ha-344156)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0729 18:33:45.443572 1073226 main.go:141] libmachine: (ha-344156)       <source file='/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/ha-344156/ha-344156.rawdisk'/>
	I0729 18:33:45.443583 1073226 main.go:141] libmachine: (ha-344156)       <target dev='hda' bus='virtio'/>
	I0729 18:33:45.443592 1073226 main.go:141] libmachine: (ha-344156)     </disk>
	I0729 18:33:45.443604 1073226 main.go:141] libmachine: (ha-344156)     <interface type='network'>
	I0729 18:33:45.443616 1073226 main.go:141] libmachine: (ha-344156)       <source network='mk-ha-344156'/>
	I0729 18:33:45.443631 1073226 main.go:141] libmachine: (ha-344156)       <model type='virtio'/>
	I0729 18:33:45.443643 1073226 main.go:141] libmachine: (ha-344156)     </interface>
	I0729 18:33:45.443653 1073226 main.go:141] libmachine: (ha-344156)     <interface type='network'>
	I0729 18:33:45.443666 1073226 main.go:141] libmachine: (ha-344156)       <source network='default'/>
	I0729 18:33:45.443674 1073226 main.go:141] libmachine: (ha-344156)       <model type='virtio'/>
	I0729 18:33:45.443686 1073226 main.go:141] libmachine: (ha-344156)     </interface>
	I0729 18:33:45.443696 1073226 main.go:141] libmachine: (ha-344156)     <serial type='pty'>
	I0729 18:33:45.443709 1073226 main.go:141] libmachine: (ha-344156)       <target port='0'/>
	I0729 18:33:45.443722 1073226 main.go:141] libmachine: (ha-344156)     </serial>
	I0729 18:33:45.443733 1073226 main.go:141] libmachine: (ha-344156)     <console type='pty'>
	I0729 18:33:45.443750 1073226 main.go:141] libmachine: (ha-344156)       <target type='serial' port='0'/>
	I0729 18:33:45.443759 1073226 main.go:141] libmachine: (ha-344156)     </console>
	I0729 18:33:45.443768 1073226 main.go:141] libmachine: (ha-344156)     <rng model='virtio'>
	I0729 18:33:45.443784 1073226 main.go:141] libmachine: (ha-344156)       <backend model='random'>/dev/random</backend>
	I0729 18:33:45.443795 1073226 main.go:141] libmachine: (ha-344156)     </rng>
	I0729 18:33:45.443805 1073226 main.go:141] libmachine: (ha-344156)     
	I0729 18:33:45.443816 1073226 main.go:141] libmachine: (ha-344156)     
	I0729 18:33:45.443825 1073226 main.go:141] libmachine: (ha-344156)   </devices>
	I0729 18:33:45.443834 1073226 main.go:141] libmachine: (ha-344156) </domain>
	I0729 18:33:45.443844 1073226 main.go:141] libmachine: (ha-344156) 
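For reference, the two XML documents printed above (the mk-ha-344156 network and the ha-344156 domain) could also be fed to libvirt by hand with the virsh CLI. This is only a minimal sketch assuming the XML has been saved to local files named net.xml and dom.xml (hypothetical names); minikube itself drives libvirt through its Go bindings rather than shelling out to virsh.

	  # assumes the <network> XML from the log is saved in net.xml and the <domain> XML in dom.xml
	  virsh net-define net.xml             # register the private network mk-ha-344156
	  virsh net-start mk-ha-344156         # activate it ("Ensuring network ... is active" below)
	  virsh define dom.xml                 # register the domain ha-344156
	  virsh start ha-344156                # boot the VM
	  virsh net-dhcp-leases mk-ha-344156   # shows the DHCP lease minikube polls for while "Waiting to get IP"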
	I0729 18:33:45.448111 1073226 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined MAC address 52:54:00:bd:f4:5c in network default
	I0729 18:33:45.448675 1073226 main.go:141] libmachine: (ha-344156) Ensuring networks are active...
	I0729 18:33:45.448699 1073226 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
	I0729 18:33:45.449441 1073226 main.go:141] libmachine: (ha-344156) Ensuring network default is active
	I0729 18:33:45.449740 1073226 main.go:141] libmachine: (ha-344156) Ensuring network mk-ha-344156 is active
	I0729 18:33:45.450303 1073226 main.go:141] libmachine: (ha-344156) Getting domain xml...
	I0729 18:33:45.451048 1073226 main.go:141] libmachine: (ha-344156) Creating domain...
	I0729 18:33:46.632599 1073226 main.go:141] libmachine: (ha-344156) Waiting to get IP...
	I0729 18:33:46.633501 1073226 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
	I0729 18:33:46.633943 1073226 main.go:141] libmachine: (ha-344156) DBG | unable to find current IP address of domain ha-344156 in network mk-ha-344156
	I0729 18:33:46.633985 1073226 main.go:141] libmachine: (ha-344156) DBG | I0729 18:33:46.633927 1073248 retry.go:31] will retry after 264.543199ms: waiting for machine to come up
	I0729 18:33:46.900432 1073226 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
	I0729 18:33:46.900963 1073226 main.go:141] libmachine: (ha-344156) DBG | unable to find current IP address of domain ha-344156 in network mk-ha-344156
	I0729 18:33:46.900993 1073226 main.go:141] libmachine: (ha-344156) DBG | I0729 18:33:46.900913 1073248 retry.go:31] will retry after 383.267628ms: waiting for machine to come up
	I0729 18:33:47.285434 1073226 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
	I0729 18:33:47.285878 1073226 main.go:141] libmachine: (ha-344156) DBG | unable to find current IP address of domain ha-344156 in network mk-ha-344156
	I0729 18:33:47.285906 1073226 main.go:141] libmachine: (ha-344156) DBG | I0729 18:33:47.285831 1073248 retry.go:31] will retry after 486.285941ms: waiting for machine to come up
	I0729 18:33:47.773287 1073226 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
	I0729 18:33:47.773679 1073226 main.go:141] libmachine: (ha-344156) DBG | unable to find current IP address of domain ha-344156 in network mk-ha-344156
	I0729 18:33:47.773735 1073226 main.go:141] libmachine: (ha-344156) DBG | I0729 18:33:47.773661 1073248 retry.go:31] will retry after 584.973906ms: waiting for machine to come up
	I0729 18:33:48.360407 1073226 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
	I0729 18:33:48.360792 1073226 main.go:141] libmachine: (ha-344156) DBG | unable to find current IP address of domain ha-344156 in network mk-ha-344156
	I0729 18:33:48.360815 1073226 main.go:141] libmachine: (ha-344156) DBG | I0729 18:33:48.360754 1073248 retry.go:31] will retry after 756.105052ms: waiting for machine to come up
	I0729 18:33:49.118682 1073226 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
	I0729 18:33:49.119088 1073226 main.go:141] libmachine: (ha-344156) DBG | unable to find current IP address of domain ha-344156 in network mk-ha-344156
	I0729 18:33:49.119115 1073226 main.go:141] libmachine: (ha-344156) DBG | I0729 18:33:49.119052 1073248 retry.go:31] will retry after 664.094058ms: waiting for machine to come up
	I0729 18:33:49.784908 1073226 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
	I0729 18:33:49.785276 1073226 main.go:141] libmachine: (ha-344156) DBG | unable to find current IP address of domain ha-344156 in network mk-ha-344156
	I0729 18:33:49.785308 1073226 main.go:141] libmachine: (ha-344156) DBG | I0729 18:33:49.785225 1073248 retry.go:31] will retry after 904.653048ms: waiting for machine to come up
	I0729 18:33:50.691837 1073226 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
	I0729 18:33:50.692222 1073226 main.go:141] libmachine: (ha-344156) DBG | unable to find current IP address of domain ha-344156 in network mk-ha-344156
	I0729 18:33:50.692253 1073226 main.go:141] libmachine: (ha-344156) DBG | I0729 18:33:50.692175 1073248 retry.go:31] will retry after 1.274490726s: waiting for machine to come up
	I0729 18:33:51.968520 1073226 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
	I0729 18:33:51.968880 1073226 main.go:141] libmachine: (ha-344156) DBG | unable to find current IP address of domain ha-344156 in network mk-ha-344156
	I0729 18:33:51.968921 1073226 main.go:141] libmachine: (ha-344156) DBG | I0729 18:33:51.968858 1073248 retry.go:31] will retry after 1.625342059s: waiting for machine to come up
	I0729 18:33:53.596639 1073226 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
	I0729 18:33:53.596976 1073226 main.go:141] libmachine: (ha-344156) DBG | unable to find current IP address of domain ha-344156 in network mk-ha-344156
	I0729 18:33:53.597006 1073226 main.go:141] libmachine: (ha-344156) DBG | I0729 18:33:53.596958 1073248 retry.go:31] will retry after 1.621283615s: waiting for machine to come up
	I0729 18:33:55.219632 1073226 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
	I0729 18:33:55.220126 1073226 main.go:141] libmachine: (ha-344156) DBG | unable to find current IP address of domain ha-344156 in network mk-ha-344156
	I0729 18:33:55.220156 1073226 main.go:141] libmachine: (ha-344156) DBG | I0729 18:33:55.220035 1073248 retry.go:31] will retry after 2.839272433s: waiting for machine to come up
	I0729 18:33:58.062920 1073226 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
	I0729 18:33:58.063299 1073226 main.go:141] libmachine: (ha-344156) DBG | unable to find current IP address of domain ha-344156 in network mk-ha-344156
	I0729 18:33:58.063350 1073226 main.go:141] libmachine: (ha-344156) DBG | I0729 18:33:58.063254 1073248 retry.go:31] will retry after 3.17863945s: waiting for machine to come up
	I0729 18:34:01.244084 1073226 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
	I0729 18:34:01.244458 1073226 main.go:141] libmachine: (ha-344156) DBG | unable to find current IP address of domain ha-344156 in network mk-ha-344156
	I0729 18:34:01.244503 1073226 main.go:141] libmachine: (ha-344156) DBG | I0729 18:34:01.244448 1073248 retry.go:31] will retry after 3.552012439s: waiting for machine to come up
	I0729 18:34:04.800153 1073226 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
	I0729 18:34:04.800447 1073226 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has current primary IP address 192.168.39.225 and MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
	I0729 18:34:04.800466 1073226 main.go:141] libmachine: (ha-344156) Found IP for machine: 192.168.39.225
	I0729 18:34:04.800474 1073226 main.go:141] libmachine: (ha-344156) Reserving static IP address...
	I0729 18:34:04.800899 1073226 main.go:141] libmachine: (ha-344156) DBG | unable to find host DHCP lease matching {name: "ha-344156", mac: "52:54:00:a1:fc:98", ip: "192.168.39.225"} in network mk-ha-344156
	I0729 18:34:04.870193 1073226 main.go:141] libmachine: (ha-344156) DBG | Getting to WaitForSSH function...
	I0729 18:34:04.870226 1073226 main.go:141] libmachine: (ha-344156) Reserved static IP address: 192.168.39.225
	I0729 18:34:04.870239 1073226 main.go:141] libmachine: (ha-344156) Waiting for SSH to be available...
	I0729 18:34:04.872853 1073226 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
	I0729 18:34:04.873272 1073226 main.go:141] libmachine: (ha-344156) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:fc:98", ip: ""} in network mk-ha-344156: {Iface:virbr1 ExpiryTime:2024-07-29 19:33:59 +0000 UTC Type:0 Mac:52:54:00:a1:fc:98 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:minikube Clientid:01:52:54:00:a1:fc:98}
	I0729 18:34:04.873312 1073226 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined IP address 192.168.39.225 and MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
	I0729 18:34:04.873410 1073226 main.go:141] libmachine: (ha-344156) DBG | Using SSH client type: external
	I0729 18:34:04.873430 1073226 main.go:141] libmachine: (ha-344156) DBG | Using SSH private key: /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/ha-344156/id_rsa (-rw-------)
	I0729 18:34:04.873457 1073226 main.go:141] libmachine: (ha-344156) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.225 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/ha-344156/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 18:34:04.873469 1073226 main.go:141] libmachine: (ha-344156) DBG | About to run SSH command:
	I0729 18:34:04.873481 1073226 main.go:141] libmachine: (ha-344156) DBG | exit 0
	I0729 18:34:05.002955 1073226 main.go:141] libmachine: (ha-344156) DBG | SSH cmd err, output: <nil>: 
	I0729 18:34:05.003249 1073226 main.go:141] libmachine: (ha-344156) KVM machine creation complete!
	I0729 18:34:05.003522 1073226 main.go:141] libmachine: (ha-344156) Calling .GetConfigRaw
	I0729 18:34:05.004152 1073226 main.go:141] libmachine: (ha-344156) Calling .DriverName
	I0729 18:34:05.004340 1073226 main.go:141] libmachine: (ha-344156) Calling .DriverName
	I0729 18:34:05.004497 1073226 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0729 18:34:05.004514 1073226 main.go:141] libmachine: (ha-344156) Calling .GetState
	I0729 18:34:05.005599 1073226 main.go:141] libmachine: Detecting operating system of created instance...
	I0729 18:34:05.005610 1073226 main.go:141] libmachine: Waiting for SSH to be available...
	I0729 18:34:05.005615 1073226 main.go:141] libmachine: Getting to WaitForSSH function...
	I0729 18:34:05.005621 1073226 main.go:141] libmachine: (ha-344156) Calling .GetSSHHostname
	I0729 18:34:05.007973 1073226 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
	I0729 18:34:05.008347 1073226 main.go:141] libmachine: (ha-344156) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:fc:98", ip: ""} in network mk-ha-344156: {Iface:virbr1 ExpiryTime:2024-07-29 19:33:59 +0000 UTC Type:0 Mac:52:54:00:a1:fc:98 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-344156 Clientid:01:52:54:00:a1:fc:98}
	I0729 18:34:05.008368 1073226 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined IP address 192.168.39.225 and MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
	I0729 18:34:05.008493 1073226 main.go:141] libmachine: (ha-344156) Calling .GetSSHPort
	I0729 18:34:05.008679 1073226 main.go:141] libmachine: (ha-344156) Calling .GetSSHKeyPath
	I0729 18:34:05.008817 1073226 main.go:141] libmachine: (ha-344156) Calling .GetSSHKeyPath
	I0729 18:34:05.008940 1073226 main.go:141] libmachine: (ha-344156) Calling .GetSSHUsername
	I0729 18:34:05.009073 1073226 main.go:141] libmachine: Using SSH client type: native
	I0729 18:34:05.009308 1073226 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.225 22 <nil> <nil>}
	I0729 18:34:05.009320 1073226 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0729 18:34:05.117879 1073226 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 18:34:05.117908 1073226 main.go:141] libmachine: Detecting the provisioner...
	I0729 18:34:05.117918 1073226 main.go:141] libmachine: (ha-344156) Calling .GetSSHHostname
	I0729 18:34:05.120495 1073226 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
	I0729 18:34:05.120865 1073226 main.go:141] libmachine: (ha-344156) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:fc:98", ip: ""} in network mk-ha-344156: {Iface:virbr1 ExpiryTime:2024-07-29 19:33:59 +0000 UTC Type:0 Mac:52:54:00:a1:fc:98 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-344156 Clientid:01:52:54:00:a1:fc:98}
	I0729 18:34:05.120901 1073226 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined IP address 192.168.39.225 and MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
	I0729 18:34:05.121050 1073226 main.go:141] libmachine: (ha-344156) Calling .GetSSHPort
	I0729 18:34:05.121258 1073226 main.go:141] libmachine: (ha-344156) Calling .GetSSHKeyPath
	I0729 18:34:05.121459 1073226 main.go:141] libmachine: (ha-344156) Calling .GetSSHKeyPath
	I0729 18:34:05.121549 1073226 main.go:141] libmachine: (ha-344156) Calling .GetSSHUsername
	I0729 18:34:05.121698 1073226 main.go:141] libmachine: Using SSH client type: native
	I0729 18:34:05.121888 1073226 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.225 22 <nil> <nil>}
	I0729 18:34:05.121899 1073226 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0729 18:34:05.231446 1073226 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0729 18:34:05.231557 1073226 main.go:141] libmachine: found compatible host: buildroot
	I0729 18:34:05.231574 1073226 main.go:141] libmachine: Provisioning with buildroot...
	I0729 18:34:05.231586 1073226 main.go:141] libmachine: (ha-344156) Calling .GetMachineName
	I0729 18:34:05.231864 1073226 buildroot.go:166] provisioning hostname "ha-344156"
	I0729 18:34:05.231896 1073226 main.go:141] libmachine: (ha-344156) Calling .GetMachineName
	I0729 18:34:05.232058 1073226 main.go:141] libmachine: (ha-344156) Calling .GetSSHHostname
	I0729 18:34:05.235039 1073226 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
	I0729 18:34:05.235412 1073226 main.go:141] libmachine: (ha-344156) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:fc:98", ip: ""} in network mk-ha-344156: {Iface:virbr1 ExpiryTime:2024-07-29 19:33:59 +0000 UTC Type:0 Mac:52:54:00:a1:fc:98 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-344156 Clientid:01:52:54:00:a1:fc:98}
	I0729 18:34:05.235435 1073226 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined IP address 192.168.39.225 and MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
	I0729 18:34:05.235576 1073226 main.go:141] libmachine: (ha-344156) Calling .GetSSHPort
	I0729 18:34:05.235766 1073226 main.go:141] libmachine: (ha-344156) Calling .GetSSHKeyPath
	I0729 18:34:05.235905 1073226 main.go:141] libmachine: (ha-344156) Calling .GetSSHKeyPath
	I0729 18:34:05.236047 1073226 main.go:141] libmachine: (ha-344156) Calling .GetSSHUsername
	I0729 18:34:05.236212 1073226 main.go:141] libmachine: Using SSH client type: native
	I0729 18:34:05.236374 1073226 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.225 22 <nil> <nil>}
	I0729 18:34:05.236384 1073226 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-344156 && echo "ha-344156" | sudo tee /etc/hostname
	I0729 18:34:05.361117 1073226 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-344156
	
	I0729 18:34:05.361159 1073226 main.go:141] libmachine: (ha-344156) Calling .GetSSHHostname
	I0729 18:34:05.364342 1073226 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
	I0729 18:34:05.364752 1073226 main.go:141] libmachine: (ha-344156) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:fc:98", ip: ""} in network mk-ha-344156: {Iface:virbr1 ExpiryTime:2024-07-29 19:33:59 +0000 UTC Type:0 Mac:52:54:00:a1:fc:98 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-344156 Clientid:01:52:54:00:a1:fc:98}
	I0729 18:34:05.364777 1073226 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined IP address 192.168.39.225 and MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
	I0729 18:34:05.364946 1073226 main.go:141] libmachine: (ha-344156) Calling .GetSSHPort
	I0729 18:34:05.365118 1073226 main.go:141] libmachine: (ha-344156) Calling .GetSSHKeyPath
	I0729 18:34:05.365291 1073226 main.go:141] libmachine: (ha-344156) Calling .GetSSHKeyPath
	I0729 18:34:05.365469 1073226 main.go:141] libmachine: (ha-344156) Calling .GetSSHUsername
	I0729 18:34:05.365647 1073226 main.go:141] libmachine: Using SSH client type: native
	I0729 18:34:05.365873 1073226 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.225 22 <nil> <nil>}
	I0729 18:34:05.365898 1073226 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-344156' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-344156/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-344156' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 18:34:05.483985 1073226 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 18:34:05.484019 1073226 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19312-1055011/.minikube CaCertPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19312-1055011/.minikube}
	I0729 18:34:05.484060 1073226 buildroot.go:174] setting up certificates
	I0729 18:34:05.484075 1073226 provision.go:84] configureAuth start
	I0729 18:34:05.484086 1073226 main.go:141] libmachine: (ha-344156) Calling .GetMachineName
	I0729 18:34:05.484414 1073226 main.go:141] libmachine: (ha-344156) Calling .GetIP
	I0729 18:34:05.486738 1073226 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
	I0729 18:34:05.487103 1073226 main.go:141] libmachine: (ha-344156) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:fc:98", ip: ""} in network mk-ha-344156: {Iface:virbr1 ExpiryTime:2024-07-29 19:33:59 +0000 UTC Type:0 Mac:52:54:00:a1:fc:98 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-344156 Clientid:01:52:54:00:a1:fc:98}
	I0729 18:34:05.487131 1073226 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined IP address 192.168.39.225 and MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
	I0729 18:34:05.487226 1073226 main.go:141] libmachine: (ha-344156) Calling .GetSSHHostname
	I0729 18:34:05.489454 1073226 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
	I0729 18:34:05.489769 1073226 main.go:141] libmachine: (ha-344156) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:fc:98", ip: ""} in network mk-ha-344156: {Iface:virbr1 ExpiryTime:2024-07-29 19:33:59 +0000 UTC Type:0 Mac:52:54:00:a1:fc:98 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-344156 Clientid:01:52:54:00:a1:fc:98}
	I0729 18:34:05.489791 1073226 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined IP address 192.168.39.225 and MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
	I0729 18:34:05.489932 1073226 provision.go:143] copyHostCerts
	I0729 18:34:05.489960 1073226 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19312-1055011/.minikube/cert.pem
	I0729 18:34:05.490007 1073226 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-1055011/.minikube/cert.pem, removing ...
	I0729 18:34:05.490023 1073226 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-1055011/.minikube/cert.pem
	I0729 18:34:05.490093 1073226 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19312-1055011/.minikube/cert.pem (1123 bytes)
	I0729 18:34:05.490166 1073226 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19312-1055011/.minikube/key.pem
	I0729 18:34:05.490183 1073226 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-1055011/.minikube/key.pem, removing ...
	I0729 18:34:05.490190 1073226 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-1055011/.minikube/key.pem
	I0729 18:34:05.490212 1073226 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19312-1055011/.minikube/key.pem (1679 bytes)
	I0729 18:34:05.490250 1073226 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.pem
	I0729 18:34:05.490266 1073226 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.pem, removing ...
	I0729 18:34:05.490272 1073226 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.pem
	I0729 18:34:05.490291 1073226 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.pem (1082 bytes)
	I0729 18:34:05.490335 1073226 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca-key.pem org=jenkins.ha-344156 san=[127.0.0.1 192.168.39.225 ha-344156 localhost minikube]
	I0729 18:34:05.532036 1073226 provision.go:177] copyRemoteCerts
	I0729 18:34:05.532097 1073226 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 18:34:05.532122 1073226 main.go:141] libmachine: (ha-344156) Calling .GetSSHHostname
	I0729 18:34:05.534466 1073226 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
	I0729 18:34:05.534802 1073226 main.go:141] libmachine: (ha-344156) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:fc:98", ip: ""} in network mk-ha-344156: {Iface:virbr1 ExpiryTime:2024-07-29 19:33:59 +0000 UTC Type:0 Mac:52:54:00:a1:fc:98 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-344156 Clientid:01:52:54:00:a1:fc:98}
	I0729 18:34:05.534827 1073226 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined IP address 192.168.39.225 and MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
	I0729 18:34:05.535008 1073226 main.go:141] libmachine: (ha-344156) Calling .GetSSHPort
	I0729 18:34:05.535193 1073226 main.go:141] libmachine: (ha-344156) Calling .GetSSHKeyPath
	I0729 18:34:05.535371 1073226 main.go:141] libmachine: (ha-344156) Calling .GetSSHUsername
	I0729 18:34:05.535493 1073226 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/ha-344156/id_rsa Username:docker}
	I0729 18:34:05.620611 1073226 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0729 18:34:05.620695 1073226 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0729 18:34:05.644122 1073226 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0729 18:34:05.644195 1073226 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0729 18:34:05.666545 1073226 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0729 18:34:05.666613 1073226 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0729 18:34:05.689172 1073226 provision.go:87] duration metric: took 205.084167ms to configureAuth
	I0729 18:34:05.689197 1073226 buildroot.go:189] setting minikube options for container-runtime
	I0729 18:34:05.689360 1073226 config.go:182] Loaded profile config "ha-344156": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 18:34:05.689437 1073226 main.go:141] libmachine: (ha-344156) Calling .GetSSHHostname
	I0729 18:34:05.691785 1073226 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
	I0729 18:34:05.692147 1073226 main.go:141] libmachine: (ha-344156) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:fc:98", ip: ""} in network mk-ha-344156: {Iface:virbr1 ExpiryTime:2024-07-29 19:33:59 +0000 UTC Type:0 Mac:52:54:00:a1:fc:98 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-344156 Clientid:01:52:54:00:a1:fc:98}
	I0729 18:34:05.692180 1073226 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined IP address 192.168.39.225 and MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
	I0729 18:34:05.692337 1073226 main.go:141] libmachine: (ha-344156) Calling .GetSSHPort
	I0729 18:34:05.692538 1073226 main.go:141] libmachine: (ha-344156) Calling .GetSSHKeyPath
	I0729 18:34:05.692752 1073226 main.go:141] libmachine: (ha-344156) Calling .GetSSHKeyPath
	I0729 18:34:05.692918 1073226 main.go:141] libmachine: (ha-344156) Calling .GetSSHUsername
	I0729 18:34:05.693107 1073226 main.go:141] libmachine: Using SSH client type: native
	I0729 18:34:05.693373 1073226 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.225 22 <nil> <nil>}
	I0729 18:34:05.693401 1073226 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 18:34:05.960320 1073226 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 18:34:05.960352 1073226 main.go:141] libmachine: Checking connection to Docker...
	I0729 18:34:05.960365 1073226 main.go:141] libmachine: (ha-344156) Calling .GetURL
	I0729 18:34:05.961814 1073226 main.go:141] libmachine: (ha-344156) DBG | Using libvirt version 6000000
	I0729 18:34:05.965439 1073226 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
	I0729 18:34:05.965781 1073226 main.go:141] libmachine: (ha-344156) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:fc:98", ip: ""} in network mk-ha-344156: {Iface:virbr1 ExpiryTime:2024-07-29 19:33:59 +0000 UTC Type:0 Mac:52:54:00:a1:fc:98 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-344156 Clientid:01:52:54:00:a1:fc:98}
	I0729 18:34:05.965803 1073226 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined IP address 192.168.39.225 and MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
	I0729 18:34:05.965975 1073226 main.go:141] libmachine: Docker is up and running!
	I0729 18:34:05.965992 1073226 main.go:141] libmachine: Reticulating splines...
	I0729 18:34:05.966002 1073226 client.go:171] duration metric: took 20.918013542s to LocalClient.Create
	I0729 18:34:05.966048 1073226 start.go:167] duration metric: took 20.918085573s to libmachine.API.Create "ha-344156"
	I0729 18:34:05.966060 1073226 start.go:293] postStartSetup for "ha-344156" (driver="kvm2")
	I0729 18:34:05.966074 1073226 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 18:34:05.966100 1073226 main.go:141] libmachine: (ha-344156) Calling .DriverName
	I0729 18:34:05.966359 1073226 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 18:34:05.966385 1073226 main.go:141] libmachine: (ha-344156) Calling .GetSSHHostname
	I0729 18:34:05.968664 1073226 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
	I0729 18:34:05.968985 1073226 main.go:141] libmachine: (ha-344156) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:fc:98", ip: ""} in network mk-ha-344156: {Iface:virbr1 ExpiryTime:2024-07-29 19:33:59 +0000 UTC Type:0 Mac:52:54:00:a1:fc:98 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-344156 Clientid:01:52:54:00:a1:fc:98}
	I0729 18:34:05.969010 1073226 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined IP address 192.168.39.225 and MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
	I0729 18:34:05.969120 1073226 main.go:141] libmachine: (ha-344156) Calling .GetSSHPort
	I0729 18:34:05.969285 1073226 main.go:141] libmachine: (ha-344156) Calling .GetSSHKeyPath
	I0729 18:34:05.969457 1073226 main.go:141] libmachine: (ha-344156) Calling .GetSSHUsername
	I0729 18:34:05.969573 1073226 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/ha-344156/id_rsa Username:docker}
	I0729 18:34:06.052579 1073226 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 18:34:06.056498 1073226 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 18:34:06.056521 1073226 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-1055011/.minikube/addons for local assets ...
	I0729 18:34:06.056575 1073226 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-1055011/.minikube/files for local assets ...
	I0729 18:34:06.056645 1073226 filesync.go:149] local asset: /home/jenkins/minikube-integration/19312-1055011/.minikube/files/etc/ssl/certs/10622722.pem -> 10622722.pem in /etc/ssl/certs
	I0729 18:34:06.056655 1073226 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1055011/.minikube/files/etc/ssl/certs/10622722.pem -> /etc/ssl/certs/10622722.pem
	I0729 18:34:06.056748 1073226 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 18:34:06.065426 1073226 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/files/etc/ssl/certs/10622722.pem --> /etc/ssl/certs/10622722.pem (1708 bytes)
	I0729 18:34:06.088580 1073226 start.go:296] duration metric: took 122.504862ms for postStartSetup
	I0729 18:34:06.088626 1073226 main.go:141] libmachine: (ha-344156) Calling .GetConfigRaw
	I0729 18:34:06.089205 1073226 main.go:141] libmachine: (ha-344156) Calling .GetIP
	I0729 18:34:06.091764 1073226 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
	I0729 18:34:06.092108 1073226 main.go:141] libmachine: (ha-344156) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:fc:98", ip: ""} in network mk-ha-344156: {Iface:virbr1 ExpiryTime:2024-07-29 19:33:59 +0000 UTC Type:0 Mac:52:54:00:a1:fc:98 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-344156 Clientid:01:52:54:00:a1:fc:98}
	I0729 18:34:06.092128 1073226 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined IP address 192.168.39.225 and MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
	I0729 18:34:06.092380 1073226 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156/config.json ...
	I0729 18:34:06.092592 1073226 start.go:128] duration metric: took 21.062906887s to createHost
	I0729 18:34:06.092623 1073226 main.go:141] libmachine: (ha-344156) Calling .GetSSHHostname
	I0729 18:34:06.095129 1073226 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
	I0729 18:34:06.095660 1073226 main.go:141] libmachine: (ha-344156) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:fc:98", ip: ""} in network mk-ha-344156: {Iface:virbr1 ExpiryTime:2024-07-29 19:33:59 +0000 UTC Type:0 Mac:52:54:00:a1:fc:98 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-344156 Clientid:01:52:54:00:a1:fc:98}
	I0729 18:34:06.095694 1073226 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined IP address 192.168.39.225 and MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
	I0729 18:34:06.095859 1073226 main.go:141] libmachine: (ha-344156) Calling .GetSSHPort
	I0729 18:34:06.096050 1073226 main.go:141] libmachine: (ha-344156) Calling .GetSSHKeyPath
	I0729 18:34:06.096211 1073226 main.go:141] libmachine: (ha-344156) Calling .GetSSHKeyPath
	I0729 18:34:06.096346 1073226 main.go:141] libmachine: (ha-344156) Calling .GetSSHUsername
	I0729 18:34:06.096533 1073226 main.go:141] libmachine: Using SSH client type: native
	I0729 18:34:06.096754 1073226 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.225 22 <nil> <nil>}
	I0729 18:34:06.096765 1073226 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0729 18:34:06.207454 1073226 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722278046.180028938
	
	I0729 18:34:06.207473 1073226 fix.go:216] guest clock: 1722278046.180028938
	I0729 18:34:06.207480 1073226 fix.go:229] Guest: 2024-07-29 18:34:06.180028938 +0000 UTC Remote: 2024-07-29 18:34:06.092612562 +0000 UTC m=+21.170361798 (delta=87.416376ms)
	I0729 18:34:06.207500 1073226 fix.go:200] guest clock delta is within tolerance: 87.416376ms
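The clock comparison just above is done by running date over SSH and diffing it against the local wall clock; the %!s(MISSING) in the logged command appears to be Go's printf escaping of the literal percent signs in date +%s.%N. A rough stand-alone equivalent, reusing the IP and key path from this log, might look like:

	  # hypothetical reproduction of the guest-clock check
	  guest_ts=$(ssh -o StrictHostKeyChecking=no \
	      -i /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/ha-344156/id_rsa \
	      docker@192.168.39.225 'date +%s.%N')
	  host_ts=$(date +%s.%N)
	  # a delta of ~87ms, as seen here, is within minikube's tolerance
	  echo "guest-host clock delta: $(echo "$guest_ts - $host_ts" | bc) s"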
	I0729 18:34:06.207506 1073226 start.go:83] releasing machines lock for "ha-344156", held for 21.177894829s
	I0729 18:34:06.207523 1073226 main.go:141] libmachine: (ha-344156) Calling .DriverName
	I0729 18:34:06.207808 1073226 main.go:141] libmachine: (ha-344156) Calling .GetIP
	I0729 18:34:06.210148 1073226 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
	I0729 18:34:06.210520 1073226 main.go:141] libmachine: (ha-344156) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:fc:98", ip: ""} in network mk-ha-344156: {Iface:virbr1 ExpiryTime:2024-07-29 19:33:59 +0000 UTC Type:0 Mac:52:54:00:a1:fc:98 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-344156 Clientid:01:52:54:00:a1:fc:98}
	I0729 18:34:06.210554 1073226 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined IP address 192.168.39.225 and MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
	I0729 18:34:06.210697 1073226 main.go:141] libmachine: (ha-344156) Calling .DriverName
	I0729 18:34:06.211222 1073226 main.go:141] libmachine: (ha-344156) Calling .DriverName
	I0729 18:34:06.211386 1073226 main.go:141] libmachine: (ha-344156) Calling .DriverName
	I0729 18:34:06.211463 1073226 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 18:34:06.211534 1073226 main.go:141] libmachine: (ha-344156) Calling .GetSSHHostname
	I0729 18:34:06.211595 1073226 ssh_runner.go:195] Run: cat /version.json
	I0729 18:34:06.211618 1073226 main.go:141] libmachine: (ha-344156) Calling .GetSSHHostname
	I0729 18:34:06.214204 1073226 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
	I0729 18:34:06.214471 1073226 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
	I0729 18:34:06.214508 1073226 main.go:141] libmachine: (ha-344156) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:fc:98", ip: ""} in network mk-ha-344156: {Iface:virbr1 ExpiryTime:2024-07-29 19:33:59 +0000 UTC Type:0 Mac:52:54:00:a1:fc:98 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-344156 Clientid:01:52:54:00:a1:fc:98}
	I0729 18:34:06.214529 1073226 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined IP address 192.168.39.225 and MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
	I0729 18:34:06.214710 1073226 main.go:141] libmachine: (ha-344156) Calling .GetSSHPort
	I0729 18:34:06.214801 1073226 main.go:141] libmachine: (ha-344156) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:fc:98", ip: ""} in network mk-ha-344156: {Iface:virbr1 ExpiryTime:2024-07-29 19:33:59 +0000 UTC Type:0 Mac:52:54:00:a1:fc:98 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-344156 Clientid:01:52:54:00:a1:fc:98}
	I0729 18:34:06.214837 1073226 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined IP address 192.168.39.225 and MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
	I0729 18:34:06.214869 1073226 main.go:141] libmachine: (ha-344156) Calling .GetSSHKeyPath
	I0729 18:34:06.215020 1073226 main.go:141] libmachine: (ha-344156) Calling .GetSSHPort
	I0729 18:34:06.215039 1073226 main.go:141] libmachine: (ha-344156) Calling .GetSSHUsername
	I0729 18:34:06.215222 1073226 main.go:141] libmachine: (ha-344156) Calling .GetSSHKeyPath
	I0729 18:34:06.215266 1073226 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/ha-344156/id_rsa Username:docker}
	I0729 18:34:06.215480 1073226 main.go:141] libmachine: (ha-344156) Calling .GetSSHUsername
	I0729 18:34:06.215621 1073226 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/ha-344156/id_rsa Username:docker}
	I0729 18:34:06.313765 1073226 ssh_runner.go:195] Run: systemctl --version
	I0729 18:34:06.319503 1073226 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 18:34:06.485096 1073226 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 18:34:06.490916 1073226 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 18:34:06.490981 1073226 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 18:34:06.506313 1073226 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 18:34:06.506334 1073226 start.go:495] detecting cgroup driver to use...
	I0729 18:34:06.506394 1073226 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 18:34:06.522531 1073226 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 18:34:06.535576 1073226 docker.go:217] disabling cri-docker service (if available) ...
	I0729 18:34:06.535636 1073226 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 18:34:06.549116 1073226 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 18:34:06.561985 1073226 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 18:34:06.671576 1073226 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 18:34:06.831892 1073226 docker.go:233] disabling docker service ...
	I0729 18:34:06.831982 1073226 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 18:34:06.845723 1073226 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 18:34:06.857876 1073226 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 18:34:06.973209 1073226 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 18:34:07.092831 1073226 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 18:34:07.106430 1073226 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 18:34:07.124072 1073226 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0729 18:34:07.124150 1073226 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:34:07.133862 1073226 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 18:34:07.133943 1073226 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:34:07.143441 1073226 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:34:07.153162 1073226 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:34:07.162566 1073226 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 18:34:07.172440 1073226 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:34:07.182024 1073226 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:34:07.198170 1073226 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:34:07.207822 1073226 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 18:34:07.216435 1073226 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 18:34:07.216495 1073226 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 18:34:07.228514 1073226 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 18:34:07.237027 1073226 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 18:34:07.354002 1073226 ssh_runner.go:195] Run: sudo systemctl restart crio
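All of the CRI-O adjustments in the run above are plain sed edits of /etc/crio/crio.conf.d/02-crio.conf followed by a restart. Collected into one script (a sketch only; the expressions are copied from the commands logged above, and the two default_sysctls edits are elided), the sequence is roughly:

	  CONF=/etc/crio/crio.conf.d/02-crio.conf
	  sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' $CONF
	  sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' $CONF
	  sudo sed -i '/conmon_cgroup = .*/d' $CONF                      # drop any existing conmon_cgroup
	  sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' $CONF
	  sudo rm -rf /etc/cni/net.mk
	  sudo modprobe br_netfilter        # the earlier sysctl probe failed until this module was loaded
	  sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
	  sudo systemctl daemon-reload && sudo systemctl restart crio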
	I0729 18:34:07.481722 1073226 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 18:34:07.481806 1073226 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 18:34:07.486473 1073226 start.go:563] Will wait 60s for crictl version
	I0729 18:34:07.486542 1073226 ssh_runner.go:195] Run: which crictl
	I0729 18:34:07.490123 1073226 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 18:34:07.528480 1073226 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 18:34:07.528552 1073226 ssh_runner.go:195] Run: crio --version
	I0729 18:34:07.555165 1073226 ssh_runner.go:195] Run: crio --version
	I0729 18:34:07.587500 1073226 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0729 18:34:07.588706 1073226 main.go:141] libmachine: (ha-344156) Calling .GetIP
	I0729 18:34:07.591393 1073226 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
	I0729 18:34:07.591687 1073226 main.go:141] libmachine: (ha-344156) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:fc:98", ip: ""} in network mk-ha-344156: {Iface:virbr1 ExpiryTime:2024-07-29 19:33:59 +0000 UTC Type:0 Mac:52:54:00:a1:fc:98 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-344156 Clientid:01:52:54:00:a1:fc:98}
	I0729 18:34:07.591710 1073226 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined IP address 192.168.39.225 and MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
	I0729 18:34:07.591893 1073226 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0729 18:34:07.595977 1073226 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 18:34:07.608866 1073226 kubeadm.go:883] updating cluster {Name:ha-344156 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Cl
usterName:ha-344156 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.225 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 18:34:07.608987 1073226 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 18:34:07.609053 1073226 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 18:34:07.643225 1073226 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0729 18:34:07.643297 1073226 ssh_runner.go:195] Run: which lz4
	I0729 18:34:07.647020 1073226 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0729 18:34:07.647116 1073226 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0729 18:34:07.650921 1073226 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0729 18:34:07.650938 1073226 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0729 18:34:09.035778 1073226 crio.go:462] duration metric: took 1.388694553s to copy over tarball
	I0729 18:34:09.035850 1073226 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0729 18:34:11.118456 1073226 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.082571242s)
	I0729 18:34:11.118502 1073226 crio.go:469] duration metric: took 2.082695207s to extract the tarball
	I0729 18:34:11.118511 1073226 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0729 18:34:11.156422 1073226 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 18:34:11.201237 1073226 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 18:34:11.201262 1073226 cache_images.go:84] Images are preloaded, skipping loading
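	A quick manual spot-check of the same preload result, as a sketch (assumes crictl inside the guest, as used in the commands above):
	  sudo crictl images | grep kube-apiserver   # should list registry.k8s.io/kube-apiserver:v1.30.3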
	I0729 18:34:11.201271 1073226 kubeadm.go:934] updating node { 192.168.39.225 8443 v1.30.3 crio true true} ...
	I0729 18:34:11.201394 1073226 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-344156 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.225
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-344156 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 18:34:11.201457 1073226 ssh_runner.go:195] Run: crio config
	I0729 18:34:11.247713 1073226 cni.go:84] Creating CNI manager for ""
	I0729 18:34:11.247735 1073226 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0729 18:34:11.247748 1073226 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 18:34:11.247772 1073226 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.225 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-344156 NodeName:ha-344156 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.225"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.225 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 18:34:11.247921 1073226 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.225
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-344156"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.225
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.225"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
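	The generated kubeadm config above can be sanity-checked without bootstrapping anything, as a sketch (assumes kubeadm v1.30.x on PATH and the config path used later in this log):
	  sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run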
	I0729 18:34:11.247947 1073226 kube-vip.go:115] generating kube-vip config ...
	I0729 18:34:11.247988 1073226 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0729 18:34:11.265337 1073226 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0729 18:34:11.265470 1073226 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
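	With vip_arp enabled, interface eth0 and address 192.168.39.254 (all taken from the manifest above), the VIP can be verified on the elected control-plane node once the static pod is running, as a sketch:
	  ip addr show dev eth0 | grep 192.168.39.254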
	I0729 18:34:11.265533 1073226 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0729 18:34:11.275261 1073226 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 18:34:11.275332 1073226 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0729 18:34:11.284505 1073226 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0729 18:34:11.299964 1073226 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 18:34:11.315409 1073226 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0729 18:34:11.331098 1073226 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0729 18:34:11.346943 1073226 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0729 18:34:11.350618 1073226 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
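	After the two /etc/hosts rewrites above, the guest's hosts file carries entries along these lines (illustrative, reconstructed from the commands):
	  192.168.39.1	host.minikube.internal
	  192.168.39.254	control-plane.minikube.internal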
	I0729 18:34:11.362526 1073226 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 18:34:11.497774 1073226 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 18:34:11.515028 1073226 certs.go:68] Setting up /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156 for IP: 192.168.39.225
	I0729 18:34:11.515052 1073226 certs.go:194] generating shared ca certs ...
	I0729 18:34:11.515074 1073226 certs.go:226] acquiring lock for ca certs: {Name:mkd1f0b3d7e82ac23e713dd6b75409e103935b02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 18:34:11.515269 1073226 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.key
	I0729 18:34:11.515321 1073226 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/proxy-client-ca.key
	I0729 18:34:11.515334 1073226 certs.go:256] generating profile certs ...
	I0729 18:34:11.515399 1073226 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156/client.key
	I0729 18:34:11.515417 1073226 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156/client.crt with IP's: []
	I0729 18:34:11.629698 1073226 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156/client.crt ...
	I0729 18:34:11.629729 1073226 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156/client.crt: {Name:mkcf0c8c421e3bc745f4d659be88beb13d3c52c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 18:34:11.629896 1073226 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156/client.key ...
	I0729 18:34:11.629907 1073226 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156/client.key: {Name:mk2ae492368446d4d6f640a1412db71e679b6a4c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 18:34:11.629979 1073226 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156/apiserver.key.bf0d8a41
	I0729 18:34:11.629994 1073226 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156/apiserver.crt.bf0d8a41 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.225 192.168.39.254]
	I0729 18:34:11.780702 1073226 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156/apiserver.crt.bf0d8a41 ...
	I0729 18:34:11.780733 1073226 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156/apiserver.crt.bf0d8a41: {Name:mk991287ed1b0820e95f5e1a7369781640893f3b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 18:34:11.780919 1073226 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156/apiserver.key.bf0d8a41 ...
	I0729 18:34:11.780938 1073226 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156/apiserver.key.bf0d8a41: {Name:mkbed67947aaf2a97af660c4e19dee0b6f97094e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 18:34:11.781034 1073226 certs.go:381] copying /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156/apiserver.crt.bf0d8a41 -> /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156/apiserver.crt
	I0729 18:34:11.781171 1073226 certs.go:385] copying /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156/apiserver.key.bf0d8a41 -> /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156/apiserver.key
	I0729 18:34:11.781264 1073226 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156/proxy-client.key
	I0729 18:34:11.781286 1073226 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156/proxy-client.crt with IP's: []
	I0729 18:34:11.881219 1073226 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156/proxy-client.crt ...
	I0729 18:34:11.881249 1073226 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156/proxy-client.crt: {Name:mkb3a421c339103c151b47edbb3d670b9b496119 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 18:34:11.881438 1073226 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156/proxy-client.key ...
	I0729 18:34:11.881456 1073226 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156/proxy-client.key: {Name:mk01aa350b22815cf8b5491d5ee4dc3c4eb9ac9b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 18:34:11.881548 1073226 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0729 18:34:11.881572 1073226 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0729 18:34:11.881590 1073226 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1055011/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0729 18:34:11.881614 1073226 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1055011/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0729 18:34:11.881632 1073226 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0729 18:34:11.881649 1073226 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0729 18:34:11.881662 1073226 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0729 18:34:11.881677 1073226 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0729 18:34:11.881743 1073226 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/1062272.pem (1338 bytes)
	W0729 18:34:11.881792 1073226 certs.go:480] ignoring /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/1062272_empty.pem, impossibly tiny 0 bytes
	I0729 18:34:11.881806 1073226 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 18:34:11.881838 1073226 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem (1082 bytes)
	I0729 18:34:11.881891 1073226 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/cert.pem (1123 bytes)
	I0729 18:34:11.881936 1073226 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/key.pem (1679 bytes)
	I0729 18:34:11.881990 1073226 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/files/etc/ssl/certs/10622722.pem (1708 bytes)
	I0729 18:34:11.882036 1073226 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1055011/.minikube/files/etc/ssl/certs/10622722.pem -> /usr/share/ca-certificates/10622722.pem
	I0729 18:34:11.882057 1073226 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0729 18:34:11.882076 1073226 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/1062272.pem -> /usr/share/ca-certificates/1062272.pem
	I0729 18:34:11.882749 1073226 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 18:34:11.908029 1073226 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0729 18:34:11.931249 1073226 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 18:34:11.953266 1073226 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0729 18:34:11.975192 1073226 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0729 18:34:11.997106 1073226 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0729 18:34:12.018602 1073226 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 18:34:12.040428 1073226 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0729 18:34:12.062433 1073226 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/files/etc/ssl/certs/10622722.pem --> /usr/share/ca-certificates/10622722.pem (1708 bytes)
	I0729 18:34:12.084409 1073226 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 18:34:12.106518 1073226 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/1062272.pem --> /usr/share/ca-certificates/1062272.pem (1338 bytes)
	I0729 18:34:12.128525 1073226 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 18:34:12.144283 1073226 ssh_runner.go:195] Run: openssl version
	I0729 18:34:12.149798 1073226 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10622722.pem && ln -fs /usr/share/ca-certificates/10622722.pem /etc/ssl/certs/10622722.pem"
	I0729 18:34:12.160166 1073226 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10622722.pem
	I0729 18:34:12.164365 1073226 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 18:30 /usr/share/ca-certificates/10622722.pem
	I0729 18:34:12.164420 1073226 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10622722.pem
	I0729 18:34:12.170275 1073226 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10622722.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 18:34:12.181035 1073226 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 18:34:12.192184 1073226 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 18:34:12.196929 1073226 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 18:17 /usr/share/ca-certificates/minikubeCA.pem
	I0729 18:34:12.196984 1073226 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 18:34:12.202832 1073226 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 18:34:12.213379 1073226 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1062272.pem && ln -fs /usr/share/ca-certificates/1062272.pem /etc/ssl/certs/1062272.pem"
	I0729 18:34:12.223777 1073226 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1062272.pem
	I0729 18:34:12.228130 1073226 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 18:30 /usr/share/ca-certificates/1062272.pem
	I0729 18:34:12.228177 1073226 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1062272.pem
	I0729 18:34:12.233665 1073226 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1062272.pem /etc/ssl/certs/51391683.0"
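	The symlink names above are the OpenSSL subject hashes of the corresponding CA certificates, so the same links can be reproduced by hand, as a sketch (paths taken from the commands above):
	  h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # yields b5213941 here
	  sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"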
	I0729 18:34:12.244037 1073226 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 18:34:12.247824 1073226 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0729 18:34:12.247888 1073226 kubeadm.go:392] StartCluster: {Name:ha-344156 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-344156 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.225 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 18:34:12.247995 1073226 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 18:34:12.248053 1073226 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 18:34:12.283862 1073226 cri.go:89] found id: ""
	I0729 18:34:12.283951 1073226 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 18:34:12.296914 1073226 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 18:34:12.307181 1073226 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 18:34:12.324602 1073226 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 18:34:12.324618 1073226 kubeadm.go:157] found existing configuration files:
	
	I0729 18:34:12.324657 1073226 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 18:34:12.333200 1073226 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 18:34:12.333246 1073226 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 18:34:12.342145 1073226 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 18:34:12.355917 1073226 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 18:34:12.355962 1073226 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 18:34:12.370171 1073226 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 18:34:12.379020 1073226 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 18:34:12.379056 1073226 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 18:34:12.387989 1073226 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 18:34:12.396846 1073226 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 18:34:12.396874 1073226 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 18:34:12.405760 1073226 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 18:34:12.642376 1073226 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 18:34:23.597640 1073226 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0729 18:34:23.597724 1073226 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 18:34:23.597787 1073226 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 18:34:23.597867 1073226 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 18:34:23.597982 1073226 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0729 18:34:23.598060 1073226 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 18:34:23.599582 1073226 out.go:204]   - Generating certificates and keys ...
	I0729 18:34:23.599687 1073226 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 18:34:23.599784 1073226 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 18:34:23.599878 1073226 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0729 18:34:23.599960 1073226 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0729 18:34:23.600047 1073226 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0729 18:34:23.600119 1073226 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0729 18:34:23.600194 1073226 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0729 18:34:23.600359 1073226 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-344156 localhost] and IPs [192.168.39.225 127.0.0.1 ::1]
	I0729 18:34:23.600430 1073226 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0729 18:34:23.600585 1073226 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-344156 localhost] and IPs [192.168.39.225 127.0.0.1 ::1]
	I0729 18:34:23.600673 1073226 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0729 18:34:23.600760 1073226 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0729 18:34:23.600819 1073226 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0729 18:34:23.600908 1073226 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 18:34:23.601008 1073226 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 18:34:23.601098 1073226 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0729 18:34:23.601147 1073226 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 18:34:23.601205 1073226 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 18:34:23.601248 1073226 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 18:34:23.601350 1073226 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 18:34:23.601445 1073226 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 18:34:23.603392 1073226 out.go:204]   - Booting up control plane ...
	I0729 18:34:23.603489 1073226 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 18:34:23.603575 1073226 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 18:34:23.603656 1073226 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 18:34:23.603772 1073226 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 18:34:23.603909 1073226 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 18:34:23.603952 1073226 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 18:34:23.604112 1073226 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0729 18:34:23.604228 1073226 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0729 18:34:23.604289 1073226 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.400017ms
	I0729 18:34:23.604392 1073226 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0729 18:34:23.604479 1073226 kubeadm.go:310] [api-check] The API server is healthy after 5.863407237s
	I0729 18:34:23.604628 1073226 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0729 18:34:23.604806 1073226 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0729 18:34:23.604865 1073226 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0729 18:34:23.605010 1073226 kubeadm.go:310] [mark-control-plane] Marking the node ha-344156 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0729 18:34:23.605056 1073226 kubeadm.go:310] [bootstrap-token] Using token: sgseks.zyny4ici27dvxrv8
	I0729 18:34:23.606101 1073226 out.go:204]   - Configuring RBAC rules ...
	I0729 18:34:23.606191 1073226 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0729 18:34:23.606263 1073226 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0729 18:34:23.606390 1073226 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0729 18:34:23.606505 1073226 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0729 18:34:23.606642 1073226 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0729 18:34:23.606756 1073226 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0729 18:34:23.606883 1073226 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0729 18:34:23.606921 1073226 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0729 18:34:23.606964 1073226 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0729 18:34:23.606970 1073226 kubeadm.go:310] 
	I0729 18:34:23.607037 1073226 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0729 18:34:23.607052 1073226 kubeadm.go:310] 
	I0729 18:34:23.607114 1073226 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0729 18:34:23.607120 1073226 kubeadm.go:310] 
	I0729 18:34:23.607158 1073226 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0729 18:34:23.607215 1073226 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0729 18:34:23.607276 1073226 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0729 18:34:23.607282 1073226 kubeadm.go:310] 
	I0729 18:34:23.607325 1073226 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0729 18:34:23.607338 1073226 kubeadm.go:310] 
	I0729 18:34:23.607377 1073226 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0729 18:34:23.607383 1073226 kubeadm.go:310] 
	I0729 18:34:23.607444 1073226 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0729 18:34:23.607509 1073226 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0729 18:34:23.607565 1073226 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0729 18:34:23.607571 1073226 kubeadm.go:310] 
	I0729 18:34:23.607639 1073226 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0729 18:34:23.607714 1073226 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0729 18:34:23.607720 1073226 kubeadm.go:310] 
	I0729 18:34:23.607806 1073226 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token sgseks.zyny4ici27dvxrv8 \
	I0729 18:34:23.607922 1073226 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:53e118302d1159bf0acd8cfe005edb508199276288ff17d6630ff7f7307f10b1 \
	I0729 18:34:23.607947 1073226 kubeadm.go:310] 	--control-plane 
	I0729 18:34:23.607955 1073226 kubeadm.go:310] 
	I0729 18:34:23.608040 1073226 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0729 18:34:23.608049 1073226 kubeadm.go:310] 
	I0729 18:34:23.608152 1073226 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token sgseks.zyny4ici27dvxrv8 \
	I0729 18:34:23.608270 1073226 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:53e118302d1159bf0acd8cfe005edb508199276288ff17d6630ff7f7307f10b1 
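	If the bootstrap token above expires (ttl 24h0m0s per the InitConfiguration earlier in this log), an equivalent worker join command can be regenerated on a control-plane node, as a sketch:
	  sudo kubeadm token create --print-join-command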
	I0729 18:34:23.608289 1073226 cni.go:84] Creating CNI manager for ""
	I0729 18:34:23.608298 1073226 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0729 18:34:23.609560 1073226 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0729 18:34:23.610663 1073226 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0729 18:34:23.616220 1073226 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.3/kubectl ...
	I0729 18:34:23.616237 1073226 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0729 18:34:23.633181 1073226 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0729 18:34:23.949245 1073226 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0729 18:34:23.949335 1073226 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:34:23.949384 1073226 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-344156 minikube.k8s.io/updated_at=2024_07_29T18_34_23_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=ff26f50543a99741e8a988a34136ffea2b41dbf0 minikube.k8s.io/name=ha-344156 minikube.k8s.io/primary=true
	I0729 18:34:23.964512 1073226 ops.go:34] apiserver oom_adj: -16
	I0729 18:34:24.041830 1073226 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:34:24.542767 1073226 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:34:25.042127 1073226 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:34:25.542523 1073226 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:34:26.042274 1073226 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:34:26.541910 1073226 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:34:27.041923 1073226 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:34:27.542326 1073226 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:34:28.042650 1073226 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:34:28.541905 1073226 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:34:29.042202 1073226 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:34:29.542480 1073226 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:34:30.042223 1073226 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:34:30.542488 1073226 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:34:31.042271 1073226 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:34:31.542196 1073226 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:34:32.042332 1073226 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:34:32.542177 1073226 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:34:33.041866 1073226 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:34:33.542092 1073226 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:34:34.042559 1073226 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:34:34.542475 1073226 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:34:35.042780 1073226 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:34:35.542015 1073226 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:34:35.636835 1073226 kubeadm.go:1113] duration metric: took 11.687570186s to wait for elevateKubeSystemPrivileges
	I0729 18:34:35.636876 1073226 kubeadm.go:394] duration metric: took 23.388999178s to StartCluster
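	The repeated "kubectl get sa default" calls above are effectively a wait loop for the default service account; a minimal shell equivalent, as a sketch using the same paths as the log:
	  until sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do sleep 0.5; done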
	I0729 18:34:35.636899 1073226 settings.go:142] acquiring lock: {Name:mk8657322241b3b1f65443d6cee1b2ccb99f315e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 18:34:35.636988 1073226 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19312-1055011/kubeconfig
	I0729 18:34:35.637720 1073226 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-1055011/kubeconfig: {Name:mkf834b33d9b214f3561db5b8f8958d26700afbb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 18:34:35.637945 1073226 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0729 18:34:35.637959 1073226 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0729 18:34:35.637999 1073226 addons.go:69] Setting storage-provisioner=true in profile "ha-344156"
	I0729 18:34:35.638027 1073226 addons.go:234] Setting addon storage-provisioner=true in "ha-344156"
	I0729 18:34:35.638035 1073226 addons.go:69] Setting default-storageclass=true in profile "ha-344156"
	I0729 18:34:35.637941 1073226 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.225 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 18:34:35.638058 1073226 host.go:66] Checking if "ha-344156" exists ...
	I0729 18:34:35.638061 1073226 start.go:241] waiting for startup goroutines ...
	I0729 18:34:35.638094 1073226 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-344156"
	I0729 18:34:35.638151 1073226 config.go:182] Loaded profile config "ha-344156": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 18:34:35.638426 1073226 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:34:35.638466 1073226 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:34:35.638545 1073226 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:34:35.638576 1073226 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:34:35.653885 1073226 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44845
	I0729 18:34:35.653893 1073226 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46327
	I0729 18:34:35.654390 1073226 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:34:35.654425 1073226 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:34:35.654907 1073226 main.go:141] libmachine: Using API Version  1
	I0729 18:34:35.654927 1073226 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:34:35.655049 1073226 main.go:141] libmachine: Using API Version  1
	I0729 18:34:35.655074 1073226 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:34:35.655286 1073226 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:34:35.655389 1073226 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:34:35.655484 1073226 main.go:141] libmachine: (ha-344156) Calling .GetState
	I0729 18:34:35.655952 1073226 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:34:35.655988 1073226 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:34:35.657694 1073226 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19312-1055011/kubeconfig
	I0729 18:34:35.658052 1073226 kapi.go:59] client config for ha-344156: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156/client.crt", KeyFile:"/home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156/client.key", CAFile:"/home/jenkins/minikube-integration/19312-1055011/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d03460), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0729 18:34:35.658604 1073226 cert_rotation.go:137] Starting client certificate rotation controller
	I0729 18:34:35.658972 1073226 addons.go:234] Setting addon default-storageclass=true in "ha-344156"
	I0729 18:34:35.659030 1073226 host.go:66] Checking if "ha-344156" exists ...
	I0729 18:34:35.659406 1073226 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:34:35.659441 1073226 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:34:35.670770 1073226 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38333
	I0729 18:34:35.671190 1073226 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:34:35.671657 1073226 main.go:141] libmachine: Using API Version  1
	I0729 18:34:35.671679 1073226 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:34:35.672006 1073226 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:34:35.672218 1073226 main.go:141] libmachine: (ha-344156) Calling .GetState
	I0729 18:34:35.674044 1073226 main.go:141] libmachine: (ha-344156) Calling .DriverName
	I0729 18:34:35.674070 1073226 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33855
	I0729 18:34:35.674454 1073226 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:34:35.674892 1073226 main.go:141] libmachine: Using API Version  1
	I0729 18:34:35.674912 1073226 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:34:35.675242 1073226 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:34:35.675794 1073226 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:34:35.675823 1073226 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:34:35.676169 1073226 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 18:34:35.677540 1073226 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 18:34:35.677564 1073226 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0729 18:34:35.677579 1073226 main.go:141] libmachine: (ha-344156) Calling .GetSSHHostname
	I0729 18:34:35.680423 1073226 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
	I0729 18:34:35.680829 1073226 main.go:141] libmachine: (ha-344156) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:fc:98", ip: ""} in network mk-ha-344156: {Iface:virbr1 ExpiryTime:2024-07-29 19:33:59 +0000 UTC Type:0 Mac:52:54:00:a1:fc:98 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-344156 Clientid:01:52:54:00:a1:fc:98}
	I0729 18:34:35.680855 1073226 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined IP address 192.168.39.225 and MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
	I0729 18:34:35.681019 1073226 main.go:141] libmachine: (ha-344156) Calling .GetSSHPort
	I0729 18:34:35.681182 1073226 main.go:141] libmachine: (ha-344156) Calling .GetSSHKeyPath
	I0729 18:34:35.681340 1073226 main.go:141] libmachine: (ha-344156) Calling .GetSSHUsername
	I0729 18:34:35.681482 1073226 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/ha-344156/id_rsa Username:docker}
	I0729 18:34:35.693141 1073226 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38233
	I0729 18:34:35.693524 1073226 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:34:35.694037 1073226 main.go:141] libmachine: Using API Version  1
	I0729 18:34:35.694063 1073226 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:34:35.694420 1073226 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:34:35.694615 1073226 main.go:141] libmachine: (ha-344156) Calling .GetState
	I0729 18:34:35.696046 1073226 main.go:141] libmachine: (ha-344156) Calling .DriverName
	I0729 18:34:35.696239 1073226 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0729 18:34:35.696252 1073226 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0729 18:34:35.696268 1073226 main.go:141] libmachine: (ha-344156) Calling .GetSSHHostname
	I0729 18:34:35.698730 1073226 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
	I0729 18:34:35.699174 1073226 main.go:141] libmachine: (ha-344156) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:fc:98", ip: ""} in network mk-ha-344156: {Iface:virbr1 ExpiryTime:2024-07-29 19:33:59 +0000 UTC Type:0 Mac:52:54:00:a1:fc:98 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-344156 Clientid:01:52:54:00:a1:fc:98}
	I0729 18:34:35.699203 1073226 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined IP address 192.168.39.225 and MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
	I0729 18:34:35.699389 1073226 main.go:141] libmachine: (ha-344156) Calling .GetSSHPort
	I0729 18:34:35.699579 1073226 main.go:141] libmachine: (ha-344156) Calling .GetSSHKeyPath
	I0729 18:34:35.699740 1073226 main.go:141] libmachine: (ha-344156) Calling .GetSSHUsername
	I0729 18:34:35.699898 1073226 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/ha-344156/id_rsa Username:docker}
	I0729 18:34:35.758239 1073226 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0729 18:34:35.797694 1073226 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 18:34:35.906121 1073226 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0729 18:34:36.202968 1073226 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
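	The sed pipeline above injects a hosts block (plus a log directive) into the coredns Corefile; the added stanza looks roughly like this (illustrative, reconstructed from the sed expressions):
	        hosts {
	           192.168.39.1 host.minikube.internal
	           fallthrough
	        }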
	I0729 18:34:36.547904 1073226 main.go:141] libmachine: Making call to close driver server
	I0729 18:34:36.547929 1073226 main.go:141] libmachine: (ha-344156) Calling .Close
	I0729 18:34:36.547928 1073226 main.go:141] libmachine: Making call to close driver server
	I0729 18:34:36.547951 1073226 main.go:141] libmachine: (ha-344156) Calling .Close
	I0729 18:34:36.548258 1073226 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:34:36.548354 1073226 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:34:36.548368 1073226 main.go:141] libmachine: Making call to close driver server
	I0729 18:34:36.548370 1073226 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:34:36.548384 1073226 main.go:141] libmachine: (ha-344156) Calling .Close
	I0729 18:34:36.548392 1073226 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:34:36.548409 1073226 main.go:141] libmachine: Making call to close driver server
	I0729 18:34:36.548335 1073226 main.go:141] libmachine: (ha-344156) DBG | Closing plugin on server side
	I0729 18:34:36.548438 1073226 main.go:141] libmachine: (ha-344156) DBG | Closing plugin on server side
	I0729 18:34:36.548422 1073226 main.go:141] libmachine: (ha-344156) Calling .Close
	I0729 18:34:36.548658 1073226 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:34:36.548671 1073226 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:34:36.548914 1073226 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:34:36.548943 1073226 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:34:36.549087 1073226 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0729 18:34:36.549097 1073226 round_trippers.go:469] Request Headers:
	I0729 18:34:36.549107 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:34:36.549116 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:34:36.565136 1073226 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I0729 18:34:36.565928 1073226 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0729 18:34:36.565947 1073226 round_trippers.go:469] Request Headers:
	I0729 18:34:36.565954 1073226 round_trippers.go:473]     Content-Type: application/json
	I0729 18:34:36.565958 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:34:36.565961 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:34:36.579656 1073226 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0729 18:34:36.579850 1073226 main.go:141] libmachine: Making call to close driver server
	I0729 18:34:36.579864 1073226 main.go:141] libmachine: (ha-344156) Calling .Close
	I0729 18:34:36.580137 1073226 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:34:36.580157 1073226 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:34:36.581895 1073226 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0729 18:34:36.583107 1073226 addons.go:510] duration metric: took 945.141264ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0729 18:34:36.583153 1073226 start.go:246] waiting for cluster config update ...
	I0729 18:34:36.583168 1073226 start.go:255] writing updated cluster config ...
	I0729 18:34:36.584831 1073226 out.go:177] 
	I0729 18:34:36.586335 1073226 config.go:182] Loaded profile config "ha-344156": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 18:34:36.586439 1073226 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156/config.json ...
	I0729 18:34:36.589087 1073226 out.go:177] * Starting "ha-344156-m02" control-plane node in "ha-344156" cluster
	I0729 18:34:36.590510 1073226 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 18:34:36.590538 1073226 cache.go:56] Caching tarball of preloaded images
	I0729 18:34:36.590631 1073226 preload.go:172] Found /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 18:34:36.590648 1073226 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0729 18:34:36.590741 1073226 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156/config.json ...
	I0729 18:34:36.590953 1073226 start.go:360] acquireMachinesLock for ha-344156-m02: {Name:mk0d8d947666df844b5fc2c0e0eebbfed69b4140 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 18:34:36.591016 1073226 start.go:364] duration metric: took 36.328µs to acquireMachinesLock for "ha-344156-m02"
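
The acquireMachinesLock line above shows a Delay of 500ms and a Timeout of 13m0s. A small Go sketch of a poll-until-timeout acquisition under those two parameters is below; acquireWithTimeout and the use of sync.Mutex.TryLock are assumptions for illustration, not minikube's lock implementation.

package main

import (
	"errors"
	"fmt"
	"sync"
	"time"
)

// acquireWithTimeout polls tryAcquire every delay until it succeeds or the
// timeout elapses, mirroring the Delay/Timeout fields visible in the log.
func acquireWithTimeout(tryAcquire func() bool, delay, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if tryAcquire() {
			return nil
		}
		if time.Now().After(deadline) {
			return errors.New("timed out waiting for machines lock")
		}
		time.Sleep(delay)
	}
}

func main() {
	var mu sync.Mutex
	start := time.Now()
	err := acquireWithTimeout(mu.TryLock, 500*time.Millisecond, 13*time.Minute)
	fmt.Println(err, "took", time.Since(start))
}
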
	I0729 18:34:36.591040 1073226 start.go:93] Provisioning new machine with config: &{Name:ha-344156 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-344156 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.225 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 18:34:36.591147 1073226 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0729 18:34:36.592716 1073226 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 18:34:36.592826 1073226 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:34:36.592861 1073226 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:34:36.608998 1073226 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33383
	I0729 18:34:36.609514 1073226 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:34:36.610045 1073226 main.go:141] libmachine: Using API Version  1
	I0729 18:34:36.610072 1073226 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:34:36.610395 1073226 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:34:36.610583 1073226 main.go:141] libmachine: (ha-344156-m02) Calling .GetMachineName
	I0729 18:34:36.610750 1073226 main.go:141] libmachine: (ha-344156-m02) Calling .DriverName
	I0729 18:34:36.610944 1073226 start.go:159] libmachine.API.Create for "ha-344156" (driver="kvm2")
	I0729 18:34:36.610967 1073226 client.go:168] LocalClient.Create starting
	I0729 18:34:36.611005 1073226 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem
	I0729 18:34:36.611043 1073226 main.go:141] libmachine: Decoding PEM data...
	I0729 18:34:36.611065 1073226 main.go:141] libmachine: Parsing certificate...
	I0729 18:34:36.611139 1073226 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/cert.pem
	I0729 18:34:36.611166 1073226 main.go:141] libmachine: Decoding PEM data...
	I0729 18:34:36.611181 1073226 main.go:141] libmachine: Parsing certificate...
	I0729 18:34:36.611207 1073226 main.go:141] libmachine: Running pre-create checks...
	I0729 18:34:36.611218 1073226 main.go:141] libmachine: (ha-344156-m02) Calling .PreCreateCheck
	I0729 18:34:36.611452 1073226 main.go:141] libmachine: (ha-344156-m02) Calling .GetConfigRaw
	I0729 18:34:36.611857 1073226 main.go:141] libmachine: Creating machine...
	I0729 18:34:36.611873 1073226 main.go:141] libmachine: (ha-344156-m02) Calling .Create
	I0729 18:34:36.612019 1073226 main.go:141] libmachine: (ha-344156-m02) Creating KVM machine...
	I0729 18:34:36.613126 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | found existing default KVM network
	I0729 18:34:36.613276 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | found existing private KVM network mk-ha-344156
	I0729 18:34:36.613436 1073226 main.go:141] libmachine: (ha-344156-m02) Setting up store path in /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/ha-344156-m02 ...
	I0729 18:34:36.613464 1073226 main.go:141] libmachine: (ha-344156-m02) Building disk image from file:///home/jenkins/minikube-integration/19312-1055011/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso
	I0729 18:34:36.613536 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | I0729 18:34:36.613428 1073621 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19312-1055011/.minikube
	I0729 18:34:36.613626 1073226 main.go:141] libmachine: (ha-344156-m02) Downloading /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19312-1055011/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso...
	I0729 18:34:36.890782 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | I0729 18:34:36.890652 1073621 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/ha-344156-m02/id_rsa...
	I0729 18:34:36.976727 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | I0729 18:34:36.976623 1073621 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/ha-344156-m02/ha-344156-m02.rawdisk...
	I0729 18:34:36.976763 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | Writing magic tar header
	I0729 18:34:36.976777 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | Writing SSH key tar header
	I0729 18:34:36.976793 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | I0729 18:34:36.976756 1073621 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/ha-344156-m02 ...
	I0729 18:34:36.976904 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/ha-344156-m02
	I0729 18:34:36.976940 1073226 main.go:141] libmachine: (ha-344156-m02) Setting executable bit set on /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/ha-344156-m02 (perms=drwx------)
	I0729 18:34:36.976952 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19312-1055011/.minikube/machines
	I0729 18:34:36.976969 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19312-1055011/.minikube
	I0729 18:34:36.976983 1073226 main.go:141] libmachine: (ha-344156-m02) Setting executable bit set on /home/jenkins/minikube-integration/19312-1055011/.minikube/machines (perms=drwxr-xr-x)
	I0729 18:34:36.976996 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19312-1055011
	I0729 18:34:36.977020 1073226 main.go:141] libmachine: (ha-344156-m02) Setting executable bit set on /home/jenkins/minikube-integration/19312-1055011/.minikube (perms=drwxr-xr-x)
	I0729 18:34:36.977035 1073226 main.go:141] libmachine: (ha-344156-m02) Setting executable bit set on /home/jenkins/minikube-integration/19312-1055011 (perms=drwxrwxr-x)
	I0729 18:34:36.977040 1073226 main.go:141] libmachine: (ha-344156-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0729 18:34:36.977050 1073226 main.go:141] libmachine: (ha-344156-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0729 18:34:36.977055 1073226 main.go:141] libmachine: (ha-344156-m02) Creating domain...
	I0729 18:34:36.977061 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0729 18:34:36.977070 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | Checking permissions on dir: /home/jenkins
	I0729 18:34:36.977076 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | Checking permissions on dir: /home
	I0729 18:34:36.977082 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | Skipping /home - not owner
	I0729 18:34:36.978014 1073226 main.go:141] libmachine: (ha-344156-m02) define libvirt domain using xml: 
	I0729 18:34:36.978033 1073226 main.go:141] libmachine: (ha-344156-m02) <domain type='kvm'>
	I0729 18:34:36.978043 1073226 main.go:141] libmachine: (ha-344156-m02)   <name>ha-344156-m02</name>
	I0729 18:34:36.978051 1073226 main.go:141] libmachine: (ha-344156-m02)   <memory unit='MiB'>2200</memory>
	I0729 18:34:36.978059 1073226 main.go:141] libmachine: (ha-344156-m02)   <vcpu>2</vcpu>
	I0729 18:34:36.978067 1073226 main.go:141] libmachine: (ha-344156-m02)   <features>
	I0729 18:34:36.978075 1073226 main.go:141] libmachine: (ha-344156-m02)     <acpi/>
	I0729 18:34:36.978080 1073226 main.go:141] libmachine: (ha-344156-m02)     <apic/>
	I0729 18:34:36.978089 1073226 main.go:141] libmachine: (ha-344156-m02)     <pae/>
	I0729 18:34:36.978093 1073226 main.go:141] libmachine: (ha-344156-m02)     
	I0729 18:34:36.978098 1073226 main.go:141] libmachine: (ha-344156-m02)   </features>
	I0729 18:34:36.978103 1073226 main.go:141] libmachine: (ha-344156-m02)   <cpu mode='host-passthrough'>
	I0729 18:34:36.978108 1073226 main.go:141] libmachine: (ha-344156-m02)   
	I0729 18:34:36.978114 1073226 main.go:141] libmachine: (ha-344156-m02)   </cpu>
	I0729 18:34:36.978122 1073226 main.go:141] libmachine: (ha-344156-m02)   <os>
	I0729 18:34:36.978131 1073226 main.go:141] libmachine: (ha-344156-m02)     <type>hvm</type>
	I0729 18:34:36.978142 1073226 main.go:141] libmachine: (ha-344156-m02)     <boot dev='cdrom'/>
	I0729 18:34:36.978154 1073226 main.go:141] libmachine: (ha-344156-m02)     <boot dev='hd'/>
	I0729 18:34:36.978162 1073226 main.go:141] libmachine: (ha-344156-m02)     <bootmenu enable='no'/>
	I0729 18:34:36.978166 1073226 main.go:141] libmachine: (ha-344156-m02)   </os>
	I0729 18:34:36.978172 1073226 main.go:141] libmachine: (ha-344156-m02)   <devices>
	I0729 18:34:36.978177 1073226 main.go:141] libmachine: (ha-344156-m02)     <disk type='file' device='cdrom'>
	I0729 18:34:36.978191 1073226 main.go:141] libmachine: (ha-344156-m02)       <source file='/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/ha-344156-m02/boot2docker.iso'/>
	I0729 18:34:36.978202 1073226 main.go:141] libmachine: (ha-344156-m02)       <target dev='hdc' bus='scsi'/>
	I0729 18:34:36.978212 1073226 main.go:141] libmachine: (ha-344156-m02)       <readonly/>
	I0729 18:34:36.978220 1073226 main.go:141] libmachine: (ha-344156-m02)     </disk>
	I0729 18:34:36.978243 1073226 main.go:141] libmachine: (ha-344156-m02)     <disk type='file' device='disk'>
	I0729 18:34:36.978257 1073226 main.go:141] libmachine: (ha-344156-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0729 18:34:36.978267 1073226 main.go:141] libmachine: (ha-344156-m02)       <source file='/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/ha-344156-m02/ha-344156-m02.rawdisk'/>
	I0729 18:34:36.978273 1073226 main.go:141] libmachine: (ha-344156-m02)       <target dev='hda' bus='virtio'/>
	I0729 18:34:36.978279 1073226 main.go:141] libmachine: (ha-344156-m02)     </disk>
	I0729 18:34:36.978285 1073226 main.go:141] libmachine: (ha-344156-m02)     <interface type='network'>
	I0729 18:34:36.978290 1073226 main.go:141] libmachine: (ha-344156-m02)       <source network='mk-ha-344156'/>
	I0729 18:34:36.978326 1073226 main.go:141] libmachine: (ha-344156-m02)       <model type='virtio'/>
	I0729 18:34:36.978348 1073226 main.go:141] libmachine: (ha-344156-m02)     </interface>
	I0729 18:34:36.978359 1073226 main.go:141] libmachine: (ha-344156-m02)     <interface type='network'>
	I0729 18:34:36.978373 1073226 main.go:141] libmachine: (ha-344156-m02)       <source network='default'/>
	I0729 18:34:36.978386 1073226 main.go:141] libmachine: (ha-344156-m02)       <model type='virtio'/>
	I0729 18:34:36.978395 1073226 main.go:141] libmachine: (ha-344156-m02)     </interface>
	I0729 18:34:36.978408 1073226 main.go:141] libmachine: (ha-344156-m02)     <serial type='pty'>
	I0729 18:34:36.978423 1073226 main.go:141] libmachine: (ha-344156-m02)       <target port='0'/>
	I0729 18:34:36.978435 1073226 main.go:141] libmachine: (ha-344156-m02)     </serial>
	I0729 18:34:36.978445 1073226 main.go:141] libmachine: (ha-344156-m02)     <console type='pty'>
	I0729 18:34:36.978457 1073226 main.go:141] libmachine: (ha-344156-m02)       <target type='serial' port='0'/>
	I0729 18:34:36.978467 1073226 main.go:141] libmachine: (ha-344156-m02)     </console>
	I0729 18:34:36.978483 1073226 main.go:141] libmachine: (ha-344156-m02)     <rng model='virtio'>
	I0729 18:34:36.978500 1073226 main.go:141] libmachine: (ha-344156-m02)       <backend model='random'>/dev/random</backend>
	I0729 18:34:36.978528 1073226 main.go:141] libmachine: (ha-344156-m02)     </rng>
	I0729 18:34:36.978546 1073226 main.go:141] libmachine: (ha-344156-m02)     
	I0729 18:34:36.978575 1073226 main.go:141] libmachine: (ha-344156-m02)     
	I0729 18:34:36.978594 1073226 main.go:141] libmachine: (ha-344156-m02)   </devices>
	I0729 18:34:36.978604 1073226 main.go:141] libmachine: (ha-344156-m02) </domain>
	I0729 18:34:36.978614 1073226 main.go:141] libmachine: (ha-344156-m02) 
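
The libvirt domain XML dumped above is generated from the machine's parameters (name, memory, vCPUs, disk, networks). A trimmed Go sketch of producing such XML with text/template follows; the template fields and the domainParams struct are assumptions for this sketch, not minikube's actual types.

package main

import (
	"os"
	"text/template"
)

// domainTmpl is a cut-down libvirt domain template illustrating how the XML in
// the log could be rendered from machine parameters.
const domainTmpl = `<domain type='kvm'>
  <name>{{.Name}}</name>
  <memory unit='MiB'>{{.MemoryMiB}}</memory>
  <vcpu>{{.CPUs}}</vcpu>
  <devices>
    <disk type='file' device='disk'>
      <source file='{{.DiskPath}}'/>
      <target dev='hda' bus='virtio'/>
    </disk>
    <interface type='network'>
      <source network='{{.Network}}'/>
      <model type='virtio'/>
    </interface>
  </devices>
</domain>
`

type domainParams struct {
	Name      string
	MemoryMiB int
	CPUs      int
	DiskPath  string
	Network   string
}

func main() {
	t := template.Must(template.New("domain").Parse(domainTmpl))
	_ = t.Execute(os.Stdout, domainParams{
		Name:      "ha-344156-m02",
		MemoryMiB: 2200,
		CPUs:      2,
		DiskPath:  "/path/to/ha-344156-m02.rawdisk", // placeholder path
		Network:   "mk-ha-344156",
	})
}
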
	I0729 18:34:36.985387 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | domain ha-344156-m02 has defined MAC address 52:54:00:ad:7d:7c in network default
	I0729 18:34:36.985986 1073226 main.go:141] libmachine: (ha-344156-m02) Ensuring networks are active...
	I0729 18:34:36.986005 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | domain ha-344156-m02 has defined MAC address 52:54:00:99:a3:97 in network mk-ha-344156
	I0729 18:34:36.986742 1073226 main.go:141] libmachine: (ha-344156-m02) Ensuring network default is active
	I0729 18:34:36.987104 1073226 main.go:141] libmachine: (ha-344156-m02) Ensuring network mk-ha-344156 is active
	I0729 18:34:36.987489 1073226 main.go:141] libmachine: (ha-344156-m02) Getting domain xml...
	I0729 18:34:36.988159 1073226 main.go:141] libmachine: (ha-344156-m02) Creating domain...
	I0729 18:34:38.215213 1073226 main.go:141] libmachine: (ha-344156-m02) Waiting to get IP...
	I0729 18:34:38.216178 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | domain ha-344156-m02 has defined MAC address 52:54:00:99:a3:97 in network mk-ha-344156
	I0729 18:34:38.216692 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | unable to find current IP address of domain ha-344156-m02 in network mk-ha-344156
	I0729 18:34:38.216724 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | I0729 18:34:38.216663 1073621 retry.go:31] will retry after 192.743587ms: waiting for machine to come up
	I0729 18:34:38.411270 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | domain ha-344156-m02 has defined MAC address 52:54:00:99:a3:97 in network mk-ha-344156
	I0729 18:34:38.411730 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | unable to find current IP address of domain ha-344156-m02 in network mk-ha-344156
	I0729 18:34:38.411758 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | I0729 18:34:38.411691 1073621 retry.go:31] will retry after 325.808277ms: waiting for machine to come up
	I0729 18:34:38.739389 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | domain ha-344156-m02 has defined MAC address 52:54:00:99:a3:97 in network mk-ha-344156
	I0729 18:34:38.739828 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | unable to find current IP address of domain ha-344156-m02 in network mk-ha-344156
	I0729 18:34:38.739855 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | I0729 18:34:38.739792 1073621 retry.go:31] will retry after 424.809383ms: waiting for machine to come up
	I0729 18:34:39.165984 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | domain ha-344156-m02 has defined MAC address 52:54:00:99:a3:97 in network mk-ha-344156
	I0729 18:34:39.166362 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | unable to find current IP address of domain ha-344156-m02 in network mk-ha-344156
	I0729 18:34:39.166397 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | I0729 18:34:39.166326 1073621 retry.go:31] will retry after 605.465441ms: waiting for machine to come up
	I0729 18:34:39.773004 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | domain ha-344156-m02 has defined MAC address 52:54:00:99:a3:97 in network mk-ha-344156
	I0729 18:34:39.773530 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | unable to find current IP address of domain ha-344156-m02 in network mk-ha-344156
	I0729 18:34:39.773562 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | I0729 18:34:39.773460 1073621 retry.go:31] will retry after 703.376547ms: waiting for machine to come up
	I0729 18:34:40.478241 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | domain ha-344156-m02 has defined MAC address 52:54:00:99:a3:97 in network mk-ha-344156
	I0729 18:34:40.478719 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | unable to find current IP address of domain ha-344156-m02 in network mk-ha-344156
	I0729 18:34:40.478750 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | I0729 18:34:40.478660 1073621 retry.go:31] will retry after 880.682621ms: waiting for machine to come up
	I0729 18:34:41.360556 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | domain ha-344156-m02 has defined MAC address 52:54:00:99:a3:97 in network mk-ha-344156
	I0729 18:34:41.360958 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | unable to find current IP address of domain ha-344156-m02 in network mk-ha-344156
	I0729 18:34:41.360987 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | I0729 18:34:41.360915 1073621 retry.go:31] will retry after 995.983878ms: waiting for machine to come up
	I0729 18:34:42.358221 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | domain ha-344156-m02 has defined MAC address 52:54:00:99:a3:97 in network mk-ha-344156
	I0729 18:34:42.358641 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | unable to find current IP address of domain ha-344156-m02 in network mk-ha-344156
	I0729 18:34:42.358662 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | I0729 18:34:42.358599 1073621 retry.go:31] will retry after 1.181830881s: waiting for machine to come up
	I0729 18:34:43.541916 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | domain ha-344156-m02 has defined MAC address 52:54:00:99:a3:97 in network mk-ha-344156
	I0729 18:34:43.542421 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | unable to find current IP address of domain ha-344156-m02 in network mk-ha-344156
	I0729 18:34:43.542481 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | I0729 18:34:43.542305 1073621 retry.go:31] will retry after 1.736643534s: waiting for machine to come up
	I0729 18:34:45.281194 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | domain ha-344156-m02 has defined MAC address 52:54:00:99:a3:97 in network mk-ha-344156
	I0729 18:34:45.281674 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | unable to find current IP address of domain ha-344156-m02 in network mk-ha-344156
	I0729 18:34:45.281705 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | I0729 18:34:45.281608 1073621 retry.go:31] will retry after 2.275726311s: waiting for machine to come up
	I0729 18:34:47.558887 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | domain ha-344156-m02 has defined MAC address 52:54:00:99:a3:97 in network mk-ha-344156
	I0729 18:34:47.559306 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | unable to find current IP address of domain ha-344156-m02 in network mk-ha-344156
	I0729 18:34:47.559329 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | I0729 18:34:47.559257 1073621 retry.go:31] will retry after 2.748225942s: waiting for machine to come up
	I0729 18:34:50.308738 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | domain ha-344156-m02 has defined MAC address 52:54:00:99:a3:97 in network mk-ha-344156
	I0729 18:34:50.309228 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | unable to find current IP address of domain ha-344156-m02 in network mk-ha-344156
	I0729 18:34:50.309259 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | I0729 18:34:50.309176 1073621 retry.go:31] will retry after 2.570592713s: waiting for machine to come up
	I0729 18:34:52.882040 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | domain ha-344156-m02 has defined MAC address 52:54:00:99:a3:97 in network mk-ha-344156
	I0729 18:34:52.882452 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | unable to find current IP address of domain ha-344156-m02 in network mk-ha-344156
	I0729 18:34:52.882481 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | I0729 18:34:52.882399 1073621 retry.go:31] will retry after 4.385805767s: waiting for machine to come up
	I0729 18:34:57.269448 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | domain ha-344156-m02 has defined MAC address 52:54:00:99:a3:97 in network mk-ha-344156
	I0729 18:34:57.269863 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | domain ha-344156-m02 has current primary IP address 192.168.39.249 and MAC address 52:54:00:99:a3:97 in network mk-ha-344156
	I0729 18:34:57.269886 1073226 main.go:141] libmachine: (ha-344156-m02) Found IP for machine: 192.168.39.249
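
The retry.go lines above poll for the new domain's IP with a growing wait between attempts until the DHCP lease appears. A minimal Go sketch of that retry shape is below; waitForIP and its growth policy are illustrative assumptions, not the actual retry.go logic.

package main

import (
	"errors"
	"fmt"
	"time"
)

// waitForIP retries lookup with an increasing delay until it returns an
// address or the attempts are exhausted.
func waitForIP(lookup func() (string, error), attempts int) (string, error) {
	delay := 200 * time.Millisecond
	for i := 0; i < attempts; i++ {
		if ip, err := lookup(); err == nil {
			return ip, nil
		}
		time.Sleep(delay)
		delay += delay / 2 // grow the wait between attempts
	}
	return "", errors.New("machine never reported an IP address")
}

func main() {
	calls := 0
	ip, err := waitForIP(func() (string, error) {
		calls++
		if calls < 3 {
			return "", errors.New("no DHCP lease yet")
		}
		return "192.168.39.249", nil
	}, 10)
	fmt.Println(ip, err)
}
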
	I0729 18:34:57.269900 1073226 main.go:141] libmachine: (ha-344156-m02) Reserving static IP address...
	I0729 18:34:57.270257 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | unable to find host DHCP lease matching {name: "ha-344156-m02", mac: "52:54:00:99:a3:97", ip: "192.168.39.249"} in network mk-ha-344156
	I0729 18:34:57.341185 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | Getting to WaitForSSH function...
	I0729 18:34:57.341216 1073226 main.go:141] libmachine: (ha-344156-m02) Reserved static IP address: 192.168.39.249
	I0729 18:34:57.341229 1073226 main.go:141] libmachine: (ha-344156-m02) Waiting for SSH to be available...
	I0729 18:34:57.343817 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | domain ha-344156-m02 has defined MAC address 52:54:00:99:a3:97 in network mk-ha-344156
	I0729 18:34:57.344238 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:a3:97", ip: ""} in network mk-ha-344156: {Iface:virbr1 ExpiryTime:2024-07-29 19:34:51 +0000 UTC Type:0 Mac:52:54:00:99:a3:97 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:minikube Clientid:01:52:54:00:99:a3:97}
	I0729 18:34:57.344263 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | domain ha-344156-m02 has defined IP address 192.168.39.249 and MAC address 52:54:00:99:a3:97 in network mk-ha-344156
	I0729 18:34:57.344302 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | Using SSH client type: external
	I0729 18:34:57.344318 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/ha-344156-m02/id_rsa (-rw-------)
	I0729 18:34:57.344433 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.249 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/ha-344156-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 18:34:57.344453 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | About to run SSH command:
	I0729 18:34:57.344471 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | exit 0
	I0729 18:34:57.467216 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | SSH cmd err, output: <nil>: 
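
The WaitForSSH step above probes the guest by running "exit 0" over ssh with host-key checking disabled, succeeding once sshd answers. A Go sketch of that probe via os/exec follows; sshReady, the key path, and the retry loop are assumptions for illustration.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// sshReady runs `exit 0` on the target over ssh and reports whether the
// command succeeded, i.e. whether SSH is reachable and the key is accepted.
func sshReady(user, addr, keyPath string) bool {
	cmd := exec.Command("ssh",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "ConnectTimeout=10",
		"-i", keyPath,
		fmt.Sprintf("%s@%s", user, addr),
		"exit 0")
	return cmd.Run() == nil
}

func main() {
	for i := 0; i < 5; i++ {
		if sshReady("docker", "192.168.39.249", "/path/to/id_rsa") { // placeholder key path
			fmt.Println("SSH is available")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("gave up waiting for SSH")
}
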
	I0729 18:34:57.467478 1073226 main.go:141] libmachine: (ha-344156-m02) KVM machine creation complete!
	I0729 18:34:57.467831 1073226 main.go:141] libmachine: (ha-344156-m02) Calling .GetConfigRaw
	I0729 18:34:57.468411 1073226 main.go:141] libmachine: (ha-344156-m02) Calling .DriverName
	I0729 18:34:57.468614 1073226 main.go:141] libmachine: (ha-344156-m02) Calling .DriverName
	I0729 18:34:57.468782 1073226 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0729 18:34:57.468798 1073226 main.go:141] libmachine: (ha-344156-m02) Calling .GetState
	I0729 18:34:57.470034 1073226 main.go:141] libmachine: Detecting operating system of created instance...
	I0729 18:34:57.470047 1073226 main.go:141] libmachine: Waiting for SSH to be available...
	I0729 18:34:57.470052 1073226 main.go:141] libmachine: Getting to WaitForSSH function...
	I0729 18:34:57.470058 1073226 main.go:141] libmachine: (ha-344156-m02) Calling .GetSSHHostname
	I0729 18:34:57.472308 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | domain ha-344156-m02 has defined MAC address 52:54:00:99:a3:97 in network mk-ha-344156
	I0729 18:34:57.472707 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:a3:97", ip: ""} in network mk-ha-344156: {Iface:virbr1 ExpiryTime:2024-07-29 19:34:51 +0000 UTC Type:0 Mac:52:54:00:99:a3:97 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-344156-m02 Clientid:01:52:54:00:99:a3:97}
	I0729 18:34:57.472737 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | domain ha-344156-m02 has defined IP address 192.168.39.249 and MAC address 52:54:00:99:a3:97 in network mk-ha-344156
	I0729 18:34:57.472874 1073226 main.go:141] libmachine: (ha-344156-m02) Calling .GetSSHPort
	I0729 18:34:57.473052 1073226 main.go:141] libmachine: (ha-344156-m02) Calling .GetSSHKeyPath
	I0729 18:34:57.473226 1073226 main.go:141] libmachine: (ha-344156-m02) Calling .GetSSHKeyPath
	I0729 18:34:57.473376 1073226 main.go:141] libmachine: (ha-344156-m02) Calling .GetSSHUsername
	I0729 18:34:57.473544 1073226 main.go:141] libmachine: Using SSH client type: native
	I0729 18:34:57.473850 1073226 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.249 22 <nil> <nil>}
	I0729 18:34:57.473870 1073226 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0729 18:34:57.574390 1073226 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 18:34:57.574414 1073226 main.go:141] libmachine: Detecting the provisioner...
	I0729 18:34:57.574422 1073226 main.go:141] libmachine: (ha-344156-m02) Calling .GetSSHHostname
	I0729 18:34:57.577605 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | domain ha-344156-m02 has defined MAC address 52:54:00:99:a3:97 in network mk-ha-344156
	I0729 18:34:57.578081 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:a3:97", ip: ""} in network mk-ha-344156: {Iface:virbr1 ExpiryTime:2024-07-29 19:34:51 +0000 UTC Type:0 Mac:52:54:00:99:a3:97 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-344156-m02 Clientid:01:52:54:00:99:a3:97}
	I0729 18:34:57.578112 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | domain ha-344156-m02 has defined IP address 192.168.39.249 and MAC address 52:54:00:99:a3:97 in network mk-ha-344156
	I0729 18:34:57.578295 1073226 main.go:141] libmachine: (ha-344156-m02) Calling .GetSSHPort
	I0729 18:34:57.578503 1073226 main.go:141] libmachine: (ha-344156-m02) Calling .GetSSHKeyPath
	I0729 18:34:57.578666 1073226 main.go:141] libmachine: (ha-344156-m02) Calling .GetSSHKeyPath
	I0729 18:34:57.578882 1073226 main.go:141] libmachine: (ha-344156-m02) Calling .GetSSHUsername
	I0729 18:34:57.579067 1073226 main.go:141] libmachine: Using SSH client type: native
	I0729 18:34:57.579283 1073226 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.249 22 <nil> <nil>}
	I0729 18:34:57.579301 1073226 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0729 18:34:57.679800 1073226 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0729 18:34:57.679911 1073226 main.go:141] libmachine: found compatible host: buildroot
	I0729 18:34:57.679927 1073226 main.go:141] libmachine: Provisioning with buildroot...
	I0729 18:34:57.679939 1073226 main.go:141] libmachine: (ha-344156-m02) Calling .GetMachineName
	I0729 18:34:57.680207 1073226 buildroot.go:166] provisioning hostname "ha-344156-m02"
	I0729 18:34:57.680236 1073226 main.go:141] libmachine: (ha-344156-m02) Calling .GetMachineName
	I0729 18:34:57.680413 1073226 main.go:141] libmachine: (ha-344156-m02) Calling .GetSSHHostname
	I0729 18:34:57.683173 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | domain ha-344156-m02 has defined MAC address 52:54:00:99:a3:97 in network mk-ha-344156
	I0729 18:34:57.683473 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:a3:97", ip: ""} in network mk-ha-344156: {Iface:virbr1 ExpiryTime:2024-07-29 19:34:51 +0000 UTC Type:0 Mac:52:54:00:99:a3:97 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-344156-m02 Clientid:01:52:54:00:99:a3:97}
	I0729 18:34:57.683505 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | domain ha-344156-m02 has defined IP address 192.168.39.249 and MAC address 52:54:00:99:a3:97 in network mk-ha-344156
	I0729 18:34:57.683632 1073226 main.go:141] libmachine: (ha-344156-m02) Calling .GetSSHPort
	I0729 18:34:57.683814 1073226 main.go:141] libmachine: (ha-344156-m02) Calling .GetSSHKeyPath
	I0729 18:34:57.683983 1073226 main.go:141] libmachine: (ha-344156-m02) Calling .GetSSHKeyPath
	I0729 18:34:57.684140 1073226 main.go:141] libmachine: (ha-344156-m02) Calling .GetSSHUsername
	I0729 18:34:57.684304 1073226 main.go:141] libmachine: Using SSH client type: native
	I0729 18:34:57.684506 1073226 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.249 22 <nil> <nil>}
	I0729 18:34:57.684522 1073226 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-344156-m02 && echo "ha-344156-m02" | sudo tee /etc/hostname
	I0729 18:34:57.801847 1073226 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-344156-m02
	
	I0729 18:34:57.801875 1073226 main.go:141] libmachine: (ha-344156-m02) Calling .GetSSHHostname
	I0729 18:34:57.804836 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | domain ha-344156-m02 has defined MAC address 52:54:00:99:a3:97 in network mk-ha-344156
	I0729 18:34:57.805144 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:a3:97", ip: ""} in network mk-ha-344156: {Iface:virbr1 ExpiryTime:2024-07-29 19:34:51 +0000 UTC Type:0 Mac:52:54:00:99:a3:97 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-344156-m02 Clientid:01:52:54:00:99:a3:97}
	I0729 18:34:57.805174 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | domain ha-344156-m02 has defined IP address 192.168.39.249 and MAC address 52:54:00:99:a3:97 in network mk-ha-344156
	I0729 18:34:57.805372 1073226 main.go:141] libmachine: (ha-344156-m02) Calling .GetSSHPort
	I0729 18:34:57.805580 1073226 main.go:141] libmachine: (ha-344156-m02) Calling .GetSSHKeyPath
	I0729 18:34:57.805744 1073226 main.go:141] libmachine: (ha-344156-m02) Calling .GetSSHKeyPath
	I0729 18:34:57.805899 1073226 main.go:141] libmachine: (ha-344156-m02) Calling .GetSSHUsername
	I0729 18:34:57.806074 1073226 main.go:141] libmachine: Using SSH client type: native
	I0729 18:34:57.806247 1073226 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.249 22 <nil> <nil>}
	I0729 18:34:57.806263 1073226 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-344156-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-344156-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-344156-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 18:34:57.916595 1073226 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 18:34:57.916639 1073226 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19312-1055011/.minikube CaCertPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19312-1055011/.minikube}
	I0729 18:34:57.916666 1073226 buildroot.go:174] setting up certificates
	I0729 18:34:57.916682 1073226 provision.go:84] configureAuth start
	I0729 18:34:57.916700 1073226 main.go:141] libmachine: (ha-344156-m02) Calling .GetMachineName
	I0729 18:34:57.916987 1073226 main.go:141] libmachine: (ha-344156-m02) Calling .GetIP
	I0729 18:34:57.919519 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | domain ha-344156-m02 has defined MAC address 52:54:00:99:a3:97 in network mk-ha-344156
	I0729 18:34:57.919905 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:a3:97", ip: ""} in network mk-ha-344156: {Iface:virbr1 ExpiryTime:2024-07-29 19:34:51 +0000 UTC Type:0 Mac:52:54:00:99:a3:97 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-344156-m02 Clientid:01:52:54:00:99:a3:97}
	I0729 18:34:57.919934 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | domain ha-344156-m02 has defined IP address 192.168.39.249 and MAC address 52:54:00:99:a3:97 in network mk-ha-344156
	I0729 18:34:57.920094 1073226 main.go:141] libmachine: (ha-344156-m02) Calling .GetSSHHostname
	I0729 18:34:57.923248 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | domain ha-344156-m02 has defined MAC address 52:54:00:99:a3:97 in network mk-ha-344156
	I0729 18:34:57.923583 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:a3:97", ip: ""} in network mk-ha-344156: {Iface:virbr1 ExpiryTime:2024-07-29 19:34:51 +0000 UTC Type:0 Mac:52:54:00:99:a3:97 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-344156-m02 Clientid:01:52:54:00:99:a3:97}
	I0729 18:34:57.923611 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | domain ha-344156-m02 has defined IP address 192.168.39.249 and MAC address 52:54:00:99:a3:97 in network mk-ha-344156
	I0729 18:34:57.923753 1073226 provision.go:143] copyHostCerts
	I0729 18:34:57.923793 1073226 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.pem
	I0729 18:34:57.923826 1073226 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.pem, removing ...
	I0729 18:34:57.923835 1073226 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.pem
	I0729 18:34:57.923893 1073226 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.pem (1082 bytes)
	I0729 18:34:57.923963 1073226 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19312-1055011/.minikube/cert.pem
	I0729 18:34:57.923981 1073226 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-1055011/.minikube/cert.pem, removing ...
	I0729 18:34:57.923987 1073226 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-1055011/.minikube/cert.pem
	I0729 18:34:57.924010 1073226 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19312-1055011/.minikube/cert.pem (1123 bytes)
	I0729 18:34:57.924061 1073226 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19312-1055011/.minikube/key.pem
	I0729 18:34:57.924078 1073226 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-1055011/.minikube/key.pem, removing ...
	I0729 18:34:57.924084 1073226 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-1055011/.minikube/key.pem
	I0729 18:34:57.924106 1073226 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19312-1055011/.minikube/key.pem (1679 bytes)
	I0729 18:34:57.924151 1073226 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca-key.pem org=jenkins.ha-344156-m02 san=[127.0.0.1 192.168.39.249 ha-344156-m02 localhost minikube]
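
The provision.go line above generates a server certificate whose SANs cover the node's IPs and hostnames (127.0.0.1, 192.168.39.249, ha-344156-m02, localhost, minikube) with the CertExpiration from the config. A self-contained Go sketch of producing a certificate with those SANs is below; it self-signs for brevity, whereas the real flow signs with the cluster CA key, and selfSignedServerCert is an illustrative name.

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

// selfSignedServerCert builds a DER-encoded server certificate whose SANs
// match the DNS names and IPs passed in.
func selfSignedServerCert(dnsNames []string, ips []net.IP) ([]byte, error) {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		return nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-344156-m02"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config above
		DNSNames:     dnsNames,
		IPAddresses:  ips,
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	return x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
}

func main() {
	der, err := selfSignedServerCert(
		[]string{"ha-344156-m02", "localhost", "minikube"},
		[]net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.249")},
	)
	fmt.Println(len(der), err)
}
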
	I0729 18:34:58.007732 1073226 provision.go:177] copyRemoteCerts
	I0729 18:34:58.007794 1073226 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 18:34:58.007818 1073226 main.go:141] libmachine: (ha-344156-m02) Calling .GetSSHHostname
	I0729 18:34:58.010265 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | domain ha-344156-m02 has defined MAC address 52:54:00:99:a3:97 in network mk-ha-344156
	I0729 18:34:58.010569 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:a3:97", ip: ""} in network mk-ha-344156: {Iface:virbr1 ExpiryTime:2024-07-29 19:34:51 +0000 UTC Type:0 Mac:52:54:00:99:a3:97 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-344156-m02 Clientid:01:52:54:00:99:a3:97}
	I0729 18:34:58.010600 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | domain ha-344156-m02 has defined IP address 192.168.39.249 and MAC address 52:54:00:99:a3:97 in network mk-ha-344156
	I0729 18:34:58.010743 1073226 main.go:141] libmachine: (ha-344156-m02) Calling .GetSSHPort
	I0729 18:34:58.010919 1073226 main.go:141] libmachine: (ha-344156-m02) Calling .GetSSHKeyPath
	I0729 18:34:58.011057 1073226 main.go:141] libmachine: (ha-344156-m02) Calling .GetSSHUsername
	I0729 18:34:58.011162 1073226 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/ha-344156-m02/id_rsa Username:docker}
	I0729 18:34:58.093105 1073226 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0729 18:34:58.093165 1073226 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0729 18:34:58.120080 1073226 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0729 18:34:58.120142 1073226 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0729 18:34:58.142767 1073226 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0729 18:34:58.142841 1073226 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0729 18:34:58.167082 1073226 provision.go:87] duration metric: took 250.381441ms to configureAuth
	I0729 18:34:58.167113 1073226 buildroot.go:189] setting minikube options for container-runtime
	I0729 18:34:58.167317 1073226 config.go:182] Loaded profile config "ha-344156": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 18:34:58.167404 1073226 main.go:141] libmachine: (ha-344156-m02) Calling .GetSSHHostname
	I0729 18:34:58.170147 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | domain ha-344156-m02 has defined MAC address 52:54:00:99:a3:97 in network mk-ha-344156
	I0729 18:34:58.170599 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:a3:97", ip: ""} in network mk-ha-344156: {Iface:virbr1 ExpiryTime:2024-07-29 19:34:51 +0000 UTC Type:0 Mac:52:54:00:99:a3:97 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-344156-m02 Clientid:01:52:54:00:99:a3:97}
	I0729 18:34:58.170629 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | domain ha-344156-m02 has defined IP address 192.168.39.249 and MAC address 52:54:00:99:a3:97 in network mk-ha-344156
	I0729 18:34:58.170790 1073226 main.go:141] libmachine: (ha-344156-m02) Calling .GetSSHPort
	I0729 18:34:58.170976 1073226 main.go:141] libmachine: (ha-344156-m02) Calling .GetSSHKeyPath
	I0729 18:34:58.171123 1073226 main.go:141] libmachine: (ha-344156-m02) Calling .GetSSHKeyPath
	I0729 18:34:58.171278 1073226 main.go:141] libmachine: (ha-344156-m02) Calling .GetSSHUsername
	I0729 18:34:58.171436 1073226 main.go:141] libmachine: Using SSH client type: native
	I0729 18:34:58.171657 1073226 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.249 22 <nil> <nil>}
	I0729 18:34:58.171677 1073226 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 18:34:58.450547 1073226 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 18:34:58.450578 1073226 main.go:141] libmachine: Checking connection to Docker...
	I0729 18:34:58.450594 1073226 main.go:141] libmachine: (ha-344156-m02) Calling .GetURL
	I0729 18:34:58.451880 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | Using libvirt version 6000000
	I0729 18:34:58.453891 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | domain ha-344156-m02 has defined MAC address 52:54:00:99:a3:97 in network mk-ha-344156
	I0729 18:34:58.454185 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:a3:97", ip: ""} in network mk-ha-344156: {Iface:virbr1 ExpiryTime:2024-07-29 19:34:51 +0000 UTC Type:0 Mac:52:54:00:99:a3:97 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-344156-m02 Clientid:01:52:54:00:99:a3:97}
	I0729 18:34:58.454209 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | domain ha-344156-m02 has defined IP address 192.168.39.249 and MAC address 52:54:00:99:a3:97 in network mk-ha-344156
	I0729 18:34:58.454431 1073226 main.go:141] libmachine: Docker is up and running!
	I0729 18:34:58.454446 1073226 main.go:141] libmachine: Reticulating splines...
	I0729 18:34:58.454455 1073226 client.go:171] duration metric: took 21.843478371s to LocalClient.Create
	I0729 18:34:58.454477 1073226 start.go:167] duration metric: took 21.843534449s to libmachine.API.Create "ha-344156"
	I0729 18:34:58.454487 1073226 start.go:293] postStartSetup for "ha-344156-m02" (driver="kvm2")
	I0729 18:34:58.454521 1073226 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 18:34:58.454545 1073226 main.go:141] libmachine: (ha-344156-m02) Calling .DriverName
	I0729 18:34:58.454878 1073226 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 18:34:58.454912 1073226 main.go:141] libmachine: (ha-344156-m02) Calling .GetSSHHostname
	I0729 18:34:58.457207 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | domain ha-344156-m02 has defined MAC address 52:54:00:99:a3:97 in network mk-ha-344156
	I0729 18:34:58.457533 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:a3:97", ip: ""} in network mk-ha-344156: {Iface:virbr1 ExpiryTime:2024-07-29 19:34:51 +0000 UTC Type:0 Mac:52:54:00:99:a3:97 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-344156-m02 Clientid:01:52:54:00:99:a3:97}
	I0729 18:34:58.457561 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | domain ha-344156-m02 has defined IP address 192.168.39.249 and MAC address 52:54:00:99:a3:97 in network mk-ha-344156
	I0729 18:34:58.457762 1073226 main.go:141] libmachine: (ha-344156-m02) Calling .GetSSHPort
	I0729 18:34:58.457941 1073226 main.go:141] libmachine: (ha-344156-m02) Calling .GetSSHKeyPath
	I0729 18:34:58.458086 1073226 main.go:141] libmachine: (ha-344156-m02) Calling .GetSSHUsername
	I0729 18:34:58.458217 1073226 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/ha-344156-m02/id_rsa Username:docker}
	I0729 18:34:58.536704 1073226 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 18:34:58.540832 1073226 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 18:34:58.540865 1073226 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-1055011/.minikube/addons for local assets ...
	I0729 18:34:58.540932 1073226 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-1055011/.minikube/files for local assets ...
	I0729 18:34:58.541027 1073226 filesync.go:149] local asset: /home/jenkins/minikube-integration/19312-1055011/.minikube/files/etc/ssl/certs/10622722.pem -> 10622722.pem in /etc/ssl/certs
	I0729 18:34:58.541042 1073226 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1055011/.minikube/files/etc/ssl/certs/10622722.pem -> /etc/ssl/certs/10622722.pem
	I0729 18:34:58.541164 1073226 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 18:34:58.550386 1073226 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/files/etc/ssl/certs/10622722.pem --> /etc/ssl/certs/10622722.pem (1708 bytes)
	I0729 18:34:58.576937 1073226 start.go:296] duration metric: took 122.422943ms for postStartSetup
	I0729 18:34:58.576983 1073226 main.go:141] libmachine: (ha-344156-m02) Calling .GetConfigRaw
	I0729 18:34:58.577572 1073226 main.go:141] libmachine: (ha-344156-m02) Calling .GetIP
	I0729 18:34:58.580120 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | domain ha-344156-m02 has defined MAC address 52:54:00:99:a3:97 in network mk-ha-344156
	I0729 18:34:58.580438 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:a3:97", ip: ""} in network mk-ha-344156: {Iface:virbr1 ExpiryTime:2024-07-29 19:34:51 +0000 UTC Type:0 Mac:52:54:00:99:a3:97 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-344156-m02 Clientid:01:52:54:00:99:a3:97}
	I0729 18:34:58.580458 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | domain ha-344156-m02 has defined IP address 192.168.39.249 and MAC address 52:54:00:99:a3:97 in network mk-ha-344156
	I0729 18:34:58.580741 1073226 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156/config.json ...
	I0729 18:34:58.580948 1073226 start.go:128] duration metric: took 21.98978895s to createHost
	I0729 18:34:58.580973 1073226 main.go:141] libmachine: (ha-344156-m02) Calling .GetSSHHostname
	I0729 18:34:58.582978 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | domain ha-344156-m02 has defined MAC address 52:54:00:99:a3:97 in network mk-ha-344156
	I0729 18:34:58.583259 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:a3:97", ip: ""} in network mk-ha-344156: {Iface:virbr1 ExpiryTime:2024-07-29 19:34:51 +0000 UTC Type:0 Mac:52:54:00:99:a3:97 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-344156-m02 Clientid:01:52:54:00:99:a3:97}
	I0729 18:34:58.583289 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | domain ha-344156-m02 has defined IP address 192.168.39.249 and MAC address 52:54:00:99:a3:97 in network mk-ha-344156
	I0729 18:34:58.583392 1073226 main.go:141] libmachine: (ha-344156-m02) Calling .GetSSHPort
	I0729 18:34:58.583573 1073226 main.go:141] libmachine: (ha-344156-m02) Calling .GetSSHKeyPath
	I0729 18:34:58.583741 1073226 main.go:141] libmachine: (ha-344156-m02) Calling .GetSSHKeyPath
	I0729 18:34:58.583904 1073226 main.go:141] libmachine: (ha-344156-m02) Calling .GetSSHUsername
	I0729 18:34:58.584047 1073226 main.go:141] libmachine: Using SSH client type: native
	I0729 18:34:58.584220 1073226 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.249 22 <nil> <nil>}
	I0729 18:34:58.584231 1073226 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0729 18:34:58.683224 1073226 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722278098.640970006
	
	I0729 18:34:58.683248 1073226 fix.go:216] guest clock: 1722278098.640970006
	I0729 18:34:58.683257 1073226 fix.go:229] Guest: 2024-07-29 18:34:58.640970006 +0000 UTC Remote: 2024-07-29 18:34:58.580960916 +0000 UTC m=+73.658710151 (delta=60.00909ms)
	I0729 18:34:58.683277 1073226 fix.go:200] guest clock delta is within tolerance: 60.00909ms
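
The fix.go lines above compare the guest clock against the host clock and accept the 60.00909ms delta as within tolerance. A tiny Go sketch of that check follows; clockDeltaOK and the 2s tolerance are assumptions for illustration, not the values hard-coded in fix.go.

package main

import (
	"fmt"
	"time"
)

// clockDeltaOK reports whether the guest clock is within tolerance of the host
// clock, returning the absolute delta for logging.
func clockDeltaOK(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tolerance
}

func main() {
	host := time.Now()
	guest := host.Add(60 * time.Millisecond)
	delta, ok := clockDeltaOK(guest, host, 2*time.Second)
	fmt.Printf("delta=%v within tolerance=%v\n", delta, ok)
}
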
	I0729 18:34:58.683284 1073226 start.go:83] releasing machines lock for "ha-344156-m02", held for 22.092255822s
	I0729 18:34:58.683307 1073226 main.go:141] libmachine: (ha-344156-m02) Calling .DriverName
	I0729 18:34:58.683587 1073226 main.go:141] libmachine: (ha-344156-m02) Calling .GetIP
	I0729 18:34:58.685992 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | domain ha-344156-m02 has defined MAC address 52:54:00:99:a3:97 in network mk-ha-344156
	I0729 18:34:58.686308 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:a3:97", ip: ""} in network mk-ha-344156: {Iface:virbr1 ExpiryTime:2024-07-29 19:34:51 +0000 UTC Type:0 Mac:52:54:00:99:a3:97 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-344156-m02 Clientid:01:52:54:00:99:a3:97}
	I0729 18:34:58.686328 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | domain ha-344156-m02 has defined IP address 192.168.39.249 and MAC address 52:54:00:99:a3:97 in network mk-ha-344156
	I0729 18:34:58.688553 1073226 out.go:177] * Found network options:
	I0729 18:34:58.689750 1073226 out.go:177]   - NO_PROXY=192.168.39.225
	W0729 18:34:58.690882 1073226 proxy.go:119] fail to check proxy env: Error ip not in block
	I0729 18:34:58.690931 1073226 main.go:141] libmachine: (ha-344156-m02) Calling .DriverName
	I0729 18:34:58.691396 1073226 main.go:141] libmachine: (ha-344156-m02) Calling .DriverName
	I0729 18:34:58.691579 1073226 main.go:141] libmachine: (ha-344156-m02) Calling .DriverName
	I0729 18:34:58.691685 1073226 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 18:34:58.691733 1073226 main.go:141] libmachine: (ha-344156-m02) Calling .GetSSHHostname
	W0729 18:34:58.691808 1073226 proxy.go:119] fail to check proxy env: Error ip not in block
	I0729 18:34:58.691871 1073226 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 18:34:58.691888 1073226 main.go:141] libmachine: (ha-344156-m02) Calling .GetSSHHostname
	I0729 18:34:58.694434 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | domain ha-344156-m02 has defined MAC address 52:54:00:99:a3:97 in network mk-ha-344156
	I0729 18:34:58.694727 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | domain ha-344156-m02 has defined MAC address 52:54:00:99:a3:97 in network mk-ha-344156
	I0729 18:34:58.694770 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:a3:97", ip: ""} in network mk-ha-344156: {Iface:virbr1 ExpiryTime:2024-07-29 19:34:51 +0000 UTC Type:0 Mac:52:54:00:99:a3:97 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-344156-m02 Clientid:01:52:54:00:99:a3:97}
	I0729 18:34:58.694795 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | domain ha-344156-m02 has defined IP address 192.168.39.249 and MAC address 52:54:00:99:a3:97 in network mk-ha-344156
	I0729 18:34:58.694952 1073226 main.go:141] libmachine: (ha-344156-m02) Calling .GetSSHPort
	I0729 18:34:58.695143 1073226 main.go:141] libmachine: (ha-344156-m02) Calling .GetSSHKeyPath
	I0729 18:34:58.695171 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:a3:97", ip: ""} in network mk-ha-344156: {Iface:virbr1 ExpiryTime:2024-07-29 19:34:51 +0000 UTC Type:0 Mac:52:54:00:99:a3:97 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-344156-m02 Clientid:01:52:54:00:99:a3:97}
	I0729 18:34:58.695191 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | domain ha-344156-m02 has defined IP address 192.168.39.249 and MAC address 52:54:00:99:a3:97 in network mk-ha-344156
	I0729 18:34:58.695329 1073226 main.go:141] libmachine: (ha-344156-m02) Calling .GetSSHUsername
	I0729 18:34:58.695337 1073226 main.go:141] libmachine: (ha-344156-m02) Calling .GetSSHPort
	I0729 18:34:58.695502 1073226 main.go:141] libmachine: (ha-344156-m02) Calling .GetSSHKeyPath
	I0729 18:34:58.695521 1073226 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/ha-344156-m02/id_rsa Username:docker}
	I0729 18:34:58.695656 1073226 main.go:141] libmachine: (ha-344156-m02) Calling .GetSSHUsername
	I0729 18:34:58.695807 1073226 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/ha-344156-m02/id_rsa Username:docker}
	I0729 18:34:59.219465 1073226 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 18:34:59.225441 1073226 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 18:34:59.225515 1073226 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 18:34:59.241148 1073226 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 18:34:59.241169 1073226 start.go:495] detecting cgroup driver to use...
	I0729 18:34:59.241232 1073226 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 18:34:59.256557 1073226 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 18:34:59.269484 1073226 docker.go:217] disabling cri-docker service (if available) ...
	I0729 18:34:59.269540 1073226 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 18:34:59.282006 1073226 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 18:34:59.294554 1073226 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 18:34:59.400871 1073226 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 18:34:59.564683 1073226 docker.go:233] disabling docker service ...
	I0729 18:34:59.564767 1073226 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 18:34:59.579222 1073226 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 18:34:59.591663 1073226 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 18:34:59.704596 1073226 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 18:34:59.821936 1073226 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 18:34:59.835475 1073226 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 18:34:59.853364 1073226 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0729 18:34:59.853431 1073226 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:34:59.863444 1073226 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 18:34:59.863517 1073226 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:34:59.873630 1073226 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:34:59.883352 1073226 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:34:59.893186 1073226 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 18:34:59.903308 1073226 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:34:59.913184 1073226 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:34:59.929630 1073226 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:34:59.939504 1073226 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 18:34:59.948482 1073226 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 18:34:59.948533 1073226 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 18:34:59.961412 1073226 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 18:34:59.970429 1073226 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 18:35:00.077766 1073226 ssh_runner.go:195] Run: sudo systemctl restart crio
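The sed invocations above rewrite cri-o's drop-in config (pause image, cgroup manager, conmon cgroup, default sysctls) before the service is restarted. A rough local-file equivalent of the two simplest substitutions is sketched below; minikube itself performs these edits with sed over SSH, so this is only an illustration of the rewrite, not its implementation.

package main

import (
	"fmt"
	"os"
	"regexp"
)

func main() {
	const path = "/etc/crio/crio.conf.d/02-crio.conf" // path taken from the log
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	// Replace the pause_image line, mirroring the first sed above.
	out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.9"`))
	// Replace the cgroup_manager line, mirroring the second sed above.
	out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(out, []byte(`cgroup_manager = "cgroupfs"`))
	if err := os.WriteFile(path, out, 0o644); err != nil {
		panic(err)
	}
	fmt.Println("updated", path)
}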
	I0729 18:35:00.211541 1073226 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 18:35:00.211641 1073226 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 18:35:00.216910 1073226 start.go:563] Will wait 60s for crictl version
	I0729 18:35:00.216973 1073226 ssh_runner.go:195] Run: which crictl
	I0729 18:35:00.221022 1073226 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 18:35:00.261287 1073226 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 18:35:00.261381 1073226 ssh_runner.go:195] Run: crio --version
	I0729 18:35:00.289328 1073226 ssh_runner.go:195] Run: crio --version
	I0729 18:35:00.319680 1073226 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0729 18:35:00.321020 1073226 out.go:177]   - env NO_PROXY=192.168.39.225
	I0729 18:35:00.322170 1073226 main.go:141] libmachine: (ha-344156-m02) Calling .GetIP
	I0729 18:35:00.324901 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | domain ha-344156-m02 has defined MAC address 52:54:00:99:a3:97 in network mk-ha-344156
	I0729 18:35:00.325237 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:a3:97", ip: ""} in network mk-ha-344156: {Iface:virbr1 ExpiryTime:2024-07-29 19:34:51 +0000 UTC Type:0 Mac:52:54:00:99:a3:97 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-344156-m02 Clientid:01:52:54:00:99:a3:97}
	I0729 18:35:00.325266 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | domain ha-344156-m02 has defined IP address 192.168.39.249 and MAC address 52:54:00:99:a3:97 in network mk-ha-344156
	I0729 18:35:00.325473 1073226 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0729 18:35:00.329978 1073226 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
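The bash one-liner above rewrites /etc/hosts idempotently: strip any stale host.minikube.internal line, then append the current mapping. A minimal Go sketch of the same idea follows; the file handling is simplified (no temp file or sudo) compared with the logged command.

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const path = "/etc/hosts"
	const entry = "192.168.39.1\thost.minikube.internal" // mapping from the log
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if strings.HasSuffix(line, "\thost.minikube.internal") {
			continue // drop any stale entry before re-adding it
		}
		if line != "" {
			kept = append(kept, line)
		}
	}
	kept = append(kept, entry)
	if err := os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
		panic(err)
	}
	fmt.Println("updated", path)
}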
	I0729 18:35:00.342977 1073226 mustload.go:65] Loading cluster: ha-344156
	I0729 18:35:00.343196 1073226 config.go:182] Loaded profile config "ha-344156": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 18:35:00.343471 1073226 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:35:00.343503 1073226 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:35:00.358513 1073226 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36029
	I0729 18:35:00.359020 1073226 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:35:00.359515 1073226 main.go:141] libmachine: Using API Version  1
	I0729 18:35:00.359539 1073226 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:35:00.359846 1073226 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:35:00.360066 1073226 main.go:141] libmachine: (ha-344156) Calling .GetState
	I0729 18:35:00.361930 1073226 host.go:66] Checking if "ha-344156" exists ...
	I0729 18:35:00.362253 1073226 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:35:00.362280 1073226 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:35:00.377532 1073226 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32895
	I0729 18:35:00.377996 1073226 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:35:00.378492 1073226 main.go:141] libmachine: Using API Version  1
	I0729 18:35:00.378515 1073226 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:35:00.378843 1073226 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:35:00.379084 1073226 main.go:141] libmachine: (ha-344156) Calling .DriverName
	I0729 18:35:00.379273 1073226 certs.go:68] Setting up /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156 for IP: 192.168.39.249
	I0729 18:35:00.379286 1073226 certs.go:194] generating shared ca certs ...
	I0729 18:35:00.379301 1073226 certs.go:226] acquiring lock for ca certs: {Name:mkd1f0b3d7e82ac23e713dd6b75409e103935b02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 18:35:00.379451 1073226 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.key
	I0729 18:35:00.379491 1073226 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/proxy-client-ca.key
	I0729 18:35:00.379500 1073226 certs.go:256] generating profile certs ...
	I0729 18:35:00.379570 1073226 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156/client.key
	I0729 18:35:00.379593 1073226 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156/apiserver.key.192ac660
	I0729 18:35:00.379610 1073226 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156/apiserver.crt.192ac660 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.225 192.168.39.249 192.168.39.254]
	I0729 18:35:00.774632 1073226 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156/apiserver.crt.192ac660 ...
	I0729 18:35:00.774668 1073226 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156/apiserver.crt.192ac660: {Name:mka4379faa9808b62524de326fea26654f0e9584 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 18:35:00.774866 1073226 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156/apiserver.key.192ac660 ...
	I0729 18:35:00.774890 1073226 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156/apiserver.key.192ac660: {Name:mk873a2dbb09106f128745397e9a40b735c7faaa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 18:35:00.774974 1073226 certs.go:381] copying /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156/apiserver.crt.192ac660 -> /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156/apiserver.crt
	I0729 18:35:00.775111 1073226 certs.go:385] copying /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156/apiserver.key.192ac660 -> /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156/apiserver.key
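The apiserver serving certificate generated above carries the service IP, localhost, both control-plane node IPs, and the HA VIP as IP SANs. The sketch below shows how such a certificate can be issued with crypto/x509; the 2048-bit key, the validity period, and self-signing are assumptions made for brevity, whereas minikube signs with its cluster CA.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour), // assumed validity
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// IP SANs listed in the crypto.go:68 log line above.
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
			net.ParseIP("192.168.39.225"), net.ParseIP("192.168.39.249"), net.ParseIP("192.168.39.254"),
		},
	}
	// Self-signed here for brevity; minikube signs with its cluster CA key.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}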
	I0729 18:35:00.775243 1073226 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156/proxy-client.key
	I0729 18:35:00.775260 1073226 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0729 18:35:00.775274 1073226 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0729 18:35:00.775287 1073226 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1055011/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0729 18:35:00.775299 1073226 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1055011/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0729 18:35:00.775312 1073226 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0729 18:35:00.775324 1073226 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0729 18:35:00.775336 1073226 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0729 18:35:00.775347 1073226 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0729 18:35:00.775395 1073226 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/1062272.pem (1338 bytes)
	W0729 18:35:00.775431 1073226 certs.go:480] ignoring /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/1062272_empty.pem, impossibly tiny 0 bytes
	I0729 18:35:00.775440 1073226 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 18:35:00.775460 1073226 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem (1082 bytes)
	I0729 18:35:00.775486 1073226 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/cert.pem (1123 bytes)
	I0729 18:35:00.775509 1073226 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/key.pem (1679 bytes)
	I0729 18:35:00.775546 1073226 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/files/etc/ssl/certs/10622722.pem (1708 bytes)
	I0729 18:35:00.775570 1073226 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0729 18:35:00.775584 1073226 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/1062272.pem -> /usr/share/ca-certificates/1062272.pem
	I0729 18:35:00.775596 1073226 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1055011/.minikube/files/etc/ssl/certs/10622722.pem -> /usr/share/ca-certificates/10622722.pem
	I0729 18:35:00.775631 1073226 main.go:141] libmachine: (ha-344156) Calling .GetSSHHostname
	I0729 18:35:00.778502 1073226 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
	I0729 18:35:00.778934 1073226 main.go:141] libmachine: (ha-344156) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:fc:98", ip: ""} in network mk-ha-344156: {Iface:virbr1 ExpiryTime:2024-07-29 19:33:59 +0000 UTC Type:0 Mac:52:54:00:a1:fc:98 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-344156 Clientid:01:52:54:00:a1:fc:98}
	I0729 18:35:00.778966 1073226 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined IP address 192.168.39.225 and MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
	I0729 18:35:00.779160 1073226 main.go:141] libmachine: (ha-344156) Calling .GetSSHPort
	I0729 18:35:00.779424 1073226 main.go:141] libmachine: (ha-344156) Calling .GetSSHKeyPath
	I0729 18:35:00.779604 1073226 main.go:141] libmachine: (ha-344156) Calling .GetSSHUsername
	I0729 18:35:00.779753 1073226 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/ha-344156/id_rsa Username:docker}
	I0729 18:35:00.859326 1073226 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0729 18:35:00.865414 1073226 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0729 18:35:00.880001 1073226 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0729 18:35:00.885188 1073226 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0729 18:35:00.899058 1073226 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0729 18:35:00.904149 1073226 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0729 18:35:00.916897 1073226 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0729 18:35:00.921712 1073226 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0729 18:35:00.933138 1073226 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0729 18:35:00.937472 1073226 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0729 18:35:00.952585 1073226 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0729 18:35:00.960661 1073226 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0729 18:35:00.972033 1073226 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 18:35:00.998207 1073226 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0729 18:35:01.023652 1073226 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 18:35:01.048448 1073226 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0729 18:35:01.072810 1073226 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0729 18:35:01.097863 1073226 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0729 18:35:01.122937 1073226 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 18:35:01.148356 1073226 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0729 18:35:01.173581 1073226 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 18:35:01.198244 1073226 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/1062272.pem --> /usr/share/ca-certificates/1062272.pem (1338 bytes)
	I0729 18:35:01.222479 1073226 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/files/etc/ssl/certs/10622722.pem --> /usr/share/ca-certificates/10622722.pem (1708 bytes)
	I0729 18:35:01.247656 1073226 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0729 18:35:01.264416 1073226 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0729 18:35:01.280733 1073226 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0729 18:35:01.297192 1073226 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0729 18:35:01.314487 1073226 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0729 18:35:01.331413 1073226 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0729 18:35:01.348616 1073226 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0729 18:35:01.365867 1073226 ssh_runner.go:195] Run: openssl version
	I0729 18:35:01.372062 1073226 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 18:35:01.383291 1073226 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 18:35:01.388060 1073226 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 18:17 /usr/share/ca-certificates/minikubeCA.pem
	I0729 18:35:01.388146 1073226 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 18:35:01.394248 1073226 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 18:35:01.406105 1073226 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1062272.pem && ln -fs /usr/share/ca-certificates/1062272.pem /etc/ssl/certs/1062272.pem"
	I0729 18:35:01.417866 1073226 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1062272.pem
	I0729 18:35:01.422767 1073226 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 18:30 /usr/share/ca-certificates/1062272.pem
	I0729 18:35:01.422840 1073226 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1062272.pem
	I0729 18:35:01.428728 1073226 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1062272.pem /etc/ssl/certs/51391683.0"
	I0729 18:35:01.439670 1073226 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10622722.pem && ln -fs /usr/share/ca-certificates/10622722.pem /etc/ssl/certs/10622722.pem"
	I0729 18:35:01.450764 1073226 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10622722.pem
	I0729 18:35:01.455529 1073226 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 18:30 /usr/share/ca-certificates/10622722.pem
	I0729 18:35:01.455604 1073226 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10622722.pem
	I0729 18:35:01.461465 1073226 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10622722.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 18:35:01.472517 1073226 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 18:35:01.476780 1073226 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0729 18:35:01.476847 1073226 kubeadm.go:934] updating node {m02 192.168.39.249 8443 v1.30.3 crio true true} ...
	I0729 18:35:01.476979 1073226 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-344156-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.249
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-344156 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 18:35:01.477014 1073226 kube-vip.go:115] generating kube-vip config ...
	I0729 18:35:01.477057 1073226 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0729 18:35:01.495211 1073226 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0729 18:35:01.495314 1073226 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0729 18:35:01.495379 1073226 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0729 18:35:01.505858 1073226 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.3': No such file or directory
	
	Initiating transfer...
	I0729 18:35:01.505928 1073226 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.3
	I0729 18:35:01.515830 1073226 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl.sha256
	I0729 18:35:01.515865 1073226 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/linux/amd64/v1.30.3/kubectl -> /var/lib/minikube/binaries/v1.30.3/kubectl
	I0729 18:35:01.515931 1073226 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/linux/amd64/v1.30.3/kubelet
	I0729 18:35:01.515944 1073226 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/linux/amd64/v1.30.3/kubeadm
	I0729 18:35:01.515955 1073226 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubectl
	I0729 18:35:01.520622 1073226 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubectl': No such file or directory
	I0729 18:35:01.520652 1073226 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/linux/amd64/v1.30.3/kubectl --> /var/lib/minikube/binaries/v1.30.3/kubectl (51454104 bytes)
	I0729 18:35:02.120951 1073226 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/linux/amd64/v1.30.3/kubeadm -> /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0729 18:35:02.121045 1073226 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0729 18:35:02.126785 1073226 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubeadm': No such file or directory
	I0729 18:35:02.126826 1073226 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/linux/amd64/v1.30.3/kubeadm --> /var/lib/minikube/binaries/v1.30.3/kubeadm (50249880 bytes)
	I0729 18:35:02.540965 1073226 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 18:35:02.557247 1073226 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/linux/amd64/v1.30.3/kubelet -> /var/lib/minikube/binaries/v1.30.3/kubelet
	I0729 18:35:02.557381 1073226 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubelet
	I0729 18:35:02.561896 1073226 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubelet': No such file or directory
	I0729 18:35:02.561940 1073226 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/linux/amd64/v1.30.3/kubelet --> /var/lib/minikube/binaries/v1.30.3/kubelet (100125080 bytes)
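Each kubelet/kubeadm/kubectl binary above is fetched from dl.k8s.io together with a .sha256 file and verified before being copied into /var/lib/minikube/binaries. A minimal sketch of that download-and-verify step, for kubectl only and with simplified error handling, follows.

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

// fetch downloads a URL into memory, failing on non-200 responses.
func fetch(url string) ([]byte, error) {
	resp, err := http.Get(url)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return nil, fmt.Errorf("GET %s: %s", url, resp.Status)
	}
	return io.ReadAll(resp.Body)
}

func main() {
	const base = "https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl" // URL from the log
	bin, err := fetch(base)
	if err != nil {
		panic(err)
	}
	sum, err := fetch(base + ".sha256")
	if err != nil {
		panic(err)
	}
	got := sha256.Sum256(bin)
	want := strings.Fields(string(sum))[0]
	if hex.EncodeToString(got[:]) != want {
		panic("checksum mismatch for kubectl")
	}
	if err := os.WriteFile("kubectl", bin, 0o755); err != nil {
		panic(err)
	}
	fmt.Println("kubectl verified and written")
}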
	I0729 18:35:02.986983 1073226 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0729 18:35:02.997179 1073226 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0729 18:35:03.016053 1073226 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 18:35:03.033710 1073226 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0729 18:35:03.050569 1073226 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0729 18:35:03.054444 1073226 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 18:35:03.068192 1073226 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 18:35:03.189566 1073226 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 18:35:03.206700 1073226 host.go:66] Checking if "ha-344156" exists ...
	I0729 18:35:03.207246 1073226 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:35:03.207305 1073226 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:35:03.223238 1073226 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37783
	I0729 18:35:03.223774 1073226 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:35:03.224246 1073226 main.go:141] libmachine: Using API Version  1
	I0729 18:35:03.224272 1073226 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:35:03.224584 1073226 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:35:03.224749 1073226 main.go:141] libmachine: (ha-344156) Calling .DriverName
	I0729 18:35:03.224900 1073226 start.go:317] joinCluster: &{Name:ha-344156 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-344156 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.225 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.249 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 18:35:03.225007 1073226 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0729 18:35:03.225026 1073226 main.go:141] libmachine: (ha-344156) Calling .GetSSHHostname
	I0729 18:35:03.227742 1073226 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
	I0729 18:35:03.228194 1073226 main.go:141] libmachine: (ha-344156) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:fc:98", ip: ""} in network mk-ha-344156: {Iface:virbr1 ExpiryTime:2024-07-29 19:33:59 +0000 UTC Type:0 Mac:52:54:00:a1:fc:98 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-344156 Clientid:01:52:54:00:a1:fc:98}
	I0729 18:35:03.228222 1073226 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined IP address 192.168.39.225 and MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
	I0729 18:35:03.228394 1073226 main.go:141] libmachine: (ha-344156) Calling .GetSSHPort
	I0729 18:35:03.228577 1073226 main.go:141] libmachine: (ha-344156) Calling .GetSSHKeyPath
	I0729 18:35:03.228726 1073226 main.go:141] libmachine: (ha-344156) Calling .GetSSHUsername
	I0729 18:35:03.228880 1073226 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/ha-344156/id_rsa Username:docker}
	I0729 18:35:03.396212 1073226 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.249 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 18:35:03.396278 1073226 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 51hnye.n9le5n5q8s277ze6 --discovery-token-ca-cert-hash sha256:53e118302d1159bf0acd8cfe005edb508199276288ff17d6630ff7f7307f10b1 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-344156-m02 --control-plane --apiserver-advertise-address=192.168.39.249 --apiserver-bind-port=8443"
	I0729 18:35:26.187581 1073226 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 51hnye.n9le5n5q8s277ze6 --discovery-token-ca-cert-hash sha256:53e118302d1159bf0acd8cfe005edb508199276288ff17d6630ff7f7307f10b1 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-344156-m02 --control-plane --apiserver-advertise-address=192.168.39.249 --apiserver-bind-port=8443": (22.791267702s)
	I0729 18:35:26.187627 1073226 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0729 18:35:26.767659 1073226 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-344156-m02 minikube.k8s.io/updated_at=2024_07_29T18_35_26_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=ff26f50543a99741e8a988a34136ffea2b41dbf0 minikube.k8s.io/name=ha-344156 minikube.k8s.io/primary=false
	I0729 18:35:26.897710 1073226 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-344156-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0729 18:35:27.031283 1073226 start.go:319] duration metric: took 23.806377074s to joinCluster
	I0729 18:35:27.031379 1073226 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.249 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 18:35:27.031691 1073226 config.go:182] Loaded profile config "ha-344156": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 18:35:27.033027 1073226 out.go:177] * Verifying Kubernetes components...
	I0729 18:35:27.034317 1073226 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 18:35:27.279073 1073226 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 18:35:27.342407 1073226 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19312-1055011/kubeconfig
	I0729 18:35:27.342687 1073226 kapi.go:59] client config for ha-344156: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156/client.crt", KeyFile:"/home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156/client.key", CAFile:"/home/jenkins/minikube-integration/19312-1055011/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d03460), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0729 18:35:27.342759 1073226 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.225:8443
	I0729 18:35:27.343031 1073226 node_ready.go:35] waiting up to 6m0s for node "ha-344156-m02" to be "Ready" ...
	I0729 18:35:27.343138 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/nodes/ha-344156-m02
	I0729 18:35:27.343148 1073226 round_trippers.go:469] Request Headers:
	I0729 18:35:27.343158 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:35:27.343163 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:35:27.360994 1073226 round_trippers.go:574] Response Status: 200 OK in 17 milliseconds
	I0729 18:35:27.843923 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/nodes/ha-344156-m02
	I0729 18:35:27.843956 1073226 round_trippers.go:469] Request Headers:
	I0729 18:35:27.843969 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:35:27.843974 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:35:27.847623 1073226 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 18:35:28.343997 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/nodes/ha-344156-m02
	I0729 18:35:28.344026 1073226 round_trippers.go:469] Request Headers:
	I0729 18:35:28.344040 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:35:28.344045 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:35:28.352801 1073226 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0729 18:35:28.844215 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/nodes/ha-344156-m02
	I0729 18:35:28.844247 1073226 round_trippers.go:469] Request Headers:
	I0729 18:35:28.844259 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:35:28.844266 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:35:28.850339 1073226 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0729 18:35:29.343591 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/nodes/ha-344156-m02
	I0729 18:35:29.343618 1073226 round_trippers.go:469] Request Headers:
	I0729 18:35:29.343630 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:35:29.343637 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:35:29.357995 1073226 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0729 18:35:29.358690 1073226 node_ready.go:53] node "ha-344156-m02" has status "Ready":"False"
	I0729 18:35:29.843971 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/nodes/ha-344156-m02
	I0729 18:35:29.844002 1073226 round_trippers.go:469] Request Headers:
	I0729 18:35:29.844014 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:35:29.844022 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:35:29.847440 1073226 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 18:35:30.344219 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/nodes/ha-344156-m02
	I0729 18:35:30.344257 1073226 round_trippers.go:469] Request Headers:
	I0729 18:35:30.344278 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:35:30.344283 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:35:30.348051 1073226 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 18:35:30.844120 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/nodes/ha-344156-m02
	I0729 18:35:30.844148 1073226 round_trippers.go:469] Request Headers:
	I0729 18:35:30.844159 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:35:30.844165 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:35:30.847471 1073226 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 18:35:31.343614 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/nodes/ha-344156-m02
	I0729 18:35:31.343641 1073226 round_trippers.go:469] Request Headers:
	I0729 18:35:31.343653 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:35:31.343659 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:35:31.346467 1073226 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 18:35:31.844235 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/nodes/ha-344156-m02
	I0729 18:35:31.844265 1073226 round_trippers.go:469] Request Headers:
	I0729 18:35:31.844274 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:35:31.844277 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:35:31.848190 1073226 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 18:35:31.848997 1073226 node_ready.go:53] node "ha-344156-m02" has status "Ready":"False"
	I0729 18:35:32.343292 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/nodes/ha-344156-m02
	I0729 18:35:32.343316 1073226 round_trippers.go:469] Request Headers:
	I0729 18:35:32.343325 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:35:32.343328 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:35:32.346412 1073226 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 18:35:32.843249 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/nodes/ha-344156-m02
	I0729 18:35:32.843273 1073226 round_trippers.go:469] Request Headers:
	I0729 18:35:32.843281 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:35:32.843285 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:35:32.846391 1073226 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 18:35:33.344047 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/nodes/ha-344156-m02
	I0729 18:35:33.344071 1073226 round_trippers.go:469] Request Headers:
	I0729 18:35:33.344079 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:35:33.344083 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:35:33.347588 1073226 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 18:35:33.843957 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/nodes/ha-344156-m02
	I0729 18:35:33.843981 1073226 round_trippers.go:469] Request Headers:
	I0729 18:35:33.844021 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:35:33.844038 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:35:33.847199 1073226 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 18:35:34.344104 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/nodes/ha-344156-m02
	I0729 18:35:34.344129 1073226 round_trippers.go:469] Request Headers:
	I0729 18:35:34.344138 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:35:34.344141 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:35:34.347276 1073226 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 18:35:34.347888 1073226 node_ready.go:53] node "ha-344156-m02" has status "Ready":"False"
	I0729 18:35:34.844224 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/nodes/ha-344156-m02
	I0729 18:35:34.844251 1073226 round_trippers.go:469] Request Headers:
	I0729 18:35:34.844263 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:35:34.844268 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:35:34.849189 1073226 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 18:35:35.343947 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/nodes/ha-344156-m02
	I0729 18:35:35.343972 1073226 round_trippers.go:469] Request Headers:
	I0729 18:35:35.343981 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:35:35.343985 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:35:35.347216 1073226 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 18:35:35.844337 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/nodes/ha-344156-m02
	I0729 18:35:35.844367 1073226 round_trippers.go:469] Request Headers:
	I0729 18:35:35.844379 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:35:35.844385 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:35:35.847686 1073226 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 18:35:36.343620 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/nodes/ha-344156-m02
	I0729 18:35:36.343645 1073226 round_trippers.go:469] Request Headers:
	I0729 18:35:36.343653 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:35:36.343657 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:35:36.346934 1073226 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 18:35:36.844112 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/nodes/ha-344156-m02
	I0729 18:35:36.844135 1073226 round_trippers.go:469] Request Headers:
	I0729 18:35:36.844143 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:35:36.844147 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:35:36.847798 1073226 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 18:35:36.848524 1073226 node_ready.go:53] node "ha-344156-m02" has status "Ready":"False"
	I0729 18:35:37.343666 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/nodes/ha-344156-m02
	I0729 18:35:37.343690 1073226 round_trippers.go:469] Request Headers:
	I0729 18:35:37.343724 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:35:37.343731 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:35:37.346376 1073226 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 18:35:37.843326 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/nodes/ha-344156-m02
	I0729 18:35:37.843359 1073226 round_trippers.go:469] Request Headers:
	I0729 18:35:37.843368 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:35:37.843375 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:35:37.846754 1073226 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 18:35:38.343602 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/nodes/ha-344156-m02
	I0729 18:35:38.343628 1073226 round_trippers.go:469] Request Headers:
	I0729 18:35:38.343637 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:35:38.343641 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:35:38.347271 1073226 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 18:35:38.844070 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/nodes/ha-344156-m02
	I0729 18:35:38.844092 1073226 round_trippers.go:469] Request Headers:
	I0729 18:35:38.844100 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:35:38.844104 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:35:38.847449 1073226 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 18:35:39.343623 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/nodes/ha-344156-m02
	I0729 18:35:39.343650 1073226 round_trippers.go:469] Request Headers:
	I0729 18:35:39.343661 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:35:39.343665 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:35:39.347688 1073226 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 18:35:39.348544 1073226 node_ready.go:53] node "ha-344156-m02" has status "Ready":"False"
	I0729 18:35:39.844024 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/nodes/ha-344156-m02
	I0729 18:35:39.844051 1073226 round_trippers.go:469] Request Headers:
	I0729 18:35:39.844060 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:35:39.844064 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:35:39.850989 1073226 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0729 18:35:40.343820 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/nodes/ha-344156-m02
	I0729 18:35:40.343852 1073226 round_trippers.go:469] Request Headers:
	I0729 18:35:40.343860 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:35:40.343866 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:35:40.347050 1073226 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 18:35:40.844142 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/nodes/ha-344156-m02
	I0729 18:35:40.844162 1073226 round_trippers.go:469] Request Headers:
	I0729 18:35:40.844170 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:35:40.844176 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:35:40.846892 1073226 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 18:35:40.847741 1073226 node_ready.go:49] node "ha-344156-m02" has status "Ready":"True"
	I0729 18:35:40.847770 1073226 node_ready.go:38] duration metric: took 13.504712108s for node "ha-344156-m02" to be "Ready" ...
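The repeated GETs above are the readiness poll: minikube keeps fetching the Node object until its Ready condition turns True, within the 6m0s budget. A compact client-go version of the same poll is sketched below; the 500ms interval is an assumption, and the kubeconfig path is the one shown in the log.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19312-1055011/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Poll every 500ms (assumed interval) for up to 6 minutes, as in the log.
	err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, "ha-344156-m02", metav1.GetOptions{})
			if err != nil {
				return false, nil // keep polling on transient errors
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return true, nil
				}
			}
			return false, nil
		})
	if err != nil {
		panic(err)
	}
	fmt.Println(`node "ha-344156-m02" is Ready`)
}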
	I0729 18:35:40.847783 1073226 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 18:35:40.847869 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/namespaces/kube-system/pods
	I0729 18:35:40.847881 1073226 round_trippers.go:469] Request Headers:
	I0729 18:35:40.847892 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:35:40.847903 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:35:40.852480 1073226 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 18:35:40.859042 1073226 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-5slmg" in "kube-system" namespace to be "Ready" ...
	I0729 18:35:40.859112 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-5slmg
	I0729 18:35:40.859120 1073226 round_trippers.go:469] Request Headers:
	I0729 18:35:40.859127 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:35:40.859133 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:35:40.861457 1073226 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 18:35:40.862134 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/nodes/ha-344156
	I0729 18:35:40.862149 1073226 round_trippers.go:469] Request Headers:
	I0729 18:35:40.862156 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:35:40.862160 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:35:40.864509 1073226 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 18:35:40.865097 1073226 pod_ready.go:92] pod "coredns-7db6d8ff4d-5slmg" in "kube-system" namespace has status "Ready":"True"
	I0729 18:35:40.865114 1073226 pod_ready.go:81] duration metric: took 6.050845ms for pod "coredns-7db6d8ff4d-5slmg" in "kube-system" namespace to be "Ready" ...
	I0729 18:35:40.865123 1073226 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-h5h7v" in "kube-system" namespace to be "Ready" ...
	I0729 18:35:40.865167 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-h5h7v
	I0729 18:35:40.865175 1073226 round_trippers.go:469] Request Headers:
	I0729 18:35:40.865182 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:35:40.865187 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:35:40.867315 1073226 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 18:35:40.868152 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/nodes/ha-344156
	I0729 18:35:40.868169 1073226 round_trippers.go:469] Request Headers:
	I0729 18:35:40.868178 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:35:40.868182 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:35:40.870428 1073226 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 18:35:40.870932 1073226 pod_ready.go:92] pod "coredns-7db6d8ff4d-h5h7v" in "kube-system" namespace has status "Ready":"True"
	I0729 18:35:40.870953 1073226 pod_ready.go:81] duration metric: took 5.82246ms for pod "coredns-7db6d8ff4d-h5h7v" in "kube-system" namespace to be "Ready" ...
	I0729 18:35:40.870963 1073226 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-344156" in "kube-system" namespace to be "Ready" ...
	I0729 18:35:40.871021 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/namespaces/kube-system/pods/etcd-ha-344156
	I0729 18:35:40.871029 1073226 round_trippers.go:469] Request Headers:
	I0729 18:35:40.871035 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:35:40.871039 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:35:40.872985 1073226 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0729 18:35:40.873632 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/nodes/ha-344156
	I0729 18:35:40.873649 1073226 round_trippers.go:469] Request Headers:
	I0729 18:35:40.873659 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:35:40.873664 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:35:40.875725 1073226 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 18:35:40.876220 1073226 pod_ready.go:92] pod "etcd-ha-344156" in "kube-system" namespace has status "Ready":"True"
	I0729 18:35:40.876240 1073226 pod_ready.go:81] duration metric: took 5.266086ms for pod "etcd-ha-344156" in "kube-system" namespace to be "Ready" ...
	I0729 18:35:40.876250 1073226 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-344156-m02" in "kube-system" namespace to be "Ready" ...
	I0729 18:35:40.876312 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/namespaces/kube-system/pods/etcd-ha-344156-m02
	I0729 18:35:40.876322 1073226 round_trippers.go:469] Request Headers:
	I0729 18:35:40.876340 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:35:40.876350 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:35:40.878425 1073226 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 18:35:40.878911 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/nodes/ha-344156-m02
	I0729 18:35:40.878925 1073226 round_trippers.go:469] Request Headers:
	I0729 18:35:40.878932 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:35:40.878936 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:35:40.880783 1073226 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0729 18:35:40.881347 1073226 pod_ready.go:92] pod "etcd-ha-344156-m02" in "kube-system" namespace has status "Ready":"True"
	I0729 18:35:40.881367 1073226 pod_ready.go:81] duration metric: took 5.106573ms for pod "etcd-ha-344156-m02" in "kube-system" namespace to be "Ready" ...
	I0729 18:35:40.881384 1073226 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-344156" in "kube-system" namespace to be "Ready" ...
	I0729 18:35:41.044787 1073226 request.go:629] Waited for 163.326535ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.225:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-344156
	I0729 18:35:41.044902 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-344156
	I0729 18:35:41.044914 1073226 round_trippers.go:469] Request Headers:
	I0729 18:35:41.044925 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:35:41.044936 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:35:41.048287 1073226 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 18:35:41.244564 1073226 request.go:629] Waited for 195.455065ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.225:8443/api/v1/nodes/ha-344156
	I0729 18:35:41.244639 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/nodes/ha-344156
	I0729 18:35:41.244645 1073226 round_trippers.go:469] Request Headers:
	I0729 18:35:41.244654 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:35:41.244663 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:35:41.247467 1073226 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 18:35:41.248033 1073226 pod_ready.go:92] pod "kube-apiserver-ha-344156" in "kube-system" namespace has status "Ready":"True"
	I0729 18:35:41.248054 1073226 pod_ready.go:81] duration metric: took 366.658924ms for pod "kube-apiserver-ha-344156" in "kube-system" namespace to be "Ready" ...
	I0729 18:35:41.248063 1073226 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-344156-m02" in "kube-system" namespace to be "Ready" ...
	I0729 18:35:41.444227 1073226 request.go:629] Waited for 196.048674ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.225:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-344156-m02
	I0729 18:35:41.444340 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-344156-m02
	I0729 18:35:41.444355 1073226 round_trippers.go:469] Request Headers:
	I0729 18:35:41.444366 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:35:41.444373 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:35:41.448042 1073226 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 18:35:41.645125 1073226 request.go:629] Waited for 196.090606ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.225:8443/api/v1/nodes/ha-344156-m02
	I0729 18:35:41.645227 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/nodes/ha-344156-m02
	I0729 18:35:41.645244 1073226 round_trippers.go:469] Request Headers:
	I0729 18:35:41.645252 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:35:41.645257 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:35:41.648585 1073226 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 18:35:41.649210 1073226 pod_ready.go:92] pod "kube-apiserver-ha-344156-m02" in "kube-system" namespace has status "Ready":"True"
	I0729 18:35:41.649232 1073226 pod_ready.go:81] duration metric: took 401.16141ms for pod "kube-apiserver-ha-344156-m02" in "kube-system" namespace to be "Ready" ...
	I0729 18:35:41.649244 1073226 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-344156" in "kube-system" namespace to be "Ready" ...
	I0729 18:35:41.845236 1073226 request.go:629] Waited for 195.912886ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.225:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-344156
	I0729 18:35:41.845322 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-344156
	I0729 18:35:41.845329 1073226 round_trippers.go:469] Request Headers:
	I0729 18:35:41.845340 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:35:41.845352 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:35:41.848685 1073226 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 18:35:42.044838 1073226 request.go:629] Waited for 195.409222ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.225:8443/api/v1/nodes/ha-344156
	I0729 18:35:42.044932 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/nodes/ha-344156
	I0729 18:35:42.044941 1073226 round_trippers.go:469] Request Headers:
	I0729 18:35:42.044953 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:35:42.044961 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:35:42.048095 1073226 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 18:35:42.048836 1073226 pod_ready.go:92] pod "kube-controller-manager-ha-344156" in "kube-system" namespace has status "Ready":"True"
	I0729 18:35:42.048854 1073226 pod_ready.go:81] duration metric: took 399.601811ms for pod "kube-controller-manager-ha-344156" in "kube-system" namespace to be "Ready" ...
	I0729 18:35:42.048864 1073226 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-344156-m02" in "kube-system" namespace to be "Ready" ...
	I0729 18:35:42.244952 1073226 request.go:629] Waited for 196.01651ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.225:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-344156-m02
	I0729 18:35:42.245027 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-344156-m02
	I0729 18:35:42.245035 1073226 round_trippers.go:469] Request Headers:
	I0729 18:35:42.245045 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:35:42.245077 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:35:42.247990 1073226 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 18:35:42.445074 1073226 request.go:629] Waited for 196.360333ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.225:8443/api/v1/nodes/ha-344156-m02
	I0729 18:35:42.445158 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/nodes/ha-344156-m02
	I0729 18:35:42.445171 1073226 round_trippers.go:469] Request Headers:
	I0729 18:35:42.445181 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:35:42.445187 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:35:42.448481 1073226 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 18:35:42.644285 1073226 request.go:629] Waited for 95.207061ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.225:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-344156-m02
	I0729 18:35:42.644352 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-344156-m02
	I0729 18:35:42.644358 1073226 round_trippers.go:469] Request Headers:
	I0729 18:35:42.644375 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:35:42.644381 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:35:42.647859 1073226 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 18:35:42.844975 1073226 request.go:629] Waited for 196.404374ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.225:8443/api/v1/nodes/ha-344156-m02
	I0729 18:35:42.845055 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/nodes/ha-344156-m02
	I0729 18:35:42.845062 1073226 round_trippers.go:469] Request Headers:
	I0729 18:35:42.845072 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:35:42.845081 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:35:42.848369 1073226 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 18:35:43.049049 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-344156-m02
	I0729 18:35:43.049072 1073226 round_trippers.go:469] Request Headers:
	I0729 18:35:43.049080 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:35:43.049085 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:35:43.052032 1073226 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 18:35:43.245238 1073226 request.go:629] Waited for 192.410971ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.225:8443/api/v1/nodes/ha-344156-m02
	I0729 18:35:43.245341 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/nodes/ha-344156-m02
	I0729 18:35:43.245350 1073226 round_trippers.go:469] Request Headers:
	I0729 18:35:43.245357 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:35:43.245365 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:35:43.248043 1073226 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 18:35:43.248780 1073226 pod_ready.go:92] pod "kube-controller-manager-ha-344156-m02" in "kube-system" namespace has status "Ready":"True"
	I0729 18:35:43.248800 1073226 pod_ready.go:81] duration metric: took 1.19992974s for pod "kube-controller-manager-ha-344156-m02" in "kube-system" namespace to be "Ready" ...
	I0729 18:35:43.248813 1073226 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-4p5r9" in "kube-system" namespace to be "Ready" ...
	I0729 18:35:43.445071 1073226 request.go:629] Waited for 196.164201ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.225:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4p5r9
	I0729 18:35:43.445130 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4p5r9
	I0729 18:35:43.445136 1073226 round_trippers.go:469] Request Headers:
	I0729 18:35:43.445143 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:35:43.445149 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:35:43.448125 1073226 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 18:35:43.644916 1073226 request.go:629] Waited for 196.090624ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.225:8443/api/v1/nodes/ha-344156-m02
	I0729 18:35:43.644977 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/nodes/ha-344156-m02
	I0729 18:35:43.644984 1073226 round_trippers.go:469] Request Headers:
	I0729 18:35:43.644995 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:35:43.645005 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:35:43.648537 1073226 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 18:35:43.648973 1073226 pod_ready.go:92] pod "kube-proxy-4p5r9" in "kube-system" namespace has status "Ready":"True"
	I0729 18:35:43.648990 1073226 pod_ready.go:81] duration metric: took 400.168446ms for pod "kube-proxy-4p5r9" in "kube-system" namespace to be "Ready" ...
	I0729 18:35:43.648999 1073226 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-gp282" in "kube-system" namespace to be "Ready" ...
	I0729 18:35:43.845152 1073226 request.go:629] Waited for 196.062448ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.225:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gp282
	I0729 18:35:43.845216 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gp282
	I0729 18:35:43.845223 1073226 round_trippers.go:469] Request Headers:
	I0729 18:35:43.845233 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:35:43.845238 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:35:43.848381 1073226 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 18:35:44.044571 1073226 request.go:629] Waited for 195.363564ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.225:8443/api/v1/nodes/ha-344156
	I0729 18:35:44.044665 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/nodes/ha-344156
	I0729 18:35:44.044670 1073226 round_trippers.go:469] Request Headers:
	I0729 18:35:44.044678 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:35:44.044683 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:35:44.048099 1073226 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 18:35:44.048940 1073226 pod_ready.go:92] pod "kube-proxy-gp282" in "kube-system" namespace has status "Ready":"True"
	I0729 18:35:44.048959 1073226 pod_ready.go:81] duration metric: took 399.953692ms for pod "kube-proxy-gp282" in "kube-system" namespace to be "Ready" ...
	I0729 18:35:44.048969 1073226 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-344156" in "kube-system" namespace to be "Ready" ...
	I0729 18:35:44.245199 1073226 request.go:629] Waited for 196.135922ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.225:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-344156
	I0729 18:35:44.245280 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-344156
	I0729 18:35:44.245289 1073226 round_trippers.go:469] Request Headers:
	I0729 18:35:44.245298 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:35:44.245303 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:35:44.248683 1073226 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 18:35:44.444676 1073226 request.go:629] Waited for 195.372268ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.225:8443/api/v1/nodes/ha-344156
	I0729 18:35:44.444739 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/nodes/ha-344156
	I0729 18:35:44.444744 1073226 round_trippers.go:469] Request Headers:
	I0729 18:35:44.444753 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:35:44.444757 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:35:44.447828 1073226 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 18:35:44.448490 1073226 pod_ready.go:92] pod "kube-scheduler-ha-344156" in "kube-system" namespace has status "Ready":"True"
	I0729 18:35:44.448512 1073226 pod_ready.go:81] duration metric: took 399.537008ms for pod "kube-scheduler-ha-344156" in "kube-system" namespace to be "Ready" ...
	I0729 18:35:44.448523 1073226 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-344156-m02" in "kube-system" namespace to be "Ready" ...
	I0729 18:35:44.644604 1073226 request.go:629] Waited for 195.98334ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.225:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-344156-m02
	I0729 18:35:44.644666 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-344156-m02
	I0729 18:35:44.644673 1073226 round_trippers.go:469] Request Headers:
	I0729 18:35:44.644683 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:35:44.644689 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:35:44.648755 1073226 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 18:35:44.844787 1073226 request.go:629] Waited for 195.371689ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.225:8443/api/v1/nodes/ha-344156-m02
	I0729 18:35:44.844876 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/nodes/ha-344156-m02
	I0729 18:35:44.844884 1073226 round_trippers.go:469] Request Headers:
	I0729 18:35:44.844919 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:35:44.844936 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:35:44.848291 1073226 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 18:35:44.848940 1073226 pod_ready.go:92] pod "kube-scheduler-ha-344156-m02" in "kube-system" namespace has status "Ready":"True"
	I0729 18:35:44.848962 1073226 pod_ready.go:81] duration metric: took 400.431043ms for pod "kube-scheduler-ha-344156-m02" in "kube-system" namespace to be "Ready" ...
	I0729 18:35:44.848976 1073226 pod_ready.go:38] duration metric: took 4.001172836s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
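The pod_ready waits logged above amount to reading the Ready condition off each kube-system pod and re-polling on a short interval. A minimal client-go sketch of that check, assuming a kubeconfig at the default location and borrowing one pod name from this run as a placeholder:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True, the same signal pod_ready.go logs.
func podReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "etcd-ha-344156-m02", metav1.GetOptions{})
		if err == nil && podReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(500 * time.Millisecond) // roughly the ~500ms cadence visible in the log timestamps
	}
}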
	I0729 18:35:44.848999 1073226 api_server.go:52] waiting for apiserver process to appear ...
	I0729 18:35:44.849071 1073226 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:35:44.865607 1073226 api_server.go:72] duration metric: took 17.834187388s to wait for apiserver process to appear ...
	I0729 18:35:44.865631 1073226 api_server.go:88] waiting for apiserver healthz status ...
	I0729 18:35:44.865654 1073226 api_server.go:253] Checking apiserver healthz at https://192.168.39.225:8443/healthz ...
	I0729 18:35:44.870139 1073226 api_server.go:279] https://192.168.39.225:8443/healthz returned 200:
	ok
	I0729 18:35:44.870279 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/version
	I0729 18:35:44.870292 1073226 round_trippers.go:469] Request Headers:
	I0729 18:35:44.870303 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:35:44.870311 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:35:44.871142 1073226 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0729 18:35:44.871254 1073226 api_server.go:141] control plane version: v1.30.3
	I0729 18:35:44.871270 1073226 api_server.go:131] duration metric: took 5.634016ms to wait for apiserver health ...
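The api_server health wait above is a GET against /healthz (expecting the literal body "ok") followed by a read of /version. A small sketch of both through client-go, again assuming the default kubeconfig:

package main

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// /healthz returns the plain-text body "ok" when the apiserver is healthy.
	body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(context.TODO())
	fmt.Printf("healthz: %s (err: %v)\n", body, err)
	// /version is where the "control plane version: v1.30.3" line above comes from.
	if v, err := cs.Discovery().ServerVersion(); err == nil {
		fmt.Println("control plane version:", v.GitVersion)
	}
}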
	I0729 18:35:44.871278 1073226 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 18:35:45.044849 1073226 request.go:629] Waited for 173.431592ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.225:8443/api/v1/namespaces/kube-system/pods
	I0729 18:35:45.044908 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/namespaces/kube-system/pods
	I0729 18:35:45.044913 1073226 round_trippers.go:469] Request Headers:
	I0729 18:35:45.044921 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:35:45.044925 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:35:45.050279 1073226 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0729 18:35:45.054829 1073226 system_pods.go:59] 17 kube-system pods found
	I0729 18:35:45.054873 1073226 system_pods.go:61] "coredns-7db6d8ff4d-5slmg" [f2aca93c-209e-48b6-a9a5-692bdf185129] Running
	I0729 18:35:45.054880 1073226 system_pods.go:61] "coredns-7db6d8ff4d-h5h7v" [b2b09553-dd59-44ab-a738-41e872defd34] Running
	I0729 18:35:45.054886 1073226 system_pods.go:61] "etcd-ha-344156" [2e8b83d5-7017-4608-800a-47e3400d7202] Running
	I0729 18:35:45.054892 1073226 system_pods.go:61] "etcd-ha-344156-m02" [b5f24011-5d19-4d79-9ce3-512d04f85f7b] Running
	I0729 18:35:45.054896 1073226 system_pods.go:61] "kindnet-84nqp" [f4e18e53-1c72-440f-82b2-bd1b4306af12] Running
	I0729 18:35:45.054903 1073226 system_pods.go:61] "kindnet-b85cc" [f441d276-e90f-447c-add8-ca3ff1cfe1b7] Running
	I0729 18:35:45.054906 1073226 system_pods.go:61] "kube-apiserver-ha-344156" [21dabe32-a355-40dd-a5fa-07799c64e9c8] Running
	I0729 18:35:45.054913 1073226 system_pods.go:61] "kube-apiserver-ha-344156-m02" [1b4acc44-23c7-4357-aa12-1b8c334ee75b] Running
	I0729 18:35:45.054916 1073226 system_pods.go:61] "kube-controller-manager-ha-344156" [f978182c-8550-4c1f-9bd2-2472243bcff3] Running
	I0729 18:35:45.054920 1073226 system_pods.go:61] "kube-controller-manager-ha-344156-m02" [64231ae8-189e-4209-b17f-ebc54671ae12] Running
	I0729 18:35:45.054924 1073226 system_pods.go:61] "kube-proxy-4p5r9" [de6a7e19-b62d-4fb8-80f1-91f95f682925] Running
	I0729 18:35:45.054930 1073226 system_pods.go:61] "kube-proxy-gp282" [abf94303-b608-45b5-ae8b-9288be614a8f] Running
	I0729 18:35:45.054933 1073226 system_pods.go:61] "kube-scheduler-ha-344156" [f553855a-6964-49d8-81e3-da002793db58] Running
	I0729 18:35:45.054939 1073226 system_pods.go:61] "kube-scheduler-ha-344156-m02" [18eb83e2-8567-4b2d-a205-711e500cedca] Running
	I0729 18:35:45.054942 1073226 system_pods.go:61] "kube-vip-ha-344156" [586052c5-c670-4957-b052-e2a7bf8bafb2] Running
	I0729 18:35:45.054945 1073226 system_pods.go:61] "kube-vip-ha-344156-m02" [a7d6e797-e7c1-457f-820e-a08d50f0a954] Running
	I0729 18:35:45.054948 1073226 system_pods.go:61] "storage-provisioner" [3ea00f25-122f-4a18-9d69-3606cfddf4d9] Running
	I0729 18:35:45.054954 1073226 system_pods.go:74] duration metric: took 183.670778ms to wait for pod list to return data ...
	I0729 18:35:45.054964 1073226 default_sa.go:34] waiting for default service account to be created ...
	I0729 18:35:45.244256 1073226 request.go:629] Waited for 189.211461ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.225:8443/api/v1/namespaces/default/serviceaccounts
	I0729 18:35:45.244362 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/namespaces/default/serviceaccounts
	I0729 18:35:45.244370 1073226 round_trippers.go:469] Request Headers:
	I0729 18:35:45.244382 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:35:45.244390 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:35:45.247495 1073226 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 18:35:45.247800 1073226 default_sa.go:45] found service account: "default"
	I0729 18:35:45.247820 1073226 default_sa.go:55] duration metric: took 192.849189ms for default service account to be created ...
	I0729 18:35:45.247832 1073226 system_pods.go:116] waiting for k8s-apps to be running ...
	I0729 18:35:45.444710 1073226 request.go:629] Waited for 196.788818ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.225:8443/api/v1/namespaces/kube-system/pods
	I0729 18:35:45.444776 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/namespaces/kube-system/pods
	I0729 18:35:45.444781 1073226 round_trippers.go:469] Request Headers:
	I0729 18:35:45.444789 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:35:45.444793 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:35:45.450315 1073226 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0729 18:35:45.454802 1073226 system_pods.go:86] 17 kube-system pods found
	I0729 18:35:45.454833 1073226 system_pods.go:89] "coredns-7db6d8ff4d-5slmg" [f2aca93c-209e-48b6-a9a5-692bdf185129] Running
	I0729 18:35:45.454841 1073226 system_pods.go:89] "coredns-7db6d8ff4d-h5h7v" [b2b09553-dd59-44ab-a738-41e872defd34] Running
	I0729 18:35:45.454862 1073226 system_pods.go:89] "etcd-ha-344156" [2e8b83d5-7017-4608-800a-47e3400d7202] Running
	I0729 18:35:45.454868 1073226 system_pods.go:89] "etcd-ha-344156-m02" [b5f24011-5d19-4d79-9ce3-512d04f85f7b] Running
	I0729 18:35:45.454874 1073226 system_pods.go:89] "kindnet-84nqp" [f4e18e53-1c72-440f-82b2-bd1b4306af12] Running
	I0729 18:35:45.454880 1073226 system_pods.go:89] "kindnet-b85cc" [f441d276-e90f-447c-add8-ca3ff1cfe1b7] Running
	I0729 18:35:45.454887 1073226 system_pods.go:89] "kube-apiserver-ha-344156" [21dabe32-a355-40dd-a5fa-07799c64e9c8] Running
	I0729 18:35:45.454894 1073226 system_pods.go:89] "kube-apiserver-ha-344156-m02" [1b4acc44-23c7-4357-aa12-1b8c334ee75b] Running
	I0729 18:35:45.454905 1073226 system_pods.go:89] "kube-controller-manager-ha-344156" [f978182c-8550-4c1f-9bd2-2472243bcff3] Running
	I0729 18:35:45.454917 1073226 system_pods.go:89] "kube-controller-manager-ha-344156-m02" [64231ae8-189e-4209-b17f-ebc54671ae12] Running
	I0729 18:35:45.454924 1073226 system_pods.go:89] "kube-proxy-4p5r9" [de6a7e19-b62d-4fb8-80f1-91f95f682925] Running
	I0729 18:35:45.454931 1073226 system_pods.go:89] "kube-proxy-gp282" [abf94303-b608-45b5-ae8b-9288be614a8f] Running
	I0729 18:35:45.454941 1073226 system_pods.go:89] "kube-scheduler-ha-344156" [f553855a-6964-49d8-81e3-da002793db58] Running
	I0729 18:35:45.454951 1073226 system_pods.go:89] "kube-scheduler-ha-344156-m02" [18eb83e2-8567-4b2d-a205-711e500cedca] Running
	I0729 18:35:45.454959 1073226 system_pods.go:89] "kube-vip-ha-344156" [586052c5-c670-4957-b052-e2a7bf8bafb2] Running
	I0729 18:35:45.454964 1073226 system_pods.go:89] "kube-vip-ha-344156-m02" [a7d6e797-e7c1-457f-820e-a08d50f0a954] Running
	I0729 18:35:45.454970 1073226 system_pods.go:89] "storage-provisioner" [3ea00f25-122f-4a18-9d69-3606cfddf4d9] Running
	I0729 18:35:45.454981 1073226 system_pods.go:126] duration metric: took 207.141096ms to wait for k8s-apps to be running ...
	I0729 18:35:45.454994 1073226 system_svc.go:44] waiting for kubelet service to be running ....
	I0729 18:35:45.455050 1073226 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 18:35:45.470678 1073226 system_svc.go:56] duration metric: took 15.673314ms WaitForService to wait for kubelet
	I0729 18:35:45.470713 1073226 kubeadm.go:582] duration metric: took 18.439296601s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
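The WaitForService step above runs "sudo systemctl is-active --quiet service kubelet" over SSH and only looks at the exit code. Run locally for brevity, the same check is just:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// --quiet suppresses output; a zero exit code means the kubelet unit is active.
	// (The test runs this remotely through its ssh_runner; this local run is illustrative.)
	err := exec.Command("sudo", "systemctl", "is-active", "--quiet", "service", "kubelet").Run()
	fmt.Println("kubelet active:", err == nil)
}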
	I0729 18:35:45.470743 1073226 node_conditions.go:102] verifying NodePressure condition ...
	I0729 18:35:45.645135 1073226 request.go:629] Waited for 174.314253ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.225:8443/api/v1/nodes
	I0729 18:35:45.645218 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/nodes
	I0729 18:35:45.645224 1073226 round_trippers.go:469] Request Headers:
	I0729 18:35:45.645232 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:35:45.645237 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:35:45.648752 1073226 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 18:35:45.649446 1073226 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 18:35:45.649469 1073226 node_conditions.go:123] node cpu capacity is 2
	I0729 18:35:45.649483 1073226 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 18:35:45.649487 1073226 node_conditions.go:123] node cpu capacity is 2
	I0729 18:35:45.649491 1073226 node_conditions.go:105] duration metric: took 178.742302ms to run NodePressure ...
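The NodePressure verification above reads each node's reported ephemeral-storage and CPU capacity from its status. A compact client-go equivalent, with the default kubeconfig again assumed:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		// The two values logged above: ephemeral storage capacity and CPU count per node.
		eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("%s ephemeral-storage=%s cpu=%s\n", n.Name, eph.String(), cpu.String())
	}
}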
	I0729 18:35:45.649505 1073226 start.go:241] waiting for startup goroutines ...
	I0729 18:35:45.649535 1073226 start.go:255] writing updated cluster config ...
	I0729 18:35:45.651570 1073226 out.go:177] 
	I0729 18:35:45.653014 1073226 config.go:182] Loaded profile config "ha-344156": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 18:35:45.653099 1073226 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156/config.json ...
	I0729 18:35:45.657777 1073226 out.go:177] * Starting "ha-344156-m03" control-plane node in "ha-344156" cluster
	I0729 18:35:45.658705 1073226 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 18:35:45.658726 1073226 cache.go:56] Caching tarball of preloaded images
	I0729 18:35:45.658821 1073226 preload.go:172] Found /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 18:35:45.658832 1073226 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0729 18:35:45.658936 1073226 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156/config.json ...
	I0729 18:35:45.659094 1073226 start.go:360] acquireMachinesLock for ha-344156-m03: {Name:mk0d8d947666df844b5fc2c0e0eebbfed69b4140 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 18:35:45.659140 1073226 start.go:364] duration metric: took 26.086µs to acquireMachinesLock for "ha-344156-m03"
	I0729 18:35:45.659164 1073226 start.go:93] Provisioning new machine with config: &{Name:ha-344156 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-344156 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.225 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.249 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 18:35:45.659253 1073226 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0729 18:35:45.660493 1073226 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 18:35:45.660595 1073226 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:35:45.660635 1073226 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:35:45.675860 1073226 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34877
	I0729 18:35:45.676276 1073226 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:35:45.676811 1073226 main.go:141] libmachine: Using API Version  1
	I0729 18:35:45.676834 1073226 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:35:45.677106 1073226 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:35:45.677277 1073226 main.go:141] libmachine: (ha-344156-m03) Calling .GetMachineName
	I0729 18:35:45.677391 1073226 main.go:141] libmachine: (ha-344156-m03) Calling .DriverName
	I0729 18:35:45.677523 1073226 start.go:159] libmachine.API.Create for "ha-344156" (driver="kvm2")
	I0729 18:35:45.677552 1073226 client.go:168] LocalClient.Create starting
	I0729 18:35:45.677583 1073226 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem
	I0729 18:35:45.677621 1073226 main.go:141] libmachine: Decoding PEM data...
	I0729 18:35:45.677636 1073226 main.go:141] libmachine: Parsing certificate...
	I0729 18:35:45.677689 1073226 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/cert.pem
	I0729 18:35:45.677706 1073226 main.go:141] libmachine: Decoding PEM data...
	I0729 18:35:45.677716 1073226 main.go:141] libmachine: Parsing certificate...
	I0729 18:35:45.677730 1073226 main.go:141] libmachine: Running pre-create checks...
	I0729 18:35:45.677738 1073226 main.go:141] libmachine: (ha-344156-m03) Calling .PreCreateCheck
	I0729 18:35:45.677911 1073226 main.go:141] libmachine: (ha-344156-m03) Calling .GetConfigRaw
	I0729 18:35:45.678294 1073226 main.go:141] libmachine: Creating machine...
	I0729 18:35:45.678308 1073226 main.go:141] libmachine: (ha-344156-m03) Calling .Create
	I0729 18:35:45.678425 1073226 main.go:141] libmachine: (ha-344156-m03) Creating KVM machine...
	I0729 18:35:45.679748 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | found existing default KVM network
	I0729 18:35:45.679836 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | found existing private KVM network mk-ha-344156
	I0729 18:35:45.679964 1073226 main.go:141] libmachine: (ha-344156-m03) Setting up store path in /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/ha-344156-m03 ...
	I0729 18:35:45.679987 1073226 main.go:141] libmachine: (ha-344156-m03) Building disk image from file:///home/jenkins/minikube-integration/19312-1055011/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso
	I0729 18:35:45.680076 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | I0729 18:35:45.679964 1073996 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19312-1055011/.minikube
	I0729 18:35:45.680223 1073226 main.go:141] libmachine: (ha-344156-m03) Downloading /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19312-1055011/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso...
	I0729 18:35:45.953826 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | I0729 18:35:45.953719 1073996 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/ha-344156-m03/id_rsa...
	I0729 18:35:46.074158 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | I0729 18:35:46.074026 1073996 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/ha-344156-m03/ha-344156-m03.rawdisk...
	I0729 18:35:46.074200 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | Writing magic tar header
	I0729 18:35:46.074215 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | Writing SSH key tar header
	I0729 18:35:46.074227 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | I0729 18:35:46.074138 1073996 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/ha-344156-m03 ...
	I0729 18:35:46.074244 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/ha-344156-m03
	I0729 18:35:46.074344 1073226 main.go:141] libmachine: (ha-344156-m03) Setting executable bit set on /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/ha-344156-m03 (perms=drwx------)
	I0729 18:35:46.074374 1073226 main.go:141] libmachine: (ha-344156-m03) Setting executable bit set on /home/jenkins/minikube-integration/19312-1055011/.minikube/machines (perms=drwxr-xr-x)
	I0729 18:35:46.074389 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19312-1055011/.minikube/machines
	I0729 18:35:46.074404 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19312-1055011/.minikube
	I0729 18:35:46.074414 1073226 main.go:141] libmachine: (ha-344156-m03) Setting executable bit set on /home/jenkins/minikube-integration/19312-1055011/.minikube (perms=drwxr-xr-x)
	I0729 18:35:46.074426 1073226 main.go:141] libmachine: (ha-344156-m03) Setting executable bit set on /home/jenkins/minikube-integration/19312-1055011 (perms=drwxrwxr-x)
	I0729 18:35:46.074434 1073226 main.go:141] libmachine: (ha-344156-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0729 18:35:46.074442 1073226 main.go:141] libmachine: (ha-344156-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0729 18:35:46.074450 1073226 main.go:141] libmachine: (ha-344156-m03) Creating domain...
	I0729 18:35:46.074461 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19312-1055011
	I0729 18:35:46.074469 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0729 18:35:46.074477 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | Checking permissions on dir: /home/jenkins
	I0729 18:35:46.074488 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | Checking permissions on dir: /home
	I0729 18:35:46.074520 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | Skipping /home - not owner
	I0729 18:35:46.075370 1073226 main.go:141] libmachine: (ha-344156-m03) define libvirt domain using xml: 
	I0729 18:35:46.075390 1073226 main.go:141] libmachine: (ha-344156-m03) <domain type='kvm'>
	I0729 18:35:46.075396 1073226 main.go:141] libmachine: (ha-344156-m03)   <name>ha-344156-m03</name>
	I0729 18:35:46.075402 1073226 main.go:141] libmachine: (ha-344156-m03)   <memory unit='MiB'>2200</memory>
	I0729 18:35:46.075410 1073226 main.go:141] libmachine: (ha-344156-m03)   <vcpu>2</vcpu>
	I0729 18:35:46.075421 1073226 main.go:141] libmachine: (ha-344156-m03)   <features>
	I0729 18:35:46.075429 1073226 main.go:141] libmachine: (ha-344156-m03)     <acpi/>
	I0729 18:35:46.075435 1073226 main.go:141] libmachine: (ha-344156-m03)     <apic/>
	I0729 18:35:46.075442 1073226 main.go:141] libmachine: (ha-344156-m03)     <pae/>
	I0729 18:35:46.075448 1073226 main.go:141] libmachine: (ha-344156-m03)     
	I0729 18:35:46.075456 1073226 main.go:141] libmachine: (ha-344156-m03)   </features>
	I0729 18:35:46.075464 1073226 main.go:141] libmachine: (ha-344156-m03)   <cpu mode='host-passthrough'>
	I0729 18:35:46.075471 1073226 main.go:141] libmachine: (ha-344156-m03)   
	I0729 18:35:46.075477 1073226 main.go:141] libmachine: (ha-344156-m03)   </cpu>
	I0729 18:35:46.075483 1073226 main.go:141] libmachine: (ha-344156-m03)   <os>
	I0729 18:35:46.075494 1073226 main.go:141] libmachine: (ha-344156-m03)     <type>hvm</type>
	I0729 18:35:46.075503 1073226 main.go:141] libmachine: (ha-344156-m03)     <boot dev='cdrom'/>
	I0729 18:35:46.075512 1073226 main.go:141] libmachine: (ha-344156-m03)     <boot dev='hd'/>
	I0729 18:35:46.075525 1073226 main.go:141] libmachine: (ha-344156-m03)     <bootmenu enable='no'/>
	I0729 18:35:46.075535 1073226 main.go:141] libmachine: (ha-344156-m03)   </os>
	I0729 18:35:46.075540 1073226 main.go:141] libmachine: (ha-344156-m03)   <devices>
	I0729 18:35:46.075556 1073226 main.go:141] libmachine: (ha-344156-m03)     <disk type='file' device='cdrom'>
	I0729 18:35:46.075566 1073226 main.go:141] libmachine: (ha-344156-m03)       <source file='/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/ha-344156-m03/boot2docker.iso'/>
	I0729 18:35:46.075596 1073226 main.go:141] libmachine: (ha-344156-m03)       <target dev='hdc' bus='scsi'/>
	I0729 18:35:46.075618 1073226 main.go:141] libmachine: (ha-344156-m03)       <readonly/>
	I0729 18:35:46.075629 1073226 main.go:141] libmachine: (ha-344156-m03)     </disk>
	I0729 18:35:46.075638 1073226 main.go:141] libmachine: (ha-344156-m03)     <disk type='file' device='disk'>
	I0729 18:35:46.075654 1073226 main.go:141] libmachine: (ha-344156-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0729 18:35:46.075666 1073226 main.go:141] libmachine: (ha-344156-m03)       <source file='/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/ha-344156-m03/ha-344156-m03.rawdisk'/>
	I0729 18:35:46.075678 1073226 main.go:141] libmachine: (ha-344156-m03)       <target dev='hda' bus='virtio'/>
	I0729 18:35:46.075685 1073226 main.go:141] libmachine: (ha-344156-m03)     </disk>
	I0729 18:35:46.075706 1073226 main.go:141] libmachine: (ha-344156-m03)     <interface type='network'>
	I0729 18:35:46.075722 1073226 main.go:141] libmachine: (ha-344156-m03)       <source network='mk-ha-344156'/>
	I0729 18:35:46.075738 1073226 main.go:141] libmachine: (ha-344156-m03)       <model type='virtio'/>
	I0729 18:35:46.075754 1073226 main.go:141] libmachine: (ha-344156-m03)     </interface>
	I0729 18:35:46.075764 1073226 main.go:141] libmachine: (ha-344156-m03)     <interface type='network'>
	I0729 18:35:46.075775 1073226 main.go:141] libmachine: (ha-344156-m03)       <source network='default'/>
	I0729 18:35:46.075783 1073226 main.go:141] libmachine: (ha-344156-m03)       <model type='virtio'/>
	I0729 18:35:46.075793 1073226 main.go:141] libmachine: (ha-344156-m03)     </interface>
	I0729 18:35:46.075802 1073226 main.go:141] libmachine: (ha-344156-m03)     <serial type='pty'>
	I0729 18:35:46.075812 1073226 main.go:141] libmachine: (ha-344156-m03)       <target port='0'/>
	I0729 18:35:46.075821 1073226 main.go:141] libmachine: (ha-344156-m03)     </serial>
	I0729 18:35:46.075831 1073226 main.go:141] libmachine: (ha-344156-m03)     <console type='pty'>
	I0729 18:35:46.075853 1073226 main.go:141] libmachine: (ha-344156-m03)       <target type='serial' port='0'/>
	I0729 18:35:46.075869 1073226 main.go:141] libmachine: (ha-344156-m03)     </console>
	I0729 18:35:46.075883 1073226 main.go:141] libmachine: (ha-344156-m03)     <rng model='virtio'>
	I0729 18:35:46.075895 1073226 main.go:141] libmachine: (ha-344156-m03)       <backend model='random'>/dev/random</backend>
	I0729 18:35:46.075903 1073226 main.go:141] libmachine: (ha-344156-m03)     </rng>
	I0729 18:35:46.075913 1073226 main.go:141] libmachine: (ha-344156-m03)     
	I0729 18:35:46.075921 1073226 main.go:141] libmachine: (ha-344156-m03)     
	I0729 18:35:46.075929 1073226 main.go:141] libmachine: (ha-344156-m03)   </devices>
	I0729 18:35:46.075946 1073226 main.go:141] libmachine: (ha-344156-m03) </domain>
	I0729 18:35:46.075961 1073226 main.go:141] libmachine: (ha-344156-m03) 
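The domain XML printed above is handed to libvirt to define and boot the VM. A sketch of that step using the github.com/libvirt/libvirt-go bindings (an assumption for illustration; the test actually drives this through the docker-machine-driver-kvm2 plugin), with the XML assumed to be saved to a placeholder file:

package main

import (
	"fmt"
	"os"

	libvirt "github.com/libvirt/libvirt-go"
)

func main() {
	// Placeholder path: the <domain type='kvm'> document logged above, saved to disk.
	xml, err := os.ReadFile("ha-344156-m03.xml")
	if err != nil {
		panic(err)
	}
	conn, err := libvirt.NewConnect("qemu:///system") // the KVMQemuURI from the cluster config
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	dom, err := conn.DomainDefineXML(string(xml)) // "define libvirt domain using xml"
	if err != nil {
		panic(err)
	}
	defer dom.Free()

	if err := dom.Create(); err != nil { // "Creating domain..." boots the VM
		panic(err)
	}
	fmt.Println("domain defined and started")
}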
	I0729 18:35:46.082480 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | domain ha-344156-m03 has defined MAC address 52:54:00:9c:10:53 in network default
	I0729 18:35:46.083122 1073226 main.go:141] libmachine: (ha-344156-m03) Ensuring networks are active...
	I0729 18:35:46.083141 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | domain ha-344156-m03 has defined MAC address 52:54:00:49:5c:73 in network mk-ha-344156
	I0729 18:35:46.083953 1073226 main.go:141] libmachine: (ha-344156-m03) Ensuring network default is active
	I0729 18:35:46.084255 1073226 main.go:141] libmachine: (ha-344156-m03) Ensuring network mk-ha-344156 is active
	I0729 18:35:46.084607 1073226 main.go:141] libmachine: (ha-344156-m03) Getting domain xml...
	I0729 18:35:46.085275 1073226 main.go:141] libmachine: (ha-344156-m03) Creating domain...
	I0729 18:35:47.305641 1073226 main.go:141] libmachine: (ha-344156-m03) Waiting to get IP...
	I0729 18:35:47.306359 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | domain ha-344156-m03 has defined MAC address 52:54:00:49:5c:73 in network mk-ha-344156
	I0729 18:35:47.306773 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | unable to find current IP address of domain ha-344156-m03 in network mk-ha-344156
	I0729 18:35:47.306809 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | I0729 18:35:47.306750 1073996 retry.go:31] will retry after 290.792301ms: waiting for machine to come up
	I0729 18:35:47.599494 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | domain ha-344156-m03 has defined MAC address 52:54:00:49:5c:73 in network mk-ha-344156
	I0729 18:35:47.599929 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | unable to find current IP address of domain ha-344156-m03 in network mk-ha-344156
	I0729 18:35:47.599979 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | I0729 18:35:47.599871 1073996 retry.go:31] will retry after 323.451262ms: waiting for machine to come up
	I0729 18:35:47.925368 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | domain ha-344156-m03 has defined MAC address 52:54:00:49:5c:73 in network mk-ha-344156
	I0729 18:35:47.925857 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | unable to find current IP address of domain ha-344156-m03 in network mk-ha-344156
	I0729 18:35:47.925884 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | I0729 18:35:47.925816 1073996 retry.go:31] will retry after 397.336676ms: waiting for machine to come up
	I0729 18:35:48.325126 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | domain ha-344156-m03 has defined MAC address 52:54:00:49:5c:73 in network mk-ha-344156
	I0729 18:35:48.325651 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | unable to find current IP address of domain ha-344156-m03 in network mk-ha-344156
	I0729 18:35:48.325681 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | I0729 18:35:48.325604 1073996 retry.go:31] will retry after 378.992466ms: waiting for machine to come up
	I0729 18:35:48.706215 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | domain ha-344156-m03 has defined MAC address 52:54:00:49:5c:73 in network mk-ha-344156
	I0729 18:35:48.706597 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | unable to find current IP address of domain ha-344156-m03 in network mk-ha-344156
	I0729 18:35:48.706649 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | I0729 18:35:48.706565 1073996 retry.go:31] will retry after 709.195134ms: waiting for machine to come up
	I0729 18:35:49.417593 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | domain ha-344156-m03 has defined MAC address 52:54:00:49:5c:73 in network mk-ha-344156
	I0729 18:35:49.418035 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | unable to find current IP address of domain ha-344156-m03 in network mk-ha-344156
	I0729 18:35:49.418061 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | I0729 18:35:49.417987 1073996 retry.go:31] will retry after 695.222412ms: waiting for machine to come up
	I0729 18:35:50.114890 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | domain ha-344156-m03 has defined MAC address 52:54:00:49:5c:73 in network mk-ha-344156
	I0729 18:35:50.115433 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | unable to find current IP address of domain ha-344156-m03 in network mk-ha-344156
	I0729 18:35:50.115489 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | I0729 18:35:50.115401 1073996 retry.go:31] will retry after 1.162350407s: waiting for machine to come up
	I0729 18:35:51.278969 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | domain ha-344156-m03 has defined MAC address 52:54:00:49:5c:73 in network mk-ha-344156
	I0729 18:35:51.279365 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | unable to find current IP address of domain ha-344156-m03 in network mk-ha-344156
	I0729 18:35:51.279395 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | I0729 18:35:51.279308 1073996 retry.go:31] will retry after 1.192041574s: waiting for machine to come up
	I0729 18:35:52.473632 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | domain ha-344156-m03 has defined MAC address 52:54:00:49:5c:73 in network mk-ha-344156
	I0729 18:35:52.474049 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | unable to find current IP address of domain ha-344156-m03 in network mk-ha-344156
	I0729 18:35:52.474073 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | I0729 18:35:52.474007 1073996 retry.go:31] will retry after 1.569107876s: waiting for machine to come up
	I0729 18:35:54.045735 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | domain ha-344156-m03 has defined MAC address 52:54:00:49:5c:73 in network mk-ha-344156
	I0729 18:35:54.046153 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | unable to find current IP address of domain ha-344156-m03 in network mk-ha-344156
	I0729 18:35:54.046178 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | I0729 18:35:54.046098 1073996 retry.go:31] will retry after 1.434983344s: waiting for machine to come up
	I0729 18:35:55.483034 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | domain ha-344156-m03 has defined MAC address 52:54:00:49:5c:73 in network mk-ha-344156
	I0729 18:35:55.483461 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | unable to find current IP address of domain ha-344156-m03 in network mk-ha-344156
	I0729 18:35:55.483487 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | I0729 18:35:55.483412 1073996 retry.go:31] will retry after 2.844985256s: waiting for machine to come up
	I0729 18:35:58.331917 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | domain ha-344156-m03 has defined MAC address 52:54:00:49:5c:73 in network mk-ha-344156
	I0729 18:35:58.332323 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | unable to find current IP address of domain ha-344156-m03 in network mk-ha-344156
	I0729 18:35:58.332346 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | I0729 18:35:58.332285 1073996 retry.go:31] will retry after 2.425853936s: waiting for machine to come up
	I0729 18:36:00.759858 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | domain ha-344156-m03 has defined MAC address 52:54:00:49:5c:73 in network mk-ha-344156
	I0729 18:36:00.760325 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | unable to find current IP address of domain ha-344156-m03 in network mk-ha-344156
	I0729 18:36:00.760390 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | I0729 18:36:00.760321 1073996 retry.go:31] will retry after 3.160933834s: waiting for machine to come up
	I0729 18:36:03.924027 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | domain ha-344156-m03 has defined MAC address 52:54:00:49:5c:73 in network mk-ha-344156
	I0729 18:36:03.924524 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | unable to find current IP address of domain ha-344156-m03 in network mk-ha-344156
	I0729 18:36:03.924557 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | I0729 18:36:03.924459 1073996 retry.go:31] will retry after 5.464362473s: waiting for machine to come up
	I0729 18:36:09.392030 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | domain ha-344156-m03 has defined MAC address 52:54:00:49:5c:73 in network mk-ha-344156
	I0729 18:36:09.392593 1073226 main.go:141] libmachine: (ha-344156-m03) Found IP for machine: 192.168.39.148
	I0729 18:36:09.392627 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | domain ha-344156-m03 has current primary IP address 192.168.39.148 and MAC address 52:54:00:49:5c:73 in network mk-ha-344156
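The retry loop above is the kvm2 driver polling libvirt's DHCP lease table until the VM's MAC picks up an address. A minimal way to watch the same table by hand, assuming virsh is installed on the KVM host (network name and MAC taken from the log; the driver itself talks to libvirt directly, not via virsh):

    # Poll the libvirt network for the lease the driver is waiting on
    virsh net-dhcp-leases mk-ha-344156 | grep -i '52:54:00:49:5c:73'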
	I0729 18:36:09.392636 1073226 main.go:141] libmachine: (ha-344156-m03) Reserving static IP address...
	I0729 18:36:09.393026 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | unable to find host DHCP lease matching {name: "ha-344156-m03", mac: "52:54:00:49:5c:73", ip: "192.168.39.148"} in network mk-ha-344156
	I0729 18:36:09.465204 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | Getting to WaitForSSH function...
	I0729 18:36:09.465241 1073226 main.go:141] libmachine: (ha-344156-m03) Reserved static IP address: 192.168.39.148
	I0729 18:36:09.465292 1073226 main.go:141] libmachine: (ha-344156-m03) Waiting for SSH to be available...
	I0729 18:36:09.468097 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | domain ha-344156-m03 has defined MAC address 52:54:00:49:5c:73 in network mk-ha-344156
	I0729 18:36:09.468632 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:5c:73", ip: ""} in network mk-ha-344156: {Iface:virbr1 ExpiryTime:2024-07-29 19:36:00 +0000 UTC Type:0 Mac:52:54:00:49:5c:73 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:minikube Clientid:01:52:54:00:49:5c:73}
	I0729 18:36:09.468659 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | domain ha-344156-m03 has defined IP address 192.168.39.148 and MAC address 52:54:00:49:5c:73 in network mk-ha-344156
	I0729 18:36:09.468820 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | Using SSH client type: external
	I0729 18:36:09.468844 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/ha-344156-m03/id_rsa (-rw-------)
	I0729 18:36:09.468869 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.148 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/ha-344156-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 18:36:09.468880 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | About to run SSH command:
	I0729 18:36:09.468901 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | exit 0
	I0729 18:36:09.590954 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | SSH cmd err, output: <nil>: 
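The argument slice logged above is the external OpenSSH probe used for WaitForSSH; exit status 0 means the guest's sshd is reachable. Rearranged into an ordinary command line (options before the destination; every value copied from the log), the equivalent is:

    # WaitForSSH probe, hand-run; succeeds once sshd answers on the new node
    ssh -F /dev/null \
        -o ConnectionAttempts=3 -o ConnectTimeout=10 \
        -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet \
        -o PasswordAuthentication=no -o ServerAliveInterval=60 \
        -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
        -o IdentitiesOnly=yes \
        -i /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/ha-344156-m03/id_rsa \
        -p 22 docker@192.168.39.148 'exit 0'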
	I0729 18:36:09.591259 1073226 main.go:141] libmachine: (ha-344156-m03) KVM machine creation complete!
	I0729 18:36:09.591534 1073226 main.go:141] libmachine: (ha-344156-m03) Calling .GetConfigRaw
	I0729 18:36:09.592111 1073226 main.go:141] libmachine: (ha-344156-m03) Calling .DriverName
	I0729 18:36:09.592340 1073226 main.go:141] libmachine: (ha-344156-m03) Calling .DriverName
	I0729 18:36:09.592485 1073226 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0729 18:36:09.592495 1073226 main.go:141] libmachine: (ha-344156-m03) Calling .GetState
	I0729 18:36:09.593696 1073226 main.go:141] libmachine: Detecting operating system of created instance...
	I0729 18:36:09.593707 1073226 main.go:141] libmachine: Waiting for SSH to be available...
	I0729 18:36:09.593713 1073226 main.go:141] libmachine: Getting to WaitForSSH function...
	I0729 18:36:09.593719 1073226 main.go:141] libmachine: (ha-344156-m03) Calling .GetSSHHostname
	I0729 18:36:09.595771 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | domain ha-344156-m03 has defined MAC address 52:54:00:49:5c:73 in network mk-ha-344156
	I0729 18:36:09.596139 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:5c:73", ip: ""} in network mk-ha-344156: {Iface:virbr1 ExpiryTime:2024-07-29 19:36:00 +0000 UTC Type:0 Mac:52:54:00:49:5c:73 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:ha-344156-m03 Clientid:01:52:54:00:49:5c:73}
	I0729 18:36:09.596170 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | domain ha-344156-m03 has defined IP address 192.168.39.148 and MAC address 52:54:00:49:5c:73 in network mk-ha-344156
	I0729 18:36:09.596321 1073226 main.go:141] libmachine: (ha-344156-m03) Calling .GetSSHPort
	I0729 18:36:09.596472 1073226 main.go:141] libmachine: (ha-344156-m03) Calling .GetSSHKeyPath
	I0729 18:36:09.596613 1073226 main.go:141] libmachine: (ha-344156-m03) Calling .GetSSHKeyPath
	I0729 18:36:09.596753 1073226 main.go:141] libmachine: (ha-344156-m03) Calling .GetSSHUsername
	I0729 18:36:09.596893 1073226 main.go:141] libmachine: Using SSH client type: native
	I0729 18:36:09.597152 1073226 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.148 22 <nil> <nil>}
	I0729 18:36:09.597166 1073226 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0729 18:36:09.698186 1073226 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 18:36:09.698210 1073226 main.go:141] libmachine: Detecting the provisioner...
	I0729 18:36:09.698220 1073226 main.go:141] libmachine: (ha-344156-m03) Calling .GetSSHHostname
	I0729 18:36:09.701105 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | domain ha-344156-m03 has defined MAC address 52:54:00:49:5c:73 in network mk-ha-344156
	I0729 18:36:09.701524 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:5c:73", ip: ""} in network mk-ha-344156: {Iface:virbr1 ExpiryTime:2024-07-29 19:36:00 +0000 UTC Type:0 Mac:52:54:00:49:5c:73 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:ha-344156-m03 Clientid:01:52:54:00:49:5c:73}
	I0729 18:36:09.701553 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | domain ha-344156-m03 has defined IP address 192.168.39.148 and MAC address 52:54:00:49:5c:73 in network mk-ha-344156
	I0729 18:36:09.701787 1073226 main.go:141] libmachine: (ha-344156-m03) Calling .GetSSHPort
	I0729 18:36:09.702005 1073226 main.go:141] libmachine: (ha-344156-m03) Calling .GetSSHKeyPath
	I0729 18:36:09.702201 1073226 main.go:141] libmachine: (ha-344156-m03) Calling .GetSSHKeyPath
	I0729 18:36:09.702371 1073226 main.go:141] libmachine: (ha-344156-m03) Calling .GetSSHUsername
	I0729 18:36:09.702553 1073226 main.go:141] libmachine: Using SSH client type: native
	I0729 18:36:09.702766 1073226 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.148 22 <nil> <nil>}
	I0729 18:36:09.702782 1073226 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0729 18:36:09.811747 1073226 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0729 18:36:09.811828 1073226 main.go:141] libmachine: found compatible host: buildroot
	I0729 18:36:09.811838 1073226 main.go:141] libmachine: Provisioning with buildroot...
	I0729 18:36:09.811850 1073226 main.go:141] libmachine: (ha-344156-m03) Calling .GetMachineName
	I0729 18:36:09.812116 1073226 buildroot.go:166] provisioning hostname "ha-344156-m03"
	I0729 18:36:09.812151 1073226 main.go:141] libmachine: (ha-344156-m03) Calling .GetMachineName
	I0729 18:36:09.812379 1073226 main.go:141] libmachine: (ha-344156-m03) Calling .GetSSHHostname
	I0729 18:36:09.815003 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | domain ha-344156-m03 has defined MAC address 52:54:00:49:5c:73 in network mk-ha-344156
	I0729 18:36:09.815396 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:5c:73", ip: ""} in network mk-ha-344156: {Iface:virbr1 ExpiryTime:2024-07-29 19:36:00 +0000 UTC Type:0 Mac:52:54:00:49:5c:73 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:ha-344156-m03 Clientid:01:52:54:00:49:5c:73}
	I0729 18:36:09.815418 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | domain ha-344156-m03 has defined IP address 192.168.39.148 and MAC address 52:54:00:49:5c:73 in network mk-ha-344156
	I0729 18:36:09.815610 1073226 main.go:141] libmachine: (ha-344156-m03) Calling .GetSSHPort
	I0729 18:36:09.815800 1073226 main.go:141] libmachine: (ha-344156-m03) Calling .GetSSHKeyPath
	I0729 18:36:09.815959 1073226 main.go:141] libmachine: (ha-344156-m03) Calling .GetSSHKeyPath
	I0729 18:36:09.816102 1073226 main.go:141] libmachine: (ha-344156-m03) Calling .GetSSHUsername
	I0729 18:36:09.816247 1073226 main.go:141] libmachine: Using SSH client type: native
	I0729 18:36:09.816425 1073226 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.148 22 <nil> <nil>}
	I0729 18:36:09.816437 1073226 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-344156-m03 && echo "ha-344156-m03" | sudo tee /etc/hostname
	I0729 18:36:09.935241 1073226 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-344156-m03
	
	I0729 18:36:09.935276 1073226 main.go:141] libmachine: (ha-344156-m03) Calling .GetSSHHostname
	I0729 18:36:09.937946 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | domain ha-344156-m03 has defined MAC address 52:54:00:49:5c:73 in network mk-ha-344156
	I0729 18:36:09.938321 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:5c:73", ip: ""} in network mk-ha-344156: {Iface:virbr1 ExpiryTime:2024-07-29 19:36:00 +0000 UTC Type:0 Mac:52:54:00:49:5c:73 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:ha-344156-m03 Clientid:01:52:54:00:49:5c:73}
	I0729 18:36:09.938354 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | domain ha-344156-m03 has defined IP address 192.168.39.148 and MAC address 52:54:00:49:5c:73 in network mk-ha-344156
	I0729 18:36:09.938619 1073226 main.go:141] libmachine: (ha-344156-m03) Calling .GetSSHPort
	I0729 18:36:09.938833 1073226 main.go:141] libmachine: (ha-344156-m03) Calling .GetSSHKeyPath
	I0729 18:36:09.939058 1073226 main.go:141] libmachine: (ha-344156-m03) Calling .GetSSHKeyPath
	I0729 18:36:09.939246 1073226 main.go:141] libmachine: (ha-344156-m03) Calling .GetSSHUsername
	I0729 18:36:09.939470 1073226 main.go:141] libmachine: Using SSH client type: native
	I0729 18:36:09.939710 1073226 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.148 22 <nil> <nil>}
	I0729 18:36:09.939736 1073226 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-344156-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-344156-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-344156-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 18:36:10.051411 1073226 main.go:141] libmachine: SSH cmd err, output: <nil>: 
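A quick sanity check, run on the guest, that the hostname command and the /etc/hosts edit above took effect (names copied from the log; purely illustrative):

    # Both should mention ha-344156-m03 after provisioning
    hostname
    grep -n 'ha-344156-m03' /etc/hostname /etc/hosts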
	I0729 18:36:10.051453 1073226 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19312-1055011/.minikube CaCertPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19312-1055011/.minikube}
	I0729 18:36:10.051489 1073226 buildroot.go:174] setting up certificates
	I0729 18:36:10.051502 1073226 provision.go:84] configureAuth start
	I0729 18:36:10.051511 1073226 main.go:141] libmachine: (ha-344156-m03) Calling .GetMachineName
	I0729 18:36:10.051848 1073226 main.go:141] libmachine: (ha-344156-m03) Calling .GetIP
	I0729 18:36:10.054684 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | domain ha-344156-m03 has defined MAC address 52:54:00:49:5c:73 in network mk-ha-344156
	I0729 18:36:10.055016 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:5c:73", ip: ""} in network mk-ha-344156: {Iface:virbr1 ExpiryTime:2024-07-29 19:36:00 +0000 UTC Type:0 Mac:52:54:00:49:5c:73 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:ha-344156-m03 Clientid:01:52:54:00:49:5c:73}
	I0729 18:36:10.055054 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | domain ha-344156-m03 has defined IP address 192.168.39.148 and MAC address 52:54:00:49:5c:73 in network mk-ha-344156
	I0729 18:36:10.055217 1073226 main.go:141] libmachine: (ha-344156-m03) Calling .GetSSHHostname
	I0729 18:36:10.057137 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | domain ha-344156-m03 has defined MAC address 52:54:00:49:5c:73 in network mk-ha-344156
	I0729 18:36:10.057527 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:5c:73", ip: ""} in network mk-ha-344156: {Iface:virbr1 ExpiryTime:2024-07-29 19:36:00 +0000 UTC Type:0 Mac:52:54:00:49:5c:73 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:ha-344156-m03 Clientid:01:52:54:00:49:5c:73}
	I0729 18:36:10.057556 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | domain ha-344156-m03 has defined IP address 192.168.39.148 and MAC address 52:54:00:49:5c:73 in network mk-ha-344156
	I0729 18:36:10.057680 1073226 provision.go:143] copyHostCerts
	I0729 18:36:10.057707 1073226 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.pem
	I0729 18:36:10.057744 1073226 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.pem, removing ...
	I0729 18:36:10.057753 1073226 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.pem
	I0729 18:36:10.057813 1073226 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.pem (1082 bytes)
	I0729 18:36:10.057889 1073226 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19312-1055011/.minikube/cert.pem
	I0729 18:36:10.057906 1073226 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-1055011/.minikube/cert.pem, removing ...
	I0729 18:36:10.057913 1073226 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-1055011/.minikube/cert.pem
	I0729 18:36:10.057936 1073226 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19312-1055011/.minikube/cert.pem (1123 bytes)
	I0729 18:36:10.057983 1073226 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19312-1055011/.minikube/key.pem
	I0729 18:36:10.058002 1073226 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-1055011/.minikube/key.pem, removing ...
	I0729 18:36:10.058008 1073226 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-1055011/.minikube/key.pem
	I0729 18:36:10.058028 1073226 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19312-1055011/.minikube/key.pem (1679 bytes)
	I0729 18:36:10.058076 1073226 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca-key.pem org=jenkins.ha-344156-m03 san=[127.0.0.1 192.168.39.148 ha-344156-m03 localhost minikube]
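provision.go generates that server certificate in Go with minikube's own crypto helpers; the following is only a rough openssl equivalent of the same request, for orientation. The org and SANs are copied from the log line above; the file names and validity period are assumptions:

    # Illustrative openssl re-creation of the docker-machine style server cert (bash; not minikube's code path)
    openssl req -new -newkey rsa:2048 -nodes -keyout server-key.pem -out server.csr \
        -subj "/O=jenkins.ha-344156-m03"
    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -days 365 -out server.pem \
        -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.39.148,DNS:ha-344156-m03,DNS:localhost,DNS:minikube')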
	I0729 18:36:10.121659 1073226 provision.go:177] copyRemoteCerts
	I0729 18:36:10.121734 1073226 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 18:36:10.121766 1073226 main.go:141] libmachine: (ha-344156-m03) Calling .GetSSHHostname
	I0729 18:36:10.124568 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | domain ha-344156-m03 has defined MAC address 52:54:00:49:5c:73 in network mk-ha-344156
	I0729 18:36:10.124870 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:5c:73", ip: ""} in network mk-ha-344156: {Iface:virbr1 ExpiryTime:2024-07-29 19:36:00 +0000 UTC Type:0 Mac:52:54:00:49:5c:73 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:ha-344156-m03 Clientid:01:52:54:00:49:5c:73}
	I0729 18:36:10.124901 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | domain ha-344156-m03 has defined IP address 192.168.39.148 and MAC address 52:54:00:49:5c:73 in network mk-ha-344156
	I0729 18:36:10.125037 1073226 main.go:141] libmachine: (ha-344156-m03) Calling .GetSSHPort
	I0729 18:36:10.125238 1073226 main.go:141] libmachine: (ha-344156-m03) Calling .GetSSHKeyPath
	I0729 18:36:10.125421 1073226 main.go:141] libmachine: (ha-344156-m03) Calling .GetSSHUsername
	I0729 18:36:10.125560 1073226 sshutil.go:53] new ssh client: &{IP:192.168.39.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/ha-344156-m03/id_rsa Username:docker}
	I0729 18:36:10.206092 1073226 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0729 18:36:10.206175 1073226 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0729 18:36:10.230639 1073226 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0729 18:36:10.230705 1073226 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0729 18:36:10.255830 1073226 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0729 18:36:10.255913 1073226 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0729 18:36:10.278579 1073226 provision.go:87] duration metric: took 227.063106ms to configureAuth
	I0729 18:36:10.278610 1073226 buildroot.go:189] setting minikube options for container-runtime
	I0729 18:36:10.278843 1073226 config.go:182] Loaded profile config "ha-344156": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 18:36:10.278959 1073226 main.go:141] libmachine: (ha-344156-m03) Calling .GetSSHHostname
	I0729 18:36:10.281588 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | domain ha-344156-m03 has defined MAC address 52:54:00:49:5c:73 in network mk-ha-344156
	I0729 18:36:10.281999 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:5c:73", ip: ""} in network mk-ha-344156: {Iface:virbr1 ExpiryTime:2024-07-29 19:36:00 +0000 UTC Type:0 Mac:52:54:00:49:5c:73 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:ha-344156-m03 Clientid:01:52:54:00:49:5c:73}
	I0729 18:36:10.282031 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | domain ha-344156-m03 has defined IP address 192.168.39.148 and MAC address 52:54:00:49:5c:73 in network mk-ha-344156
	I0729 18:36:10.282252 1073226 main.go:141] libmachine: (ha-344156-m03) Calling .GetSSHPort
	I0729 18:36:10.282454 1073226 main.go:141] libmachine: (ha-344156-m03) Calling .GetSSHKeyPath
	I0729 18:36:10.282599 1073226 main.go:141] libmachine: (ha-344156-m03) Calling .GetSSHKeyPath
	I0729 18:36:10.282721 1073226 main.go:141] libmachine: (ha-344156-m03) Calling .GetSSHUsername
	I0729 18:36:10.282898 1073226 main.go:141] libmachine: Using SSH client type: native
	I0729 18:36:10.283078 1073226 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.148 22 <nil> <nil>}
	I0729 18:36:10.283093 1073226 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 18:36:10.565817 1073226 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
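The command above drops a one-line options file and restarts CRI-O so it picks up the insecure-registry range for the service CIDR. A quick check on the guest (file path from the log; how the service drop-in consumes the file is part of the minikube ISO and not shown here):

    cat /etc/sysconfig/crio.minikube     # expect: CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
    sudo systemctl is-active crio        # expect: active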
	
	I0729 18:36:10.565847 1073226 main.go:141] libmachine: Checking connection to Docker...
	I0729 18:36:10.565860 1073226 main.go:141] libmachine: (ha-344156-m03) Calling .GetURL
	I0729 18:36:10.567278 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | Using libvirt version 6000000
	I0729 18:36:10.569461 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | domain ha-344156-m03 has defined MAC address 52:54:00:49:5c:73 in network mk-ha-344156
	I0729 18:36:10.569803 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:5c:73", ip: ""} in network mk-ha-344156: {Iface:virbr1 ExpiryTime:2024-07-29 19:36:00 +0000 UTC Type:0 Mac:52:54:00:49:5c:73 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:ha-344156-m03 Clientid:01:52:54:00:49:5c:73}
	I0729 18:36:10.569828 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | domain ha-344156-m03 has defined IP address 192.168.39.148 and MAC address 52:54:00:49:5c:73 in network mk-ha-344156
	I0729 18:36:10.569973 1073226 main.go:141] libmachine: Docker is up and running!
	I0729 18:36:10.569988 1073226 main.go:141] libmachine: Reticulating splines...
	I0729 18:36:10.570002 1073226 client.go:171] duration metric: took 24.892435886s to LocalClient.Create
	I0729 18:36:10.570028 1073226 start.go:167] duration metric: took 24.89250719s to libmachine.API.Create "ha-344156"
	I0729 18:36:10.570039 1073226 start.go:293] postStartSetup for "ha-344156-m03" (driver="kvm2")
	I0729 18:36:10.570048 1073226 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 18:36:10.570062 1073226 main.go:141] libmachine: (ha-344156-m03) Calling .DriverName
	I0729 18:36:10.570303 1073226 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 18:36:10.570338 1073226 main.go:141] libmachine: (ha-344156-m03) Calling .GetSSHHostname
	I0729 18:36:10.572305 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | domain ha-344156-m03 has defined MAC address 52:54:00:49:5c:73 in network mk-ha-344156
	I0729 18:36:10.572601 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:5c:73", ip: ""} in network mk-ha-344156: {Iface:virbr1 ExpiryTime:2024-07-29 19:36:00 +0000 UTC Type:0 Mac:52:54:00:49:5c:73 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:ha-344156-m03 Clientid:01:52:54:00:49:5c:73}
	I0729 18:36:10.572628 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | domain ha-344156-m03 has defined IP address 192.168.39.148 and MAC address 52:54:00:49:5c:73 in network mk-ha-344156
	I0729 18:36:10.572765 1073226 main.go:141] libmachine: (ha-344156-m03) Calling .GetSSHPort
	I0729 18:36:10.572954 1073226 main.go:141] libmachine: (ha-344156-m03) Calling .GetSSHKeyPath
	I0729 18:36:10.573107 1073226 main.go:141] libmachine: (ha-344156-m03) Calling .GetSSHUsername
	I0729 18:36:10.573249 1073226 sshutil.go:53] new ssh client: &{IP:192.168.39.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/ha-344156-m03/id_rsa Username:docker}
	I0729 18:36:10.654173 1073226 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 18:36:10.658770 1073226 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 18:36:10.658797 1073226 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-1055011/.minikube/addons for local assets ...
	I0729 18:36:10.658889 1073226 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-1055011/.minikube/files for local assets ...
	I0729 18:36:10.658983 1073226 filesync.go:149] local asset: /home/jenkins/minikube-integration/19312-1055011/.minikube/files/etc/ssl/certs/10622722.pem -> 10622722.pem in /etc/ssl/certs
	I0729 18:36:10.658999 1073226 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1055011/.minikube/files/etc/ssl/certs/10622722.pem -> /etc/ssl/certs/10622722.pem
	I0729 18:36:10.659116 1073226 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 18:36:10.668794 1073226 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/files/etc/ssl/certs/10622722.pem --> /etc/ssl/certs/10622722.pem (1708 bytes)
	I0729 18:36:10.693362 1073226 start.go:296] duration metric: took 123.306572ms for postStartSetup
	I0729 18:36:10.693429 1073226 main.go:141] libmachine: (ha-344156-m03) Calling .GetConfigRaw
	I0729 18:36:10.694016 1073226 main.go:141] libmachine: (ha-344156-m03) Calling .GetIP
	I0729 18:36:10.696549 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | domain ha-344156-m03 has defined MAC address 52:54:00:49:5c:73 in network mk-ha-344156
	I0729 18:36:10.696902 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:5c:73", ip: ""} in network mk-ha-344156: {Iface:virbr1 ExpiryTime:2024-07-29 19:36:00 +0000 UTC Type:0 Mac:52:54:00:49:5c:73 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:ha-344156-m03 Clientid:01:52:54:00:49:5c:73}
	I0729 18:36:10.696930 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | domain ha-344156-m03 has defined IP address 192.168.39.148 and MAC address 52:54:00:49:5c:73 in network mk-ha-344156
	I0729 18:36:10.697319 1073226 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156/config.json ...
	I0729 18:36:10.697577 1073226 start.go:128] duration metric: took 25.038311393s to createHost
	I0729 18:36:10.697610 1073226 main.go:141] libmachine: (ha-344156-m03) Calling .GetSSHHostname
	I0729 18:36:10.700158 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | domain ha-344156-m03 has defined MAC address 52:54:00:49:5c:73 in network mk-ha-344156
	I0729 18:36:10.700583 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:5c:73", ip: ""} in network mk-ha-344156: {Iface:virbr1 ExpiryTime:2024-07-29 19:36:00 +0000 UTC Type:0 Mac:52:54:00:49:5c:73 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:ha-344156-m03 Clientid:01:52:54:00:49:5c:73}
	I0729 18:36:10.700619 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | domain ha-344156-m03 has defined IP address 192.168.39.148 and MAC address 52:54:00:49:5c:73 in network mk-ha-344156
	I0729 18:36:10.700744 1073226 main.go:141] libmachine: (ha-344156-m03) Calling .GetSSHPort
	I0729 18:36:10.700911 1073226 main.go:141] libmachine: (ha-344156-m03) Calling .GetSSHKeyPath
	I0729 18:36:10.701081 1073226 main.go:141] libmachine: (ha-344156-m03) Calling .GetSSHKeyPath
	I0729 18:36:10.701185 1073226 main.go:141] libmachine: (ha-344156-m03) Calling .GetSSHUsername
	I0729 18:36:10.701326 1073226 main.go:141] libmachine: Using SSH client type: native
	I0729 18:36:10.701553 1073226 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.148 22 <nil> <nil>}
	I0729 18:36:10.701569 1073226 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0729 18:36:10.804004 1073226 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722278170.780732198
	
	I0729 18:36:10.804029 1073226 fix.go:216] guest clock: 1722278170.780732198
	I0729 18:36:10.804037 1073226 fix.go:229] Guest: 2024-07-29 18:36:10.780732198 +0000 UTC Remote: 2024-07-29 18:36:10.69759403 +0000 UTC m=+145.775343277 (delta=83.138168ms)
	I0729 18:36:10.804055 1073226 fix.go:200] guest clock delta is within tolerance: 83.138168ms
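fix.go reads the guest clock over SSH with date +%s.%N and accepts the ~83 ms delta against the host clock. A sketch of the same comparison (IP and key path from the log; the awk arithmetic is illustrative):

    # Compare host and guest wall clocks the same way the log does
    host_ts=$(date +%s.%N)
    guest_ts=$(ssh -o StrictHostKeyChecking=no \
        -i /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/ha-344156-m03/id_rsa \
        docker@192.168.39.148 'date +%s.%N')
    awk -v h="$host_ts" -v g="$guest_ts" 'BEGIN { printf "delta: %.3fs\n", g - h }'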
	I0729 18:36:10.804060 1073226 start.go:83] releasing machines lock for "ha-344156-m03", held for 25.144909226s
	I0729 18:36:10.804081 1073226 main.go:141] libmachine: (ha-344156-m03) Calling .DriverName
	I0729 18:36:10.804326 1073226 main.go:141] libmachine: (ha-344156-m03) Calling .GetIP
	I0729 18:36:10.806889 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | domain ha-344156-m03 has defined MAC address 52:54:00:49:5c:73 in network mk-ha-344156
	I0729 18:36:10.807208 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:5c:73", ip: ""} in network mk-ha-344156: {Iface:virbr1 ExpiryTime:2024-07-29 19:36:00 +0000 UTC Type:0 Mac:52:54:00:49:5c:73 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:ha-344156-m03 Clientid:01:52:54:00:49:5c:73}
	I0729 18:36:10.807251 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | domain ha-344156-m03 has defined IP address 192.168.39.148 and MAC address 52:54:00:49:5c:73 in network mk-ha-344156
	I0729 18:36:10.809192 1073226 out.go:177] * Found network options:
	I0729 18:36:10.810285 1073226 out.go:177]   - NO_PROXY=192.168.39.225,192.168.39.249
	W0729 18:36:10.811261 1073226 proxy.go:119] fail to check proxy env: Error ip not in block
	W0729 18:36:10.811290 1073226 proxy.go:119] fail to check proxy env: Error ip not in block
	I0729 18:36:10.811309 1073226 main.go:141] libmachine: (ha-344156-m03) Calling .DriverName
	I0729 18:36:10.811934 1073226 main.go:141] libmachine: (ha-344156-m03) Calling .DriverName
	I0729 18:36:10.812130 1073226 main.go:141] libmachine: (ha-344156-m03) Calling .DriverName
	I0729 18:36:10.812232 1073226 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 18:36:10.812293 1073226 main.go:141] libmachine: (ha-344156-m03) Calling .GetSSHHostname
	W0729 18:36:10.812384 1073226 proxy.go:119] fail to check proxy env: Error ip not in block
	W0729 18:36:10.812413 1073226 proxy.go:119] fail to check proxy env: Error ip not in block
	I0729 18:36:10.812491 1073226 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 18:36:10.812516 1073226 main.go:141] libmachine: (ha-344156-m03) Calling .GetSSHHostname
	I0729 18:36:10.815303 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | domain ha-344156-m03 has defined MAC address 52:54:00:49:5c:73 in network mk-ha-344156
	I0729 18:36:10.815554 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | domain ha-344156-m03 has defined MAC address 52:54:00:49:5c:73 in network mk-ha-344156
	I0729 18:36:10.815791 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:5c:73", ip: ""} in network mk-ha-344156: {Iface:virbr1 ExpiryTime:2024-07-29 19:36:00 +0000 UTC Type:0 Mac:52:54:00:49:5c:73 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:ha-344156-m03 Clientid:01:52:54:00:49:5c:73}
	I0729 18:36:10.815816 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | domain ha-344156-m03 has defined IP address 192.168.39.148 and MAC address 52:54:00:49:5c:73 in network mk-ha-344156
	I0729 18:36:10.815982 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:5c:73", ip: ""} in network mk-ha-344156: {Iface:virbr1 ExpiryTime:2024-07-29 19:36:00 +0000 UTC Type:0 Mac:52:54:00:49:5c:73 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:ha-344156-m03 Clientid:01:52:54:00:49:5c:73}
	I0729 18:36:10.815989 1073226 main.go:141] libmachine: (ha-344156-m03) Calling .GetSSHPort
	I0729 18:36:10.816009 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | domain ha-344156-m03 has defined IP address 192.168.39.148 and MAC address 52:54:00:49:5c:73 in network mk-ha-344156
	I0729 18:36:10.816174 1073226 main.go:141] libmachine: (ha-344156-m03) Calling .GetSSHKeyPath
	I0729 18:36:10.816278 1073226 main.go:141] libmachine: (ha-344156-m03) Calling .GetSSHPort
	I0729 18:36:10.816419 1073226 main.go:141] libmachine: (ha-344156-m03) Calling .GetSSHKeyPath
	I0729 18:36:10.816427 1073226 main.go:141] libmachine: (ha-344156-m03) Calling .GetSSHUsername
	I0729 18:36:10.816657 1073226 main.go:141] libmachine: (ha-344156-m03) Calling .GetSSHUsername
	I0729 18:36:10.816670 1073226 sshutil.go:53] new ssh client: &{IP:192.168.39.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/ha-344156-m03/id_rsa Username:docker}
	I0729 18:36:10.816823 1073226 sshutil.go:53] new ssh client: &{IP:192.168.39.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/ha-344156-m03/id_rsa Username:docker}
	I0729 18:36:11.047712 1073226 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 18:36:11.054416 1073226 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 18:36:11.054489 1073226 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 18:36:11.070311 1073226 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 18:36:11.070334 1073226 start.go:495] detecting cgroup driver to use...
	I0729 18:36:11.070392 1073226 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 18:36:11.086440 1073226 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 18:36:11.100405 1073226 docker.go:217] disabling cri-docker service (if available) ...
	I0729 18:36:11.100463 1073226 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 18:36:11.114617 1073226 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 18:36:11.128823 1073226 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 18:36:11.254976 1073226 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 18:36:11.394761 1073226 docker.go:233] disabling docker service ...
	I0729 18:36:11.394843 1073226 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 18:36:11.410240 1073226 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 18:36:11.423477 1073226 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 18:36:11.575383 1073226 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 18:36:11.698095 1073226 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 18:36:11.712681 1073226 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 18:36:11.734684 1073226 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0729 18:36:11.734746 1073226 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:36:11.746693 1073226 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 18:36:11.746769 1073226 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:36:11.759354 1073226 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:36:11.770916 1073226 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:36:11.782464 1073226 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 18:36:11.794360 1073226 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:36:11.805862 1073226 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:36:11.824497 1073226 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
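Taken together, the printf and sed/grep edits above point crictl at the CRI-O socket, set the pause image, switch CRI-O to the cgroupfs cgroup manager, pin conmon_cgroup to "pod" and open unprivileged ports via default_sysctls. A way to confirm on the guest; the expected lines are reconstructed from the commands in the log, not captured from the file:

    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version
    grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
    #   pause_image = "registry.k8s.io/pause:3.9"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",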
	I0729 18:36:11.835395 1073226 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 18:36:11.847483 1073226 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 18:36:11.847553 1073226 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 18:36:11.863665 1073226 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
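The sysctl probe fails only because br_netfilter is not loaded yet; the modprobe and the ip_forward write above take care of that. To verify on the guest:

    lsmod | grep br_netfilter                        # module now loaded
    sudo sysctl net.bridge.bridge-nf-call-iptables   # key now resolves instead of 'cannot stat'
    cat /proc/sys/net/ipv4/ip_forward                # -> 1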
	I0729 18:36:11.875512 1073226 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 18:36:12.012691 1073226 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 18:36:12.151992 1073226 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 18:36:12.152061 1073226 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 18:36:12.157551 1073226 start.go:563] Will wait 60s for crictl version
	I0729 18:36:12.157617 1073226 ssh_runner.go:195] Run: which crictl
	I0729 18:36:12.161416 1073226 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 18:36:12.208108 1073226 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 18:36:12.208196 1073226 ssh_runner.go:195] Run: crio --version
	I0729 18:36:12.240111 1073226 ssh_runner.go:195] Run: crio --version
	I0729 18:36:12.273439 1073226 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0729 18:36:12.274581 1073226 out.go:177]   - env NO_PROXY=192.168.39.225
	I0729 18:36:12.275678 1073226 out.go:177]   - env NO_PROXY=192.168.39.225,192.168.39.249
	I0729 18:36:12.276772 1073226 main.go:141] libmachine: (ha-344156-m03) Calling .GetIP
	I0729 18:36:12.279346 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | domain ha-344156-m03 has defined MAC address 52:54:00:49:5c:73 in network mk-ha-344156
	I0729 18:36:12.279694 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:5c:73", ip: ""} in network mk-ha-344156: {Iface:virbr1 ExpiryTime:2024-07-29 19:36:00 +0000 UTC Type:0 Mac:52:54:00:49:5c:73 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:ha-344156-m03 Clientid:01:52:54:00:49:5c:73}
	I0729 18:36:12.279727 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | domain ha-344156-m03 has defined IP address 192.168.39.148 and MAC address 52:54:00:49:5c:73 in network mk-ha-344156
	I0729 18:36:12.279895 1073226 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0729 18:36:12.284123 1073226 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 18:36:12.296562 1073226 mustload.go:65] Loading cluster: ha-344156
	I0729 18:36:12.296800 1073226 config.go:182] Loaded profile config "ha-344156": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 18:36:12.297053 1073226 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:36:12.297092 1073226 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:36:12.312266 1073226 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43427
	I0729 18:36:12.312652 1073226 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:36:12.313117 1073226 main.go:141] libmachine: Using API Version  1
	I0729 18:36:12.313140 1073226 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:36:12.313430 1073226 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:36:12.313633 1073226 main.go:141] libmachine: (ha-344156) Calling .GetState
	I0729 18:36:12.315177 1073226 host.go:66] Checking if "ha-344156" exists ...
	I0729 18:36:12.315475 1073226 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:36:12.315518 1073226 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:36:12.329524 1073226 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39017
	I0729 18:36:12.329994 1073226 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:36:12.330435 1073226 main.go:141] libmachine: Using API Version  1
	I0729 18:36:12.330458 1073226 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:36:12.330724 1073226 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:36:12.330898 1073226 main.go:141] libmachine: (ha-344156) Calling .DriverName
	I0729 18:36:12.331048 1073226 certs.go:68] Setting up /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156 for IP: 192.168.39.148
	I0729 18:36:12.331059 1073226 certs.go:194] generating shared ca certs ...
	I0729 18:36:12.331080 1073226 certs.go:226] acquiring lock for ca certs: {Name:mkd1f0b3d7e82ac23e713dd6b75409e103935b02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 18:36:12.331224 1073226 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.key
	I0729 18:36:12.331281 1073226 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/proxy-client-ca.key
	I0729 18:36:12.331296 1073226 certs.go:256] generating profile certs ...
	I0729 18:36:12.331393 1073226 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156/client.key
	I0729 18:36:12.331425 1073226 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156/apiserver.key.b23b6418
	I0729 18:36:12.331447 1073226 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156/apiserver.crt.b23b6418 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.225 192.168.39.249 192.168.39.148 192.168.39.254]
	I0729 18:36:12.502377 1073226 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156/apiserver.crt.b23b6418 ...
	I0729 18:36:12.502414 1073226 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156/apiserver.crt.b23b6418: {Name:mkf64b75a70f03795bfd6d7a96d4523858ab030a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 18:36:12.502635 1073226 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156/apiserver.key.b23b6418 ...
	I0729 18:36:12.502654 1073226 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156/apiserver.key.b23b6418: {Name:mk3458c01cde65378f904989ec6841bd16a376ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 18:36:12.502768 1073226 certs.go:381] copying /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156/apiserver.crt.b23b6418 -> /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156/apiserver.crt
	I0729 18:36:12.502980 1073226 certs.go:385] copying /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156/apiserver.key.b23b6418 -> /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156/apiserver.key
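The apiserver profile certificate generated above carries the service IPs, loopback, the three node IPs and the control-plane VIP 192.168.39.254 as SANs. One way to confirm them on the generated file (path from the log):

    openssl x509 -noout -text \
        -in /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156/apiserver.crt \
        | grep -A1 'Subject Alternative Name'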
	I0729 18:36:12.503199 1073226 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156/proxy-client.key
	I0729 18:36:12.503220 1073226 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0729 18:36:12.503248 1073226 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0729 18:36:12.503275 1073226 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1055011/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0729 18:36:12.503296 1073226 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1055011/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0729 18:36:12.503316 1073226 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0729 18:36:12.503341 1073226 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0729 18:36:12.503362 1073226 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0729 18:36:12.503386 1073226 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0729 18:36:12.503470 1073226 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/1062272.pem (1338 bytes)
	W0729 18:36:12.503516 1073226 certs.go:480] ignoring /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/1062272_empty.pem, impossibly tiny 0 bytes
	I0729 18:36:12.503532 1073226 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 18:36:12.503573 1073226 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem (1082 bytes)
	I0729 18:36:12.503611 1073226 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/cert.pem (1123 bytes)
	I0729 18:36:12.503647 1073226 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/key.pem (1679 bytes)
	I0729 18:36:12.503710 1073226 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/files/etc/ssl/certs/10622722.pem (1708 bytes)
	I0729 18:36:12.503754 1073226 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0729 18:36:12.503777 1073226 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/1062272.pem -> /usr/share/ca-certificates/1062272.pem
	I0729 18:36:12.503799 1073226 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1055011/.minikube/files/etc/ssl/certs/10622722.pem -> /usr/share/ca-certificates/10622722.pem
	I0729 18:36:12.503846 1073226 main.go:141] libmachine: (ha-344156) Calling .GetSSHHostname
	I0729 18:36:12.506959 1073226 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
	I0729 18:36:12.507450 1073226 main.go:141] libmachine: (ha-344156) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:fc:98", ip: ""} in network mk-ha-344156: {Iface:virbr1 ExpiryTime:2024-07-29 19:33:59 +0000 UTC Type:0 Mac:52:54:00:a1:fc:98 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-344156 Clientid:01:52:54:00:a1:fc:98}
	I0729 18:36:12.507476 1073226 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined IP address 192.168.39.225 and MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
	I0729 18:36:12.507686 1073226 main.go:141] libmachine: (ha-344156) Calling .GetSSHPort
	I0729 18:36:12.507911 1073226 main.go:141] libmachine: (ha-344156) Calling .GetSSHKeyPath
	I0729 18:36:12.508121 1073226 main.go:141] libmachine: (ha-344156) Calling .GetSSHUsername
	I0729 18:36:12.508282 1073226 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/ha-344156/id_rsa Username:docker}
	I0729 18:36:12.587185 1073226 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0729 18:36:12.593237 1073226 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0729 18:36:12.604403 1073226 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0729 18:36:12.608634 1073226 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0729 18:36:12.618678 1073226 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0729 18:36:12.622753 1073226 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0729 18:36:12.632519 1073226 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0729 18:36:12.637031 1073226 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0729 18:36:12.647215 1073226 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0729 18:36:12.651997 1073226 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0729 18:36:12.662738 1073226 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0729 18:36:12.667087 1073226 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0729 18:36:12.677919 1073226 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 18:36:12.703036 1073226 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0729 18:36:12.728336 1073226 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 18:36:12.753542 1073226 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0729 18:36:12.778132 1073226 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0729 18:36:12.801928 1073226 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0729 18:36:12.828678 1073226 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 18:36:12.852387 1073226 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0729 18:36:12.877138 1073226 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 18:36:12.900044 1073226 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/1062272.pem --> /usr/share/ca-certificates/1062272.pem (1338 bytes)
	I0729 18:36:12.924662 1073226 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/files/etc/ssl/certs/10622722.pem --> /usr/share/ca-certificates/10622722.pem (1708 bytes)
	I0729 18:36:12.948505 1073226 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0729 18:36:12.967205 1073226 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0729 18:36:12.984557 1073226 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0729 18:36:13.003990 1073226 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0729 18:36:13.020955 1073226 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0729 18:36:13.036776 1073226 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0729 18:36:13.052677 1073226 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0729 18:36:13.068319 1073226 ssh_runner.go:195] Run: openssl version
	I0729 18:36:13.073864 1073226 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 18:36:13.083887 1073226 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 18:36:13.088395 1073226 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 18:17 /usr/share/ca-certificates/minikubeCA.pem
	I0729 18:36:13.088444 1073226 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 18:36:13.094253 1073226 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 18:36:13.104695 1073226 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1062272.pem && ln -fs /usr/share/ca-certificates/1062272.pem /etc/ssl/certs/1062272.pem"
	I0729 18:36:13.114896 1073226 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1062272.pem
	I0729 18:36:13.119342 1073226 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 18:30 /usr/share/ca-certificates/1062272.pem
	I0729 18:36:13.119381 1073226 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1062272.pem
	I0729 18:36:13.124857 1073226 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1062272.pem /etc/ssl/certs/51391683.0"
	I0729 18:36:13.135324 1073226 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10622722.pem && ln -fs /usr/share/ca-certificates/10622722.pem /etc/ssl/certs/10622722.pem"
	I0729 18:36:13.145980 1073226 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10622722.pem
	I0729 18:36:13.150321 1073226 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 18:30 /usr/share/ca-certificates/10622722.pem
	I0729 18:36:13.150366 1073226 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10622722.pem
	I0729 18:36:13.155994 1073226 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10622722.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 18:36:13.166865 1073226 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 18:36:13.170725 1073226 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0729 18:36:13.170773 1073226 kubeadm.go:934] updating node {m03 192.168.39.148 8443 v1.30.3 crio true true} ...
	I0729 18:36:13.170893 1073226 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-344156-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.148
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-344156 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 18:36:13.170928 1073226 kube-vip.go:115] generating kube-vip config ...
	I0729 18:36:13.170960 1073226 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0729 18:36:13.185436 1073226 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0729 18:36:13.185502 1073226 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0729 18:36:13.185557 1073226 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0729 18:36:13.195349 1073226 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.3': No such file or directory
	
	Initiating transfer...
	I0729 18:36:13.195391 1073226 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.3
	I0729 18:36:13.205698 1073226 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl.sha256
	I0729 18:36:13.205710 1073226 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet.sha256
	I0729 18:36:13.205723 1073226 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/linux/amd64/v1.30.3/kubectl -> /var/lib/minikube/binaries/v1.30.3/kubectl
	I0729 18:36:13.205747 1073226 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm.sha256
	I0729 18:36:13.205774 1073226 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/linux/amd64/v1.30.3/kubeadm -> /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0729 18:36:13.205791 1073226 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubectl
	I0729 18:36:13.205753 1073226 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 18:36:13.205850 1073226 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0729 18:36:13.213576 1073226 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubeadm': No such file or directory
	I0729 18:36:13.213601 1073226 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/linux/amd64/v1.30.3/kubeadm --> /var/lib/minikube/binaries/v1.30.3/kubeadm (50249880 bytes)
	I0729 18:36:13.244698 1073226 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/linux/amd64/v1.30.3/kubelet -> /var/lib/minikube/binaries/v1.30.3/kubelet
	I0729 18:36:13.244711 1073226 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubectl': No such file or directory
	I0729 18:36:13.244737 1073226 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/linux/amd64/v1.30.3/kubectl --> /var/lib/minikube/binaries/v1.30.3/kubectl (51454104 bytes)
	I0729 18:36:13.244820 1073226 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubelet
	I0729 18:36:13.307051 1073226 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubelet': No such file or directory
	I0729 18:36:13.307095 1073226 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/linux/amd64/v1.30.3/kubelet --> /var/lib/minikube/binaries/v1.30.3/kubelet (100125080 bytes)
	I0729 18:36:14.109979 1073226 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0729 18:36:14.119412 1073226 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0729 18:36:14.135869 1073226 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 18:36:14.152492 1073226 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0729 18:36:14.169680 1073226 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0729 18:36:14.173965 1073226 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 18:36:14.186500 1073226 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 18:36:14.321621 1073226 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 18:36:14.339993 1073226 host.go:66] Checking if "ha-344156" exists ...
	I0729 18:36:14.340454 1073226 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:36:14.340500 1073226 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:36:14.358705 1073226 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46797
	I0729 18:36:14.359207 1073226 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:36:14.359749 1073226 main.go:141] libmachine: Using API Version  1
	I0729 18:36:14.359773 1073226 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:36:14.360063 1073226 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:36:14.360273 1073226 main.go:141] libmachine: (ha-344156) Calling .DriverName
	I0729 18:36:14.360441 1073226 start.go:317] joinCluster: &{Name:ha-344156 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Cluster
Name:ha-344156 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.225 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.249 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.148 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false
inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableO
ptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 18:36:14.360567 1073226 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0729 18:36:14.360593 1073226 main.go:141] libmachine: (ha-344156) Calling .GetSSHHostname
	I0729 18:36:14.363197 1073226 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
	I0729 18:36:14.363585 1073226 main.go:141] libmachine: (ha-344156) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:fc:98", ip: ""} in network mk-ha-344156: {Iface:virbr1 ExpiryTime:2024-07-29 19:33:59 +0000 UTC Type:0 Mac:52:54:00:a1:fc:98 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-344156 Clientid:01:52:54:00:a1:fc:98}
	I0729 18:36:14.363617 1073226 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined IP address 192.168.39.225 and MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
	I0729 18:36:14.363746 1073226 main.go:141] libmachine: (ha-344156) Calling .GetSSHPort
	I0729 18:36:14.363924 1073226 main.go:141] libmachine: (ha-344156) Calling .GetSSHKeyPath
	I0729 18:36:14.364081 1073226 main.go:141] libmachine: (ha-344156) Calling .GetSSHUsername
	I0729 18:36:14.364235 1073226 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/ha-344156/id_rsa Username:docker}
	I0729 18:36:14.527295 1073226 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.148 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 18:36:14.527373 1073226 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 9asq8d.g7xnumn0cs26swoe --discovery-token-ca-cert-hash sha256:53e118302d1159bf0acd8cfe005edb508199276288ff17d6630ff7f7307f10b1 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-344156-m03 --control-plane --apiserver-advertise-address=192.168.39.148 --apiserver-bind-port=8443"
	I0729 18:36:39.774964 1073226 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 9asq8d.g7xnumn0cs26swoe --discovery-token-ca-cert-hash sha256:53e118302d1159bf0acd8cfe005edb508199276288ff17d6630ff7f7307f10b1 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-344156-m03 --control-plane --apiserver-advertise-address=192.168.39.148 --apiserver-bind-port=8443": (25.24755456s)
	I0729 18:36:39.775010 1073226 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0729 18:36:40.493199 1073226 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-344156-m03 minikube.k8s.io/updated_at=2024_07_29T18_36_40_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=ff26f50543a99741e8a988a34136ffea2b41dbf0 minikube.k8s.io/name=ha-344156 minikube.k8s.io/primary=false
	I0729 18:36:40.617592 1073226 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-344156-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0729 18:36:40.723210 1073226 start.go:319] duration metric: took 26.362761282s to joinCluster
	I0729 18:36:40.723310 1073226 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.148 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 18:36:40.723661 1073226 config.go:182] Loaded profile config "ha-344156": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 18:36:40.724574 1073226 out.go:177] * Verifying Kubernetes components...
	I0729 18:36:40.725585 1073226 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 18:36:40.998715 1073226 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 18:36:41.022884 1073226 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19312-1055011/kubeconfig
	I0729 18:36:41.023279 1073226 kapi.go:59] client config for ha-344156: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156/client.crt", KeyFile:"/home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156/client.key", CAFile:"/home/jenkins/minikube-integration/19312-1055011/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]strin
g(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d03460), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0729 18:36:41.023393 1073226 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.225:8443
	I0729 18:36:41.023674 1073226 node_ready.go:35] waiting up to 6m0s for node "ha-344156-m03" to be "Ready" ...
	I0729 18:36:41.023775 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/nodes/ha-344156-m03
	I0729 18:36:41.023787 1073226 round_trippers.go:469] Request Headers:
	I0729 18:36:41.023798 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:36:41.023807 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:36:41.028502 1073226 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 18:36:41.524721 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/nodes/ha-344156-m03
	I0729 18:36:41.524747 1073226 round_trippers.go:469] Request Headers:
	I0729 18:36:41.524758 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:36:41.524763 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:36:41.527803 1073226 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 18:36:42.024049 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/nodes/ha-344156-m03
	I0729 18:36:42.024080 1073226 round_trippers.go:469] Request Headers:
	I0729 18:36:42.024093 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:36:42.024098 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:36:42.032252 1073226 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0729 18:36:42.524126 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/nodes/ha-344156-m03
	I0729 18:36:42.524149 1073226 round_trippers.go:469] Request Headers:
	I0729 18:36:42.524160 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:36:42.524166 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:36:42.527284 1073226 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 18:36:43.024296 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/nodes/ha-344156-m03
	I0729 18:36:43.024319 1073226 round_trippers.go:469] Request Headers:
	I0729 18:36:43.024328 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:36:43.024332 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:36:43.028052 1073226 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 18:36:43.028658 1073226 node_ready.go:53] node "ha-344156-m03" has status "Ready":"False"
	I0729 18:36:43.524150 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/nodes/ha-344156-m03
	I0729 18:36:43.524179 1073226 round_trippers.go:469] Request Headers:
	I0729 18:36:43.524191 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:36:43.524197 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:36:43.528017 1073226 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 18:36:44.023860 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/nodes/ha-344156-m03
	I0729 18:36:44.023882 1073226 round_trippers.go:469] Request Headers:
	I0729 18:36:44.023891 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:36:44.023895 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:36:44.026868 1073226 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 18:36:44.524198 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/nodes/ha-344156-m03
	I0729 18:36:44.524226 1073226 round_trippers.go:469] Request Headers:
	I0729 18:36:44.524236 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:36:44.524242 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:36:44.527518 1073226 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 18:36:45.024852 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/nodes/ha-344156-m03
	I0729 18:36:45.024873 1073226 round_trippers.go:469] Request Headers:
	I0729 18:36:45.024882 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:36:45.024885 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:36:45.028084 1073226 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 18:36:45.524279 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/nodes/ha-344156-m03
	I0729 18:36:45.524302 1073226 round_trippers.go:469] Request Headers:
	I0729 18:36:45.524310 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:36:45.524314 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:36:45.528299 1073226 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 18:36:45.528935 1073226 node_ready.go:53] node "ha-344156-m03" has status "Ready":"False"
	I0729 18:36:46.024817 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/nodes/ha-344156-m03
	I0729 18:36:46.024839 1073226 round_trippers.go:469] Request Headers:
	I0729 18:36:46.024847 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:36:46.024852 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:36:46.027741 1073226 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 18:36:46.524767 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/nodes/ha-344156-m03
	I0729 18:36:46.524790 1073226 round_trippers.go:469] Request Headers:
	I0729 18:36:46.524798 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:36:46.524802 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:36:46.528346 1073226 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 18:36:47.023858 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/nodes/ha-344156-m03
	I0729 18:36:47.023879 1073226 round_trippers.go:469] Request Headers:
	I0729 18:36:47.023887 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:36:47.023891 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:36:47.027151 1073226 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 18:36:47.524914 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/nodes/ha-344156-m03
	I0729 18:36:47.524940 1073226 round_trippers.go:469] Request Headers:
	I0729 18:36:47.524950 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:36:47.524954 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:36:47.528915 1073226 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 18:36:47.530213 1073226 node_ready.go:53] node "ha-344156-m03" has status "Ready":"False"
	I0729 18:36:48.024626 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/nodes/ha-344156-m03
	I0729 18:36:48.024649 1073226 round_trippers.go:469] Request Headers:
	I0729 18:36:48.024658 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:36:48.024661 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:36:48.027707 1073226 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 18:36:48.524875 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/nodes/ha-344156-m03
	I0729 18:36:48.524899 1073226 round_trippers.go:469] Request Headers:
	I0729 18:36:48.524911 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:36:48.524917 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:36:48.528718 1073226 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 18:36:49.024685 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/nodes/ha-344156-m03
	I0729 18:36:49.024709 1073226 round_trippers.go:469] Request Headers:
	I0729 18:36:49.024717 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:36:49.024721 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:36:49.028074 1073226 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 18:36:49.524342 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/nodes/ha-344156-m03
	I0729 18:36:49.524367 1073226 round_trippers.go:469] Request Headers:
	I0729 18:36:49.524376 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:36:49.524379 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:36:49.528239 1073226 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 18:36:50.023936 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/nodes/ha-344156-m03
	I0729 18:36:50.023975 1073226 round_trippers.go:469] Request Headers:
	I0729 18:36:50.023984 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:36:50.023989 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:36:50.027451 1073226 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 18:36:50.028057 1073226 node_ready.go:53] node "ha-344156-m03" has status "Ready":"False"
	I0729 18:36:50.524667 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/nodes/ha-344156-m03
	I0729 18:36:50.524691 1073226 round_trippers.go:469] Request Headers:
	I0729 18:36:50.524700 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:36:50.524705 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:36:50.527868 1073226 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 18:36:51.024138 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/nodes/ha-344156-m03
	I0729 18:36:51.024162 1073226 round_trippers.go:469] Request Headers:
	I0729 18:36:51.024169 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:36:51.024175 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:36:51.027542 1073226 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 18:36:51.524134 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/nodes/ha-344156-m03
	I0729 18:36:51.524160 1073226 round_trippers.go:469] Request Headers:
	I0729 18:36:51.524170 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:36:51.524176 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:36:51.527707 1073226 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 18:36:52.024014 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/nodes/ha-344156-m03
	I0729 18:36:52.024038 1073226 round_trippers.go:469] Request Headers:
	I0729 18:36:52.024047 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:36:52.024050 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:36:52.027157 1073226 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 18:36:52.523892 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/nodes/ha-344156-m03
	I0729 18:36:52.523915 1073226 round_trippers.go:469] Request Headers:
	I0729 18:36:52.523922 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:36:52.523928 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:36:52.527406 1073226 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 18:36:52.528154 1073226 node_ready.go:53] node "ha-344156-m03" has status "Ready":"False"
	I0729 18:36:53.024200 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/nodes/ha-344156-m03
	I0729 18:36:53.024226 1073226 round_trippers.go:469] Request Headers:
	I0729 18:36:53.024237 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:36:53.024243 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:36:53.027644 1073226 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 18:36:53.524023 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/nodes/ha-344156-m03
	I0729 18:36:53.524046 1073226 round_trippers.go:469] Request Headers:
	I0729 18:36:53.524054 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:36:53.524059 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:36:53.527700 1073226 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 18:36:54.024789 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/nodes/ha-344156-m03
	I0729 18:36:54.024821 1073226 round_trippers.go:469] Request Headers:
	I0729 18:36:54.024833 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:36:54.024847 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:36:54.027739 1073226 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 18:36:54.028294 1073226 node_ready.go:49] node "ha-344156-m03" has status "Ready":"True"
	I0729 18:36:54.028310 1073226 node_ready.go:38] duration metric: took 13.004619418s for node "ha-344156-m03" to be "Ready" ...
	I0729 18:36:54.028320 1073226 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 18:36:54.028379 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/namespaces/kube-system/pods
	I0729 18:36:54.028387 1073226 round_trippers.go:469] Request Headers:
	I0729 18:36:54.028393 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:36:54.028398 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:36:54.035148 1073226 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0729 18:36:54.042767 1073226 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-5slmg" in "kube-system" namespace to be "Ready" ...
	I0729 18:36:54.042866 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-5slmg
	I0729 18:36:54.042876 1073226 round_trippers.go:469] Request Headers:
	I0729 18:36:54.042883 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:36:54.042888 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:36:54.045742 1073226 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 18:36:54.046573 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/nodes/ha-344156
	I0729 18:36:54.046590 1073226 round_trippers.go:469] Request Headers:
	I0729 18:36:54.046603 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:36:54.046609 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:36:54.050141 1073226 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 18:36:54.051147 1073226 pod_ready.go:92] pod "coredns-7db6d8ff4d-5slmg" in "kube-system" namespace has status "Ready":"True"
	I0729 18:36:54.051168 1073226 pod_ready.go:81] duration metric: took 8.377145ms for pod "coredns-7db6d8ff4d-5slmg" in "kube-system" namespace to be "Ready" ...
	I0729 18:36:54.051177 1073226 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-h5h7v" in "kube-system" namespace to be "Ready" ...
	I0729 18:36:54.051241 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-h5h7v
	I0729 18:36:54.051247 1073226 round_trippers.go:469] Request Headers:
	I0729 18:36:54.051256 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:36:54.051262 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:36:54.054101 1073226 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 18:36:54.054919 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/nodes/ha-344156
	I0729 18:36:54.054934 1073226 round_trippers.go:469] Request Headers:
	I0729 18:36:54.054943 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:36:54.054947 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:36:54.057383 1073226 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 18:36:54.058279 1073226 pod_ready.go:92] pod "coredns-7db6d8ff4d-h5h7v" in "kube-system" namespace has status "Ready":"True"
	I0729 18:36:54.058300 1073226 pod_ready.go:81] duration metric: took 7.114199ms for pod "coredns-7db6d8ff4d-h5h7v" in "kube-system" namespace to be "Ready" ...
	I0729 18:36:54.058312 1073226 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-344156" in "kube-system" namespace to be "Ready" ...
	I0729 18:36:54.058375 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/namespaces/kube-system/pods/etcd-ha-344156
	I0729 18:36:54.058384 1073226 round_trippers.go:469] Request Headers:
	I0729 18:36:54.058395 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:36:54.058402 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:36:54.060796 1073226 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 18:36:54.061367 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/nodes/ha-344156
	I0729 18:36:54.061381 1073226 round_trippers.go:469] Request Headers:
	I0729 18:36:54.061391 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:36:54.061396 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:36:54.063578 1073226 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 18:36:54.064054 1073226 pod_ready.go:92] pod "etcd-ha-344156" in "kube-system" namespace has status "Ready":"True"
	I0729 18:36:54.064073 1073226 pod_ready.go:81] duration metric: took 5.750702ms for pod "etcd-ha-344156" in "kube-system" namespace to be "Ready" ...
	I0729 18:36:54.064085 1073226 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-344156-m02" in "kube-system" namespace to be "Ready" ...
	I0729 18:36:54.064142 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/namespaces/kube-system/pods/etcd-ha-344156-m02
	I0729 18:36:54.064152 1073226 round_trippers.go:469] Request Headers:
	I0729 18:36:54.064162 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:36:54.064171 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:36:54.066454 1073226 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 18:36:54.066989 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/nodes/ha-344156-m02
	I0729 18:36:54.067002 1073226 round_trippers.go:469] Request Headers:
	I0729 18:36:54.067015 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:36:54.067021 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:36:54.068946 1073226 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0729 18:36:54.069311 1073226 pod_ready.go:92] pod "etcd-ha-344156-m02" in "kube-system" namespace has status "Ready":"True"
	I0729 18:36:54.069326 1073226 pod_ready.go:81] duration metric: took 5.234599ms for pod "etcd-ha-344156-m02" in "kube-system" namespace to be "Ready" ...
	I0729 18:36:54.069333 1073226 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-344156-m03" in "kube-system" namespace to be "Ready" ...
	I0729 18:36:54.225738 1073226 request.go:629] Waited for 156.312151ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.225:8443/api/v1/namespaces/kube-system/pods/etcd-ha-344156-m03
	I0729 18:36:54.225839 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/namespaces/kube-system/pods/etcd-ha-344156-m03
	I0729 18:36:54.225851 1073226 round_trippers.go:469] Request Headers:
	I0729 18:36:54.225861 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:36:54.225869 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:36:54.228817 1073226 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 18:36:54.425782 1073226 request.go:629] Waited for 196.398328ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.225:8443/api/v1/nodes/ha-344156-m03
	I0729 18:36:54.425865 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/nodes/ha-344156-m03
	I0729 18:36:54.425876 1073226 round_trippers.go:469] Request Headers:
	I0729 18:36:54.425889 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:36:54.425899 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:36:54.429350 1073226 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 18:36:54.430043 1073226 pod_ready.go:92] pod "etcd-ha-344156-m03" in "kube-system" namespace has status "Ready":"True"
	I0729 18:36:54.430067 1073226 pod_ready.go:81] duration metric: took 360.728595ms for pod "etcd-ha-344156-m03" in "kube-system" namespace to be "Ready" ...
	I0729 18:36:54.430084 1073226 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-344156" in "kube-system" namespace to be "Ready" ...
	I0729 18:36:54.625120 1073226 request.go:629] Waited for 194.95698ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.225:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-344156
	I0729 18:36:54.625196 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-344156
	I0729 18:36:54.625201 1073226 round_trippers.go:469] Request Headers:
	I0729 18:36:54.625208 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:36:54.625216 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:36:54.628252 1073226 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 18:36:54.825611 1073226 request.go:629] Waited for 196.408626ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.225:8443/api/v1/nodes/ha-344156
	I0729 18:36:54.825690 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/nodes/ha-344156
	I0729 18:36:54.825698 1073226 round_trippers.go:469] Request Headers:
	I0729 18:36:54.825709 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:36:54.825719 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:36:54.830702 1073226 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 18:36:54.831596 1073226 pod_ready.go:92] pod "kube-apiserver-ha-344156" in "kube-system" namespace has status "Ready":"True"
	I0729 18:36:54.831628 1073226 pod_ready.go:81] duration metric: took 401.527636ms for pod "kube-apiserver-ha-344156" in "kube-system" namespace to be "Ready" ...
	I0729 18:36:54.831641 1073226 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-344156-m02" in "kube-system" namespace to be "Ready" ...
	I0729 18:36:55.024789 1073226 request.go:629] Waited for 193.056819ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.225:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-344156-m02
	I0729 18:36:55.024885 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-344156-m02
	I0729 18:36:55.024896 1073226 round_trippers.go:469] Request Headers:
	I0729 18:36:55.024908 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:36:55.024918 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:36:55.027861 1073226 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 18:36:55.224911 1073226 request.go:629] Waited for 196.289833ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.225:8443/api/v1/nodes/ha-344156-m02
	I0729 18:36:55.225029 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/nodes/ha-344156-m02
	I0729 18:36:55.225042 1073226 round_trippers.go:469] Request Headers:
	I0729 18:36:55.225067 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:36:55.225077 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:36:55.229002 1073226 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 18:36:55.229440 1073226 pod_ready.go:92] pod "kube-apiserver-ha-344156-m02" in "kube-system" namespace has status "Ready":"True"
	I0729 18:36:55.229460 1073226 pod_ready.go:81] duration metric: took 397.811151ms for pod "kube-apiserver-ha-344156-m02" in "kube-system" namespace to be "Ready" ...
	I0729 18:36:55.229471 1073226 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-344156-m03" in "kube-system" namespace to be "Ready" ...
	I0729 18:36:55.425422 1073226 request.go:629] Waited for 195.866711ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.225:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-344156-m03
	I0729 18:36:55.425558 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-344156-m03
	I0729 18:36:55.425571 1073226 round_trippers.go:469] Request Headers:
	I0729 18:36:55.425590 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:36:55.425596 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:36:55.428925 1073226 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 18:36:55.624863 1073226 request.go:629] Waited for 195.279751ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.225:8443/api/v1/nodes/ha-344156-m03
	I0729 18:36:55.624939 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/nodes/ha-344156-m03
	I0729 18:36:55.624947 1073226 round_trippers.go:469] Request Headers:
	I0729 18:36:55.624954 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:36:55.624961 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:36:55.629037 1073226 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 18:36:55.630287 1073226 pod_ready.go:92] pod "kube-apiserver-ha-344156-m03" in "kube-system" namespace has status "Ready":"True"
	I0729 18:36:55.630306 1073226 pod_ready.go:81] duration metric: took 400.826411ms for pod "kube-apiserver-ha-344156-m03" in "kube-system" namespace to be "Ready" ...
	I0729 18:36:55.630319 1073226 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-344156" in "kube-system" namespace to be "Ready" ...
	I0729 18:36:55.824875 1073226 request.go:629] Waited for 194.476725ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.225:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-344156
	I0729 18:36:55.824974 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-344156
	I0729 18:36:55.824981 1073226 round_trippers.go:469] Request Headers:
	I0729 18:36:55.824992 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:36:55.825001 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:36:55.828768 1073226 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 18:36:56.025751 1073226 request.go:629] Waited for 196.356887ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.225:8443/api/v1/nodes/ha-344156
	I0729 18:36:56.025847 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/nodes/ha-344156
	I0729 18:36:56.025858 1073226 round_trippers.go:469] Request Headers:
	I0729 18:36:56.025869 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:36:56.025879 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:36:56.029102 1073226 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 18:36:56.029728 1073226 pod_ready.go:92] pod "kube-controller-manager-ha-344156" in "kube-system" namespace has status "Ready":"True"
	I0729 18:36:56.029748 1073226 pod_ready.go:81] duration metric: took 399.418924ms for pod "kube-controller-manager-ha-344156" in "kube-system" namespace to be "Ready" ...
	I0729 18:36:56.029760 1073226 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-344156-m02" in "kube-system" namespace to be "Ready" ...
	I0729 18:36:56.224852 1073226 request.go:629] Waited for 194.999375ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.225:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-344156-m02
	I0729 18:36:56.224944 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-344156-m02
	I0729 18:36:56.224952 1073226 round_trippers.go:469] Request Headers:
	I0729 18:36:56.224972 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:36:56.224997 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:36:56.229090 1073226 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 18:36:56.425629 1073226 request.go:629] Waited for 195.360713ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.225:8443/api/v1/nodes/ha-344156-m02
	I0729 18:36:56.425719 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/nodes/ha-344156-m02
	I0729 18:36:56.425727 1073226 round_trippers.go:469] Request Headers:
	I0729 18:36:56.425735 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:36:56.425743 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:36:56.429462 1073226 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 18:36:56.430009 1073226 pod_ready.go:92] pod "kube-controller-manager-ha-344156-m02" in "kube-system" namespace has status "Ready":"True"
	I0729 18:36:56.430029 1073226 pod_ready.go:81] duration metric: took 400.261416ms for pod "kube-controller-manager-ha-344156-m02" in "kube-system" namespace to be "Ready" ...
	I0729 18:36:56.430039 1073226 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-344156-m03" in "kube-system" namespace to be "Ready" ...
	I0729 18:36:56.625157 1073226 request.go:629] Waited for 195.039979ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.225:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-344156-m03
	I0729 18:36:56.625236 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-344156-m03
	I0729 18:36:56.625241 1073226 round_trippers.go:469] Request Headers:
	I0729 18:36:56.625248 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:36:56.625253 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:36:56.628682 1073226 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 18:36:56.825729 1073226 request.go:629] Waited for 196.33857ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.225:8443/api/v1/nodes/ha-344156-m03
	I0729 18:36:56.825825 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/nodes/ha-344156-m03
	I0729 18:36:56.825836 1073226 round_trippers.go:469] Request Headers:
	I0729 18:36:56.825847 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:36:56.825858 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:36:56.829208 1073226 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 18:36:56.829806 1073226 pod_ready.go:92] pod "kube-controller-manager-ha-344156-m03" in "kube-system" namespace has status "Ready":"True"
	I0729 18:36:56.829830 1073226 pod_ready.go:81] duration metric: took 399.784132ms for pod "kube-controller-manager-ha-344156-m03" in "kube-system" namespace to be "Ready" ...
	I0729 18:36:56.829844 1073226 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-4p5r9" in "kube-system" namespace to be "Ready" ...
	I0729 18:36:57.024857 1073226 request.go:629] Waited for 194.932413ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.225:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4p5r9
	I0729 18:36:57.024944 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4p5r9
	I0729 18:36:57.024952 1073226 round_trippers.go:469] Request Headers:
	I0729 18:36:57.024960 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:36:57.024964 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:36:57.028894 1073226 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 18:36:57.224851 1073226 request.go:629] Waited for 195.30286ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.225:8443/api/v1/nodes/ha-344156-m02
	I0729 18:36:57.224909 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/nodes/ha-344156-m02
	I0729 18:36:57.224914 1073226 round_trippers.go:469] Request Headers:
	I0729 18:36:57.224921 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:36:57.224927 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:36:57.228320 1073226 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 18:36:57.229288 1073226 pod_ready.go:92] pod "kube-proxy-4p5r9" in "kube-system" namespace has status "Ready":"True"
	I0729 18:36:57.229310 1073226 pod_ready.go:81] duration metric: took 399.458197ms for pod "kube-proxy-4p5r9" in "kube-system" namespace to be "Ready" ...
	I0729 18:36:57.229324 1073226 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-gp282" in "kube-system" namespace to be "Ready" ...
	I0729 18:36:57.425101 1073226 request.go:629] Waited for 195.687697ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.225:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gp282
	I0729 18:36:57.425186 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gp282
	I0729 18:36:57.425194 1073226 round_trippers.go:469] Request Headers:
	I0729 18:36:57.425202 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:36:57.425210 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:36:57.429043 1073226 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 18:36:57.625314 1073226 request.go:629] Waited for 195.379021ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.225:8443/api/v1/nodes/ha-344156
	I0729 18:36:57.625391 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/nodes/ha-344156
	I0729 18:36:57.625398 1073226 round_trippers.go:469] Request Headers:
	I0729 18:36:57.625407 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:36:57.625414 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:36:57.628918 1073226 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 18:36:57.629479 1073226 pod_ready.go:92] pod "kube-proxy-gp282" in "kube-system" namespace has status "Ready":"True"
	I0729 18:36:57.629502 1073226 pod_ready.go:81] duration metric: took 400.16774ms for pod "kube-proxy-gp282" in "kube-system" namespace to be "Ready" ...
	I0729 18:36:57.629512 1073226 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-w68jl" in "kube-system" namespace to be "Ready" ...
	I0729 18:36:57.825577 1073226 request.go:629] Waited for 195.979776ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.225:8443/api/v1/namespaces/kube-system/pods/kube-proxy-w68jl
	I0729 18:36:57.825644 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/namespaces/kube-system/pods/kube-proxy-w68jl
	I0729 18:36:57.825649 1073226 round_trippers.go:469] Request Headers:
	I0729 18:36:57.825657 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:36:57.825664 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:36:57.829084 1073226 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 18:36:58.025115 1073226 request.go:629] Waited for 195.341791ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.225:8443/api/v1/nodes/ha-344156-m03
	I0729 18:36:58.025190 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/nodes/ha-344156-m03
	I0729 18:36:58.025196 1073226 round_trippers.go:469] Request Headers:
	I0729 18:36:58.025204 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:36:58.025212 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:36:58.029074 1073226 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 18:36:58.029735 1073226 pod_ready.go:92] pod "kube-proxy-w68jl" in "kube-system" namespace has status "Ready":"True"
	I0729 18:36:58.029756 1073226 pod_ready.go:81] duration metric: took 400.236648ms for pod "kube-proxy-w68jl" in "kube-system" namespace to be "Ready" ...
	I0729 18:36:58.029766 1073226 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-344156" in "kube-system" namespace to be "Ready" ...
	I0729 18:36:58.224827 1073226 request.go:629] Waited for 194.944952ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.225:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-344156
	I0729 18:36:58.224991 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-344156
	I0729 18:36:58.225011 1073226 round_trippers.go:469] Request Headers:
	I0729 18:36:58.225039 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:36:58.225064 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:36:58.228220 1073226 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 18:36:58.425657 1073226 request.go:629] Waited for 196.363001ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.225:8443/api/v1/nodes/ha-344156
	I0729 18:36:58.425718 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/nodes/ha-344156
	I0729 18:36:58.425723 1073226 round_trippers.go:469] Request Headers:
	I0729 18:36:58.425731 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:36:58.425738 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:36:58.429029 1073226 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 18:36:58.429594 1073226 pod_ready.go:92] pod "kube-scheduler-ha-344156" in "kube-system" namespace has status "Ready":"True"
	I0729 18:36:58.429613 1073226 pod_ready.go:81] duration metric: took 399.839055ms for pod "kube-scheduler-ha-344156" in "kube-system" namespace to be "Ready" ...
	I0729 18:36:58.429623 1073226 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-344156-m02" in "kube-system" namespace to be "Ready" ...
	I0729 18:36:58.625772 1073226 request.go:629] Waited for 196.067134ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.225:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-344156-m02
	I0729 18:36:58.625847 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-344156-m02
	I0729 18:36:58.625852 1073226 round_trippers.go:469] Request Headers:
	I0729 18:36:58.625859 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:36:58.625864 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:36:58.629267 1073226 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 18:36:58.825657 1073226 request.go:629] Waited for 195.355459ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.225:8443/api/v1/nodes/ha-344156-m02
	I0729 18:36:58.825720 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/nodes/ha-344156-m02
	I0729 18:36:58.825725 1073226 round_trippers.go:469] Request Headers:
	I0729 18:36:58.825732 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:36:58.825738 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:36:58.829198 1073226 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 18:36:58.829863 1073226 pod_ready.go:92] pod "kube-scheduler-ha-344156-m02" in "kube-system" namespace has status "Ready":"True"
	I0729 18:36:58.829882 1073226 pod_ready.go:81] duration metric: took 400.250514ms for pod "kube-scheduler-ha-344156-m02" in "kube-system" namespace to be "Ready" ...
	I0729 18:36:58.829892 1073226 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-344156-m03" in "kube-system" namespace to be "Ready" ...
	I0729 18:36:59.024864 1073226 request.go:629] Waited for 194.90098ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.225:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-344156-m03
	I0729 18:36:59.024942 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-344156-m03
	I0729 18:36:59.024949 1073226 round_trippers.go:469] Request Headers:
	I0729 18:36:59.024981 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:36:59.024991 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:36:59.028464 1073226 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 18:36:59.225571 1073226 request.go:629] Waited for 196.360643ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.225:8443/api/v1/nodes/ha-344156-m03
	I0729 18:36:59.225649 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/nodes/ha-344156-m03
	I0729 18:36:59.225655 1073226 round_trippers.go:469] Request Headers:
	I0729 18:36:59.225662 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:36:59.225666 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:36:59.229072 1073226 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 18:36:59.229751 1073226 pod_ready.go:92] pod "kube-scheduler-ha-344156-m03" in "kube-system" namespace has status "Ready":"True"
	I0729 18:36:59.229778 1073226 pod_ready.go:81] duration metric: took 399.879356ms for pod "kube-scheduler-ha-344156-m03" in "kube-system" namespace to be "Ready" ...
	I0729 18:36:59.229790 1073226 pod_ready.go:38] duration metric: took 5.201458046s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
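The waits above repeat one pattern per control-plane pod: GET the pod, read its Ready condition, then GET the node it runs on (those interleaved node GETs are what trip the client-side throttling messages). A minimal client-go sketch of that readiness check follows; the kubeconfig path is an assumption, and kube-proxy-4p5r9 is taken from the log purely as an example pod name. This is an illustration, not minikube's implementation.

// pod_ready_sketch.go (illustrative only)
package main

import (
	"context"
	"fmt"
	"path/filepath"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	// Assumed kubeconfig location.
	kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// GET the pod and report its Ready condition, the same check pod_ready.go:92 logs above.
	pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "kube-proxy-4p5r9", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			fmt.Printf("pod %s Ready=%s\n", pod.Name, cond.Status) // "True" once the pod is Ready
		}
	}
}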
	I0729 18:36:59.229810 1073226 api_server.go:52] waiting for apiserver process to appear ...
	I0729 18:36:59.229867 1073226 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:36:59.246295 1073226 api_server.go:72] duration metric: took 18.522942026s to wait for apiserver process to appear ...
	I0729 18:36:59.246316 1073226 api_server.go:88] waiting for apiserver healthz status ...
	I0729 18:36:59.246338 1073226 api_server.go:253] Checking apiserver healthz at https://192.168.39.225:8443/healthz ...
	I0729 18:36:59.252593 1073226 api_server.go:279] https://192.168.39.225:8443/healthz returned 200:
	ok
	I0729 18:36:59.252662 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/version
	I0729 18:36:59.252672 1073226 round_trippers.go:469] Request Headers:
	I0729 18:36:59.252683 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:36:59.252691 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:36:59.253560 1073226 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0729 18:36:59.253633 1073226 api_server.go:141] control plane version: v1.30.3
	I0729 18:36:59.253652 1073226 api_server.go:131] duration metric: took 7.327939ms to wait for apiserver health ...
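With every pod Ready, the check switches to the apiserver itself: pgrep confirms the kube-apiserver process is up, a raw GET of /healthz must return the literal body "ok", and /version yields the control-plane version that is compared against kubectl later. A sketch of the two HTTP probes using the discovery client; it assumes the clientset cs (and the imports) from the sketch above.

// checkAPIServer mirrors the healthz and version probes logged at api_server.go:253 and /version above.
func checkAPIServer(ctx context.Context, cs *kubernetes.Clientset) error {
	// GET /healthz: a healthy apiserver answers 200 with the plain-text body "ok".
	raw, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(ctx)
	if err != nil {
		return err
	}
	fmt.Printf("healthz: %s\n", raw)

	// GET /version: reports the control-plane version (v1.30.3 in the log above).
	info, err := cs.Discovery().ServerVersion()
	if err != nil {
		return err
	}
	fmt.Printf("control plane version: %s\n", info.GitVersion)
	return nil
}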
	I0729 18:36:59.253661 1073226 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 18:36:59.425463 1073226 request.go:629] Waited for 171.694263ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.225:8443/api/v1/namespaces/kube-system/pods
	I0729 18:36:59.425569 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/namespaces/kube-system/pods
	I0729 18:36:59.425581 1073226 round_trippers.go:469] Request Headers:
	I0729 18:36:59.425594 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:36:59.425598 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:36:59.432633 1073226 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0729 18:36:59.438546 1073226 system_pods.go:59] 24 kube-system pods found
	I0729 18:36:59.438574 1073226 system_pods.go:61] "coredns-7db6d8ff4d-5slmg" [f2aca93c-209e-48b6-a9a5-692bdf185129] Running
	I0729 18:36:59.438579 1073226 system_pods.go:61] "coredns-7db6d8ff4d-h5h7v" [b2b09553-dd59-44ab-a738-41e872defd34] Running
	I0729 18:36:59.438583 1073226 system_pods.go:61] "etcd-ha-344156" [2e8b83d5-7017-4608-800a-47e3400d7202] Running
	I0729 18:36:59.438586 1073226 system_pods.go:61] "etcd-ha-344156-m02" [b5f24011-5d19-4d79-9ce3-512d04f85f7b] Running
	I0729 18:36:59.438592 1073226 system_pods.go:61] "etcd-ha-344156-m03" [708c9812-8669-44a2-8045-abfee39173b6] Running
	I0729 18:36:59.438595 1073226 system_pods.go:61] "kindnet-84nqp" [f4e18e53-1c72-440f-82b2-bd1b4306af12] Running
	I0729 18:36:59.438598 1073226 system_pods.go:61] "kindnet-b85cc" [f441d276-e90f-447c-add8-ca3ff1cfe1b7] Running
	I0729 18:36:59.438603 1073226 system_pods.go:61] "kindnet-ks57n" [81bef3d8-fc4e-459e-a7d1-bb6406706ffc] Running
	I0729 18:36:59.438607 1073226 system_pods.go:61] "kube-apiserver-ha-344156" [21dabe32-a355-40dd-a5fa-07799c64e9c8] Running
	I0729 18:36:59.438613 1073226 system_pods.go:61] "kube-apiserver-ha-344156-m02" [1b4acc44-23c7-4357-aa12-1b8c334ee75b] Running
	I0729 18:36:59.438616 1073226 system_pods.go:61] "kube-apiserver-ha-344156-m03" [caa0c4ad-7c27-4b32-9b27-8c31b698ff94] Running
	I0729 18:36:59.438621 1073226 system_pods.go:61] "kube-controller-manager-ha-344156" [f978182c-8550-4c1f-9bd2-2472243bcff3] Running
	I0729 18:36:59.438628 1073226 system_pods.go:61] "kube-controller-manager-ha-344156-m02" [64231ae8-189e-4209-b17f-ebc54671ae12] Running
	I0729 18:36:59.438631 1073226 system_pods.go:61] "kube-controller-manager-ha-344156-m03" [c51f5210-8b7f-40b6-beef-07116362f52b] Running
	I0729 18:36:59.438634 1073226 system_pods.go:61] "kube-proxy-4p5r9" [de6a7e19-b62d-4fb8-80f1-91f95f682925] Running
	I0729 18:36:59.438638 1073226 system_pods.go:61] "kube-proxy-gp282" [abf94303-b608-45b5-ae8b-9288be614a8f] Running
	I0729 18:36:59.438642 1073226 system_pods.go:61] "kube-proxy-w68jl" [973b384e-931f-462f-b46b-fb2b28400627] Running
	I0729 18:36:59.438645 1073226 system_pods.go:61] "kube-scheduler-ha-344156" [f553855a-6964-49d8-81e3-da002793db58] Running
	I0729 18:36:59.438649 1073226 system_pods.go:61] "kube-scheduler-ha-344156-m02" [18eb83e2-8567-4b2d-a205-711e500cedca] Running
	I0729 18:36:59.438652 1073226 system_pods.go:61] "kube-scheduler-ha-344156-m03" [3ea0d519-3b7c-4d22-a442-9d58d43876c3] Running
	I0729 18:36:59.438655 1073226 system_pods.go:61] "kube-vip-ha-344156" [586052c5-c670-4957-b052-e2a7bf8bafb2] Running
	I0729 18:36:59.438657 1073226 system_pods.go:61] "kube-vip-ha-344156-m02" [a7d6e797-e7c1-457f-820e-a08d50f0a954] Running
	I0729 18:36:59.438660 1073226 system_pods.go:61] "kube-vip-ha-344156-m03" [7deb3adf-e964-4206-a768-380b5425bb9e] Running
	I0729 18:36:59.438663 1073226 system_pods.go:61] "storage-provisioner" [3ea00f25-122f-4a18-9d69-3606cfddf4d9] Running
	I0729 18:36:59.438668 1073226 system_pods.go:74] duration metric: took 184.998775ms to wait for pod list to return data ...
	I0729 18:36:59.438678 1073226 default_sa.go:34] waiting for default service account to be created ...
	I0729 18:36:59.625117 1073226 request.go:629] Waited for 186.346422ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.225:8443/api/v1/namespaces/default/serviceaccounts
	I0729 18:36:59.625195 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/namespaces/default/serviceaccounts
	I0729 18:36:59.625202 1073226 round_trippers.go:469] Request Headers:
	I0729 18:36:59.625212 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:36:59.625217 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:36:59.628921 1073226 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 18:36:59.629064 1073226 default_sa.go:45] found service account: "default"
	I0729 18:36:59.629082 1073226 default_sa.go:55] duration metric: took 190.396612ms for default service account to be created ...
	I0729 18:36:59.629095 1073226 system_pods.go:116] waiting for k8s-apps to be running ...
	I0729 18:36:59.825557 1073226 request.go:629] Waited for 196.368467ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.225:8443/api/v1/namespaces/kube-system/pods
	I0729 18:36:59.825621 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/namespaces/kube-system/pods
	I0729 18:36:59.825626 1073226 round_trippers.go:469] Request Headers:
	I0729 18:36:59.825634 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:36:59.825640 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:36:59.833031 1073226 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0729 18:36:59.839260 1073226 system_pods.go:86] 24 kube-system pods found
	I0729 18:36:59.839284 1073226 system_pods.go:89] "coredns-7db6d8ff4d-5slmg" [f2aca93c-209e-48b6-a9a5-692bdf185129] Running
	I0729 18:36:59.839290 1073226 system_pods.go:89] "coredns-7db6d8ff4d-h5h7v" [b2b09553-dd59-44ab-a738-41e872defd34] Running
	I0729 18:36:59.839294 1073226 system_pods.go:89] "etcd-ha-344156" [2e8b83d5-7017-4608-800a-47e3400d7202] Running
	I0729 18:36:59.839298 1073226 system_pods.go:89] "etcd-ha-344156-m02" [b5f24011-5d19-4d79-9ce3-512d04f85f7b] Running
	I0729 18:36:59.839305 1073226 system_pods.go:89] "etcd-ha-344156-m03" [708c9812-8669-44a2-8045-abfee39173b6] Running
	I0729 18:36:59.839311 1073226 system_pods.go:89] "kindnet-84nqp" [f4e18e53-1c72-440f-82b2-bd1b4306af12] Running
	I0729 18:36:59.839320 1073226 system_pods.go:89] "kindnet-b85cc" [f441d276-e90f-447c-add8-ca3ff1cfe1b7] Running
	I0729 18:36:59.839330 1073226 system_pods.go:89] "kindnet-ks57n" [81bef3d8-fc4e-459e-a7d1-bb6406706ffc] Running
	I0729 18:36:59.839336 1073226 system_pods.go:89] "kube-apiserver-ha-344156" [21dabe32-a355-40dd-a5fa-07799c64e9c8] Running
	I0729 18:36:59.839347 1073226 system_pods.go:89] "kube-apiserver-ha-344156-m02" [1b4acc44-23c7-4357-aa12-1b8c334ee75b] Running
	I0729 18:36:59.839354 1073226 system_pods.go:89] "kube-apiserver-ha-344156-m03" [caa0c4ad-7c27-4b32-9b27-8c31b698ff94] Running
	I0729 18:36:59.839359 1073226 system_pods.go:89] "kube-controller-manager-ha-344156" [f978182c-8550-4c1f-9bd2-2472243bcff3] Running
	I0729 18:36:59.839365 1073226 system_pods.go:89] "kube-controller-manager-ha-344156-m02" [64231ae8-189e-4209-b17f-ebc54671ae12] Running
	I0729 18:36:59.839370 1073226 system_pods.go:89] "kube-controller-manager-ha-344156-m03" [c51f5210-8b7f-40b6-beef-07116362f52b] Running
	I0729 18:36:59.839378 1073226 system_pods.go:89] "kube-proxy-4p5r9" [de6a7e19-b62d-4fb8-80f1-91f95f682925] Running
	I0729 18:36:59.839382 1073226 system_pods.go:89] "kube-proxy-gp282" [abf94303-b608-45b5-ae8b-9288be614a8f] Running
	I0729 18:36:59.839389 1073226 system_pods.go:89] "kube-proxy-w68jl" [973b384e-931f-462f-b46b-fb2b28400627] Running
	I0729 18:36:59.839392 1073226 system_pods.go:89] "kube-scheduler-ha-344156" [f553855a-6964-49d8-81e3-da002793db58] Running
	I0729 18:36:59.839396 1073226 system_pods.go:89] "kube-scheduler-ha-344156-m02" [18eb83e2-8567-4b2d-a205-711e500cedca] Running
	I0729 18:36:59.839400 1073226 system_pods.go:89] "kube-scheduler-ha-344156-m03" [3ea0d519-3b7c-4d22-a442-9d58d43876c3] Running
	I0729 18:36:59.839406 1073226 system_pods.go:89] "kube-vip-ha-344156" [586052c5-c670-4957-b052-e2a7bf8bafb2] Running
	I0729 18:36:59.839412 1073226 system_pods.go:89] "kube-vip-ha-344156-m02" [a7d6e797-e7c1-457f-820e-a08d50f0a954] Running
	I0729 18:36:59.839417 1073226 system_pods.go:89] "kube-vip-ha-344156-m03" [7deb3adf-e964-4206-a768-380b5425bb9e] Running
	I0729 18:36:59.839427 1073226 system_pods.go:89] "storage-provisioner" [3ea00f25-122f-4a18-9d69-3606cfddf4d9] Running
	I0729 18:36:59.839436 1073226 system_pods.go:126] duration metric: took 210.333714ms to wait for k8s-apps to be running ...
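The two kube-system listings above (system_pods.go:59 and system_pods.go:89) are the same assertion run twice: list every pod in the kube-system namespace and confirm each one reports Running. A one-shot equivalent, again assuming the clientset cs and the imports from the first sketch:

// allKubeSystemPodsRunning lists kube-system pods and checks each phase, like the 24-pod listings above.
func allKubeSystemPodsRunning(ctx context.Context, cs *kubernetes.Clientset) (bool, error) {
	pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
	if err != nil {
		return false, err
	}
	allRunning := true
	for _, p := range pods.Items {
		fmt.Printf("%q %s\n", p.Name, p.Status.Phase)
		if p.Status.Phase != corev1.PodRunning {
			allRunning = false
		}
	}
	return allRunning, nil
}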
	I0729 18:36:59.839449 1073226 system_svc.go:44] waiting for kubelet service to be running ....
	I0729 18:36:59.839501 1073226 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 18:36:59.854781 1073226 system_svc.go:56] duration metric: took 15.326891ms WaitForService to wait for kubelet
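The kubelet check is not an API call: minikube runs "sudo systemctl is-active --quiet service kubelet" over SSH and treats a zero exit status as running. Locally the same probe reduces to a systemctl invocation; a sketch (the unit name kubelet matches the unit used inside the minikube guest):

// kubelet_active_sketch.go: exit status 0 from "systemctl is-active --quiet" means the unit is active.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	if err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run(); err != nil {
		fmt.Println("kubelet is not active:", err)
		return
	}
	fmt.Println("kubelet is active")
}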
	I0729 18:36:59.854808 1073226 kubeadm.go:582] duration metric: took 19.131460744s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 18:36:59.854832 1073226 node_conditions.go:102] verifying NodePressure condition ...
	I0729 18:37:00.025220 1073226 request.go:629] Waited for 170.267627ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.225:8443/api/v1/nodes
	I0729 18:37:00.025299 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/nodes
	I0729 18:37:00.025306 1073226 round_trippers.go:469] Request Headers:
	I0729 18:37:00.025316 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:37:00.025322 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:37:00.030361 1073226 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0729 18:37:00.031361 1073226 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 18:37:00.031383 1073226 node_conditions.go:123] node cpu capacity is 2
	I0729 18:37:00.031395 1073226 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 18:37:00.031400 1073226 node_conditions.go:123] node cpu capacity is 2
	I0729 18:37:00.031405 1073226 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 18:37:00.031410 1073226 node_conditions.go:123] node cpu capacity is 2
	I0729 18:37:00.031418 1073226 node_conditions.go:105] duration metric: took 176.580777ms to run NodePressure ...
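The NodePressure step lists every node and reads its reported capacity; the three ephemeral-storage/cpu pairs above correspond to ha-344156, ha-344156-m02 and ha-344156-m03. A sketch of the same readout, reusing cs and the imports from the first sketch:

// printNodeCapacity lists nodes and prints the capacity fields checked by node_conditions.go above.
func printNodeCapacity(ctx context.Context, cs *kubernetes.Clientset) error {
	nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, n := range nodes.Items {
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name, storage.String(), cpu.String())
	}
	return nil
}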
	I0729 18:37:00.031436 1073226 start.go:241] waiting for startup goroutines ...
	I0729 18:37:00.031460 1073226 start.go:255] writing updated cluster config ...
	I0729 18:37:00.031782 1073226 ssh_runner.go:195] Run: rm -f paused
	I0729 18:37:00.086434 1073226 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0729 18:37:00.088481 1073226 out.go:177] * Done! kubectl is now configured to use "ha-344156" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Jul 29 18:41:03 ha-344156 crio[680]: time="2024-07-29 18:41:03.613032827Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722278463612974582,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d52e4509-e7d0-4149-b274-0231629b3fd4 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 18:41:03 ha-344156 crio[680]: time="2024-07-29 18:41:03.613897945Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8eac0594-3791-4490-9451-1e70d6b2d58d name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:41:03 ha-344156 crio[680]: time="2024-07-29 18:41:03.614073781Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8eac0594-3791-4490-9451-1e70d6b2d58d name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:41:03 ha-344156 crio[680]: time="2024-07-29 18:41:03.614981459Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d152449ddedd3a52cbbb9d3acfb3bf85c0e5fa9f81a0c0359f4148d4c603d783,PodSandboxId:98fcabecdf16c058b2c9b2d5b67a175d4427e2426d8c8ecad90fe5e7e61c7166,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722278222484902442,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-9sbfq,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f11563c5-3507-44f0-a103-1e8462494e13,},Annotations:map[string]string{io.kubernetes.container.hash: fb54a535,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a4d13ace439ff6db0bd224c5959b2f1de0aca9190251438b96b230bd76dad67,PodSandboxId:331a36b1d7af6a03c1de960f2f92f9e567bb8d9a89fef7342712caae96969f2c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722278090682794373,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-h5h7v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2b09553-dd59-44ab-a738-41e872defd34,},Annotations:map[string]string{io.kubernetes.container.hash: 59c68fb6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0420967445f9211bb2a8fcd8373564a68efa30847b800b0baa219266c006cc72,PodSandboxId:aee8f75d6b1bbb3fb9c1d5339f35d5df5cf4d72ba4fc03e063c97a60693b2321,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722278090665637409,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: 3ea00f25-122f-4a18-9d69-3606cfddf4d9,},Annotations:map[string]string{io.kubernetes.container.hash: 70731b68,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d0acef755a4a9cf64d3fa80a06a2fb7cd2c2f24d851c814a12dbfd69b8c8ae6,PodSandboxId:3bc8a1c2175a3fcdce5b369132d086e20e9843f84b0af2dec1acd2dc3f598cb2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722278090616073402,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-5slmg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2aca93c-20
9e-48b6-a9a5-692bdf185129,},Annotations:map[string]string{io.kubernetes.container.hash: 48049156,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88c61cb99966582064c98436dabbb6247148296145067505f732961e9dafcf62,PodSandboxId:5312fee5fcd07548b5a87233879d29cd884fb0a7e49ffeffe66817b71a7b2ac9,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CO
NTAINER_RUNNING,CreatedAt:1722278078648045390,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-84nqp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4e18e53-1c72-440f-82b2-bd1b4306af12,},Annotations:map[string]string{io.kubernetes.container.hash: 16293ddd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea6501e2c6d48c68182f6d966404f0d58013e7ee6b2d05e6e8a8de079a01e50b,PodSandboxId:f041673054c6d8c2cbbc857f62b73eafbb56f1089f1a1937ee91d2e3cdb89df9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:172227807
6564427868,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gp282,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abf94303-b608-45b5-ae8b-9288be614a8f,},Annotations:map[string]string{io.kubernetes.container.hash: 6e0cc5f5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df682abbd97678618dabe8275a57ffd1f327de1e734e117a59fd4f520eaf1b79,PodSandboxId:9d199e4c3c06ca4ceb4ada9b80a1fff0ef24acdcf1fc9899060d41b381f9d867,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17222780595
62262847,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-344156,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 70bafc7f0ed9afe903828ea70a6c8bbb,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cea7dd8ee7d180192a5a6562a72a56f86a9a432553225602839d9657f42f95a4,PodSandboxId:ec39a320a672eea9866c1f830b546dc2e1fc8f0a3093acc13b1acd6b5d008317,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722278056834785047,Labels:map[string]string
{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-344156,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d17047d55559cfd90852a780672fb93,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc27c145e7b72db405baaf295995d274d557ba7dbce383424c6297461d859b29,PodSandboxId:5e0320966c0af472e5e166dc8244abd4707674553da0aef0c877b9db5c6b053c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722278056768210507,Labels:map[string]string{io.kubernetes.container.name:
etcd,io.kubernetes.pod.name: etcd-ha-344156,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67610b75999e06603675bc1a64d5ef7d,},Annotations:map[string]string{io.kubernetes.container.hash: 4f9376d5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15f9d79f9c9682c7273de711cee53f9f833182ceb7abdd39bb612f44066ac6f4,PodSandboxId:54990a7607809732d80dbb19df04598ee30197286b1d0daf1deaa436f2b03d03,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722278056772614281,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kuber
netes.pod.name: kube-controller-manager-ha-344156,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 30243da5f1a98e23c72326dd278a562e,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24d097bf3e16a2c4b74c82ba78ce7e6eb19b3461d66b573a3d5ba23c5df6a472,PodSandboxId:60907e40ccbbf42ef085bf897b5855fd240e5105657171fe08cadbcd811bcf86,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722278056748446079,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.na
me: kube-apiserver-ha-344156,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d61da37ea38b5727b5710cdad0fc95fd,},Annotations:map[string]string{io.kubernetes.container.hash: c06782b3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8eac0594-3791-4490-9451-1e70d6b2d58d name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:41:03 ha-344156 crio[680]: time="2024-07-29 18:41:03.620579290Z" level=debug msg="Request: &ListImagesRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=c1981fc9-ef26-4be0-8f62-006bc7c88431 name=/runtime.v1.ImageService/ListImages
	Jul 29 18:41:03 ha-344156 crio[680]: time="2024-07-29 18:41:03.620984288Z" level=debug msg="Response: &ListImagesResponse{Images:[]*Image{&Image{Id:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,RepoTags:[registry.k8s.io/kube-apiserver:v1.30.3],RepoDigests:[registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c registry.k8s.io/kube-apiserver@sha256:a3a6c80030a6e720734ae3291448388f70b6f1d463f103e4f06f358f8a170315],Size_:117609954,Uid:&Int64Value{Value:0,},Username:,Spec:nil,Pinned:false,},&Image{Id:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,RepoTags:[registry.k8s.io/kube-controller-manager:v1.30.3],RepoDigests:[registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7 registry.k8s.io/kube-controller-manager@sha256:fa179d147c6bacddd1586f6d12ff79a844e951c7b159fdcb92cdf56f3033d91e],Size_:112198984,Uid:&Int64Value{Value:0,},Username:,Spec:nil,Pinned:false,},&Image{I
d:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,RepoTags:[registry.k8s.io/kube-scheduler:v1.30.3],RepoDigests:[registry.k8s.io/kube-scheduler@sha256:1738178fb116d10e7cde2cfc3671f5dfdad518d773677af740483f2dfe674266 registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4],Size_:63051080,Uid:&Int64Value{Value:0,},Username:,Spec:nil,Pinned:false,},&Image{Id:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,RepoTags:[registry.k8s.io/kube-proxy:v1.30.3],RepoDigests:[registry.k8s.io/kube-proxy@sha256:8c178447597867a03bbcdf0d1ce43fc8f6807ead2321bd1ec0e845a2f12dad80 registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65],Size_:85953945,Uid:nil,Username:,Spec:nil,Pinned:false,},&Image{Id:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c,RepoTags:[registry.k8s.io/pause:3.9],RepoDigests:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 reg
istry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10],Size_:750414,Uid:&Int64Value{Value:65535,},Username:,Spec:nil,Pinned:true,},&Image{Id:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,RepoTags:[registry.k8s.io/etcd:3.5.12-0],RepoDigests:[registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62 registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b],Size_:150779692,Uid:&Int64Value{Value:0,},Username:,Spec:nil,Pinned:false,},&Image{Id:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,RepoTags:[registry.k8s.io/coredns/coredns:v1.11.1],RepoDigests:[registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1 registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870],Size_:61245718,Uid:nil,Username:nonroot,Spec:nil,Pinned:false,},&Image{Id:6e38f40d628db3002f5617342c8872c935de530d86
7d0f709a2fbda1a302a562,RepoTags:[gcr.io/k8s-minikube/storage-provisioner:v5],RepoDigests:[gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944 gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f],Size_:31470524,Uid:nil,Username:,Spec:nil,Pinned:false,},&Image{Id:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,RepoTags:[docker.io/kindest/kindnetd:v20240715-585640e9],RepoDigests:[docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115 docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493],Size_:87165492,Uid:nil,Username:,Spec:nil,Pinned:false,},&Image{Id:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,RepoTags:[ghcr.io/kube-vip/kube-vip:v0.8.0],RepoDigests:[ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f ghcr.io/kube-vip/kube
-vip@sha256:7eb725aff32fd4b31484f6e8e44b538f8403ebc8bd4218ea0ec28218682afff1],Size_:49570267,Uid:nil,Username:,Spec:nil,Pinned:false,},&Image{Id:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,RepoTags:[docker.io/kindest/kindnetd:v20240719-e7903573],RepoDigests:[docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9 docker.io/kindest/kindnetd@sha256:da8ad203ec15a72c313015e5609db44bfad7c95d8ce63e87ff97c66363b5680a],Size_:87174707,Uid:nil,Username:,Spec:nil,Pinned:false,},&Image{Id:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,RepoTags:[gcr.io/k8s-minikube/busybox:1.28],RepoDigests:[gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335 gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12],Size_:1363676,Uid:nil,Username:,Spec:nil,Pinned:false,},},}" file="otel-collector/interceptors.go:74" id=c1981fc9-ef26-4be0-8f62-006bc7c88431 name=/runtime
.v1.ImageService/ListImages
	Jul 29 18:41:03 ha-344156 crio[680]: time="2024-07-29 18:41:03.669544628Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c70084f7-dd9a-43be-a49c-d49b547c50d7 name=/runtime.v1.RuntimeService/Version
	Jul 29 18:41:03 ha-344156 crio[680]: time="2024-07-29 18:41:03.669634730Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c70084f7-dd9a-43be-a49c-d49b547c50d7 name=/runtime.v1.RuntimeService/Version
	Jul 29 18:41:03 ha-344156 crio[680]: time="2024-07-29 18:41:03.673859706Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=cf775616-7283-4618-9f6d-678080cd18c8 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 18:41:03 ha-344156 crio[680]: time="2024-07-29 18:41:03.674410782Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722278463674375073,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=cf775616-7283-4618-9f6d-678080cd18c8 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 18:41:03 ha-344156 crio[680]: time="2024-07-29 18:41:03.675600086Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=dd8d31fb-9439-46fd-b989-71453f9f23a2 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 29 18:41:03 ha-344156 crio[680]: time="2024-07-29 18:41:03.675899087Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:98fcabecdf16c058b2c9b2d5b67a175d4427e2426d8c8ecad90fe5e7e61c7166,Metadata:&PodSandboxMetadata{Name:busybox-fc5497c4f-9sbfq,Uid:f11563c5-3507-44f0-a103-1e8462494e13,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722278221317798975,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-fc5497c4f-9sbfq,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f11563c5-3507-44f0-a103-1e8462494e13,pod-template-hash: fc5497c4f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-29T18:37:01.004554243Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:331a36b1d7af6a03c1de960f2f92f9e567bb8d9a89fef7342712caae96969f2c,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-h5h7v,Uid:b2b09553-dd59-44ab-a738-41e872defd34,Namespace:kube-system,Attempt:0,},State:S
ANDBOX_READY,CreatedAt:1722278090417915698,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-h5h7v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2b09553-dd59-44ab-a738-41e872defd34,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-29T18:34:50.091754249Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:aee8f75d6b1bbb3fb9c1d5339f35d5df5cf4d72ba4fc03e063c97a60693b2321,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:3ea00f25-122f-4a18-9d69-3606cfddf4d9,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722278090397601719,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ea00f25-122f-4a18-9d69-3606cfddf4d9,},Annotations:map[string]string{kubec
tl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-07-29T18:34:50.089494513Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:3bc8a1c2175a3fcdce5b369132d086e20e9843f84b0af2dec1acd2dc3f598cb2,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-5slmg,Uid:f2aca93c-209e-48b6-a9a5-692bdf185129,Namespace:kube-system,Atte
mpt:0,},State:SANDBOX_READY,CreatedAt:1722278090390442126,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-5slmg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2aca93c-209e-48b6-a9a5-692bdf185129,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-29T18:34:50.082932184Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:5312fee5fcd07548b5a87233879d29cd884fb0a7e49ffeffe66817b71a7b2ac9,Metadata:&PodSandboxMetadata{Name:kindnet-84nqp,Uid:f4e18e53-1c72-440f-82b2-bd1b4306af12,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722278076425487658,Labels:map[string]string{app: kindnet,controller-revision-hash: 549967b474,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-84nqp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4e18e53-1c72-440f-82b2-bd1b4306af12,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotati
ons:map[string]string{kubernetes.io/config.seen: 2024-07-29T18:34:36.111643087Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:f041673054c6d8c2cbbc857f62b73eafbb56f1089f1a1937ee91d2e3cdb89df9,Metadata:&PodSandboxMetadata{Name:kube-proxy-gp282,Uid:abf94303-b608-45b5-ae8b-9288be614a8f,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722278076391754673,Labels:map[string]string{controller-revision-hash: 5bbc78d4f8,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-gp282,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abf94303-b608-45b5-ae8b-9288be614a8f,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-29T18:34:36.082951745Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:5e0320966c0af472e5e166dc8244abd4707674553da0aef0c877b9db5c6b053c,Metadata:&PodSandboxMetadata{Name:etcd-ha-344156,Uid:67610b75999e06603675bc1a64d5ef7d,Namespace:kube-system,Attempt:0,},State:S
ANDBOX_READY,CreatedAt:1722278056565120490,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-ha-344156,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67610b75999e06603675bc1a64d5ef7d,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.225:2379,kubernetes.io/config.hash: 67610b75999e06603675bc1a64d5ef7d,kubernetes.io/config.seen: 2024-07-29T18:34:16.075600528Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:9d199e4c3c06ca4ceb4ada9b80a1fff0ef24acdcf1fc9899060d41b381f9d867,Metadata:&PodSandboxMetadata{Name:kube-vip-ha-344156,Uid:70bafc7f0ed9afe903828ea70a6c8bbb,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722278056555341094,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-vip-ha-344156,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 70bafc7f0ed9afe903828ea70a6c8bbb,},Annotations:m
ap[string]string{kubernetes.io/config.hash: 70bafc7f0ed9afe903828ea70a6c8bbb,kubernetes.io/config.seen: 2024-07-29T18:34:16.075599613Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:ec39a320a672eea9866c1f830b546dc2e1fc8f0a3093acc13b1acd6b5d008317,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-344156,Uid:5d17047d55559cfd90852a780672fb93,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722278056554194069,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-344156,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d17047d55559cfd90852a780672fb93,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 5d17047d55559cfd90852a780672fb93,kubernetes.io/config.seen: 2024-07-29T18:34:16.075598803Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:60907e40ccbbf42ef085bf897b5855fd240e5105657171fe08cadbcd811bcf86,Metadata:&PodSandboxMetadata{Name:kube-a
piserver-ha-344156,Uid:d61da37ea38b5727b5710cdad0fc95fd,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722278056534928321,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-ha-344156,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d61da37ea38b5727b5710cdad0fc95fd,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.225:8443,kubernetes.io/config.hash: d61da37ea38b5727b5710cdad0fc95fd,kubernetes.io/config.seen: 2024-07-29T18:34:16.075593783Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:54990a7607809732d80dbb19df04598ee30197286b1d0daf1deaa436f2b03d03,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-ha-344156,Uid:30243da5f1a98e23c72326dd278a562e,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722278056530144998,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.c
ontainer.name: POD,io.kubernetes.pod.name: kube-controller-manager-ha-344156,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 30243da5f1a98e23c72326dd278a562e,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 30243da5f1a98e23c72326dd278a562e,kubernetes.io/config.seen: 2024-07-29T18:34:16.075597470Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=dd8d31fb-9439-46fd-b989-71453f9f23a2 name=/runtime.v1.RuntimeService/ListPodSandbox
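From here the report switches to the CRI-O journal on ha-344156. The repeated ImageFsInfo, ListContainers, ListImages, Version and ListPodSandbox requests are the kubelet polling the runtime over the CRI; the same endpoints can be exercised by hand with crictl when debugging. A sketch that shells out to the matching crictl subcommands (it assumes crictl is installed and pointed at CRI-O's default socket):

// crictl_sketch.go: each subcommand maps to one RPC seen in the journal
// (version -> Version, ps -a -> ListContainers, images -> ListImages, pods -> ListPodSandbox).
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	for _, args := range [][]string{
		{"version"},
		{"ps", "-a"},
		{"images"},
		{"pods"},
	} {
		out, err := exec.Command("sudo", append([]string{"crictl"}, args...)...).CombinedOutput()
		if err != nil {
			fmt.Printf("crictl %s failed: %v\n", strings.Join(args, " "), err)
			continue
		}
		fmt.Printf("$ crictl %s\n%s\n", strings.Join(args, " "), out)
	}
}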
	Jul 29 18:41:03 ha-344156 crio[680]: time="2024-07-29 18:41:03.676221266Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e3286474-bd77-43e8-9ea5-42412bc1c95f name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:41:03 ha-344156 crio[680]: time="2024-07-29 18:41:03.677495810Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e3286474-bd77-43e8-9ea5-42412bc1c95f name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:41:03 ha-344156 crio[680]: time="2024-07-29 18:41:03.677922329Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d152449ddedd3a52cbbb9d3acfb3bf85c0e5fa9f81a0c0359f4148d4c603d783,PodSandboxId:98fcabecdf16c058b2c9b2d5b67a175d4427e2426d8c8ecad90fe5e7e61c7166,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722278222484902442,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-9sbfq,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f11563c5-3507-44f0-a103-1e8462494e13,},Annotations:map[string]string{io.kubernetes.container.hash: fb54a535,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a4d13ace439ff6db0bd224c5959b2f1de0aca9190251438b96b230bd76dad67,PodSandboxId:331a36b1d7af6a03c1de960f2f92f9e567bb8d9a89fef7342712caae96969f2c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722278090682794373,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-h5h7v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2b09553-dd59-44ab-a738-41e872defd34,},Annotations:map[string]string{io.kubernetes.container.hash: 59c68fb6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0420967445f9211bb2a8fcd8373564a68efa30847b800b0baa219266c006cc72,PodSandboxId:aee8f75d6b1bbb3fb9c1d5339f35d5df5cf4d72ba4fc03e063c97a60693b2321,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722278090665637409,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: 3ea00f25-122f-4a18-9d69-3606cfddf4d9,},Annotations:map[string]string{io.kubernetes.container.hash: 70731b68,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d0acef755a4a9cf64d3fa80a06a2fb7cd2c2f24d851c814a12dbfd69b8c8ae6,PodSandboxId:3bc8a1c2175a3fcdce5b369132d086e20e9843f84b0af2dec1acd2dc3f598cb2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722278090616073402,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-5slmg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2aca93c-20
9e-48b6-a9a5-692bdf185129,},Annotations:map[string]string{io.kubernetes.container.hash: 48049156,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88c61cb99966582064c98436dabbb6247148296145067505f732961e9dafcf62,PodSandboxId:5312fee5fcd07548b5a87233879d29cd884fb0a7e49ffeffe66817b71a7b2ac9,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CO
NTAINER_RUNNING,CreatedAt:1722278078648045390,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-84nqp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4e18e53-1c72-440f-82b2-bd1b4306af12,},Annotations:map[string]string{io.kubernetes.container.hash: 16293ddd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea6501e2c6d48c68182f6d966404f0d58013e7ee6b2d05e6e8a8de079a01e50b,PodSandboxId:f041673054c6d8c2cbbc857f62b73eafbb56f1089f1a1937ee91d2e3cdb89df9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:172227807
6564427868,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gp282,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abf94303-b608-45b5-ae8b-9288be614a8f,},Annotations:map[string]string{io.kubernetes.container.hash: 6e0cc5f5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df682abbd97678618dabe8275a57ffd1f327de1e734e117a59fd4f520eaf1b79,PodSandboxId:9d199e4c3c06ca4ceb4ada9b80a1fff0ef24acdcf1fc9899060d41b381f9d867,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17222780595
62262847,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-344156,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 70bafc7f0ed9afe903828ea70a6c8bbb,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cea7dd8ee7d180192a5a6562a72a56f86a9a432553225602839d9657f42f95a4,PodSandboxId:ec39a320a672eea9866c1f830b546dc2e1fc8f0a3093acc13b1acd6b5d008317,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722278056834785047,Labels:map[string]string
{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-344156,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d17047d55559cfd90852a780672fb93,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc27c145e7b72db405baaf295995d274d557ba7dbce383424c6297461d859b29,PodSandboxId:5e0320966c0af472e5e166dc8244abd4707674553da0aef0c877b9db5c6b053c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722278056768210507,Labels:map[string]string{io.kubernetes.container.name:
etcd,io.kubernetes.pod.name: etcd-ha-344156,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67610b75999e06603675bc1a64d5ef7d,},Annotations:map[string]string{io.kubernetes.container.hash: 4f9376d5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15f9d79f9c9682c7273de711cee53f9f833182ceb7abdd39bb612f44066ac6f4,PodSandboxId:54990a7607809732d80dbb19df04598ee30197286b1d0daf1deaa436f2b03d03,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722278056772614281,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kuber
netes.pod.name: kube-controller-manager-ha-344156,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 30243da5f1a98e23c72326dd278a562e,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24d097bf3e16a2c4b74c82ba78ce7e6eb19b3461d66b573a3d5ba23c5df6a472,PodSandboxId:60907e40ccbbf42ef085bf897b5855fd240e5105657171fe08cadbcd811bcf86,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722278056748446079,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.na
me: kube-apiserver-ha-344156,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d61da37ea38b5727b5710cdad0fc95fd,},Annotations:map[string]string{io.kubernetes.container.hash: c06782b3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e3286474-bd77-43e8-9ea5-42412bc1c95f name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:41:03 ha-344156 crio[680]: time="2024-07-29 18:41:03.677241745Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1dad9932-5e78-4b04-b4c5-45e47e589862 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:41:03 ha-344156 crio[680]: time="2024-07-29 18:41:03.678442486Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1dad9932-5e78-4b04-b4c5-45e47e589862 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:41:03 ha-344156 crio[680]: time="2024-07-29 18:41:03.679081155Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d152449ddedd3a52cbbb9d3acfb3bf85c0e5fa9f81a0c0359f4148d4c603d783,PodSandboxId:98fcabecdf16c058b2c9b2d5b67a175d4427e2426d8c8ecad90fe5e7e61c7166,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722278222484902442,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-9sbfq,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f11563c5-3507-44f0-a103-1e8462494e13,},Annotations:map[string]string{io.kubernetes.container.hash: fb54a535,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a4d13ace439ff6db0bd224c5959b2f1de0aca9190251438b96b230bd76dad67,PodSandboxId:331a36b1d7af6a03c1de960f2f92f9e567bb8d9a89fef7342712caae96969f2c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722278090682794373,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-h5h7v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2b09553-dd59-44ab-a738-41e872defd34,},Annotations:map[string]string{io.kubernetes.container.hash: 59c68fb6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0420967445f9211bb2a8fcd8373564a68efa30847b800b0baa219266c006cc72,PodSandboxId:aee8f75d6b1bbb3fb9c1d5339f35d5df5cf4d72ba4fc03e063c97a60693b2321,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722278090665637409,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: 3ea00f25-122f-4a18-9d69-3606cfddf4d9,},Annotations:map[string]string{io.kubernetes.container.hash: 70731b68,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d0acef755a4a9cf64d3fa80a06a2fb7cd2c2f24d851c814a12dbfd69b8c8ae6,PodSandboxId:3bc8a1c2175a3fcdce5b369132d086e20e9843f84b0af2dec1acd2dc3f598cb2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722278090616073402,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-5slmg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2aca93c-20
9e-48b6-a9a5-692bdf185129,},Annotations:map[string]string{io.kubernetes.container.hash: 48049156,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88c61cb99966582064c98436dabbb6247148296145067505f732961e9dafcf62,PodSandboxId:5312fee5fcd07548b5a87233879d29cd884fb0a7e49ffeffe66817b71a7b2ac9,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CO
NTAINER_RUNNING,CreatedAt:1722278078648045390,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-84nqp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4e18e53-1c72-440f-82b2-bd1b4306af12,},Annotations:map[string]string{io.kubernetes.container.hash: 16293ddd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea6501e2c6d48c68182f6d966404f0d58013e7ee6b2d05e6e8a8de079a01e50b,PodSandboxId:f041673054c6d8c2cbbc857f62b73eafbb56f1089f1a1937ee91d2e3cdb89df9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:172227807
6564427868,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gp282,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abf94303-b608-45b5-ae8b-9288be614a8f,},Annotations:map[string]string{io.kubernetes.container.hash: 6e0cc5f5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df682abbd97678618dabe8275a57ffd1f327de1e734e117a59fd4f520eaf1b79,PodSandboxId:9d199e4c3c06ca4ceb4ada9b80a1fff0ef24acdcf1fc9899060d41b381f9d867,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17222780595
62262847,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-344156,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 70bafc7f0ed9afe903828ea70a6c8bbb,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cea7dd8ee7d180192a5a6562a72a56f86a9a432553225602839d9657f42f95a4,PodSandboxId:ec39a320a672eea9866c1f830b546dc2e1fc8f0a3093acc13b1acd6b5d008317,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722278056834785047,Labels:map[string]string
{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-344156,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d17047d55559cfd90852a780672fb93,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc27c145e7b72db405baaf295995d274d557ba7dbce383424c6297461d859b29,PodSandboxId:5e0320966c0af472e5e166dc8244abd4707674553da0aef0c877b9db5c6b053c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722278056768210507,Labels:map[string]string{io.kubernetes.container.name:
etcd,io.kubernetes.pod.name: etcd-ha-344156,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67610b75999e06603675bc1a64d5ef7d,},Annotations:map[string]string{io.kubernetes.container.hash: 4f9376d5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15f9d79f9c9682c7273de711cee53f9f833182ceb7abdd39bb612f44066ac6f4,PodSandboxId:54990a7607809732d80dbb19df04598ee30197286b1d0daf1deaa436f2b03d03,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722278056772614281,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kuber
netes.pod.name: kube-controller-manager-ha-344156,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 30243da5f1a98e23c72326dd278a562e,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24d097bf3e16a2c4b74c82ba78ce7e6eb19b3461d66b573a3d5ba23c5df6a472,PodSandboxId:60907e40ccbbf42ef085bf897b5855fd240e5105657171fe08cadbcd811bcf86,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722278056748446079,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.na
me: kube-apiserver-ha-344156,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d61da37ea38b5727b5710cdad0fc95fd,},Annotations:map[string]string{io.kubernetes.container.hash: c06782b3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1dad9932-5e78-4b04-b4c5-45e47e589862 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:41:03 ha-344156 crio[680]: time="2024-07-29 18:41:03.716518906Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f289aa48-14cc-4b0c-8a58-d167fd6c51bf name=/runtime.v1.RuntimeService/Version
	Jul 29 18:41:03 ha-344156 crio[680]: time="2024-07-29 18:41:03.716628208Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f289aa48-14cc-4b0c-8a58-d167fd6c51bf name=/runtime.v1.RuntimeService/Version
	Jul 29 18:41:03 ha-344156 crio[680]: time="2024-07-29 18:41:03.718007402Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c033efd7-f45e-4f0e-98df-5b33d063b1e6 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 18:41:03 ha-344156 crio[680]: time="2024-07-29 18:41:03.718644085Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722278463718619666,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c033efd7-f45e-4f0e-98df-5b33d063b1e6 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 18:41:03 ha-344156 crio[680]: time="2024-07-29 18:41:03.719060679Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d1cfb4bd-5ff1-4dc8-95f8-cdb3a5a4a0b9 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:41:03 ha-344156 crio[680]: time="2024-07-29 18:41:03.719141089Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d1cfb4bd-5ff1-4dc8-95f8-cdb3a5a4a0b9 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:41:03 ha-344156 crio[680]: time="2024-07-29 18:41:03.719464454Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d152449ddedd3a52cbbb9d3acfb3bf85c0e5fa9f81a0c0359f4148d4c603d783,PodSandboxId:98fcabecdf16c058b2c9b2d5b67a175d4427e2426d8c8ecad90fe5e7e61c7166,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722278222484902442,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-9sbfq,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f11563c5-3507-44f0-a103-1e8462494e13,},Annotations:map[string]string{io.kubernetes.container.hash: fb54a535,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a4d13ace439ff6db0bd224c5959b2f1de0aca9190251438b96b230bd76dad67,PodSandboxId:331a36b1d7af6a03c1de960f2f92f9e567bb8d9a89fef7342712caae96969f2c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722278090682794373,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-h5h7v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2b09553-dd59-44ab-a738-41e872defd34,},Annotations:map[string]string{io.kubernetes.container.hash: 59c68fb6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0420967445f9211bb2a8fcd8373564a68efa30847b800b0baa219266c006cc72,PodSandboxId:aee8f75d6b1bbb3fb9c1d5339f35d5df5cf4d72ba4fc03e063c97a60693b2321,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722278090665637409,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: 3ea00f25-122f-4a18-9d69-3606cfddf4d9,},Annotations:map[string]string{io.kubernetes.container.hash: 70731b68,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d0acef755a4a9cf64d3fa80a06a2fb7cd2c2f24d851c814a12dbfd69b8c8ae6,PodSandboxId:3bc8a1c2175a3fcdce5b369132d086e20e9843f84b0af2dec1acd2dc3f598cb2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722278090616073402,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-5slmg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2aca93c-20
9e-48b6-a9a5-692bdf185129,},Annotations:map[string]string{io.kubernetes.container.hash: 48049156,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88c61cb99966582064c98436dabbb6247148296145067505f732961e9dafcf62,PodSandboxId:5312fee5fcd07548b5a87233879d29cd884fb0a7e49ffeffe66817b71a7b2ac9,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CO
NTAINER_RUNNING,CreatedAt:1722278078648045390,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-84nqp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4e18e53-1c72-440f-82b2-bd1b4306af12,},Annotations:map[string]string{io.kubernetes.container.hash: 16293ddd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea6501e2c6d48c68182f6d966404f0d58013e7ee6b2d05e6e8a8de079a01e50b,PodSandboxId:f041673054c6d8c2cbbc857f62b73eafbb56f1089f1a1937ee91d2e3cdb89df9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:172227807
6564427868,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gp282,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abf94303-b608-45b5-ae8b-9288be614a8f,},Annotations:map[string]string{io.kubernetes.container.hash: 6e0cc5f5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df682abbd97678618dabe8275a57ffd1f327de1e734e117a59fd4f520eaf1b79,PodSandboxId:9d199e4c3c06ca4ceb4ada9b80a1fff0ef24acdcf1fc9899060d41b381f9d867,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17222780595
62262847,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-344156,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 70bafc7f0ed9afe903828ea70a6c8bbb,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cea7dd8ee7d180192a5a6562a72a56f86a9a432553225602839d9657f42f95a4,PodSandboxId:ec39a320a672eea9866c1f830b546dc2e1fc8f0a3093acc13b1acd6b5d008317,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722278056834785047,Labels:map[string]string
{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-344156,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d17047d55559cfd90852a780672fb93,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc27c145e7b72db405baaf295995d274d557ba7dbce383424c6297461d859b29,PodSandboxId:5e0320966c0af472e5e166dc8244abd4707674553da0aef0c877b9db5c6b053c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722278056768210507,Labels:map[string]string{io.kubernetes.container.name:
etcd,io.kubernetes.pod.name: etcd-ha-344156,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67610b75999e06603675bc1a64d5ef7d,},Annotations:map[string]string{io.kubernetes.container.hash: 4f9376d5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15f9d79f9c9682c7273de711cee53f9f833182ceb7abdd39bb612f44066ac6f4,PodSandboxId:54990a7607809732d80dbb19df04598ee30197286b1d0daf1deaa436f2b03d03,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722278056772614281,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kuber
netes.pod.name: kube-controller-manager-ha-344156,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 30243da5f1a98e23c72326dd278a562e,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24d097bf3e16a2c4b74c82ba78ce7e6eb19b3461d66b573a3d5ba23c5df6a472,PodSandboxId:60907e40ccbbf42ef085bf897b5855fd240e5105657171fe08cadbcd811bcf86,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722278056748446079,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.na
me: kube-apiserver-ha-344156,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d61da37ea38b5727b5710cdad0fc95fd,},Annotations:map[string]string{io.kubernetes.container.hash: c06782b3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d1cfb4bd-5ff1-4dc8-95f8-cdb3a5a4a0b9 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	d152449ddedd3       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   4 minutes ago       Running             busybox                   0                   98fcabecdf16c       busybox-fc5497c4f-9sbfq
	1a4d13ace439f       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      6 minutes ago       Running             coredns                   0                   331a36b1d7af6       coredns-7db6d8ff4d-h5h7v
	0420967445f92       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago       Running             storage-provisioner       0                   aee8f75d6b1bb       storage-provisioner
	7d0acef755a4a       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      6 minutes ago       Running             coredns                   0                   3bc8a1c2175a3       coredns-7db6d8ff4d-5slmg
	88c61cb999665       docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9    6 minutes ago       Running             kindnet-cni               0                   5312fee5fcd07       kindnet-84nqp
	ea6501e2c6d48       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      6 minutes ago       Running             kube-proxy                0                   f041673054c6d       kube-proxy-gp282
	df682abbd9767       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     6 minutes ago       Running             kube-vip                  0                   9d199e4c3c06c       kube-vip-ha-344156
	cea7dd8ee7d18       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      6 minutes ago       Running             kube-scheduler            0                   ec39a320a672e       kube-scheduler-ha-344156
	15f9d79f9c968       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      6 minutes ago       Running             kube-controller-manager   0                   54990a7607809       kube-controller-manager-ha-344156
	fc27c145e7b72       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      6 minutes ago       Running             etcd                      0                   5e0320966c0af       etcd-ha-344156
	24d097bf3e16a       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      6 minutes ago       Running             kube-apiserver            0                   60907e40ccbbf       kube-apiserver-ha-344156
	
	
	==> coredns [1a4d13ace439ff6db0bd224c5959b2f1de0aca9190251438b96b230bd76dad67] <==
	[INFO] 10.244.0.4:46352 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000127712s
	[INFO] 10.244.0.4:46368 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000105717s
	[INFO] 10.244.1.2:52208 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0002451s
	[INFO] 10.244.1.2:41217 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000133343s
	[INFO] 10.244.1.2:49751 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001332799s
	[INFO] 10.244.1.2:41663 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000101756s
	[INFO] 10.244.2.2:42699 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000103084s
	[INFO] 10.244.2.2:43982 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000096471s
	[INFO] 10.244.2.2:48234 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000064109s
	[INFO] 10.244.2.2:58544 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000127531s
	[INFO] 10.244.2.2:43646 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000097904s
	[INFO] 10.244.0.4:41454 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00007042s
	[INFO] 10.244.1.2:56019 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000130286s
	[INFO] 10.244.1.2:49552 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000419229s
	[INFO] 10.244.1.2:42570 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00019871s
	[INFO] 10.244.1.2:35841 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000085394s
	[INFO] 10.244.2.2:38179 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000154252s
	[INFO] 10.244.2.2:54595 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000095931s
	[INFO] 10.244.0.4:52521 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000102943s
	[INFO] 10.244.0.4:41421 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000122912s
	[INFO] 10.244.1.2:51311 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000262883s
	[INFO] 10.244.1.2:51083 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000108384s
	[INFO] 10.244.2.2:49034 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000138814s
	[INFO] 10.244.2.2:33015 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000141033s
	[INFO] 10.244.2.2:33854 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000124542s
	
	
	==> coredns [7d0acef755a4a9cf64d3fa80a06a2fb7cd2c2f24d851c814a12dbfd69b8c8ae6] <==
	[INFO] 10.244.0.4:48527 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.009667897s
	[INFO] 10.244.1.2:39280 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.000586469s
	[INFO] 10.244.1.2:47729 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001573362s
	[INFO] 10.244.2.2:32959 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001674804s
	[INFO] 10.244.0.4:44607 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000137454s
	[INFO] 10.244.0.4:45474 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003415625s
	[INFO] 10.244.0.4:42044 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000293336s
	[INFO] 10.244.0.4:42246 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000257435s
	[INFO] 10.244.1.2:53039 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001621784s
	[INFO] 10.244.1.2:47789 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000179788s
	[INFO] 10.244.1.2:51271 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000115306s
	[INFO] 10.244.1.2:60584 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000160548s
	[INFO] 10.244.2.2:39080 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000143675s
	[INFO] 10.244.2.2:57667 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001587169s
	[INFO] 10.244.2.2:36002 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.000958528s
	[INFO] 10.244.0.4:46689 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001122s
	[INFO] 10.244.0.4:53528 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000068803s
	[INFO] 10.244.0.4:58879 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00007922s
	[INFO] 10.244.2.2:40671 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000165257s
	[INFO] 10.244.2.2:52385 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000072909s
	[INFO] 10.244.0.4:40200 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000101268s
	[INFO] 10.244.0.4:60214 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000092204s
	[INFO] 10.244.1.2:45394 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000209017s
	[INFO] 10.244.1.2:53252 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000072648s
	[INFO] 10.244.2.2:37567 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000168035s
	
	
	==> describe nodes <==
	Name:               ha-344156
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-344156
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ff26f50543a99741e8a988a34136ffea2b41dbf0
	                    minikube.k8s.io/name=ha-344156
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_29T18_34_23_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 18:34:22 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-344156
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 18:41:00 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 18:37:26 +0000   Mon, 29 Jul 2024 18:34:22 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 18:37:26 +0000   Mon, 29 Jul 2024 18:34:22 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 18:37:26 +0000   Mon, 29 Jul 2024 18:34:22 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 18:37:26 +0000   Mon, 29 Jul 2024 18:34:50 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.225
	  Hostname:    ha-344156
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 be7f4c1228de4ae58c65b2a0531270c4
	  System UUID:                be7f4c12-28de-4ae5-8c65-b2a0531270c4
	  Boot ID:                    14c798b1-a7f8-4045-a5cc-f99e886c885f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-9sbfq              0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m4s
	  kube-system                 coredns-7db6d8ff4d-5slmg             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m28s
	  kube-system                 coredns-7db6d8ff4d-h5h7v             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m28s
	  kube-system                 etcd-ha-344156                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m41s
	  kube-system                 kindnet-84nqp                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m28s
	  kube-system                 kube-apiserver-ha-344156             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m41s
	  kube-system                 kube-controller-manager-ha-344156    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m41s
	  kube-system                 kube-proxy-gp282                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m28s
	  kube-system                 kube-scheduler-ha-344156             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m41s
	  kube-system                 kube-vip-ha-344156                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m41s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m28s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 6m27s  kube-proxy       
	  Normal  Starting                 6m41s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m41s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m41s  kubelet          Node ha-344156 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m41s  kubelet          Node ha-344156 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m41s  kubelet          Node ha-344156 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m29s  node-controller  Node ha-344156 event: Registered Node ha-344156 in Controller
	  Normal  NodeReady                6m14s  kubelet          Node ha-344156 status is now: NodeReady
	  Normal  RegisteredNode           5m23s  node-controller  Node ha-344156 event: Registered Node ha-344156 in Controller
	  Normal  RegisteredNode           4m10s  node-controller  Node ha-344156 event: Registered Node ha-344156 in Controller
	
	
	Name:               ha-344156-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-344156-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ff26f50543a99741e8a988a34136ffea2b41dbf0
	                    minikube.k8s.io/name=ha-344156
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_29T18_35_26_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 18:35:23 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-344156-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 18:38:37 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 29 Jul 2024 18:37:25 +0000   Mon, 29 Jul 2024 18:39:20 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 29 Jul 2024 18:37:25 +0000   Mon, 29 Jul 2024 18:39:20 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 29 Jul 2024 18:37:25 +0000   Mon, 29 Jul 2024 18:39:20 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 29 Jul 2024 18:37:25 +0000   Mon, 29 Jul 2024 18:39:20 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.249
	  Hostname:    ha-344156-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 ae271825042248168626e86031e0e80b
	  System UUID:                ae271825-0422-4816-8626-e86031e0e80b
	  Boot ID:                    a5673abc-82e9-4e7a-95fa-3067a351f12f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-np547                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m4s
	  kube-system                 etcd-ha-344156-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m39s
	  kube-system                 kindnet-b85cc                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m41s
	  kube-system                 kube-apiserver-ha-344156-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m40s
	  kube-system                 kube-controller-manager-ha-344156-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m39s
	  kube-system                 kube-proxy-4p5r9                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m41s
	  kube-system                 kube-scheduler-ha-344156-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m40s
	  kube-system                 kube-vip-ha-344156-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m37s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m35s                  kube-proxy       
	  Normal  NodeAllocatableEnforced  5m42s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m41s (x8 over 5m42s)  kubelet          Node ha-344156-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m41s (x8 over 5m42s)  kubelet          Node ha-344156-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m41s (x7 over 5m42s)  kubelet          Node ha-344156-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           5m39s                  node-controller  Node ha-344156-m02 event: Registered Node ha-344156-m02 in Controller
	  Normal  RegisteredNode           5m23s                  node-controller  Node ha-344156-m02 event: Registered Node ha-344156-m02 in Controller
	  Normal  RegisteredNode           4m10s                  node-controller  Node ha-344156-m02 event: Registered Node ha-344156-m02 in Controller
	  Normal  NodeNotReady             104s                   node-controller  Node ha-344156-m02 status is now: NodeNotReady
	
	
	Name:               ha-344156-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-344156-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ff26f50543a99741e8a988a34136ffea2b41dbf0
	                    minikube.k8s.io/name=ha-344156
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_29T18_36_40_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 18:36:34 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-344156-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 18:40:59 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 18:37:05 +0000   Mon, 29 Jul 2024 18:36:34 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 18:37:05 +0000   Mon, 29 Jul 2024 18:36:34 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 18:37:05 +0000   Mon, 29 Jul 2024 18:36:34 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 18:37:05 +0000   Mon, 29 Jul 2024 18:36:53 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.148
	  Hostname:    ha-344156-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 009a6c7b1b2049db970288d43db02f16
	  System UUID:                009a6c7b-1b20-49db-9702-88d43db02f16
	  Boot ID:                    78078a70-f452-4e76-8a2f-cc9a62ee6c44
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-q7sxh                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m4s
	  kube-system                 etcd-ha-344156-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m28s
	  kube-system                 kindnet-ks57n                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m30s
	  kube-system                 kube-apiserver-ha-344156-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m29s
	  kube-system                 kube-controller-manager-ha-344156-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m29s
	  kube-system                 kube-proxy-w68jl                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m30s
	  kube-system                 kube-scheduler-ha-344156-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m29s
	  kube-system                 kube-vip-ha-344156-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m25s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m22s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  4m30s (x8 over 4m30s)  kubelet          Node ha-344156-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m30s (x8 over 4m30s)  kubelet          Node ha-344156-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m30s (x7 over 4m30s)  kubelet          Node ha-344156-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m30s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m29s                  node-controller  Node ha-344156-m03 event: Registered Node ha-344156-m03 in Controller
	  Normal  RegisteredNode           4m28s                  node-controller  Node ha-344156-m03 event: Registered Node ha-344156-m03 in Controller
	  Normal  RegisteredNode           4m10s                  node-controller  Node ha-344156-m03 event: Registered Node ha-344156-m03 in Controller
	
	
	Name:               ha-344156-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-344156-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ff26f50543a99741e8a988a34136ffea2b41dbf0
	                    minikube.k8s.io/name=ha-344156
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_29T18_37_36_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 18:37:35 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-344156-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 18:41:00 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 18:38:23 +0000   Mon, 29 Jul 2024 18:37:35 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 18:38:23 +0000   Mon, 29 Jul 2024 18:37:35 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 18:38:23 +0000   Mon, 29 Jul 2024 18:37:35 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 18:38:23 +0000   Mon, 29 Jul 2024 18:38:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.9
	  Hostname:    ha-344156-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 cd3c9a6740fc4ec3a7f2c8b9b2357693
	  System UUID:                cd3c9a67-40fc-4ec3-a7f2-c8b9b2357693
	  Boot ID:                    feaae67d-1b81-44aa-891a-7ad9026e22d0
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-c84jp       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m27s
	  kube-system                 kube-proxy-qjzd6    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m22s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m19s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  3m29s (x2 over 3m29s)  kubelet          Node ha-344156-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m29s (x2 over 3m29s)  kubelet          Node ha-344156-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m29s (x2 over 3m29s)  kubelet          Node ha-344156-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m29s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m28s                  node-controller  Node ha-344156-m04 event: Registered Node ha-344156-m04 in Controller
	  Normal  RegisteredNode           3m25s                  node-controller  Node ha-344156-m04 event: Registered Node ha-344156-m04 in Controller
	  Normal  RegisteredNode           3m24s                  node-controller  Node ha-344156-m04 event: Registered Node ha-344156-m04 in Controller
	  Normal  NodeReady                2m41s                  kubelet          Node ha-344156-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Jul29 18:33] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050664] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040228] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.762758] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.350772] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.587329] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Jul29 18:34] systemd-fstab-generator[596]: Ignoring "noauto" option for root device
	[  +0.055622] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.058895] systemd-fstab-generator[608]: Ignoring "noauto" option for root device
	[  +0.187111] systemd-fstab-generator[622]: Ignoring "noauto" option for root device
	[  +0.118732] systemd-fstab-generator[634]: Ignoring "noauto" option for root device
	[  +0.257910] systemd-fstab-generator[664]: Ignoring "noauto" option for root device
	[  +4.135704] systemd-fstab-generator[764]: Ignoring "noauto" option for root device
	[  +4.319915] systemd-fstab-generator[945]: Ignoring "noauto" option for root device
	[  +0.063539] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.051986] systemd-fstab-generator[1361]: Ignoring "noauto" option for root device
	[  +0.074788] kauditd_printk_skb: 79 callbacks suppressed
	[  +6.534370] kauditd_printk_skb: 18 callbacks suppressed
	[ +21.052219] kauditd_printk_skb: 38 callbacks suppressed
	[Jul29 18:35] kauditd_printk_skb: 24 callbacks suppressed
	
	
	==> etcd [fc27c145e7b72db405baaf295995d274d557ba7dbce383424c6297461d859b29] <==
	{"level":"warn","ts":"2024-07-29T18:41:03.642017Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"9d70d498f3feaf66","error":"Get \"https://192.168.39.249:2380/version\": dial tcp 192.168.39.249:2380: connect: no route to host"}
	{"level":"warn","ts":"2024-07-29T18:41:03.986074Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fb0a52f06b768c2d","from":"fb0a52f06b768c2d","remote-peer-id":"9d70d498f3feaf66","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T18:41:03.993228Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fb0a52f06b768c2d","from":"fb0a52f06b768c2d","remote-peer-id":"9d70d498f3feaf66","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T18:41:03.998614Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fb0a52f06b768c2d","from":"fb0a52f06b768c2d","remote-peer-id":"9d70d498f3feaf66","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T18:41:04.016711Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fb0a52f06b768c2d","from":"fb0a52f06b768c2d","remote-peer-id":"9d70d498f3feaf66","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T18:41:04.026759Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fb0a52f06b768c2d","from":"fb0a52f06b768c2d","remote-peer-id":"9d70d498f3feaf66","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T18:41:04.034038Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fb0a52f06b768c2d","from":"fb0a52f06b768c2d","remote-peer-id":"9d70d498f3feaf66","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T18:41:04.03713Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fb0a52f06b768c2d","from":"fb0a52f06b768c2d","remote-peer-id":"9d70d498f3feaf66","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T18:41:04.037367Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fb0a52f06b768c2d","from":"fb0a52f06b768c2d","remote-peer-id":"9d70d498f3feaf66","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T18:41:04.040156Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fb0a52f06b768c2d","from":"fb0a52f06b768c2d","remote-peer-id":"9d70d498f3feaf66","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T18:41:04.046878Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fb0a52f06b768c2d","from":"fb0a52f06b768c2d","remote-peer-id":"9d70d498f3feaf66","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T18:41:04.05261Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fb0a52f06b768c2d","from":"fb0a52f06b768c2d","remote-peer-id":"9d70d498f3feaf66","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T18:41:04.053768Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fb0a52f06b768c2d","from":"fb0a52f06b768c2d","remote-peer-id":"9d70d498f3feaf66","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T18:41:04.063646Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fb0a52f06b768c2d","from":"fb0a52f06b768c2d","remote-peer-id":"9d70d498f3feaf66","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T18:41:04.067866Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fb0a52f06b768c2d","from":"fb0a52f06b768c2d","remote-peer-id":"9d70d498f3feaf66","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T18:41:04.071194Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fb0a52f06b768c2d","from":"fb0a52f06b768c2d","remote-peer-id":"9d70d498f3feaf66","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T18:41:04.079106Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fb0a52f06b768c2d","from":"fb0a52f06b768c2d","remote-peer-id":"9d70d498f3feaf66","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T18:41:04.08618Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fb0a52f06b768c2d","from":"fb0a52f06b768c2d","remote-peer-id":"9d70d498f3feaf66","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T18:41:04.092094Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fb0a52f06b768c2d","from":"fb0a52f06b768c2d","remote-peer-id":"9d70d498f3feaf66","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T18:41:04.095219Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fb0a52f06b768c2d","from":"fb0a52f06b768c2d","remote-peer-id":"9d70d498f3feaf66","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T18:41:04.099184Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fb0a52f06b768c2d","from":"fb0a52f06b768c2d","remote-peer-id":"9d70d498f3feaf66","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T18:41:04.107049Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fb0a52f06b768c2d","from":"fb0a52f06b768c2d","remote-peer-id":"9d70d498f3feaf66","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T18:41:04.114036Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fb0a52f06b768c2d","from":"fb0a52f06b768c2d","remote-peer-id":"9d70d498f3feaf66","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T18:41:04.121044Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fb0a52f06b768c2d","from":"fb0a52f06b768c2d","remote-peer-id":"9d70d498f3feaf66","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T18:41:04.137268Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fb0a52f06b768c2d","from":"fb0a52f06b768c2d","remote-peer-id":"9d70d498f3feaf66","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 18:41:04 up 7 min,  0 users,  load average: 0.52, 0.39, 0.19
	Linux ha-344156 5.10.207 #1 SMP Tue Jul 23 04:25:44 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [88c61cb99966582064c98436dabbb6247148296145067505f732961e9dafcf62] <==
	I0729 18:40:29.799628       1 main.go:322] Node ha-344156-m04 has CIDR [10.244.3.0/24] 
	I0729 18:40:39.798882       1 main.go:295] Handling node with IPs: map[192.168.39.225:{}]
	I0729 18:40:39.799034       1 main.go:299] handling current node
	I0729 18:40:39.799111       1 main.go:295] Handling node with IPs: map[192.168.39.249:{}]
	I0729 18:40:39.799136       1 main.go:322] Node ha-344156-m02 has CIDR [10.244.1.0/24] 
	I0729 18:40:39.799362       1 main.go:295] Handling node with IPs: map[192.168.39.148:{}]
	I0729 18:40:39.799395       1 main.go:322] Node ha-344156-m03 has CIDR [10.244.2.0/24] 
	I0729 18:40:39.799464       1 main.go:295] Handling node with IPs: map[192.168.39.9:{}]
	I0729 18:40:39.799513       1 main.go:322] Node ha-344156-m04 has CIDR [10.244.3.0/24] 
	I0729 18:40:49.807765       1 main.go:295] Handling node with IPs: map[192.168.39.148:{}]
	I0729 18:40:49.807801       1 main.go:322] Node ha-344156-m03 has CIDR [10.244.2.0/24] 
	I0729 18:40:49.807970       1 main.go:295] Handling node with IPs: map[192.168.39.9:{}]
	I0729 18:40:49.808003       1 main.go:322] Node ha-344156-m04 has CIDR [10.244.3.0/24] 
	I0729 18:40:49.808095       1 main.go:295] Handling node with IPs: map[192.168.39.225:{}]
	I0729 18:40:49.808105       1 main.go:299] handling current node
	I0729 18:40:49.808120       1 main.go:295] Handling node with IPs: map[192.168.39.249:{}]
	I0729 18:40:49.808125       1 main.go:322] Node ha-344156-m02 has CIDR [10.244.1.0/24] 
	I0729 18:40:59.805430       1 main.go:295] Handling node with IPs: map[192.168.39.225:{}]
	I0729 18:40:59.805551       1 main.go:299] handling current node
	I0729 18:40:59.805589       1 main.go:295] Handling node with IPs: map[192.168.39.249:{}]
	I0729 18:40:59.805608       1 main.go:322] Node ha-344156-m02 has CIDR [10.244.1.0/24] 
	I0729 18:40:59.805769       1 main.go:295] Handling node with IPs: map[192.168.39.148:{}]
	I0729 18:40:59.805807       1 main.go:322] Node ha-344156-m03 has CIDR [10.244.2.0/24] 
	I0729 18:40:59.805876       1 main.go:295] Handling node with IPs: map[192.168.39.9:{}]
	I0729 18:40:59.805906       1 main.go:322] Node ha-344156-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [24d097bf3e16a2c4b74c82ba78ce7e6eb19b3461d66b573a3d5ba23c5df6a472] <==
	I0729 18:34:22.992198       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0729 18:34:23.013687       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0729 18:34:23.203550       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0729 18:34:36.052177       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0729 18:34:36.129994       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E0729 18:35:23.876523       1 writers.go:122] apiserver was unable to write a JSON response: http: Handler timeout
	E0729 18:35:23.876591       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	E0729 18:35:23.876632       1 finisher.go:175] FinishRequest: post-timeout activity - time-elapsed: 8.407µs, panicked: false, err: context canceled, panic-reason: <nil>
	E0729 18:35:23.878428       1 writers.go:135] apiserver was unable to write a fallback JSON response: http: Handler timeout
	E0729 18:35:23.878578       1 timeout.go:142] post-timeout activity - time-elapsed: 2.219047ms, POST "/api/v1/namespaces/kube-system/events" result: <nil>
	E0729 18:37:03.871470       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:37342: use of closed network connection
	E0729 18:37:04.057833       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:37356: use of closed network connection
	E0729 18:37:04.246466       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:37382: use of closed network connection
	E0729 18:37:04.432648       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:37402: use of closed network connection
	E0729 18:37:04.620133       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:37426: use of closed network connection
	E0729 18:37:04.808658       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:37448: use of closed network connection
	E0729 18:37:04.980084       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:37468: use of closed network connection
	E0729 18:37:05.158588       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:37488: use of closed network connection
	E0729 18:37:05.354557       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:37510: use of closed network connection
	E0729 18:37:05.641690       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:37528: use of closed network connection
	E0729 18:37:05.820252       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:37552: use of closed network connection
	E0729 18:37:06.009065       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:37580: use of closed network connection
	E0729 18:37:06.380684       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:37632: use of closed network connection
	E0729 18:37:06.570438       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:37652: use of closed network connection
	W0729 18:39:01.543884       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.148 192.168.39.225]
	
	
	==> kube-controller-manager [15f9d79f9c9682c7273de711cee53f9f833182ceb7abdd39bb612f44066ac6f4] <==
	I0729 18:36:35.300078       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-344156-m03"
	I0729 18:37:01.026156       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="109.740018ms"
	I0729 18:37:01.059485       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="33.260292ms"
	I0729 18:37:01.256428       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="196.727077ms"
	I0729 18:37:01.342248       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="83.174449ms"
	I0729 18:37:01.370811       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="28.436309ms"
	I0729 18:37:01.370936       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="82.485µs"
	I0729 18:37:01.456236       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="28.75739ms"
	I0729 18:37:01.456678       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="111.365µs"
	I0729 18:37:01.513939       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="16.934945ms"
	I0729 18:37:01.515211       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="186.88µs"
	I0729 18:37:02.720924       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="13.996987ms"
	I0729 18:37:02.721436       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="70.992µs"
	I0729 18:37:02.779607       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="12.03376ms"
	I0729 18:37:02.779677       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="43.309µs"
	I0729 18:37:03.388871       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="10.372464ms"
	I0729 18:37:03.389954       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="98.586µs"
	I0729 18:37:36.003494       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-344156-m04\" does not exist"
	E0729 18:37:36.016579       1 certificate_controller.go:146] Sync csr-82bwg failed with : error updating signature for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io "csr-82bwg": the object has been modified; please apply your changes to the latest version and try again
	I0729 18:37:36.050956       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-344156-m04" podCIDRs=["10.244.3.0/24"]
	I0729 18:37:40.311641       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-344156-m04"
	I0729 18:38:23.223359       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-344156-m04"
	I0729 18:39:20.223021       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-344156-m04"
	I0729 18:39:20.274448       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="13.549805ms"
	I0729 18:39:20.274546       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="48.441µs"
	
	
	==> kube-proxy [ea6501e2c6d48c68182f6d966404f0d58013e7ee6b2d05e6e8a8de079a01e50b] <==
	I0729 18:34:36.768861       1 server_linux.go:69] "Using iptables proxy"
	I0729 18:34:36.822886       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.225"]
	I0729 18:34:36.879013       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0729 18:34:36.879048       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0729 18:34:36.879064       1 server_linux.go:165] "Using iptables Proxier"
	I0729 18:34:36.882396       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0729 18:34:36.883763       1 server.go:872] "Version info" version="v1.30.3"
	I0729 18:34:36.883815       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 18:34:36.886898       1 config.go:192] "Starting service config controller"
	I0729 18:34:36.889509       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0729 18:34:36.889619       1 config.go:101] "Starting endpoint slice config controller"
	I0729 18:34:36.889655       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0729 18:34:36.891706       1 config.go:319] "Starting node config controller"
	I0729 18:34:36.891740       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0729 18:34:36.989964       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0729 18:34:36.990034       1 shared_informer.go:320] Caches are synced for service config
	I0729 18:34:36.991946       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [cea7dd8ee7d180192a5a6562a72a56f86a9a432553225602839d9657f42f95a4] <==
	I0729 18:37:01.014413       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-9sbfq" node="ha-344156"
	E0729 18:37:01.014649       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-np547\": pod busybox-fc5497c4f-np547 is already assigned to node \"ha-344156-m02\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-np547" node="ha-344156-m02"
	E0729 18:37:01.014689       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 362a4dc2-ca83-4e79-a3a8-58d174f4c6c9(default/busybox-fc5497c4f-np547) wasn't assumed so cannot be forgotten" pod="default/busybox-fc5497c4f-np547"
	E0729 18:37:01.014706       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-np547\": pod busybox-fc5497c4f-np547 is already assigned to node \"ha-344156-m02\"" pod="default/busybox-fc5497c4f-np547"
	I0729 18:37:01.014885       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-np547" node="ha-344156-m02"
	E0729 18:37:36.090019       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-wd88n\": pod kube-proxy-wd88n is already assigned to node \"ha-344156-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-wd88n" node="ha-344156-m04"
	E0729 18:37:36.090421       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-qb94z\": pod kindnet-qb94z is already assigned to node \"ha-344156-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-qb94z" node="ha-344156-m04"
	E0729 18:37:36.092638       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod b96b5fac-f230-4c67-a7a5-bdf3591ca949(kube-system/kube-proxy-wd88n) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-wd88n"
	E0729 18:37:36.092741       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod bd8366ab-746c-4ca4-b11c-bf9081fbcf7c(kube-system/kindnet-qb94z) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-qb94z"
	E0729 18:37:36.092958       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-qb94z\": pod kindnet-qb94z is already assigned to node \"ha-344156-m04\"" pod="kube-system/kindnet-qb94z"
	I0729 18:37:36.093026       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-qb94z" node="ha-344156-m04"
	E0729 18:37:36.092848       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-wd88n\": pod kube-proxy-wd88n is already assigned to node \"ha-344156-m04\"" pod="kube-system/kube-proxy-wd88n"
	I0729 18:37:36.097444       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-wd88n" node="ha-344156-m04"
	E0729 18:37:36.231093       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-4q27q\": pod kindnet-4q27q is already assigned to node \"ha-344156-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-4q27q" node="ha-344156-m04"
	E0729 18:37:36.231689       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 5aa608fd-1380-4d1f-94ca-56974da8d2c9(kube-system/kindnet-4q27q) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-4q27q"
	E0729 18:37:36.231884       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-4q27q\": pod kindnet-4q27q is already assigned to node \"ha-344156-m04\"" pod="kube-system/kindnet-4q27q"
	I0729 18:37:36.232096       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-4q27q" node="ha-344156-m04"
	E0729 18:37:36.231370       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-rdbxm\": pod kube-proxy-rdbxm is already assigned to node \"ha-344156-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-rdbxm" node="ha-344156-m04"
	E0729 18:37:36.232528       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod bb08b275-293b-47c4-91ac-2281cd4eee08(kube-system/kube-proxy-rdbxm) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-rdbxm"
	E0729 18:37:36.232576       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-rdbxm\": pod kube-proxy-rdbxm is already assigned to node \"ha-344156-m04\"" pod="kube-system/kube-proxy-rdbxm"
	I0729 18:37:36.232602       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-rdbxm" node="ha-344156-m04"
	E0729 18:37:38.016658       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-mg7rg\": pod kindnet-mg7rg is already assigned to node \"ha-344156-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-mg7rg" node="ha-344156-m04"
	E0729 18:37:38.018543       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod f6aa64ee-b737-4975-9d11-00d78dbc3fe6(kube-system/kindnet-mg7rg) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-mg7rg"
	E0729 18:37:38.020350       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-mg7rg\": pod kindnet-mg7rg is already assigned to node \"ha-344156-m04\"" pod="kube-system/kindnet-mg7rg"
	I0729 18:37:38.020440       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-mg7rg" node="ha-344156-m04"
	
	
	==> kubelet <==
	Jul 29 18:36:23 ha-344156 kubelet[1368]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 18:37:01 ha-344156 kubelet[1368]: I0729 18:37:01.004929    1368 topology_manager.go:215] "Topology Admit Handler" podUID="f11563c5-3507-44f0-a103-1e8462494e13" podNamespace="default" podName="busybox-fc5497c4f-9sbfq"
	Jul 29 18:37:01 ha-344156 kubelet[1368]: I0729 18:37:01.051178    1368 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9xn9\" (UniqueName: \"kubernetes.io/projected/f11563c5-3507-44f0-a103-1e8462494e13-kube-api-access-n9xn9\") pod \"busybox-fc5497c4f-9sbfq\" (UID: \"f11563c5-3507-44f0-a103-1e8462494e13\") " pod="default/busybox-fc5497c4f-9sbfq"
	Jul 29 18:37:02 ha-344156 kubelet[1368]: I0729 18:37:02.766104    1368 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox-fc5497c4f-9sbfq" podStartSLOduration=1.863563977 podStartE2EDuration="2.766056636s" podCreationTimestamp="2024-07-29 18:37:00 +0000 UTC" firstStartedPulling="2024-07-29 18:37:01.56402347 +0000 UTC m=+158.591994413" lastFinishedPulling="2024-07-29 18:37:02.466516133 +0000 UTC m=+159.494487072" observedRunningTime="2024-07-29 18:37:02.765170362 +0000 UTC m=+159.793141318" watchObservedRunningTime="2024-07-29 18:37:02.766056636 +0000 UTC m=+159.794027595"
	Jul 29 18:37:05 ha-344156 kubelet[1368]: E0729 18:37:05.642148    1368 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:50180->127.0.0.1:42963: write tcp 127.0.0.1:50180->127.0.0.1:42963: write: broken pipe
	Jul 29 18:37:23 ha-344156 kubelet[1368]: E0729 18:37:23.118090    1368 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 18:37:23 ha-344156 kubelet[1368]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 18:37:23 ha-344156 kubelet[1368]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 18:37:23 ha-344156 kubelet[1368]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 18:37:23 ha-344156 kubelet[1368]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 18:38:23 ha-344156 kubelet[1368]: E0729 18:38:23.120861    1368 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 18:38:23 ha-344156 kubelet[1368]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 18:38:23 ha-344156 kubelet[1368]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 18:38:23 ha-344156 kubelet[1368]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 18:38:23 ha-344156 kubelet[1368]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 18:39:23 ha-344156 kubelet[1368]: E0729 18:39:23.117779    1368 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 18:39:23 ha-344156 kubelet[1368]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 18:39:23 ha-344156 kubelet[1368]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 18:39:23 ha-344156 kubelet[1368]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 18:39:23 ha-344156 kubelet[1368]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 18:40:23 ha-344156 kubelet[1368]: E0729 18:40:23.119914    1368 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 18:40:23 ha-344156 kubelet[1368]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 18:40:23 ha-344156 kubelet[1368]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 18:40:23 ha-344156 kubelet[1368]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 18:40:23 ha-344156 kubelet[1368]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-344156 -n ha-344156
helpers_test.go:261: (dbg) Run:  kubectl --context ha-344156 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (141.79s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartSecondaryNode (51.78s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p ha-344156 node start m02 -v=7 --alsologtostderr
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-344156 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-344156 status -v=7 --alsologtostderr: exit status 3 (3.195497678s)

                                                
                                                
-- stdout --
	ha-344156
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-344156-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-344156-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-344156-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 18:41:08.676137 1078071 out.go:291] Setting OutFile to fd 1 ...
	I0729 18:41:08.676256 1078071 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 18:41:08.676264 1078071 out.go:304] Setting ErrFile to fd 2...
	I0729 18:41:08.676268 1078071 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 18:41:08.676468 1078071 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19312-1055011/.minikube/bin
	I0729 18:41:08.676621 1078071 out.go:298] Setting JSON to false
	I0729 18:41:08.676649 1078071 mustload.go:65] Loading cluster: ha-344156
	I0729 18:41:08.676764 1078071 notify.go:220] Checking for updates...
	I0729 18:41:08.676995 1078071 config.go:182] Loaded profile config "ha-344156": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 18:41:08.677010 1078071 status.go:255] checking status of ha-344156 ...
	I0729 18:41:08.677396 1078071 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:41:08.677445 1078071 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:41:08.693972 1078071 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40475
	I0729 18:41:08.694446 1078071 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:41:08.695036 1078071 main.go:141] libmachine: Using API Version  1
	I0729 18:41:08.695062 1078071 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:41:08.695377 1078071 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:41:08.695582 1078071 main.go:141] libmachine: (ha-344156) Calling .GetState
	I0729 18:41:08.697113 1078071 status.go:330] ha-344156 host status = "Running" (err=<nil>)
	I0729 18:41:08.697131 1078071 host.go:66] Checking if "ha-344156" exists ...
	I0729 18:41:08.697427 1078071 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:41:08.697472 1078071 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:41:08.712348 1078071 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40257
	I0729 18:41:08.712857 1078071 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:41:08.713297 1078071 main.go:141] libmachine: Using API Version  1
	I0729 18:41:08.713315 1078071 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:41:08.713666 1078071 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:41:08.713814 1078071 main.go:141] libmachine: (ha-344156) Calling .GetIP
	I0729 18:41:08.716575 1078071 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
	I0729 18:41:08.716986 1078071 main.go:141] libmachine: (ha-344156) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:fc:98", ip: ""} in network mk-ha-344156: {Iface:virbr1 ExpiryTime:2024-07-29 19:33:59 +0000 UTC Type:0 Mac:52:54:00:a1:fc:98 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-344156 Clientid:01:52:54:00:a1:fc:98}
	I0729 18:41:08.717021 1078071 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined IP address 192.168.39.225 and MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
	I0729 18:41:08.717174 1078071 host.go:66] Checking if "ha-344156" exists ...
	I0729 18:41:08.717472 1078071 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:41:08.717518 1078071 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:41:08.732548 1078071 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40447
	I0729 18:41:08.732937 1078071 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:41:08.733394 1078071 main.go:141] libmachine: Using API Version  1
	I0729 18:41:08.733411 1078071 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:41:08.733770 1078071 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:41:08.733977 1078071 main.go:141] libmachine: (ha-344156) Calling .DriverName
	I0729 18:41:08.734164 1078071 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 18:41:08.734196 1078071 main.go:141] libmachine: (ha-344156) Calling .GetSSHHostname
	I0729 18:41:08.736911 1078071 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
	I0729 18:41:08.737377 1078071 main.go:141] libmachine: (ha-344156) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:fc:98", ip: ""} in network mk-ha-344156: {Iface:virbr1 ExpiryTime:2024-07-29 19:33:59 +0000 UTC Type:0 Mac:52:54:00:a1:fc:98 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-344156 Clientid:01:52:54:00:a1:fc:98}
	I0729 18:41:08.737413 1078071 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined IP address 192.168.39.225 and MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
	I0729 18:41:08.737656 1078071 main.go:141] libmachine: (ha-344156) Calling .GetSSHPort
	I0729 18:41:08.737804 1078071 main.go:141] libmachine: (ha-344156) Calling .GetSSHKeyPath
	I0729 18:41:08.737961 1078071 main.go:141] libmachine: (ha-344156) Calling .GetSSHUsername
	I0729 18:41:08.738110 1078071 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/ha-344156/id_rsa Username:docker}
	I0729 18:41:08.822600 1078071 ssh_runner.go:195] Run: systemctl --version
	I0729 18:41:08.829555 1078071 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 18:41:08.844877 1078071 kubeconfig.go:125] found "ha-344156" server: "https://192.168.39.254:8443"
	I0729 18:41:08.844919 1078071 api_server.go:166] Checking apiserver status ...
	I0729 18:41:08.844961 1078071 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:41:08.858516 1078071 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1151/cgroup
	W0729 18:41:08.868281 1078071 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1151/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0729 18:41:08.868336 1078071 ssh_runner.go:195] Run: ls
	I0729 18:41:08.872566 1078071 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0729 18:41:08.876804 1078071 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0729 18:41:08.876825 1078071 status.go:422] ha-344156 apiserver status = Running (err=<nil>)
	I0729 18:41:08.876835 1078071 status.go:257] ha-344156 status: &{Name:ha-344156 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 18:41:08.876860 1078071 status.go:255] checking status of ha-344156-m02 ...
	I0729 18:41:08.877140 1078071 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:41:08.877172 1078071 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:41:08.892859 1078071 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36967
	I0729 18:41:08.893308 1078071 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:41:08.893777 1078071 main.go:141] libmachine: Using API Version  1
	I0729 18:41:08.893800 1078071 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:41:08.894113 1078071 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:41:08.894320 1078071 main.go:141] libmachine: (ha-344156-m02) Calling .GetState
	I0729 18:41:08.895946 1078071 status.go:330] ha-344156-m02 host status = "Running" (err=<nil>)
	I0729 18:41:08.895965 1078071 host.go:66] Checking if "ha-344156-m02" exists ...
	I0729 18:41:08.896251 1078071 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:41:08.896286 1078071 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:41:08.911835 1078071 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45365
	I0729 18:41:08.912292 1078071 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:41:08.912764 1078071 main.go:141] libmachine: Using API Version  1
	I0729 18:41:08.912786 1078071 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:41:08.913112 1078071 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:41:08.913306 1078071 main.go:141] libmachine: (ha-344156-m02) Calling .GetIP
	I0729 18:41:08.916164 1078071 main.go:141] libmachine: (ha-344156-m02) DBG | domain ha-344156-m02 has defined MAC address 52:54:00:99:a3:97 in network mk-ha-344156
	I0729 18:41:08.916590 1078071 main.go:141] libmachine: (ha-344156-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:a3:97", ip: ""} in network mk-ha-344156: {Iface:virbr1 ExpiryTime:2024-07-29 19:34:51 +0000 UTC Type:0 Mac:52:54:00:99:a3:97 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-344156-m02 Clientid:01:52:54:00:99:a3:97}
	I0729 18:41:08.916608 1078071 main.go:141] libmachine: (ha-344156-m02) DBG | domain ha-344156-m02 has defined IP address 192.168.39.249 and MAC address 52:54:00:99:a3:97 in network mk-ha-344156
	I0729 18:41:08.916740 1078071 host.go:66] Checking if "ha-344156-m02" exists ...
	I0729 18:41:08.917163 1078071 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:41:08.917216 1078071 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:41:08.931903 1078071 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44979
	I0729 18:41:08.932364 1078071 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:41:08.932833 1078071 main.go:141] libmachine: Using API Version  1
	I0729 18:41:08.932851 1078071 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:41:08.933175 1078071 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:41:08.933350 1078071 main.go:141] libmachine: (ha-344156-m02) Calling .DriverName
	I0729 18:41:08.933529 1078071 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 18:41:08.933547 1078071 main.go:141] libmachine: (ha-344156-m02) Calling .GetSSHHostname
	I0729 18:41:08.936128 1078071 main.go:141] libmachine: (ha-344156-m02) DBG | domain ha-344156-m02 has defined MAC address 52:54:00:99:a3:97 in network mk-ha-344156
	I0729 18:41:08.936503 1078071 main.go:141] libmachine: (ha-344156-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:a3:97", ip: ""} in network mk-ha-344156: {Iface:virbr1 ExpiryTime:2024-07-29 19:34:51 +0000 UTC Type:0 Mac:52:54:00:99:a3:97 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-344156-m02 Clientid:01:52:54:00:99:a3:97}
	I0729 18:41:08.936530 1078071 main.go:141] libmachine: (ha-344156-m02) DBG | domain ha-344156-m02 has defined IP address 192.168.39.249 and MAC address 52:54:00:99:a3:97 in network mk-ha-344156
	I0729 18:41:08.936665 1078071 main.go:141] libmachine: (ha-344156-m02) Calling .GetSSHPort
	I0729 18:41:08.936820 1078071 main.go:141] libmachine: (ha-344156-m02) Calling .GetSSHKeyPath
	I0729 18:41:08.936923 1078071 main.go:141] libmachine: (ha-344156-m02) Calling .GetSSHUsername
	I0729 18:41:08.937064 1078071 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/ha-344156-m02/id_rsa Username:docker}
	W0729 18:41:11.479182 1078071 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.249:22: connect: no route to host
	W0729 18:41:11.479294 1078071 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.249:22: connect: no route to host
	E0729 18:41:11.479319 1078071 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.249:22: connect: no route to host
	I0729 18:41:11.479331 1078071 status.go:257] ha-344156-m02 status: &{Name:ha-344156-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0729 18:41:11.479369 1078071 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.249:22: connect: no route to host
	I0729 18:41:11.479384 1078071 status.go:255] checking status of ha-344156-m03 ...
	I0729 18:41:11.479757 1078071 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:41:11.479867 1078071 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:41:11.495480 1078071 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44075
	I0729 18:41:11.495967 1078071 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:41:11.496498 1078071 main.go:141] libmachine: Using API Version  1
	I0729 18:41:11.496521 1078071 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:41:11.496830 1078071 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:41:11.497057 1078071 main.go:141] libmachine: (ha-344156-m03) Calling .GetState
	I0729 18:41:11.498537 1078071 status.go:330] ha-344156-m03 host status = "Running" (err=<nil>)
	I0729 18:41:11.498556 1078071 host.go:66] Checking if "ha-344156-m03" exists ...
	I0729 18:41:11.498900 1078071 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:41:11.498936 1078071 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:41:11.514455 1078071 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34009
	I0729 18:41:11.514921 1078071 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:41:11.515400 1078071 main.go:141] libmachine: Using API Version  1
	I0729 18:41:11.515417 1078071 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:41:11.515738 1078071 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:41:11.515911 1078071 main.go:141] libmachine: (ha-344156-m03) Calling .GetIP
	I0729 18:41:11.518386 1078071 main.go:141] libmachine: (ha-344156-m03) DBG | domain ha-344156-m03 has defined MAC address 52:54:00:49:5c:73 in network mk-ha-344156
	I0729 18:41:11.518771 1078071 main.go:141] libmachine: (ha-344156-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:5c:73", ip: ""} in network mk-ha-344156: {Iface:virbr1 ExpiryTime:2024-07-29 19:36:00 +0000 UTC Type:0 Mac:52:54:00:49:5c:73 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:ha-344156-m03 Clientid:01:52:54:00:49:5c:73}
	I0729 18:41:11.518810 1078071 main.go:141] libmachine: (ha-344156-m03) DBG | domain ha-344156-m03 has defined IP address 192.168.39.148 and MAC address 52:54:00:49:5c:73 in network mk-ha-344156
	I0729 18:41:11.518906 1078071 host.go:66] Checking if "ha-344156-m03" exists ...
	I0729 18:41:11.519195 1078071 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:41:11.519235 1078071 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:41:11.534214 1078071 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37927
	I0729 18:41:11.534664 1078071 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:41:11.535167 1078071 main.go:141] libmachine: Using API Version  1
	I0729 18:41:11.535192 1078071 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:41:11.535498 1078071 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:41:11.535696 1078071 main.go:141] libmachine: (ha-344156-m03) Calling .DriverName
	I0729 18:41:11.535915 1078071 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 18:41:11.535945 1078071 main.go:141] libmachine: (ha-344156-m03) Calling .GetSSHHostname
	I0729 18:41:11.538376 1078071 main.go:141] libmachine: (ha-344156-m03) DBG | domain ha-344156-m03 has defined MAC address 52:54:00:49:5c:73 in network mk-ha-344156
	I0729 18:41:11.538786 1078071 main.go:141] libmachine: (ha-344156-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:5c:73", ip: ""} in network mk-ha-344156: {Iface:virbr1 ExpiryTime:2024-07-29 19:36:00 +0000 UTC Type:0 Mac:52:54:00:49:5c:73 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:ha-344156-m03 Clientid:01:52:54:00:49:5c:73}
	I0729 18:41:11.538824 1078071 main.go:141] libmachine: (ha-344156-m03) DBG | domain ha-344156-m03 has defined IP address 192.168.39.148 and MAC address 52:54:00:49:5c:73 in network mk-ha-344156
	I0729 18:41:11.539028 1078071 main.go:141] libmachine: (ha-344156-m03) Calling .GetSSHPort
	I0729 18:41:11.539194 1078071 main.go:141] libmachine: (ha-344156-m03) Calling .GetSSHKeyPath
	I0729 18:41:11.539330 1078071 main.go:141] libmachine: (ha-344156-m03) Calling .GetSSHUsername
	I0729 18:41:11.539452 1078071 sshutil.go:53] new ssh client: &{IP:192.168.39.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/ha-344156-m03/id_rsa Username:docker}
	I0729 18:41:11.618237 1078071 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 18:41:11.633116 1078071 kubeconfig.go:125] found "ha-344156" server: "https://192.168.39.254:8443"
	I0729 18:41:11.633147 1078071 api_server.go:166] Checking apiserver status ...
	I0729 18:41:11.633188 1078071 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:41:11.647006 1078071 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1518/cgroup
	W0729 18:41:11.656643 1078071 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1518/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0729 18:41:11.656723 1078071 ssh_runner.go:195] Run: ls
	I0729 18:41:11.661084 1078071 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0729 18:41:11.666924 1078071 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0729 18:41:11.666951 1078071 status.go:422] ha-344156-m03 apiserver status = Running (err=<nil>)
	I0729 18:41:11.666964 1078071 status.go:257] ha-344156-m03 status: &{Name:ha-344156-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 18:41:11.666988 1078071 status.go:255] checking status of ha-344156-m04 ...
	I0729 18:41:11.667332 1078071 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:41:11.667369 1078071 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:41:11.682480 1078071 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36851
	I0729 18:41:11.682996 1078071 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:41:11.683511 1078071 main.go:141] libmachine: Using API Version  1
	I0729 18:41:11.683532 1078071 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:41:11.683831 1078071 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:41:11.684011 1078071 main.go:141] libmachine: (ha-344156-m04) Calling .GetState
	I0729 18:41:11.685400 1078071 status.go:330] ha-344156-m04 host status = "Running" (err=<nil>)
	I0729 18:41:11.685415 1078071 host.go:66] Checking if "ha-344156-m04" exists ...
	I0729 18:41:11.685688 1078071 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:41:11.685724 1078071 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:41:11.701151 1078071 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45109
	I0729 18:41:11.701612 1078071 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:41:11.702085 1078071 main.go:141] libmachine: Using API Version  1
	I0729 18:41:11.702107 1078071 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:41:11.702440 1078071 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:41:11.702676 1078071 main.go:141] libmachine: (ha-344156-m04) Calling .GetIP
	I0729 18:41:11.705583 1078071 main.go:141] libmachine: (ha-344156-m04) DBG | domain ha-344156-m04 has defined MAC address 52:54:00:8a:8a:b9 in network mk-ha-344156
	I0729 18:41:11.706049 1078071 main.go:141] libmachine: (ha-344156-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:8a:b9", ip: ""} in network mk-ha-344156: {Iface:virbr1 ExpiryTime:2024-07-29 19:37:22 +0000 UTC Type:0 Mac:52:54:00:8a:8a:b9 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:ha-344156-m04 Clientid:01:52:54:00:8a:8a:b9}
	I0729 18:41:11.706077 1078071 main.go:141] libmachine: (ha-344156-m04) DBG | domain ha-344156-m04 has defined IP address 192.168.39.9 and MAC address 52:54:00:8a:8a:b9 in network mk-ha-344156
	I0729 18:41:11.706184 1078071 host.go:66] Checking if "ha-344156-m04" exists ...
	I0729 18:41:11.706475 1078071 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:41:11.706519 1078071 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:41:11.721499 1078071 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33303
	I0729 18:41:11.721951 1078071 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:41:11.722466 1078071 main.go:141] libmachine: Using API Version  1
	I0729 18:41:11.722492 1078071 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:41:11.722811 1078071 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:41:11.723013 1078071 main.go:141] libmachine: (ha-344156-m04) Calling .DriverName
	I0729 18:41:11.723226 1078071 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 18:41:11.723249 1078071 main.go:141] libmachine: (ha-344156-m04) Calling .GetSSHHostname
	I0729 18:41:11.725759 1078071 main.go:141] libmachine: (ha-344156-m04) DBG | domain ha-344156-m04 has defined MAC address 52:54:00:8a:8a:b9 in network mk-ha-344156
	I0729 18:41:11.726169 1078071 main.go:141] libmachine: (ha-344156-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:8a:b9", ip: ""} in network mk-ha-344156: {Iface:virbr1 ExpiryTime:2024-07-29 19:37:22 +0000 UTC Type:0 Mac:52:54:00:8a:8a:b9 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:ha-344156-m04 Clientid:01:52:54:00:8a:8a:b9}
	I0729 18:41:11.726196 1078071 main.go:141] libmachine: (ha-344156-m04) DBG | domain ha-344156-m04 has defined IP address 192.168.39.9 and MAC address 52:54:00:8a:8a:b9 in network mk-ha-344156
	I0729 18:41:11.726338 1078071 main.go:141] libmachine: (ha-344156-m04) Calling .GetSSHPort
	I0729 18:41:11.726518 1078071 main.go:141] libmachine: (ha-344156-m04) Calling .GetSSHKeyPath
	I0729 18:41:11.726656 1078071 main.go:141] libmachine: (ha-344156-m04) Calling .GetSSHUsername
	I0729 18:41:11.726789 1078071 sshutil.go:53] new ssh client: &{IP:192.168.39.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/ha-344156-m04/id_rsa Username:docker}
	I0729 18:41:11.810259 1078071 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 18:41:11.827206 1078071 status.go:257] ha-344156-m04 status: &{Name:ha-344156-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-344156 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-344156 status -v=7 --alsologtostderr: exit status 3 (5.179629909s)

-- stdout --
	ha-344156
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-344156-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-344156-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-344156-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0729 18:41:12.833147 1078173 out.go:291] Setting OutFile to fd 1 ...
	I0729 18:41:12.833332 1078173 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 18:41:12.833344 1078173 out.go:304] Setting ErrFile to fd 2...
	I0729 18:41:12.833348 1078173 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 18:41:12.833526 1078173 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19312-1055011/.minikube/bin
	I0729 18:41:12.833733 1078173 out.go:298] Setting JSON to false
	I0729 18:41:12.833767 1078173 mustload.go:65] Loading cluster: ha-344156
	I0729 18:41:12.833886 1078173 notify.go:220] Checking for updates...
	I0729 18:41:12.834118 1078173 config.go:182] Loaded profile config "ha-344156": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 18:41:12.834135 1078173 status.go:255] checking status of ha-344156 ...
	I0729 18:41:12.834498 1078173 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:41:12.834558 1078173 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:41:12.849645 1078173 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45795
	I0729 18:41:12.850141 1078173 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:41:12.850736 1078173 main.go:141] libmachine: Using API Version  1
	I0729 18:41:12.850759 1078173 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:41:12.851116 1078173 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:41:12.851362 1078173 main.go:141] libmachine: (ha-344156) Calling .GetState
	I0729 18:41:12.852910 1078173 status.go:330] ha-344156 host status = "Running" (err=<nil>)
	I0729 18:41:12.852929 1078173 host.go:66] Checking if "ha-344156" exists ...
	I0729 18:41:12.853213 1078173 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:41:12.853248 1078173 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:41:12.869227 1078173 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38855
	I0729 18:41:12.869733 1078173 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:41:12.870200 1078173 main.go:141] libmachine: Using API Version  1
	I0729 18:41:12.870220 1078173 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:41:12.870582 1078173 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:41:12.870783 1078173 main.go:141] libmachine: (ha-344156) Calling .GetIP
	I0729 18:41:12.873380 1078173 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
	I0729 18:41:12.873724 1078173 main.go:141] libmachine: (ha-344156) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:fc:98", ip: ""} in network mk-ha-344156: {Iface:virbr1 ExpiryTime:2024-07-29 19:33:59 +0000 UTC Type:0 Mac:52:54:00:a1:fc:98 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-344156 Clientid:01:52:54:00:a1:fc:98}
	I0729 18:41:12.873746 1078173 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined IP address 192.168.39.225 and MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
	I0729 18:41:12.873902 1078173 host.go:66] Checking if "ha-344156" exists ...
	I0729 18:41:12.874222 1078173 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:41:12.874273 1078173 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:41:12.889822 1078173 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36487
	I0729 18:41:12.890415 1078173 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:41:12.890975 1078173 main.go:141] libmachine: Using API Version  1
	I0729 18:41:12.891001 1078173 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:41:12.891295 1078173 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:41:12.891521 1078173 main.go:141] libmachine: (ha-344156) Calling .DriverName
	I0729 18:41:12.891784 1078173 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 18:41:12.891843 1078173 main.go:141] libmachine: (ha-344156) Calling .GetSSHHostname
	I0729 18:41:12.894506 1078173 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
	I0729 18:41:12.894994 1078173 main.go:141] libmachine: (ha-344156) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:fc:98", ip: ""} in network mk-ha-344156: {Iface:virbr1 ExpiryTime:2024-07-29 19:33:59 +0000 UTC Type:0 Mac:52:54:00:a1:fc:98 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-344156 Clientid:01:52:54:00:a1:fc:98}
	I0729 18:41:12.895034 1078173 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined IP address 192.168.39.225 and MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
	I0729 18:41:12.895200 1078173 main.go:141] libmachine: (ha-344156) Calling .GetSSHPort
	I0729 18:41:12.895388 1078173 main.go:141] libmachine: (ha-344156) Calling .GetSSHKeyPath
	I0729 18:41:12.895567 1078173 main.go:141] libmachine: (ha-344156) Calling .GetSSHUsername
	I0729 18:41:12.895698 1078173 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/ha-344156/id_rsa Username:docker}
	I0729 18:41:12.978670 1078173 ssh_runner.go:195] Run: systemctl --version
	I0729 18:41:12.984896 1078173 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 18:41:13.000234 1078173 kubeconfig.go:125] found "ha-344156" server: "https://192.168.39.254:8443"
	I0729 18:41:13.000263 1078173 api_server.go:166] Checking apiserver status ...
	I0729 18:41:13.000294 1078173 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:41:13.015038 1078173 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1151/cgroup
	W0729 18:41:13.025302 1078173 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1151/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0729 18:41:13.025356 1078173 ssh_runner.go:195] Run: ls
	I0729 18:41:13.029779 1078173 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0729 18:41:13.035590 1078173 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0729 18:41:13.035616 1078173 status.go:422] ha-344156 apiserver status = Running (err=<nil>)
	I0729 18:41:13.035630 1078173 status.go:257] ha-344156 status: &{Name:ha-344156 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 18:41:13.035662 1078173 status.go:255] checking status of ha-344156-m02 ...
	I0729 18:41:13.036061 1078173 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:41:13.036107 1078173 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:41:13.051161 1078173 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35387
	I0729 18:41:13.051592 1078173 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:41:13.052055 1078173 main.go:141] libmachine: Using API Version  1
	I0729 18:41:13.052081 1078173 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:41:13.052467 1078173 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:41:13.052665 1078173 main.go:141] libmachine: (ha-344156-m02) Calling .GetState
	I0729 18:41:13.054200 1078173 status.go:330] ha-344156-m02 host status = "Running" (err=<nil>)
	I0729 18:41:13.054220 1078173 host.go:66] Checking if "ha-344156-m02" exists ...
	I0729 18:41:13.054623 1078173 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:41:13.054675 1078173 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:41:13.070187 1078173 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43307
	I0729 18:41:13.070548 1078173 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:41:13.070988 1078173 main.go:141] libmachine: Using API Version  1
	I0729 18:41:13.071008 1078173 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:41:13.071308 1078173 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:41:13.071492 1078173 main.go:141] libmachine: (ha-344156-m02) Calling .GetIP
	I0729 18:41:13.074277 1078173 main.go:141] libmachine: (ha-344156-m02) DBG | domain ha-344156-m02 has defined MAC address 52:54:00:99:a3:97 in network mk-ha-344156
	I0729 18:41:13.074763 1078173 main.go:141] libmachine: (ha-344156-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:a3:97", ip: ""} in network mk-ha-344156: {Iface:virbr1 ExpiryTime:2024-07-29 19:34:51 +0000 UTC Type:0 Mac:52:54:00:99:a3:97 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-344156-m02 Clientid:01:52:54:00:99:a3:97}
	I0729 18:41:13.074791 1078173 main.go:141] libmachine: (ha-344156-m02) DBG | domain ha-344156-m02 has defined IP address 192.168.39.249 and MAC address 52:54:00:99:a3:97 in network mk-ha-344156
	I0729 18:41:13.074953 1078173 host.go:66] Checking if "ha-344156-m02" exists ...
	I0729 18:41:13.075258 1078173 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:41:13.075290 1078173 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:41:13.089272 1078173 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44703
	I0729 18:41:13.089654 1078173 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:41:13.090058 1078173 main.go:141] libmachine: Using API Version  1
	I0729 18:41:13.090082 1078173 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:41:13.090421 1078173 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:41:13.090621 1078173 main.go:141] libmachine: (ha-344156-m02) Calling .DriverName
	I0729 18:41:13.090805 1078173 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 18:41:13.090824 1078173 main.go:141] libmachine: (ha-344156-m02) Calling .GetSSHHostname
	I0729 18:41:13.093415 1078173 main.go:141] libmachine: (ha-344156-m02) DBG | domain ha-344156-m02 has defined MAC address 52:54:00:99:a3:97 in network mk-ha-344156
	I0729 18:41:13.093886 1078173 main.go:141] libmachine: (ha-344156-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:a3:97", ip: ""} in network mk-ha-344156: {Iface:virbr1 ExpiryTime:2024-07-29 19:34:51 +0000 UTC Type:0 Mac:52:54:00:99:a3:97 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-344156-m02 Clientid:01:52:54:00:99:a3:97}
	I0729 18:41:13.093914 1078173 main.go:141] libmachine: (ha-344156-m02) DBG | domain ha-344156-m02 has defined IP address 192.168.39.249 and MAC address 52:54:00:99:a3:97 in network mk-ha-344156
	I0729 18:41:13.094030 1078173 main.go:141] libmachine: (ha-344156-m02) Calling .GetSSHPort
	I0729 18:41:13.094214 1078173 main.go:141] libmachine: (ha-344156-m02) Calling .GetSSHKeyPath
	I0729 18:41:13.094360 1078173 main.go:141] libmachine: (ha-344156-m02) Calling .GetSSHUsername
	I0729 18:41:13.094642 1078173 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/ha-344156-m02/id_rsa Username:docker}
	W0729 18:41:14.551181 1078173 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.249:22: connect: no route to host
	I0729 18:41:14.551248 1078173 retry.go:31] will retry after 323.946419ms: dial tcp 192.168.39.249:22: connect: no route to host
	W0729 18:41:17.623165 1078173 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.249:22: connect: no route to host
	W0729 18:41:17.623287 1078173 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.249:22: connect: no route to host
	E0729 18:41:17.623310 1078173 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.249:22: connect: no route to host
	I0729 18:41:17.623318 1078173 status.go:257] ha-344156-m02 status: &{Name:ha-344156-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0729 18:41:17.623346 1078173 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.249:22: connect: no route to host
	I0729 18:41:17.623363 1078173 status.go:255] checking status of ha-344156-m03 ...
	I0729 18:41:17.623705 1078173 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:41:17.623755 1078173 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:41:17.639102 1078173 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44561
	I0729 18:41:17.639566 1078173 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:41:17.640101 1078173 main.go:141] libmachine: Using API Version  1
	I0729 18:41:17.640127 1078173 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:41:17.640456 1078173 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:41:17.640668 1078173 main.go:141] libmachine: (ha-344156-m03) Calling .GetState
	I0729 18:41:17.642269 1078173 status.go:330] ha-344156-m03 host status = "Running" (err=<nil>)
	I0729 18:41:17.642286 1078173 host.go:66] Checking if "ha-344156-m03" exists ...
	I0729 18:41:17.642567 1078173 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:41:17.642607 1078173 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:41:17.657685 1078173 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37277
	I0729 18:41:17.658114 1078173 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:41:17.658557 1078173 main.go:141] libmachine: Using API Version  1
	I0729 18:41:17.658583 1078173 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:41:17.658889 1078173 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:41:17.659055 1078173 main.go:141] libmachine: (ha-344156-m03) Calling .GetIP
	I0729 18:41:17.661468 1078173 main.go:141] libmachine: (ha-344156-m03) DBG | domain ha-344156-m03 has defined MAC address 52:54:00:49:5c:73 in network mk-ha-344156
	I0729 18:41:17.661874 1078173 main.go:141] libmachine: (ha-344156-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:5c:73", ip: ""} in network mk-ha-344156: {Iface:virbr1 ExpiryTime:2024-07-29 19:36:00 +0000 UTC Type:0 Mac:52:54:00:49:5c:73 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:ha-344156-m03 Clientid:01:52:54:00:49:5c:73}
	I0729 18:41:17.661910 1078173 main.go:141] libmachine: (ha-344156-m03) DBG | domain ha-344156-m03 has defined IP address 192.168.39.148 and MAC address 52:54:00:49:5c:73 in network mk-ha-344156
	I0729 18:41:17.662032 1078173 host.go:66] Checking if "ha-344156-m03" exists ...
	I0729 18:41:17.662382 1078173 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:41:17.662428 1078173 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:41:17.676413 1078173 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34801
	I0729 18:41:17.676762 1078173 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:41:17.677209 1078173 main.go:141] libmachine: Using API Version  1
	I0729 18:41:17.677240 1078173 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:41:17.677551 1078173 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:41:17.677744 1078173 main.go:141] libmachine: (ha-344156-m03) Calling .DriverName
	I0729 18:41:17.677950 1078173 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 18:41:17.677976 1078173 main.go:141] libmachine: (ha-344156-m03) Calling .GetSSHHostname
	I0729 18:41:17.680861 1078173 main.go:141] libmachine: (ha-344156-m03) DBG | domain ha-344156-m03 has defined MAC address 52:54:00:49:5c:73 in network mk-ha-344156
	I0729 18:41:17.681281 1078173 main.go:141] libmachine: (ha-344156-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:5c:73", ip: ""} in network mk-ha-344156: {Iface:virbr1 ExpiryTime:2024-07-29 19:36:00 +0000 UTC Type:0 Mac:52:54:00:49:5c:73 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:ha-344156-m03 Clientid:01:52:54:00:49:5c:73}
	I0729 18:41:17.681306 1078173 main.go:141] libmachine: (ha-344156-m03) DBG | domain ha-344156-m03 has defined IP address 192.168.39.148 and MAC address 52:54:00:49:5c:73 in network mk-ha-344156
	I0729 18:41:17.681480 1078173 main.go:141] libmachine: (ha-344156-m03) Calling .GetSSHPort
	I0729 18:41:17.681646 1078173 main.go:141] libmachine: (ha-344156-m03) Calling .GetSSHKeyPath
	I0729 18:41:17.681778 1078173 main.go:141] libmachine: (ha-344156-m03) Calling .GetSSHUsername
	I0729 18:41:17.681972 1078173 sshutil.go:53] new ssh client: &{IP:192.168.39.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/ha-344156-m03/id_rsa Username:docker}
	I0729 18:41:17.763382 1078173 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 18:41:17.778772 1078173 kubeconfig.go:125] found "ha-344156" server: "https://192.168.39.254:8443"
	I0729 18:41:17.778804 1078173 api_server.go:166] Checking apiserver status ...
	I0729 18:41:17.778839 1078173 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:41:17.792157 1078173 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1518/cgroup
	W0729 18:41:17.800936 1078173 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1518/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0729 18:41:17.800997 1078173 ssh_runner.go:195] Run: ls
	I0729 18:41:17.805316 1078173 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0729 18:41:17.809590 1078173 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0729 18:41:17.809611 1078173 status.go:422] ha-344156-m03 apiserver status = Running (err=<nil>)
	I0729 18:41:17.809620 1078173 status.go:257] ha-344156-m03 status: &{Name:ha-344156-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 18:41:17.809634 1078173 status.go:255] checking status of ha-344156-m04 ...
	I0729 18:41:17.809910 1078173 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:41:17.809947 1078173 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:41:17.825053 1078173 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42651
	I0729 18:41:17.825616 1078173 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:41:17.826103 1078173 main.go:141] libmachine: Using API Version  1
	I0729 18:41:17.826130 1078173 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:41:17.826493 1078173 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:41:17.826685 1078173 main.go:141] libmachine: (ha-344156-m04) Calling .GetState
	I0729 18:41:17.828072 1078173 status.go:330] ha-344156-m04 host status = "Running" (err=<nil>)
	I0729 18:41:17.828090 1078173 host.go:66] Checking if "ha-344156-m04" exists ...
	I0729 18:41:17.828402 1078173 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:41:17.828448 1078173 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:41:17.844071 1078173 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39977
	I0729 18:41:17.844429 1078173 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:41:17.844928 1078173 main.go:141] libmachine: Using API Version  1
	I0729 18:41:17.844951 1078173 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:41:17.845269 1078173 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:41:17.845446 1078173 main.go:141] libmachine: (ha-344156-m04) Calling .GetIP
	I0729 18:41:17.847780 1078173 main.go:141] libmachine: (ha-344156-m04) DBG | domain ha-344156-m04 has defined MAC address 52:54:00:8a:8a:b9 in network mk-ha-344156
	I0729 18:41:17.848242 1078173 main.go:141] libmachine: (ha-344156-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:8a:b9", ip: ""} in network mk-ha-344156: {Iface:virbr1 ExpiryTime:2024-07-29 19:37:22 +0000 UTC Type:0 Mac:52:54:00:8a:8a:b9 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:ha-344156-m04 Clientid:01:52:54:00:8a:8a:b9}
	I0729 18:41:17.848265 1078173 main.go:141] libmachine: (ha-344156-m04) DBG | domain ha-344156-m04 has defined IP address 192.168.39.9 and MAC address 52:54:00:8a:8a:b9 in network mk-ha-344156
	I0729 18:41:17.848422 1078173 host.go:66] Checking if "ha-344156-m04" exists ...
	I0729 18:41:17.848737 1078173 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:41:17.848770 1078173 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:41:17.863668 1078173 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33151
	I0729 18:41:17.864088 1078173 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:41:17.864560 1078173 main.go:141] libmachine: Using API Version  1
	I0729 18:41:17.864586 1078173 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:41:17.864890 1078173 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:41:17.865094 1078173 main.go:141] libmachine: (ha-344156-m04) Calling .DriverName
	I0729 18:41:17.865267 1078173 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 18:41:17.865285 1078173 main.go:141] libmachine: (ha-344156-m04) Calling .GetSSHHostname
	I0729 18:41:17.867897 1078173 main.go:141] libmachine: (ha-344156-m04) DBG | domain ha-344156-m04 has defined MAC address 52:54:00:8a:8a:b9 in network mk-ha-344156
	I0729 18:41:17.868249 1078173 main.go:141] libmachine: (ha-344156-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:8a:b9", ip: ""} in network mk-ha-344156: {Iface:virbr1 ExpiryTime:2024-07-29 19:37:22 +0000 UTC Type:0 Mac:52:54:00:8a:8a:b9 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:ha-344156-m04 Clientid:01:52:54:00:8a:8a:b9}
	I0729 18:41:17.868269 1078173 main.go:141] libmachine: (ha-344156-m04) DBG | domain ha-344156-m04 has defined IP address 192.168.39.9 and MAC address 52:54:00:8a:8a:b9 in network mk-ha-344156
	I0729 18:41:17.868388 1078173 main.go:141] libmachine: (ha-344156-m04) Calling .GetSSHPort
	I0729 18:41:17.868550 1078173 main.go:141] libmachine: (ha-344156-m04) Calling .GetSSHKeyPath
	I0729 18:41:17.868702 1078173 main.go:141] libmachine: (ha-344156-m04) Calling .GetSSHUsername
	I0729 18:41:17.868824 1078173 sshutil.go:53] new ssh client: &{IP:192.168.39.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/ha-344156-m04/id_rsa Username:docker}
	I0729 18:41:17.954550 1078173 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 18:41:17.967576 1078173 status.go:257] ha-344156-m04 status: &{Name:ha-344156-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-344156 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-344156 status -v=7 --alsologtostderr: exit status 3 (4.649718462s)

-- stdout --
	ha-344156
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-344156-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-344156-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-344156-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0729 18:41:19.826166 1078273 out.go:291] Setting OutFile to fd 1 ...
	I0729 18:41:19.826417 1078273 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 18:41:19.826427 1078273 out.go:304] Setting ErrFile to fd 2...
	I0729 18:41:19.826431 1078273 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 18:41:19.826603 1078273 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19312-1055011/.minikube/bin
	I0729 18:41:19.826762 1078273 out.go:298] Setting JSON to false
	I0729 18:41:19.826788 1078273 mustload.go:65] Loading cluster: ha-344156
	I0729 18:41:19.827220 1078273 notify.go:220] Checking for updates...
	I0729 18:41:19.828223 1078273 config.go:182] Loaded profile config "ha-344156": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 18:41:19.828421 1078273 status.go:255] checking status of ha-344156 ...
	I0729 18:41:19.828909 1078273 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:41:19.828960 1078273 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:41:19.845114 1078273 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33723
	I0729 18:41:19.845588 1078273 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:41:19.846129 1078273 main.go:141] libmachine: Using API Version  1
	I0729 18:41:19.846153 1078273 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:41:19.846495 1078273 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:41:19.846711 1078273 main.go:141] libmachine: (ha-344156) Calling .GetState
	I0729 18:41:19.848291 1078273 status.go:330] ha-344156 host status = "Running" (err=<nil>)
	I0729 18:41:19.848311 1078273 host.go:66] Checking if "ha-344156" exists ...
	I0729 18:41:19.848697 1078273 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:41:19.848738 1078273 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:41:19.863215 1078273 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33017
	I0729 18:41:19.863678 1078273 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:41:19.864144 1078273 main.go:141] libmachine: Using API Version  1
	I0729 18:41:19.864165 1078273 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:41:19.864451 1078273 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:41:19.864656 1078273 main.go:141] libmachine: (ha-344156) Calling .GetIP
	I0729 18:41:19.867746 1078273 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
	I0729 18:41:19.868292 1078273 main.go:141] libmachine: (ha-344156) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:fc:98", ip: ""} in network mk-ha-344156: {Iface:virbr1 ExpiryTime:2024-07-29 19:33:59 +0000 UTC Type:0 Mac:52:54:00:a1:fc:98 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-344156 Clientid:01:52:54:00:a1:fc:98}
	I0729 18:41:19.868330 1078273 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined IP address 192.168.39.225 and MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
	I0729 18:41:19.868470 1078273 host.go:66] Checking if "ha-344156" exists ...
	I0729 18:41:19.868843 1078273 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:41:19.868887 1078273 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:41:19.883420 1078273 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34333
	I0729 18:41:19.883746 1078273 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:41:19.884160 1078273 main.go:141] libmachine: Using API Version  1
	I0729 18:41:19.884180 1078273 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:41:19.884476 1078273 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:41:19.884669 1078273 main.go:141] libmachine: (ha-344156) Calling .DriverName
	I0729 18:41:19.884852 1078273 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 18:41:19.884873 1078273 main.go:141] libmachine: (ha-344156) Calling .GetSSHHostname
	I0729 18:41:19.887741 1078273 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
	I0729 18:41:19.888129 1078273 main.go:141] libmachine: (ha-344156) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:fc:98", ip: ""} in network mk-ha-344156: {Iface:virbr1 ExpiryTime:2024-07-29 19:33:59 +0000 UTC Type:0 Mac:52:54:00:a1:fc:98 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-344156 Clientid:01:52:54:00:a1:fc:98}
	I0729 18:41:19.888156 1078273 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined IP address 192.168.39.225 and MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
	I0729 18:41:19.888281 1078273 main.go:141] libmachine: (ha-344156) Calling .GetSSHPort
	I0729 18:41:19.888446 1078273 main.go:141] libmachine: (ha-344156) Calling .GetSSHKeyPath
	I0729 18:41:19.888600 1078273 main.go:141] libmachine: (ha-344156) Calling .GetSSHUsername
	I0729 18:41:19.888720 1078273 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/ha-344156/id_rsa Username:docker}
	I0729 18:41:19.971141 1078273 ssh_runner.go:195] Run: systemctl --version
	I0729 18:41:19.977385 1078273 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 18:41:19.991475 1078273 kubeconfig.go:125] found "ha-344156" server: "https://192.168.39.254:8443"
	I0729 18:41:19.991501 1078273 api_server.go:166] Checking apiserver status ...
	I0729 18:41:19.991529 1078273 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:41:20.006049 1078273 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1151/cgroup
	W0729 18:41:20.016446 1078273 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1151/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0729 18:41:20.016501 1078273 ssh_runner.go:195] Run: ls
	I0729 18:41:20.021378 1078273 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0729 18:41:20.026612 1078273 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0729 18:41:20.026638 1078273 status.go:422] ha-344156 apiserver status = Running (err=<nil>)
	I0729 18:41:20.026652 1078273 status.go:257] ha-344156 status: &{Name:ha-344156 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 18:41:20.026675 1078273 status.go:255] checking status of ha-344156-m02 ...
	I0729 18:41:20.027125 1078273 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:41:20.027173 1078273 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:41:20.042656 1078273 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41673
	I0729 18:41:20.043117 1078273 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:41:20.043629 1078273 main.go:141] libmachine: Using API Version  1
	I0729 18:41:20.043650 1078273 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:41:20.043947 1078273 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:41:20.044136 1078273 main.go:141] libmachine: (ha-344156-m02) Calling .GetState
	I0729 18:41:20.045705 1078273 status.go:330] ha-344156-m02 host status = "Running" (err=<nil>)
	I0729 18:41:20.045721 1078273 host.go:66] Checking if "ha-344156-m02" exists ...
	I0729 18:41:20.045991 1078273 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:41:20.046051 1078273 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:41:20.061557 1078273 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35255
	I0729 18:41:20.062028 1078273 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:41:20.062579 1078273 main.go:141] libmachine: Using API Version  1
	I0729 18:41:20.062601 1078273 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:41:20.062930 1078273 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:41:20.063118 1078273 main.go:141] libmachine: (ha-344156-m02) Calling .GetIP
	I0729 18:41:20.065602 1078273 main.go:141] libmachine: (ha-344156-m02) DBG | domain ha-344156-m02 has defined MAC address 52:54:00:99:a3:97 in network mk-ha-344156
	I0729 18:41:20.065999 1078273 main.go:141] libmachine: (ha-344156-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:a3:97", ip: ""} in network mk-ha-344156: {Iface:virbr1 ExpiryTime:2024-07-29 19:34:51 +0000 UTC Type:0 Mac:52:54:00:99:a3:97 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-344156-m02 Clientid:01:52:54:00:99:a3:97}
	I0729 18:41:20.066026 1078273 main.go:141] libmachine: (ha-344156-m02) DBG | domain ha-344156-m02 has defined IP address 192.168.39.249 and MAC address 52:54:00:99:a3:97 in network mk-ha-344156
	I0729 18:41:20.066117 1078273 host.go:66] Checking if "ha-344156-m02" exists ...
	I0729 18:41:20.066436 1078273 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:41:20.066471 1078273 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:41:20.081115 1078273 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45727
	I0729 18:41:20.081475 1078273 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:41:20.081965 1078273 main.go:141] libmachine: Using API Version  1
	I0729 18:41:20.081990 1078273 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:41:20.082318 1078273 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:41:20.082527 1078273 main.go:141] libmachine: (ha-344156-m02) Calling .DriverName
	I0729 18:41:20.082718 1078273 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 18:41:20.082740 1078273 main.go:141] libmachine: (ha-344156-m02) Calling .GetSSHHostname
	I0729 18:41:20.085530 1078273 main.go:141] libmachine: (ha-344156-m02) DBG | domain ha-344156-m02 has defined MAC address 52:54:00:99:a3:97 in network mk-ha-344156
	I0729 18:41:20.086025 1078273 main.go:141] libmachine: (ha-344156-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:a3:97", ip: ""} in network mk-ha-344156: {Iface:virbr1 ExpiryTime:2024-07-29 19:34:51 +0000 UTC Type:0 Mac:52:54:00:99:a3:97 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-344156-m02 Clientid:01:52:54:00:99:a3:97}
	I0729 18:41:20.086051 1078273 main.go:141] libmachine: (ha-344156-m02) DBG | domain ha-344156-m02 has defined IP address 192.168.39.249 and MAC address 52:54:00:99:a3:97 in network mk-ha-344156
	I0729 18:41:20.086305 1078273 main.go:141] libmachine: (ha-344156-m02) Calling .GetSSHPort
	I0729 18:41:20.086490 1078273 main.go:141] libmachine: (ha-344156-m02) Calling .GetSSHKeyPath
	I0729 18:41:20.086660 1078273 main.go:141] libmachine: (ha-344156-m02) Calling .GetSSHUsername
	I0729 18:41:20.086816 1078273 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/ha-344156-m02/id_rsa Username:docker}
	W0729 18:41:20.695107 1078273 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.249:22: connect: no route to host
	I0729 18:41:20.695180 1078273 retry.go:31] will retry after 322.041563ms: dial tcp 192.168.39.249:22: connect: no route to host
	W0729 18:41:24.087121 1078273 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.249:22: connect: no route to host
	W0729 18:41:24.087228 1078273 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.249:22: connect: no route to host
	E0729 18:41:24.087246 1078273 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.249:22: connect: no route to host
	I0729 18:41:24.087256 1078273 status.go:257] ha-344156-m02 status: &{Name:ha-344156-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0729 18:41:24.087274 1078273 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.249:22: connect: no route to host
	I0729 18:41:24.087294 1078273 status.go:255] checking status of ha-344156-m03 ...
	I0729 18:41:24.087625 1078273 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:41:24.087682 1078273 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:41:24.102817 1078273 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38273
	I0729 18:41:24.103360 1078273 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:41:24.103871 1078273 main.go:141] libmachine: Using API Version  1
	I0729 18:41:24.103894 1078273 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:41:24.104213 1078273 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:41:24.104424 1078273 main.go:141] libmachine: (ha-344156-m03) Calling .GetState
	I0729 18:41:24.105873 1078273 status.go:330] ha-344156-m03 host status = "Running" (err=<nil>)
	I0729 18:41:24.105892 1078273 host.go:66] Checking if "ha-344156-m03" exists ...
	I0729 18:41:24.106299 1078273 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:41:24.106345 1078273 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:41:24.120748 1078273 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46525
	I0729 18:41:24.121094 1078273 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:41:24.121522 1078273 main.go:141] libmachine: Using API Version  1
	I0729 18:41:24.121543 1078273 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:41:24.121798 1078273 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:41:24.121959 1078273 main.go:141] libmachine: (ha-344156-m03) Calling .GetIP
	I0729 18:41:24.124612 1078273 main.go:141] libmachine: (ha-344156-m03) DBG | domain ha-344156-m03 has defined MAC address 52:54:00:49:5c:73 in network mk-ha-344156
	I0729 18:41:24.125028 1078273 main.go:141] libmachine: (ha-344156-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:5c:73", ip: ""} in network mk-ha-344156: {Iface:virbr1 ExpiryTime:2024-07-29 19:36:00 +0000 UTC Type:0 Mac:52:54:00:49:5c:73 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:ha-344156-m03 Clientid:01:52:54:00:49:5c:73}
	I0729 18:41:24.125052 1078273 main.go:141] libmachine: (ha-344156-m03) DBG | domain ha-344156-m03 has defined IP address 192.168.39.148 and MAC address 52:54:00:49:5c:73 in network mk-ha-344156
	I0729 18:41:24.125211 1078273 host.go:66] Checking if "ha-344156-m03" exists ...
	I0729 18:41:24.125530 1078273 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:41:24.125567 1078273 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:41:24.139613 1078273 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44401
	I0729 18:41:24.139970 1078273 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:41:24.140421 1078273 main.go:141] libmachine: Using API Version  1
	I0729 18:41:24.140446 1078273 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:41:24.140793 1078273 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:41:24.140994 1078273 main.go:141] libmachine: (ha-344156-m03) Calling .DriverName
	I0729 18:41:24.141177 1078273 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 18:41:24.141197 1078273 main.go:141] libmachine: (ha-344156-m03) Calling .GetSSHHostname
	I0729 18:41:24.143938 1078273 main.go:141] libmachine: (ha-344156-m03) DBG | domain ha-344156-m03 has defined MAC address 52:54:00:49:5c:73 in network mk-ha-344156
	I0729 18:41:24.144393 1078273 main.go:141] libmachine: (ha-344156-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:5c:73", ip: ""} in network mk-ha-344156: {Iface:virbr1 ExpiryTime:2024-07-29 19:36:00 +0000 UTC Type:0 Mac:52:54:00:49:5c:73 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:ha-344156-m03 Clientid:01:52:54:00:49:5c:73}
	I0729 18:41:24.144419 1078273 main.go:141] libmachine: (ha-344156-m03) DBG | domain ha-344156-m03 has defined IP address 192.168.39.148 and MAC address 52:54:00:49:5c:73 in network mk-ha-344156
	I0729 18:41:24.144587 1078273 main.go:141] libmachine: (ha-344156-m03) Calling .GetSSHPort
	I0729 18:41:24.144757 1078273 main.go:141] libmachine: (ha-344156-m03) Calling .GetSSHKeyPath
	I0729 18:41:24.144916 1078273 main.go:141] libmachine: (ha-344156-m03) Calling .GetSSHUsername
	I0729 18:41:24.145058 1078273 sshutil.go:53] new ssh client: &{IP:192.168.39.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/ha-344156-m03/id_rsa Username:docker}
	I0729 18:41:24.226662 1078273 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 18:41:24.242232 1078273 kubeconfig.go:125] found "ha-344156" server: "https://192.168.39.254:8443"
	I0729 18:41:24.242263 1078273 api_server.go:166] Checking apiserver status ...
	I0729 18:41:24.242306 1078273 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:41:24.257075 1078273 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1518/cgroup
	W0729 18:41:24.267718 1078273 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1518/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0729 18:41:24.267768 1078273 ssh_runner.go:195] Run: ls
	I0729 18:41:24.271871 1078273 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0729 18:41:24.276004 1078273 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0729 18:41:24.276036 1078273 status.go:422] ha-344156-m03 apiserver status = Running (err=<nil>)
	I0729 18:41:24.276049 1078273 status.go:257] ha-344156-m03 status: &{Name:ha-344156-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 18:41:24.276072 1078273 status.go:255] checking status of ha-344156-m04 ...
	I0729 18:41:24.276395 1078273 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:41:24.276431 1078273 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:41:24.292089 1078273 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36211
	I0729 18:41:24.292614 1078273 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:41:24.293147 1078273 main.go:141] libmachine: Using API Version  1
	I0729 18:41:24.293175 1078273 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:41:24.293558 1078273 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:41:24.293769 1078273 main.go:141] libmachine: (ha-344156-m04) Calling .GetState
	I0729 18:41:24.295651 1078273 status.go:330] ha-344156-m04 host status = "Running" (err=<nil>)
	I0729 18:41:24.295671 1078273 host.go:66] Checking if "ha-344156-m04" exists ...
	I0729 18:41:24.296092 1078273 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:41:24.296140 1078273 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:41:24.310913 1078273 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33555
	I0729 18:41:24.311286 1078273 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:41:24.311740 1078273 main.go:141] libmachine: Using API Version  1
	I0729 18:41:24.311760 1078273 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:41:24.312133 1078273 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:41:24.312333 1078273 main.go:141] libmachine: (ha-344156-m04) Calling .GetIP
	I0729 18:41:24.314624 1078273 main.go:141] libmachine: (ha-344156-m04) DBG | domain ha-344156-m04 has defined MAC address 52:54:00:8a:8a:b9 in network mk-ha-344156
	I0729 18:41:24.315047 1078273 main.go:141] libmachine: (ha-344156-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:8a:b9", ip: ""} in network mk-ha-344156: {Iface:virbr1 ExpiryTime:2024-07-29 19:37:22 +0000 UTC Type:0 Mac:52:54:00:8a:8a:b9 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:ha-344156-m04 Clientid:01:52:54:00:8a:8a:b9}
	I0729 18:41:24.315088 1078273 main.go:141] libmachine: (ha-344156-m04) DBG | domain ha-344156-m04 has defined IP address 192.168.39.9 and MAC address 52:54:00:8a:8a:b9 in network mk-ha-344156
	I0729 18:41:24.315216 1078273 host.go:66] Checking if "ha-344156-m04" exists ...
	I0729 18:41:24.315511 1078273 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:41:24.315546 1078273 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:41:24.330761 1078273 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34815
	I0729 18:41:24.331206 1078273 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:41:24.331698 1078273 main.go:141] libmachine: Using API Version  1
	I0729 18:41:24.331717 1078273 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:41:24.332088 1078273 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:41:24.332293 1078273 main.go:141] libmachine: (ha-344156-m04) Calling .DriverName
	I0729 18:41:24.332534 1078273 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 18:41:24.332569 1078273 main.go:141] libmachine: (ha-344156-m04) Calling .GetSSHHostname
	I0729 18:41:24.335530 1078273 main.go:141] libmachine: (ha-344156-m04) DBG | domain ha-344156-m04 has defined MAC address 52:54:00:8a:8a:b9 in network mk-ha-344156
	I0729 18:41:24.335954 1078273 main.go:141] libmachine: (ha-344156-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:8a:b9", ip: ""} in network mk-ha-344156: {Iface:virbr1 ExpiryTime:2024-07-29 19:37:22 +0000 UTC Type:0 Mac:52:54:00:8a:8a:b9 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:ha-344156-m04 Clientid:01:52:54:00:8a:8a:b9}
	I0729 18:41:24.335990 1078273 main.go:141] libmachine: (ha-344156-m04) DBG | domain ha-344156-m04 has defined IP address 192.168.39.9 and MAC address 52:54:00:8a:8a:b9 in network mk-ha-344156
	I0729 18:41:24.336114 1078273 main.go:141] libmachine: (ha-344156-m04) Calling .GetSSHPort
	I0729 18:41:24.336284 1078273 main.go:141] libmachine: (ha-344156-m04) Calling .GetSSHKeyPath
	I0729 18:41:24.336421 1078273 main.go:141] libmachine: (ha-344156-m04) Calling .GetSSHUsername
	I0729 18:41:24.336553 1078273 sshutil.go:53] new ssh client: &{IP:192.168.39.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/ha-344156-m04/id_rsa Username:docker}
	I0729 18:41:24.418455 1078273 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 18:41:24.431935 1078273 status.go:257] ha-344156-m04 status: &{Name:ha-344156-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
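
Note: the stderr block above shows the apiserver probe sequence the status command runs on each control-plane node: pgrep for kube-apiserver, a freezer-cgroup lookup that fails on cgroup v2 (the "unable to find freezer cgroup" warning is non-fatal), and finally a GET against the HA VIP's /healthz endpoint, which returns 200. The following is a minimal sketch of that last step, not minikube's actual code: the endpoint and port (https://192.168.39.254:8443) are taken from the log, and certificate verification is skipped here for brevity, whereas minikube resolves the cluster CA from the kubeconfig.

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    // checkAPIServerHealthz issues the same kind of healthz probe the log shows
    // ("Checking apiserver healthz at ... returned 200: ok").
    func checkAPIServerHealthz(endpoint string) error {
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		Transport: &http.Transport{
    			// Test VMs use self-signed certs; a real client would pin the cluster CA.
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	resp, err := client.Get(endpoint + "/healthz")
    	if err != nil {
    		return fmt.Errorf("healthz request failed: %w", err)
    	}
    	defer resp.Body.Close()

    	body, _ := io.ReadAll(resp.Body)
    	if resp.StatusCode != http.StatusOK {
    		return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
    	}
    	fmt.Printf("%s/healthz returned %d: %s\n", endpoint, resp.StatusCode, body)
    	return nil
    }

    func main() {
    	if err := checkAPIServerHealthz("https://192.168.39.254:8443"); err != nil {
    		fmt.Println(err)
    	}
    }
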
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-344156 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-344156 status -v=7 --alsologtostderr: exit status 3 (4.386385546s)

                                                
                                                
-- stdout --
	ha-344156
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-344156-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-344156-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-344156-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 18:41:26.559323 1078389 out.go:291] Setting OutFile to fd 1 ...
	I0729 18:41:26.559624 1078389 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 18:41:26.559637 1078389 out.go:304] Setting ErrFile to fd 2...
	I0729 18:41:26.559644 1078389 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 18:41:26.559883 1078389 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19312-1055011/.minikube/bin
	I0729 18:41:26.560059 1078389 out.go:298] Setting JSON to false
	I0729 18:41:26.560089 1078389 mustload.go:65] Loading cluster: ha-344156
	I0729 18:41:26.560197 1078389 notify.go:220] Checking for updates...
	I0729 18:41:26.560481 1078389 config.go:182] Loaded profile config "ha-344156": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 18:41:26.560502 1078389 status.go:255] checking status of ha-344156 ...
	I0729 18:41:26.560995 1078389 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:41:26.561064 1078389 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:41:26.576958 1078389 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35889
	I0729 18:41:26.577374 1078389 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:41:26.578072 1078389 main.go:141] libmachine: Using API Version  1
	I0729 18:41:26.578112 1078389 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:41:26.578480 1078389 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:41:26.578724 1078389 main.go:141] libmachine: (ha-344156) Calling .GetState
	I0729 18:41:26.580273 1078389 status.go:330] ha-344156 host status = "Running" (err=<nil>)
	I0729 18:41:26.580294 1078389 host.go:66] Checking if "ha-344156" exists ...
	I0729 18:41:26.580701 1078389 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:41:26.580777 1078389 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:41:26.597202 1078389 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45187
	I0729 18:41:26.597611 1078389 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:41:26.598137 1078389 main.go:141] libmachine: Using API Version  1
	I0729 18:41:26.598171 1078389 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:41:26.598433 1078389 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:41:26.598636 1078389 main.go:141] libmachine: (ha-344156) Calling .GetIP
	I0729 18:41:26.601278 1078389 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
	I0729 18:41:26.601700 1078389 main.go:141] libmachine: (ha-344156) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:fc:98", ip: ""} in network mk-ha-344156: {Iface:virbr1 ExpiryTime:2024-07-29 19:33:59 +0000 UTC Type:0 Mac:52:54:00:a1:fc:98 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-344156 Clientid:01:52:54:00:a1:fc:98}
	I0729 18:41:26.601723 1078389 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined IP address 192.168.39.225 and MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
	I0729 18:41:26.601850 1078389 host.go:66] Checking if "ha-344156" exists ...
	I0729 18:41:26.602128 1078389 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:41:26.602161 1078389 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:41:26.618021 1078389 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42565
	I0729 18:41:26.618422 1078389 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:41:26.618918 1078389 main.go:141] libmachine: Using API Version  1
	I0729 18:41:26.618943 1078389 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:41:26.619224 1078389 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:41:26.619417 1078389 main.go:141] libmachine: (ha-344156) Calling .DriverName
	I0729 18:41:26.619582 1078389 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 18:41:26.619603 1078389 main.go:141] libmachine: (ha-344156) Calling .GetSSHHostname
	I0729 18:41:26.622200 1078389 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
	I0729 18:41:26.622566 1078389 main.go:141] libmachine: (ha-344156) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:fc:98", ip: ""} in network mk-ha-344156: {Iface:virbr1 ExpiryTime:2024-07-29 19:33:59 +0000 UTC Type:0 Mac:52:54:00:a1:fc:98 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-344156 Clientid:01:52:54:00:a1:fc:98}
	I0729 18:41:26.622591 1078389 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined IP address 192.168.39.225 and MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
	I0729 18:41:26.622735 1078389 main.go:141] libmachine: (ha-344156) Calling .GetSSHPort
	I0729 18:41:26.622937 1078389 main.go:141] libmachine: (ha-344156) Calling .GetSSHKeyPath
	I0729 18:41:26.623104 1078389 main.go:141] libmachine: (ha-344156) Calling .GetSSHUsername
	I0729 18:41:26.623257 1078389 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/ha-344156/id_rsa Username:docker}
	I0729 18:41:26.710594 1078389 ssh_runner.go:195] Run: systemctl --version
	I0729 18:41:26.717096 1078389 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 18:41:26.732609 1078389 kubeconfig.go:125] found "ha-344156" server: "https://192.168.39.254:8443"
	I0729 18:41:26.732636 1078389 api_server.go:166] Checking apiserver status ...
	I0729 18:41:26.732667 1078389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:41:26.745887 1078389 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1151/cgroup
	W0729 18:41:26.755201 1078389 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1151/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0729 18:41:26.755243 1078389 ssh_runner.go:195] Run: ls
	I0729 18:41:26.759348 1078389 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0729 18:41:26.764010 1078389 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0729 18:41:26.764041 1078389 status.go:422] ha-344156 apiserver status = Running (err=<nil>)
	I0729 18:41:26.764053 1078389 status.go:257] ha-344156 status: &{Name:ha-344156 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 18:41:26.764077 1078389 status.go:255] checking status of ha-344156-m02 ...
	I0729 18:41:26.764410 1078389 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:41:26.764453 1078389 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:41:26.779637 1078389 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39277
	I0729 18:41:26.780047 1078389 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:41:26.780506 1078389 main.go:141] libmachine: Using API Version  1
	I0729 18:41:26.780527 1078389 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:41:26.780886 1078389 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:41:26.781049 1078389 main.go:141] libmachine: (ha-344156-m02) Calling .GetState
	I0729 18:41:26.782523 1078389 status.go:330] ha-344156-m02 host status = "Running" (err=<nil>)
	I0729 18:41:26.782543 1078389 host.go:66] Checking if "ha-344156-m02" exists ...
	I0729 18:41:26.782953 1078389 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:41:26.782990 1078389 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:41:26.797856 1078389 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44867
	I0729 18:41:26.798256 1078389 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:41:26.798785 1078389 main.go:141] libmachine: Using API Version  1
	I0729 18:41:26.798807 1078389 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:41:26.799214 1078389 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:41:26.799424 1078389 main.go:141] libmachine: (ha-344156-m02) Calling .GetIP
	I0729 18:41:26.802384 1078389 main.go:141] libmachine: (ha-344156-m02) DBG | domain ha-344156-m02 has defined MAC address 52:54:00:99:a3:97 in network mk-ha-344156
	I0729 18:41:26.802832 1078389 main.go:141] libmachine: (ha-344156-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:a3:97", ip: ""} in network mk-ha-344156: {Iface:virbr1 ExpiryTime:2024-07-29 19:34:51 +0000 UTC Type:0 Mac:52:54:00:99:a3:97 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-344156-m02 Clientid:01:52:54:00:99:a3:97}
	I0729 18:41:26.802877 1078389 main.go:141] libmachine: (ha-344156-m02) DBG | domain ha-344156-m02 has defined IP address 192.168.39.249 and MAC address 52:54:00:99:a3:97 in network mk-ha-344156
	I0729 18:41:26.803024 1078389 host.go:66] Checking if "ha-344156-m02" exists ...
	I0729 18:41:26.803403 1078389 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:41:26.803452 1078389 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:41:26.818040 1078389 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45681
	I0729 18:41:26.818430 1078389 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:41:26.818919 1078389 main.go:141] libmachine: Using API Version  1
	I0729 18:41:26.818945 1078389 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:41:26.819259 1078389 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:41:26.819463 1078389 main.go:141] libmachine: (ha-344156-m02) Calling .DriverName
	I0729 18:41:26.819657 1078389 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 18:41:26.819699 1078389 main.go:141] libmachine: (ha-344156-m02) Calling .GetSSHHostname
	I0729 18:41:26.822401 1078389 main.go:141] libmachine: (ha-344156-m02) DBG | domain ha-344156-m02 has defined MAC address 52:54:00:99:a3:97 in network mk-ha-344156
	I0729 18:41:26.822806 1078389 main.go:141] libmachine: (ha-344156-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:a3:97", ip: ""} in network mk-ha-344156: {Iface:virbr1 ExpiryTime:2024-07-29 19:34:51 +0000 UTC Type:0 Mac:52:54:00:99:a3:97 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-344156-m02 Clientid:01:52:54:00:99:a3:97}
	I0729 18:41:26.822832 1078389 main.go:141] libmachine: (ha-344156-m02) DBG | domain ha-344156-m02 has defined IP address 192.168.39.249 and MAC address 52:54:00:99:a3:97 in network mk-ha-344156
	I0729 18:41:26.823015 1078389 main.go:141] libmachine: (ha-344156-m02) Calling .GetSSHPort
	I0729 18:41:26.823190 1078389 main.go:141] libmachine: (ha-344156-m02) Calling .GetSSHKeyPath
	I0729 18:41:26.823348 1078389 main.go:141] libmachine: (ha-344156-m02) Calling .GetSSHUsername
	I0729 18:41:26.823460 1078389 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/ha-344156-m02/id_rsa Username:docker}
	W0729 18:41:27.159099 1078389 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.249:22: connect: no route to host
	I0729 18:41:27.159166 1078389 retry.go:31] will retry after 318.88535ms: dial tcp 192.168.39.249:22: connect: no route to host
	W0729 18:41:30.551115 1078389 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.249:22: connect: no route to host
	W0729 18:41:30.551232 1078389 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.249:22: connect: no route to host
	E0729 18:41:30.551254 1078389 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.249:22: connect: no route to host
	I0729 18:41:30.551262 1078389 status.go:257] ha-344156-m02 status: &{Name:ha-344156-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0729 18:41:30.551283 1078389 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.249:22: connect: no route to host
	I0729 18:41:30.551291 1078389 status.go:255] checking status of ha-344156-m03 ...
	I0729 18:41:30.551648 1078389 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:41:30.551694 1078389 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:41:30.566787 1078389 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38323
	I0729 18:41:30.567278 1078389 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:41:30.567833 1078389 main.go:141] libmachine: Using API Version  1
	I0729 18:41:30.567859 1078389 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:41:30.568158 1078389 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:41:30.568387 1078389 main.go:141] libmachine: (ha-344156-m03) Calling .GetState
	I0729 18:41:30.570010 1078389 status.go:330] ha-344156-m03 host status = "Running" (err=<nil>)
	I0729 18:41:30.570026 1078389 host.go:66] Checking if "ha-344156-m03" exists ...
	I0729 18:41:30.570385 1078389 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:41:30.570429 1078389 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:41:30.585838 1078389 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37999
	I0729 18:41:30.586228 1078389 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:41:30.586677 1078389 main.go:141] libmachine: Using API Version  1
	I0729 18:41:30.586703 1078389 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:41:30.587095 1078389 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:41:30.587320 1078389 main.go:141] libmachine: (ha-344156-m03) Calling .GetIP
	I0729 18:41:30.590241 1078389 main.go:141] libmachine: (ha-344156-m03) DBG | domain ha-344156-m03 has defined MAC address 52:54:00:49:5c:73 in network mk-ha-344156
	I0729 18:41:30.590667 1078389 main.go:141] libmachine: (ha-344156-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:5c:73", ip: ""} in network mk-ha-344156: {Iface:virbr1 ExpiryTime:2024-07-29 19:36:00 +0000 UTC Type:0 Mac:52:54:00:49:5c:73 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:ha-344156-m03 Clientid:01:52:54:00:49:5c:73}
	I0729 18:41:30.590694 1078389 main.go:141] libmachine: (ha-344156-m03) DBG | domain ha-344156-m03 has defined IP address 192.168.39.148 and MAC address 52:54:00:49:5c:73 in network mk-ha-344156
	I0729 18:41:30.590871 1078389 host.go:66] Checking if "ha-344156-m03" exists ...
	I0729 18:41:30.591169 1078389 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:41:30.591239 1078389 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:41:30.605632 1078389 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38869
	I0729 18:41:30.606046 1078389 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:41:30.606492 1078389 main.go:141] libmachine: Using API Version  1
	I0729 18:41:30.606519 1078389 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:41:30.606860 1078389 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:41:30.607058 1078389 main.go:141] libmachine: (ha-344156-m03) Calling .DriverName
	I0729 18:41:30.607254 1078389 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 18:41:30.607279 1078389 main.go:141] libmachine: (ha-344156-m03) Calling .GetSSHHostname
	I0729 18:41:30.609982 1078389 main.go:141] libmachine: (ha-344156-m03) DBG | domain ha-344156-m03 has defined MAC address 52:54:00:49:5c:73 in network mk-ha-344156
	I0729 18:41:30.610385 1078389 main.go:141] libmachine: (ha-344156-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:5c:73", ip: ""} in network mk-ha-344156: {Iface:virbr1 ExpiryTime:2024-07-29 19:36:00 +0000 UTC Type:0 Mac:52:54:00:49:5c:73 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:ha-344156-m03 Clientid:01:52:54:00:49:5c:73}
	I0729 18:41:30.610409 1078389 main.go:141] libmachine: (ha-344156-m03) DBG | domain ha-344156-m03 has defined IP address 192.168.39.148 and MAC address 52:54:00:49:5c:73 in network mk-ha-344156
	I0729 18:41:30.610574 1078389 main.go:141] libmachine: (ha-344156-m03) Calling .GetSSHPort
	I0729 18:41:30.610742 1078389 main.go:141] libmachine: (ha-344156-m03) Calling .GetSSHKeyPath
	I0729 18:41:30.610921 1078389 main.go:141] libmachine: (ha-344156-m03) Calling .GetSSHUsername
	I0729 18:41:30.611058 1078389 sshutil.go:53] new ssh client: &{IP:192.168.39.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/ha-344156-m03/id_rsa Username:docker}
	I0729 18:41:30.690378 1078389 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 18:41:30.705712 1078389 kubeconfig.go:125] found "ha-344156" server: "https://192.168.39.254:8443"
	I0729 18:41:30.705746 1078389 api_server.go:166] Checking apiserver status ...
	I0729 18:41:30.705785 1078389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:41:30.719934 1078389 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1518/cgroup
	W0729 18:41:30.731143 1078389 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1518/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0729 18:41:30.731200 1078389 ssh_runner.go:195] Run: ls
	I0729 18:41:30.735247 1078389 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0729 18:41:30.741367 1078389 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0729 18:41:30.741387 1078389 status.go:422] ha-344156-m03 apiserver status = Running (err=<nil>)
	I0729 18:41:30.741395 1078389 status.go:257] ha-344156-m03 status: &{Name:ha-344156-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 18:41:30.741410 1078389 status.go:255] checking status of ha-344156-m04 ...
	I0729 18:41:30.741693 1078389 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:41:30.741725 1078389 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:41:30.757448 1078389 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37501
	I0729 18:41:30.757945 1078389 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:41:30.758473 1078389 main.go:141] libmachine: Using API Version  1
	I0729 18:41:30.758507 1078389 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:41:30.758878 1078389 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:41:30.759070 1078389 main.go:141] libmachine: (ha-344156-m04) Calling .GetState
	I0729 18:41:30.760759 1078389 status.go:330] ha-344156-m04 host status = "Running" (err=<nil>)
	I0729 18:41:30.760779 1078389 host.go:66] Checking if "ha-344156-m04" exists ...
	I0729 18:41:30.761182 1078389 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:41:30.761225 1078389 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:41:30.776354 1078389 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35891
	I0729 18:41:30.776722 1078389 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:41:30.777126 1078389 main.go:141] libmachine: Using API Version  1
	I0729 18:41:30.777146 1078389 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:41:30.777402 1078389 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:41:30.777557 1078389 main.go:141] libmachine: (ha-344156-m04) Calling .GetIP
	I0729 18:41:30.780444 1078389 main.go:141] libmachine: (ha-344156-m04) DBG | domain ha-344156-m04 has defined MAC address 52:54:00:8a:8a:b9 in network mk-ha-344156
	I0729 18:41:30.780858 1078389 main.go:141] libmachine: (ha-344156-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:8a:b9", ip: ""} in network mk-ha-344156: {Iface:virbr1 ExpiryTime:2024-07-29 19:37:22 +0000 UTC Type:0 Mac:52:54:00:8a:8a:b9 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:ha-344156-m04 Clientid:01:52:54:00:8a:8a:b9}
	I0729 18:41:30.780886 1078389 main.go:141] libmachine: (ha-344156-m04) DBG | domain ha-344156-m04 has defined IP address 192.168.39.9 and MAC address 52:54:00:8a:8a:b9 in network mk-ha-344156
	I0729 18:41:30.781023 1078389 host.go:66] Checking if "ha-344156-m04" exists ...
	I0729 18:41:30.781350 1078389 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:41:30.781396 1078389 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:41:30.796347 1078389 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40797
	I0729 18:41:30.796722 1078389 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:41:30.797180 1078389 main.go:141] libmachine: Using API Version  1
	I0729 18:41:30.797204 1078389 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:41:30.797516 1078389 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:41:30.797710 1078389 main.go:141] libmachine: (ha-344156-m04) Calling .DriverName
	I0729 18:41:30.797889 1078389 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 18:41:30.797908 1078389 main.go:141] libmachine: (ha-344156-m04) Calling .GetSSHHostname
	I0729 18:41:30.800645 1078389 main.go:141] libmachine: (ha-344156-m04) DBG | domain ha-344156-m04 has defined MAC address 52:54:00:8a:8a:b9 in network mk-ha-344156
	I0729 18:41:30.801090 1078389 main.go:141] libmachine: (ha-344156-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:8a:b9", ip: ""} in network mk-ha-344156: {Iface:virbr1 ExpiryTime:2024-07-29 19:37:22 +0000 UTC Type:0 Mac:52:54:00:8a:8a:b9 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:ha-344156-m04 Clientid:01:52:54:00:8a:8a:b9}
	I0729 18:41:30.801120 1078389 main.go:141] libmachine: (ha-344156-m04) DBG | domain ha-344156-m04 has defined IP address 192.168.39.9 and MAC address 52:54:00:8a:8a:b9 in network mk-ha-344156
	I0729 18:41:30.801248 1078389 main.go:141] libmachine: (ha-344156-m04) Calling .GetSSHPort
	I0729 18:41:30.801450 1078389 main.go:141] libmachine: (ha-344156-m04) Calling .GetSSHKeyPath
	I0729 18:41:30.801632 1078389 main.go:141] libmachine: (ha-344156-m04) Calling .GetSSHUsername
	I0729 18:41:30.801811 1078389 sshutil.go:53] new ssh client: &{IP:192.168.39.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/ha-344156-m04/id_rsa Username:docker}
	I0729 18:41:30.886221 1078389 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 18:41:30.900781 1078389 status.go:257] ha-344156-m04 status: &{Name:ha-344156-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
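
Note: in the block above, ha-344156-m02 is unreachable over SSH ("dial tcp 192.168.39.249:22: connect: no route to host"); sshutil retries once after a short backoff, then the node is reported as Host:Error / Kubelet:Nonexistent / APIServer:Nonexistent. The sketch below approximates that dial-with-bounded-retry behaviour using only the standard library; the attempt count, backoff and timeout values are illustrative, not minikube's.

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    // dialWithRetry mirrors the "dial failure (will retry)" / retry.go pattern
    // visible in the log: try, log, back off, try again, then give up.
    func dialWithRetry(addr string, attempts int, backoff time.Duration) (net.Conn, error) {
    	var lastErr error
    	for i := 0; i < attempts; i++ {
    		conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
    		if err == nil {
    			return conn, nil
    		}
    		lastErr = err
    		fmt.Printf("dial failure (will retry): %v\n", err)
    		time.Sleep(backoff)
    	}
    	return nil, fmt.Errorf("giving up after %d attempts: %w", attempts, lastErr)
    }

    func main() {
    	// 192.168.39.249:22 is the unreachable ha-344156-m02 SSH endpoint from the log.
    	if _, err := dialWithRetry("192.168.39.249:22", 2, 300*time.Millisecond); err != nil {
    		// On failure the status command falls back to Host:Error / Kubelet:Nonexistent.
    		fmt.Println(err)
    	}
    }
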
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-344156 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-344156 status -v=7 --alsologtostderr: exit status 3 (4.535418517s)

                                                
                                                
-- stdout --
	ha-344156
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-344156-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-344156-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-344156-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 18:41:32.681416 1078489 out.go:291] Setting OutFile to fd 1 ...
	I0729 18:41:32.681683 1078489 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 18:41:32.681693 1078489 out.go:304] Setting ErrFile to fd 2...
	I0729 18:41:32.681697 1078489 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 18:41:32.681911 1078489 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19312-1055011/.minikube/bin
	I0729 18:41:32.682129 1078489 out.go:298] Setting JSON to false
	I0729 18:41:32.682161 1078489 mustload.go:65] Loading cluster: ha-344156
	I0729 18:41:32.682289 1078489 notify.go:220] Checking for updates...
	I0729 18:41:32.682630 1078489 config.go:182] Loaded profile config "ha-344156": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 18:41:32.682647 1078489 status.go:255] checking status of ha-344156 ...
	I0729 18:41:32.683125 1078489 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:41:32.683198 1078489 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:41:32.701785 1078489 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38177
	I0729 18:41:32.702168 1078489 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:41:32.702801 1078489 main.go:141] libmachine: Using API Version  1
	I0729 18:41:32.702831 1078489 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:41:32.703279 1078489 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:41:32.703501 1078489 main.go:141] libmachine: (ha-344156) Calling .GetState
	I0729 18:41:32.705075 1078489 status.go:330] ha-344156 host status = "Running" (err=<nil>)
	I0729 18:41:32.705091 1078489 host.go:66] Checking if "ha-344156" exists ...
	I0729 18:41:32.705359 1078489 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:41:32.705392 1078489 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:41:32.720710 1078489 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40521
	I0729 18:41:32.721053 1078489 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:41:32.721497 1078489 main.go:141] libmachine: Using API Version  1
	I0729 18:41:32.721545 1078489 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:41:32.721861 1078489 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:41:32.722076 1078489 main.go:141] libmachine: (ha-344156) Calling .GetIP
	I0729 18:41:32.724589 1078489 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
	I0729 18:41:32.725011 1078489 main.go:141] libmachine: (ha-344156) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:fc:98", ip: ""} in network mk-ha-344156: {Iface:virbr1 ExpiryTime:2024-07-29 19:33:59 +0000 UTC Type:0 Mac:52:54:00:a1:fc:98 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-344156 Clientid:01:52:54:00:a1:fc:98}
	I0729 18:41:32.725043 1078489 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined IP address 192.168.39.225 and MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
	I0729 18:41:32.725139 1078489 host.go:66] Checking if "ha-344156" exists ...
	I0729 18:41:32.725567 1078489 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:41:32.725619 1078489 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:41:32.739967 1078489 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41205
	I0729 18:41:32.740366 1078489 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:41:32.740822 1078489 main.go:141] libmachine: Using API Version  1
	I0729 18:41:32.740840 1078489 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:41:32.741107 1078489 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:41:32.741285 1078489 main.go:141] libmachine: (ha-344156) Calling .DriverName
	I0729 18:41:32.741519 1078489 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 18:41:32.741542 1078489 main.go:141] libmachine: (ha-344156) Calling .GetSSHHostname
	I0729 18:41:32.744243 1078489 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
	I0729 18:41:32.744654 1078489 main.go:141] libmachine: (ha-344156) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:fc:98", ip: ""} in network mk-ha-344156: {Iface:virbr1 ExpiryTime:2024-07-29 19:33:59 +0000 UTC Type:0 Mac:52:54:00:a1:fc:98 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-344156 Clientid:01:52:54:00:a1:fc:98}
	I0729 18:41:32.744688 1078489 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined IP address 192.168.39.225 and MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
	I0729 18:41:32.744802 1078489 main.go:141] libmachine: (ha-344156) Calling .GetSSHPort
	I0729 18:41:32.744988 1078489 main.go:141] libmachine: (ha-344156) Calling .GetSSHKeyPath
	I0729 18:41:32.745155 1078489 main.go:141] libmachine: (ha-344156) Calling .GetSSHUsername
	I0729 18:41:32.745320 1078489 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/ha-344156/id_rsa Username:docker}
	I0729 18:41:32.830996 1078489 ssh_runner.go:195] Run: systemctl --version
	I0729 18:41:32.837452 1078489 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 18:41:32.852554 1078489 kubeconfig.go:125] found "ha-344156" server: "https://192.168.39.254:8443"
	I0729 18:41:32.852587 1078489 api_server.go:166] Checking apiserver status ...
	I0729 18:41:32.852627 1078489 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:41:32.867337 1078489 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1151/cgroup
	W0729 18:41:32.876715 1078489 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1151/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0729 18:41:32.876782 1078489 ssh_runner.go:195] Run: ls
	I0729 18:41:32.881150 1078489 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0729 18:41:32.885310 1078489 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0729 18:41:32.885335 1078489 status.go:422] ha-344156 apiserver status = Running (err=<nil>)
	I0729 18:41:32.885346 1078489 status.go:257] ha-344156 status: &{Name:ha-344156 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 18:41:32.885365 1078489 status.go:255] checking status of ha-344156-m02 ...
	I0729 18:41:32.885641 1078489 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:41:32.885678 1078489 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:41:32.900821 1078489 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36877
	I0729 18:41:32.901239 1078489 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:41:32.901682 1078489 main.go:141] libmachine: Using API Version  1
	I0729 18:41:32.901700 1078489 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:41:32.902026 1078489 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:41:32.902202 1078489 main.go:141] libmachine: (ha-344156-m02) Calling .GetState
	I0729 18:41:32.903745 1078489 status.go:330] ha-344156-m02 host status = "Running" (err=<nil>)
	I0729 18:41:32.903762 1078489 host.go:66] Checking if "ha-344156-m02" exists ...
	I0729 18:41:32.904084 1078489 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:41:32.904118 1078489 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:41:32.918321 1078489 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39089
	I0729 18:41:32.918705 1078489 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:41:32.919144 1078489 main.go:141] libmachine: Using API Version  1
	I0729 18:41:32.919165 1078489 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:41:32.919447 1078489 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:41:32.919618 1078489 main.go:141] libmachine: (ha-344156-m02) Calling .GetIP
	I0729 18:41:32.922293 1078489 main.go:141] libmachine: (ha-344156-m02) DBG | domain ha-344156-m02 has defined MAC address 52:54:00:99:a3:97 in network mk-ha-344156
	I0729 18:41:32.922711 1078489 main.go:141] libmachine: (ha-344156-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:a3:97", ip: ""} in network mk-ha-344156: {Iface:virbr1 ExpiryTime:2024-07-29 19:34:51 +0000 UTC Type:0 Mac:52:54:00:99:a3:97 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-344156-m02 Clientid:01:52:54:00:99:a3:97}
	I0729 18:41:32.922739 1078489 main.go:141] libmachine: (ha-344156-m02) DBG | domain ha-344156-m02 has defined IP address 192.168.39.249 and MAC address 52:54:00:99:a3:97 in network mk-ha-344156
	I0729 18:41:32.922900 1078489 host.go:66] Checking if "ha-344156-m02" exists ...
	I0729 18:41:32.923176 1078489 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:41:32.923208 1078489 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:41:32.938223 1078489 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42635
	I0729 18:41:32.938578 1078489 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:41:32.939032 1078489 main.go:141] libmachine: Using API Version  1
	I0729 18:41:32.939061 1078489 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:41:32.939402 1078489 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:41:32.939590 1078489 main.go:141] libmachine: (ha-344156-m02) Calling .DriverName
	I0729 18:41:32.939774 1078489 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 18:41:32.939794 1078489 main.go:141] libmachine: (ha-344156-m02) Calling .GetSSHHostname
	I0729 18:41:32.942239 1078489 main.go:141] libmachine: (ha-344156-m02) DBG | domain ha-344156-m02 has defined MAC address 52:54:00:99:a3:97 in network mk-ha-344156
	I0729 18:41:32.942686 1078489 main.go:141] libmachine: (ha-344156-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:a3:97", ip: ""} in network mk-ha-344156: {Iface:virbr1 ExpiryTime:2024-07-29 19:34:51 +0000 UTC Type:0 Mac:52:54:00:99:a3:97 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-344156-m02 Clientid:01:52:54:00:99:a3:97}
	I0729 18:41:32.942712 1078489 main.go:141] libmachine: (ha-344156-m02) DBG | domain ha-344156-m02 has defined IP address 192.168.39.249 and MAC address 52:54:00:99:a3:97 in network mk-ha-344156
	I0729 18:41:32.942862 1078489 main.go:141] libmachine: (ha-344156-m02) Calling .GetSSHPort
	I0729 18:41:32.943011 1078489 main.go:141] libmachine: (ha-344156-m02) Calling .GetSSHKeyPath
	I0729 18:41:32.943181 1078489 main.go:141] libmachine: (ha-344156-m02) Calling .GetSSHUsername
	I0729 18:41:32.943309 1078489 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/ha-344156-m02/id_rsa Username:docker}
	W0729 18:41:33.623101 1078489 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.249:22: connect: no route to host
	I0729 18:41:33.623155 1078489 retry.go:31] will retry after 148.469309ms: dial tcp 192.168.39.249:22: connect: no route to host
	W0729 18:41:36.823143 1078489 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.249:22: connect: no route to host
	W0729 18:41:36.823262 1078489 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.249:22: connect: no route to host
	E0729 18:41:36.823282 1078489 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.249:22: connect: no route to host
	I0729 18:41:36.823289 1078489 status.go:257] ha-344156-m02 status: &{Name:ha-344156-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0729 18:41:36.823309 1078489 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.249:22: connect: no route to host
	I0729 18:41:36.823316 1078489 status.go:255] checking status of ha-344156-m03 ...
	I0729 18:41:36.823697 1078489 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:41:36.823757 1078489 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:41:36.839583 1078489 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40985
	I0729 18:41:36.840033 1078489 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:41:36.840594 1078489 main.go:141] libmachine: Using API Version  1
	I0729 18:41:36.840614 1078489 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:41:36.840910 1078489 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:41:36.841076 1078489 main.go:141] libmachine: (ha-344156-m03) Calling .GetState
	I0729 18:41:36.842460 1078489 status.go:330] ha-344156-m03 host status = "Running" (err=<nil>)
	I0729 18:41:36.842481 1078489 host.go:66] Checking if "ha-344156-m03" exists ...
	I0729 18:41:36.842771 1078489 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:41:36.842816 1078489 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:41:36.856993 1078489 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41231
	I0729 18:41:36.857372 1078489 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:41:36.857848 1078489 main.go:141] libmachine: Using API Version  1
	I0729 18:41:36.857869 1078489 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:41:36.858189 1078489 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:41:36.858414 1078489 main.go:141] libmachine: (ha-344156-m03) Calling .GetIP
	I0729 18:41:36.860972 1078489 main.go:141] libmachine: (ha-344156-m03) DBG | domain ha-344156-m03 has defined MAC address 52:54:00:49:5c:73 in network mk-ha-344156
	I0729 18:41:36.861352 1078489 main.go:141] libmachine: (ha-344156-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:5c:73", ip: ""} in network mk-ha-344156: {Iface:virbr1 ExpiryTime:2024-07-29 19:36:00 +0000 UTC Type:0 Mac:52:54:00:49:5c:73 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:ha-344156-m03 Clientid:01:52:54:00:49:5c:73}
	I0729 18:41:36.861374 1078489 main.go:141] libmachine: (ha-344156-m03) DBG | domain ha-344156-m03 has defined IP address 192.168.39.148 and MAC address 52:54:00:49:5c:73 in network mk-ha-344156
	I0729 18:41:36.861510 1078489 host.go:66] Checking if "ha-344156-m03" exists ...
	I0729 18:41:36.861818 1078489 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:41:36.861864 1078489 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:41:36.877353 1078489 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36603
	I0729 18:41:36.877727 1078489 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:41:36.878152 1078489 main.go:141] libmachine: Using API Version  1
	I0729 18:41:36.878172 1078489 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:41:36.878460 1078489 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:41:36.878670 1078489 main.go:141] libmachine: (ha-344156-m03) Calling .DriverName
	I0729 18:41:36.878880 1078489 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 18:41:36.878906 1078489 main.go:141] libmachine: (ha-344156-m03) Calling .GetSSHHostname
	I0729 18:41:36.881520 1078489 main.go:141] libmachine: (ha-344156-m03) DBG | domain ha-344156-m03 has defined MAC address 52:54:00:49:5c:73 in network mk-ha-344156
	I0729 18:41:36.881932 1078489 main.go:141] libmachine: (ha-344156-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:5c:73", ip: ""} in network mk-ha-344156: {Iface:virbr1 ExpiryTime:2024-07-29 19:36:00 +0000 UTC Type:0 Mac:52:54:00:49:5c:73 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:ha-344156-m03 Clientid:01:52:54:00:49:5c:73}
	I0729 18:41:36.881960 1078489 main.go:141] libmachine: (ha-344156-m03) DBG | domain ha-344156-m03 has defined IP address 192.168.39.148 and MAC address 52:54:00:49:5c:73 in network mk-ha-344156
	I0729 18:41:36.882124 1078489 main.go:141] libmachine: (ha-344156-m03) Calling .GetSSHPort
	I0729 18:41:36.882302 1078489 main.go:141] libmachine: (ha-344156-m03) Calling .GetSSHKeyPath
	I0729 18:41:36.882441 1078489 main.go:141] libmachine: (ha-344156-m03) Calling .GetSSHUsername
	I0729 18:41:36.882603 1078489 sshutil.go:53] new ssh client: &{IP:192.168.39.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/ha-344156-m03/id_rsa Username:docker}
	I0729 18:41:36.966911 1078489 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 18:41:36.982715 1078489 kubeconfig.go:125] found "ha-344156" server: "https://192.168.39.254:8443"
	I0729 18:41:36.982743 1078489 api_server.go:166] Checking apiserver status ...
	I0729 18:41:36.982791 1078489 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:41:36.997560 1078489 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1518/cgroup
	W0729 18:41:37.009014 1078489 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1518/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0729 18:41:37.009068 1078489 ssh_runner.go:195] Run: ls
	I0729 18:41:37.013099 1078489 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0729 18:41:37.017452 1078489 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0729 18:41:37.017475 1078489 status.go:422] ha-344156-m03 apiserver status = Running (err=<nil>)
	I0729 18:41:37.017484 1078489 status.go:257] ha-344156-m03 status: &{Name:ha-344156-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 18:41:37.017498 1078489 status.go:255] checking status of ha-344156-m04 ...
	I0729 18:41:37.017782 1078489 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:41:37.017820 1078489 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:41:37.033277 1078489 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38035
	I0729 18:41:37.033610 1078489 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:41:37.034040 1078489 main.go:141] libmachine: Using API Version  1
	I0729 18:41:37.034071 1078489 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:41:37.034400 1078489 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:41:37.034607 1078489 main.go:141] libmachine: (ha-344156-m04) Calling .GetState
	I0729 18:41:37.036112 1078489 status.go:330] ha-344156-m04 host status = "Running" (err=<nil>)
	I0729 18:41:37.036131 1078489 host.go:66] Checking if "ha-344156-m04" exists ...
	I0729 18:41:37.036431 1078489 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:41:37.036473 1078489 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:41:37.051151 1078489 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34751
	I0729 18:41:37.051520 1078489 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:41:37.051981 1078489 main.go:141] libmachine: Using API Version  1
	I0729 18:41:37.052001 1078489 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:41:37.052284 1078489 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:41:37.052456 1078489 main.go:141] libmachine: (ha-344156-m04) Calling .GetIP
	I0729 18:41:37.055078 1078489 main.go:141] libmachine: (ha-344156-m04) DBG | domain ha-344156-m04 has defined MAC address 52:54:00:8a:8a:b9 in network mk-ha-344156
	I0729 18:41:37.055438 1078489 main.go:141] libmachine: (ha-344156-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:8a:b9", ip: ""} in network mk-ha-344156: {Iface:virbr1 ExpiryTime:2024-07-29 19:37:22 +0000 UTC Type:0 Mac:52:54:00:8a:8a:b9 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:ha-344156-m04 Clientid:01:52:54:00:8a:8a:b9}
	I0729 18:41:37.055467 1078489 main.go:141] libmachine: (ha-344156-m04) DBG | domain ha-344156-m04 has defined IP address 192.168.39.9 and MAC address 52:54:00:8a:8a:b9 in network mk-ha-344156
	I0729 18:41:37.055560 1078489 host.go:66] Checking if "ha-344156-m04" exists ...
	I0729 18:41:37.055839 1078489 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:41:37.055892 1078489 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:41:37.069762 1078489 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41661
	I0729 18:41:37.070107 1078489 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:41:37.070550 1078489 main.go:141] libmachine: Using API Version  1
	I0729 18:41:37.070575 1078489 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:41:37.070879 1078489 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:41:37.071065 1078489 main.go:141] libmachine: (ha-344156-m04) Calling .DriverName
	I0729 18:41:37.071257 1078489 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 18:41:37.071278 1078489 main.go:141] libmachine: (ha-344156-m04) Calling .GetSSHHostname
	I0729 18:41:37.073688 1078489 main.go:141] libmachine: (ha-344156-m04) DBG | domain ha-344156-m04 has defined MAC address 52:54:00:8a:8a:b9 in network mk-ha-344156
	I0729 18:41:37.074023 1078489 main.go:141] libmachine: (ha-344156-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:8a:b9", ip: ""} in network mk-ha-344156: {Iface:virbr1 ExpiryTime:2024-07-29 19:37:22 +0000 UTC Type:0 Mac:52:54:00:8a:8a:b9 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:ha-344156-m04 Clientid:01:52:54:00:8a:8a:b9}
	I0729 18:41:37.074050 1078489 main.go:141] libmachine: (ha-344156-m04) DBG | domain ha-344156-m04 has defined IP address 192.168.39.9 and MAC address 52:54:00:8a:8a:b9 in network mk-ha-344156
	I0729 18:41:37.074222 1078489 main.go:141] libmachine: (ha-344156-m04) Calling .GetSSHPort
	I0729 18:41:37.074388 1078489 main.go:141] libmachine: (ha-344156-m04) Calling .GetSSHKeyPath
	I0729 18:41:37.074499 1078489 main.go:141] libmachine: (ha-344156-m04) Calling .GetSSHUsername
	I0729 18:41:37.074656 1078489 sshutil.go:53] new ssh client: &{IP:192.168.39.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/ha-344156-m04/id_rsa Username:docker}
	I0729 18:41:37.157963 1078489 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 18:41:37.172174 1078489 status.go:257] ha-344156-m04 status: &{Name:ha-344156-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
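
Note: for the worker node (ha-344156-m04) the status command only runs two probes over SSH: "df -h /var | awk 'NR==2{print $5}'" for storage capacity and "sudo systemctl is-active --quiet service kubelet" for the kubelet. The sketch below reproduces those probes manually with golang.org/x/crypto/ssh; it is not minikube's ssh_runner implementation, and the address, user and key path are simply copied from the "new ssh client" entries above.

    package main

    import (
    	"fmt"
    	"os"
    	"time"

    	"golang.org/x/crypto/ssh"
    )

    // runOverSSH executes a single command on the node and returns its combined output.
    func runOverSSH(addr, user, keyPath, cmd string) (string, error) {
    	key, err := os.ReadFile(keyPath)
    	if err != nil {
    		return "", err
    	}
    	signer, err := ssh.ParsePrivateKey(key)
    	if err != nil {
    		return "", err
    	}
    	cfg := &ssh.ClientConfig{
    		User:            user,
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for throwaway test VMs only
    		Timeout:         5 * time.Second,
    	}
    	client, err := ssh.Dial("tcp", addr, cfg)
    	if err != nil {
    		return "", err
    	}
    	defer client.Close()
    	session, err := client.NewSession()
    	if err != nil {
    		return "", err
    	}
    	defer session.Close()
    	out, err := session.CombinedOutput(cmd)
    	return string(out), err
    }

    func main() {
    	// Values taken from the ha-344156-m04 entries in the log above.
    	addr, user := "192.168.39.9:22", "docker"
    	key := "/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/ha-344156-m04/id_rsa"

    	if _, err := runOverSSH(addr, user, key, "sudo systemctl is-active --quiet service kubelet"); err != nil {
    		fmt.Println("kubelet not active:", err)
    	} else {
    		fmt.Println("kubelet: Running")
    	}
    	if out, err := runOverSSH(addr, user, key, "df -h /var | awk 'NR==2{print $5}'"); err == nil {
    		fmt.Print("/var usage: ", out)
    	}
    }
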
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-344156 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-344156 status -v=7 --alsologtostderr: exit status 3 (3.704016895s)

                                                
                                                
-- stdout --
	ha-344156
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-344156-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-344156-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-344156-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 18:41:43.047891 1078605 out.go:291] Setting OutFile to fd 1 ...
	I0729 18:41:43.048162 1078605 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 18:41:43.048171 1078605 out.go:304] Setting ErrFile to fd 2...
	I0729 18:41:43.048175 1078605 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 18:41:43.048345 1078605 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19312-1055011/.minikube/bin
	I0729 18:41:43.048493 1078605 out.go:298] Setting JSON to false
	I0729 18:41:43.048519 1078605 mustload.go:65] Loading cluster: ha-344156
	I0729 18:41:43.048628 1078605 notify.go:220] Checking for updates...
	I0729 18:41:43.048915 1078605 config.go:182] Loaded profile config "ha-344156": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 18:41:43.048934 1078605 status.go:255] checking status of ha-344156 ...
	I0729 18:41:43.049427 1078605 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:41:43.049494 1078605 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:41:43.067491 1078605 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34615
	I0729 18:41:43.067916 1078605 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:41:43.068486 1078605 main.go:141] libmachine: Using API Version  1
	I0729 18:41:43.068511 1078605 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:41:43.068892 1078605 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:41:43.069092 1078605 main.go:141] libmachine: (ha-344156) Calling .GetState
	I0729 18:41:43.070918 1078605 status.go:330] ha-344156 host status = "Running" (err=<nil>)
	I0729 18:41:43.070938 1078605 host.go:66] Checking if "ha-344156" exists ...
	I0729 18:41:43.071293 1078605 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:41:43.071351 1078605 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:41:43.086929 1078605 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35005
	I0729 18:41:43.087376 1078605 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:41:43.087836 1078605 main.go:141] libmachine: Using API Version  1
	I0729 18:41:43.087863 1078605 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:41:43.088234 1078605 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:41:43.088439 1078605 main.go:141] libmachine: (ha-344156) Calling .GetIP
	I0729 18:41:43.091201 1078605 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
	I0729 18:41:43.091632 1078605 main.go:141] libmachine: (ha-344156) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:fc:98", ip: ""} in network mk-ha-344156: {Iface:virbr1 ExpiryTime:2024-07-29 19:33:59 +0000 UTC Type:0 Mac:52:54:00:a1:fc:98 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-344156 Clientid:01:52:54:00:a1:fc:98}
	I0729 18:41:43.091660 1078605 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined IP address 192.168.39.225 and MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
	I0729 18:41:43.091794 1078605 host.go:66] Checking if "ha-344156" exists ...
	I0729 18:41:43.092073 1078605 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:41:43.092114 1078605 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:41:43.106292 1078605 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33705
	I0729 18:41:43.106646 1078605 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:41:43.107134 1078605 main.go:141] libmachine: Using API Version  1
	I0729 18:41:43.107154 1078605 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:41:43.107484 1078605 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:41:43.107684 1078605 main.go:141] libmachine: (ha-344156) Calling .DriverName
	I0729 18:41:43.107877 1078605 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 18:41:43.107913 1078605 main.go:141] libmachine: (ha-344156) Calling .GetSSHHostname
	I0729 18:41:43.110815 1078605 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
	I0729 18:41:43.111224 1078605 main.go:141] libmachine: (ha-344156) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:fc:98", ip: ""} in network mk-ha-344156: {Iface:virbr1 ExpiryTime:2024-07-29 19:33:59 +0000 UTC Type:0 Mac:52:54:00:a1:fc:98 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-344156 Clientid:01:52:54:00:a1:fc:98}
	I0729 18:41:43.111250 1078605 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined IP address 192.168.39.225 and MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
	I0729 18:41:43.111407 1078605 main.go:141] libmachine: (ha-344156) Calling .GetSSHPort
	I0729 18:41:43.111582 1078605 main.go:141] libmachine: (ha-344156) Calling .GetSSHKeyPath
	I0729 18:41:43.111725 1078605 main.go:141] libmachine: (ha-344156) Calling .GetSSHUsername
	I0729 18:41:43.111835 1078605 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/ha-344156/id_rsa Username:docker}
	I0729 18:41:43.195784 1078605 ssh_runner.go:195] Run: systemctl --version
	I0729 18:41:43.202160 1078605 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 18:41:43.218514 1078605 kubeconfig.go:125] found "ha-344156" server: "https://192.168.39.254:8443"
	I0729 18:41:43.218542 1078605 api_server.go:166] Checking apiserver status ...
	I0729 18:41:43.218573 1078605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:41:43.231737 1078605 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1151/cgroup
	W0729 18:41:43.240727 1078605 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1151/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0729 18:41:43.240804 1078605 ssh_runner.go:195] Run: ls
	I0729 18:41:43.244811 1078605 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0729 18:41:43.248977 1078605 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0729 18:41:43.248997 1078605 status.go:422] ha-344156 apiserver status = Running (err=<nil>)
	I0729 18:41:43.249007 1078605 status.go:257] ha-344156 status: &{Name:ha-344156 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 18:41:43.249023 1078605 status.go:255] checking status of ha-344156-m02 ...
	I0729 18:41:43.249306 1078605 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:41:43.249370 1078605 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:41:43.264315 1078605 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41589
	I0729 18:41:43.264682 1078605 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:41:43.265179 1078605 main.go:141] libmachine: Using API Version  1
	I0729 18:41:43.265201 1078605 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:41:43.265544 1078605 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:41:43.265754 1078605 main.go:141] libmachine: (ha-344156-m02) Calling .GetState
	I0729 18:41:43.267276 1078605 status.go:330] ha-344156-m02 host status = "Running" (err=<nil>)
	I0729 18:41:43.267295 1078605 host.go:66] Checking if "ha-344156-m02" exists ...
	I0729 18:41:43.267568 1078605 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:41:43.267605 1078605 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:41:43.282049 1078605 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35681
	I0729 18:41:43.282433 1078605 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:41:43.282914 1078605 main.go:141] libmachine: Using API Version  1
	I0729 18:41:43.282952 1078605 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:41:43.283310 1078605 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:41:43.283512 1078605 main.go:141] libmachine: (ha-344156-m02) Calling .GetIP
	I0729 18:41:43.286112 1078605 main.go:141] libmachine: (ha-344156-m02) DBG | domain ha-344156-m02 has defined MAC address 52:54:00:99:a3:97 in network mk-ha-344156
	I0729 18:41:43.286517 1078605 main.go:141] libmachine: (ha-344156-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:a3:97", ip: ""} in network mk-ha-344156: {Iface:virbr1 ExpiryTime:2024-07-29 19:34:51 +0000 UTC Type:0 Mac:52:54:00:99:a3:97 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-344156-m02 Clientid:01:52:54:00:99:a3:97}
	I0729 18:41:43.286548 1078605 main.go:141] libmachine: (ha-344156-m02) DBG | domain ha-344156-m02 has defined IP address 192.168.39.249 and MAC address 52:54:00:99:a3:97 in network mk-ha-344156
	I0729 18:41:43.286678 1078605 host.go:66] Checking if "ha-344156-m02" exists ...
	I0729 18:41:43.287017 1078605 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:41:43.287053 1078605 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:41:43.301793 1078605 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43051
	I0729 18:41:43.302174 1078605 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:41:43.302681 1078605 main.go:141] libmachine: Using API Version  1
	I0729 18:41:43.302701 1078605 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:41:43.303016 1078605 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:41:43.303208 1078605 main.go:141] libmachine: (ha-344156-m02) Calling .DriverName
	I0729 18:41:43.303388 1078605 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 18:41:43.303408 1078605 main.go:141] libmachine: (ha-344156-m02) Calling .GetSSHHostname
	I0729 18:41:43.306449 1078605 main.go:141] libmachine: (ha-344156-m02) DBG | domain ha-344156-m02 has defined MAC address 52:54:00:99:a3:97 in network mk-ha-344156
	I0729 18:41:43.307023 1078605 main.go:141] libmachine: (ha-344156-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:a3:97", ip: ""} in network mk-ha-344156: {Iface:virbr1 ExpiryTime:2024-07-29 19:34:51 +0000 UTC Type:0 Mac:52:54:00:99:a3:97 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-344156-m02 Clientid:01:52:54:00:99:a3:97}
	I0729 18:41:43.307047 1078605 main.go:141] libmachine: (ha-344156-m02) DBG | domain ha-344156-m02 has defined IP address 192.168.39.249 and MAC address 52:54:00:99:a3:97 in network mk-ha-344156
	I0729 18:41:43.307209 1078605 main.go:141] libmachine: (ha-344156-m02) Calling .GetSSHPort
	I0729 18:41:43.307376 1078605 main.go:141] libmachine: (ha-344156-m02) Calling .GetSSHKeyPath
	I0729 18:41:43.307544 1078605 main.go:141] libmachine: (ha-344156-m02) Calling .GetSSHUsername
	I0729 18:41:43.307696 1078605 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/ha-344156-m02/id_rsa Username:docker}
	W0729 18:41:46.359055 1078605 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.249:22: connect: no route to host
	W0729 18:41:46.359161 1078605 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.249:22: connect: no route to host
	E0729 18:41:46.359178 1078605 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.249:22: connect: no route to host
	I0729 18:41:46.359185 1078605 status.go:257] ha-344156-m02 status: &{Name:ha-344156-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0729 18:41:46.359209 1078605 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.249:22: connect: no route to host
	I0729 18:41:46.359217 1078605 status.go:255] checking status of ha-344156-m03 ...
	I0729 18:41:46.359522 1078605 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:41:46.359565 1078605 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:41:46.375959 1078605 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40873
	I0729 18:41:46.376378 1078605 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:41:46.376869 1078605 main.go:141] libmachine: Using API Version  1
	I0729 18:41:46.376892 1078605 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:41:46.377203 1078605 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:41:46.377466 1078605 main.go:141] libmachine: (ha-344156-m03) Calling .GetState
	I0729 18:41:46.379324 1078605 status.go:330] ha-344156-m03 host status = "Running" (err=<nil>)
	I0729 18:41:46.379344 1078605 host.go:66] Checking if "ha-344156-m03" exists ...
	I0729 18:41:46.379688 1078605 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:41:46.379724 1078605 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:41:46.395657 1078605 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36963
	I0729 18:41:46.396141 1078605 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:41:46.396559 1078605 main.go:141] libmachine: Using API Version  1
	I0729 18:41:46.396589 1078605 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:41:46.396939 1078605 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:41:46.397111 1078605 main.go:141] libmachine: (ha-344156-m03) Calling .GetIP
	I0729 18:41:46.400193 1078605 main.go:141] libmachine: (ha-344156-m03) DBG | domain ha-344156-m03 has defined MAC address 52:54:00:49:5c:73 in network mk-ha-344156
	I0729 18:41:46.400618 1078605 main.go:141] libmachine: (ha-344156-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:5c:73", ip: ""} in network mk-ha-344156: {Iface:virbr1 ExpiryTime:2024-07-29 19:36:00 +0000 UTC Type:0 Mac:52:54:00:49:5c:73 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:ha-344156-m03 Clientid:01:52:54:00:49:5c:73}
	I0729 18:41:46.400655 1078605 main.go:141] libmachine: (ha-344156-m03) DBG | domain ha-344156-m03 has defined IP address 192.168.39.148 and MAC address 52:54:00:49:5c:73 in network mk-ha-344156
	I0729 18:41:46.400793 1078605 host.go:66] Checking if "ha-344156-m03" exists ...
	I0729 18:41:46.401140 1078605 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:41:46.401183 1078605 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:41:46.416330 1078605 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37219
	I0729 18:41:46.416743 1078605 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:41:46.417181 1078605 main.go:141] libmachine: Using API Version  1
	I0729 18:41:46.417204 1078605 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:41:46.417526 1078605 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:41:46.417717 1078605 main.go:141] libmachine: (ha-344156-m03) Calling .DriverName
	I0729 18:41:46.417939 1078605 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 18:41:46.417960 1078605 main.go:141] libmachine: (ha-344156-m03) Calling .GetSSHHostname
	I0729 18:41:46.420636 1078605 main.go:141] libmachine: (ha-344156-m03) DBG | domain ha-344156-m03 has defined MAC address 52:54:00:49:5c:73 in network mk-ha-344156
	I0729 18:41:46.421090 1078605 main.go:141] libmachine: (ha-344156-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:5c:73", ip: ""} in network mk-ha-344156: {Iface:virbr1 ExpiryTime:2024-07-29 19:36:00 +0000 UTC Type:0 Mac:52:54:00:49:5c:73 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:ha-344156-m03 Clientid:01:52:54:00:49:5c:73}
	I0729 18:41:46.421115 1078605 main.go:141] libmachine: (ha-344156-m03) DBG | domain ha-344156-m03 has defined IP address 192.168.39.148 and MAC address 52:54:00:49:5c:73 in network mk-ha-344156
	I0729 18:41:46.421298 1078605 main.go:141] libmachine: (ha-344156-m03) Calling .GetSSHPort
	I0729 18:41:46.421473 1078605 main.go:141] libmachine: (ha-344156-m03) Calling .GetSSHKeyPath
	I0729 18:41:46.421615 1078605 main.go:141] libmachine: (ha-344156-m03) Calling .GetSSHUsername
	I0729 18:41:46.421743 1078605 sshutil.go:53] new ssh client: &{IP:192.168.39.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/ha-344156-m03/id_rsa Username:docker}
	I0729 18:41:46.502084 1078605 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 18:41:46.516877 1078605 kubeconfig.go:125] found "ha-344156" server: "https://192.168.39.254:8443"
	I0729 18:41:46.516917 1078605 api_server.go:166] Checking apiserver status ...
	I0729 18:41:46.516966 1078605 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:41:46.532256 1078605 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1518/cgroup
	W0729 18:41:46.541561 1078605 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1518/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0729 18:41:46.541614 1078605 ssh_runner.go:195] Run: ls
	I0729 18:41:46.546934 1078605 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0729 18:41:46.550969 1078605 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0729 18:41:46.550992 1078605 status.go:422] ha-344156-m03 apiserver status = Running (err=<nil>)
	I0729 18:41:46.551000 1078605 status.go:257] ha-344156-m03 status: &{Name:ha-344156-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 18:41:46.551016 1078605 status.go:255] checking status of ha-344156-m04 ...
	I0729 18:41:46.551347 1078605 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:41:46.551385 1078605 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:41:46.566881 1078605 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34715
	I0729 18:41:46.567304 1078605 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:41:46.567758 1078605 main.go:141] libmachine: Using API Version  1
	I0729 18:41:46.567784 1078605 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:41:46.568084 1078605 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:41:46.568261 1078605 main.go:141] libmachine: (ha-344156-m04) Calling .GetState
	I0729 18:41:46.569867 1078605 status.go:330] ha-344156-m04 host status = "Running" (err=<nil>)
	I0729 18:41:46.569882 1078605 host.go:66] Checking if "ha-344156-m04" exists ...
	I0729 18:41:46.570162 1078605 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:41:46.570193 1078605 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:41:46.584242 1078605 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40359
	I0729 18:41:46.584693 1078605 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:41:46.585195 1078605 main.go:141] libmachine: Using API Version  1
	I0729 18:41:46.585217 1078605 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:41:46.585524 1078605 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:41:46.585721 1078605 main.go:141] libmachine: (ha-344156-m04) Calling .GetIP
	I0729 18:41:46.588597 1078605 main.go:141] libmachine: (ha-344156-m04) DBG | domain ha-344156-m04 has defined MAC address 52:54:00:8a:8a:b9 in network mk-ha-344156
	I0729 18:41:46.589015 1078605 main.go:141] libmachine: (ha-344156-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:8a:b9", ip: ""} in network mk-ha-344156: {Iface:virbr1 ExpiryTime:2024-07-29 19:37:22 +0000 UTC Type:0 Mac:52:54:00:8a:8a:b9 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:ha-344156-m04 Clientid:01:52:54:00:8a:8a:b9}
	I0729 18:41:46.589043 1078605 main.go:141] libmachine: (ha-344156-m04) DBG | domain ha-344156-m04 has defined IP address 192.168.39.9 and MAC address 52:54:00:8a:8a:b9 in network mk-ha-344156
	I0729 18:41:46.589167 1078605 host.go:66] Checking if "ha-344156-m04" exists ...
	I0729 18:41:46.589519 1078605 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:41:46.589565 1078605 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:41:46.604578 1078605 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45651
	I0729 18:41:46.605020 1078605 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:41:46.605479 1078605 main.go:141] libmachine: Using API Version  1
	I0729 18:41:46.605497 1078605 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:41:46.605811 1078605 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:41:46.605976 1078605 main.go:141] libmachine: (ha-344156-m04) Calling .DriverName
	I0729 18:41:46.606178 1078605 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 18:41:46.606205 1078605 main.go:141] libmachine: (ha-344156-m04) Calling .GetSSHHostname
	I0729 18:41:46.608812 1078605 main.go:141] libmachine: (ha-344156-m04) DBG | domain ha-344156-m04 has defined MAC address 52:54:00:8a:8a:b9 in network mk-ha-344156
	I0729 18:41:46.609168 1078605 main.go:141] libmachine: (ha-344156-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:8a:b9", ip: ""} in network mk-ha-344156: {Iface:virbr1 ExpiryTime:2024-07-29 19:37:22 +0000 UTC Type:0 Mac:52:54:00:8a:8a:b9 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:ha-344156-m04 Clientid:01:52:54:00:8a:8a:b9}
	I0729 18:41:46.609188 1078605 main.go:141] libmachine: (ha-344156-m04) DBG | domain ha-344156-m04 has defined IP address 192.168.39.9 and MAC address 52:54:00:8a:8a:b9 in network mk-ha-344156
	I0729 18:41:46.609321 1078605 main.go:141] libmachine: (ha-344156-m04) Calling .GetSSHPort
	I0729 18:41:46.609473 1078605 main.go:141] libmachine: (ha-344156-m04) Calling .GetSSHKeyPath
	I0729 18:41:46.609623 1078605 main.go:141] libmachine: (ha-344156-m04) Calling .GetSSHUsername
	I0729 18:41:46.609781 1078605 sshutil.go:53] new ssh client: &{IP:192.168.39.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/ha-344156-m04/id_rsa Username:docker}
	I0729 18:41:46.694393 1078605 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 18:41:46.707805 1078605 status.go:257] ha-344156-m04 status: &{Name:ha-344156-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-344156 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-344156 status -v=7 --alsologtostderr: exit status 7 (632.480463ms)

                                                
                                                
-- stdout --
	ha-344156
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-344156-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-344156-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-344156-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 18:41:57.537065 1078762 out.go:291] Setting OutFile to fd 1 ...
	I0729 18:41:57.537249 1078762 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 18:41:57.537270 1078762 out.go:304] Setting ErrFile to fd 2...
	I0729 18:41:57.537276 1078762 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 18:41:57.537697 1078762 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19312-1055011/.minikube/bin
	I0729 18:41:57.537909 1078762 out.go:298] Setting JSON to false
	I0729 18:41:57.537940 1078762 mustload.go:65] Loading cluster: ha-344156
	I0729 18:41:57.538052 1078762 notify.go:220] Checking for updates...
	I0729 18:41:57.538288 1078762 config.go:182] Loaded profile config "ha-344156": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 18:41:57.538301 1078762 status.go:255] checking status of ha-344156 ...
	I0729 18:41:57.538676 1078762 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:41:57.538731 1078762 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:41:57.554137 1078762 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40623
	I0729 18:41:57.554633 1078762 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:41:57.555292 1078762 main.go:141] libmachine: Using API Version  1
	I0729 18:41:57.555317 1078762 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:41:57.555673 1078762 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:41:57.555876 1078762 main.go:141] libmachine: (ha-344156) Calling .GetState
	I0729 18:41:57.557522 1078762 status.go:330] ha-344156 host status = "Running" (err=<nil>)
	I0729 18:41:57.557542 1078762 host.go:66] Checking if "ha-344156" exists ...
	I0729 18:41:57.557883 1078762 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:41:57.557920 1078762 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:41:57.572811 1078762 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39317
	I0729 18:41:57.573200 1078762 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:41:57.573678 1078762 main.go:141] libmachine: Using API Version  1
	I0729 18:41:57.573705 1078762 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:41:57.574044 1078762 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:41:57.574236 1078762 main.go:141] libmachine: (ha-344156) Calling .GetIP
	I0729 18:41:57.576764 1078762 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
	I0729 18:41:57.577221 1078762 main.go:141] libmachine: (ha-344156) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:fc:98", ip: ""} in network mk-ha-344156: {Iface:virbr1 ExpiryTime:2024-07-29 19:33:59 +0000 UTC Type:0 Mac:52:54:00:a1:fc:98 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-344156 Clientid:01:52:54:00:a1:fc:98}
	I0729 18:41:57.577253 1078762 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined IP address 192.168.39.225 and MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
	I0729 18:41:57.577353 1078762 host.go:66] Checking if "ha-344156" exists ...
	I0729 18:41:57.577665 1078762 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:41:57.577710 1078762 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:41:57.592497 1078762 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35245
	I0729 18:41:57.592969 1078762 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:41:57.593441 1078762 main.go:141] libmachine: Using API Version  1
	I0729 18:41:57.593471 1078762 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:41:57.594043 1078762 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:41:57.594466 1078762 main.go:141] libmachine: (ha-344156) Calling .DriverName
	I0729 18:41:57.594716 1078762 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 18:41:57.594755 1078762 main.go:141] libmachine: (ha-344156) Calling .GetSSHHostname
	I0729 18:41:57.597371 1078762 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
	I0729 18:41:57.597776 1078762 main.go:141] libmachine: (ha-344156) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:fc:98", ip: ""} in network mk-ha-344156: {Iface:virbr1 ExpiryTime:2024-07-29 19:33:59 +0000 UTC Type:0 Mac:52:54:00:a1:fc:98 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-344156 Clientid:01:52:54:00:a1:fc:98}
	I0729 18:41:57.597814 1078762 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined IP address 192.168.39.225 and MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
	I0729 18:41:57.597944 1078762 main.go:141] libmachine: (ha-344156) Calling .GetSSHPort
	I0729 18:41:57.598124 1078762 main.go:141] libmachine: (ha-344156) Calling .GetSSHKeyPath
	I0729 18:41:57.598271 1078762 main.go:141] libmachine: (ha-344156) Calling .GetSSHUsername
	I0729 18:41:57.598406 1078762 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/ha-344156/id_rsa Username:docker}
	I0729 18:41:57.682958 1078762 ssh_runner.go:195] Run: systemctl --version
	I0729 18:41:57.690508 1078762 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 18:41:57.706192 1078762 kubeconfig.go:125] found "ha-344156" server: "https://192.168.39.254:8443"
	I0729 18:41:57.706228 1078762 api_server.go:166] Checking apiserver status ...
	I0729 18:41:57.706264 1078762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:41:57.721224 1078762 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1151/cgroup
	W0729 18:41:57.730739 1078762 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1151/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0729 18:41:57.730782 1078762 ssh_runner.go:195] Run: ls
	I0729 18:41:57.735289 1078762 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0729 18:41:57.739902 1078762 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0729 18:41:57.739923 1078762 status.go:422] ha-344156 apiserver status = Running (err=<nil>)
	I0729 18:41:57.739933 1078762 status.go:257] ha-344156 status: &{Name:ha-344156 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 18:41:57.739950 1078762 status.go:255] checking status of ha-344156-m02 ...
	I0729 18:41:57.740243 1078762 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:41:57.740280 1078762 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:41:57.755627 1078762 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32915
	I0729 18:41:57.756007 1078762 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:41:57.756483 1078762 main.go:141] libmachine: Using API Version  1
	I0729 18:41:57.756506 1078762 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:41:57.756855 1078762 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:41:57.757048 1078762 main.go:141] libmachine: (ha-344156-m02) Calling .GetState
	I0729 18:41:57.758473 1078762 status.go:330] ha-344156-m02 host status = "Stopped" (err=<nil>)
	I0729 18:41:57.758485 1078762 status.go:343] host is not running, skipping remaining checks
	I0729 18:41:57.758491 1078762 status.go:257] ha-344156-m02 status: &{Name:ha-344156-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 18:41:57.758508 1078762 status.go:255] checking status of ha-344156-m03 ...
	I0729 18:41:57.758866 1078762 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:41:57.758917 1078762 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:41:57.773814 1078762 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36213
	I0729 18:41:57.774193 1078762 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:41:57.774637 1078762 main.go:141] libmachine: Using API Version  1
	I0729 18:41:57.774658 1078762 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:41:57.775011 1078762 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:41:57.775239 1078762 main.go:141] libmachine: (ha-344156-m03) Calling .GetState
	I0729 18:41:57.776804 1078762 status.go:330] ha-344156-m03 host status = "Running" (err=<nil>)
	I0729 18:41:57.776823 1078762 host.go:66] Checking if "ha-344156-m03" exists ...
	I0729 18:41:57.777247 1078762 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:41:57.777289 1078762 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:41:57.792177 1078762 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42639
	I0729 18:41:57.792527 1078762 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:41:57.792983 1078762 main.go:141] libmachine: Using API Version  1
	I0729 18:41:57.793009 1078762 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:41:57.793346 1078762 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:41:57.793538 1078762 main.go:141] libmachine: (ha-344156-m03) Calling .GetIP
	I0729 18:41:57.796133 1078762 main.go:141] libmachine: (ha-344156-m03) DBG | domain ha-344156-m03 has defined MAC address 52:54:00:49:5c:73 in network mk-ha-344156
	I0729 18:41:57.796529 1078762 main.go:141] libmachine: (ha-344156-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:5c:73", ip: ""} in network mk-ha-344156: {Iface:virbr1 ExpiryTime:2024-07-29 19:36:00 +0000 UTC Type:0 Mac:52:54:00:49:5c:73 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:ha-344156-m03 Clientid:01:52:54:00:49:5c:73}
	I0729 18:41:57.796562 1078762 main.go:141] libmachine: (ha-344156-m03) DBG | domain ha-344156-m03 has defined IP address 192.168.39.148 and MAC address 52:54:00:49:5c:73 in network mk-ha-344156
	I0729 18:41:57.796719 1078762 host.go:66] Checking if "ha-344156-m03" exists ...
	I0729 18:41:57.797118 1078762 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:41:57.797154 1078762 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:41:57.811584 1078762 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43399
	I0729 18:41:57.811955 1078762 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:41:57.812437 1078762 main.go:141] libmachine: Using API Version  1
	I0729 18:41:57.812463 1078762 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:41:57.812779 1078762 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:41:57.812985 1078762 main.go:141] libmachine: (ha-344156-m03) Calling .DriverName
	I0729 18:41:57.813171 1078762 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 18:41:57.813197 1078762 main.go:141] libmachine: (ha-344156-m03) Calling .GetSSHHostname
	I0729 18:41:57.815738 1078762 main.go:141] libmachine: (ha-344156-m03) DBG | domain ha-344156-m03 has defined MAC address 52:54:00:49:5c:73 in network mk-ha-344156
	I0729 18:41:57.816106 1078762 main.go:141] libmachine: (ha-344156-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:5c:73", ip: ""} in network mk-ha-344156: {Iface:virbr1 ExpiryTime:2024-07-29 19:36:00 +0000 UTC Type:0 Mac:52:54:00:49:5c:73 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:ha-344156-m03 Clientid:01:52:54:00:49:5c:73}
	I0729 18:41:57.816135 1078762 main.go:141] libmachine: (ha-344156-m03) DBG | domain ha-344156-m03 has defined IP address 192.168.39.148 and MAC address 52:54:00:49:5c:73 in network mk-ha-344156
	I0729 18:41:57.816271 1078762 main.go:141] libmachine: (ha-344156-m03) Calling .GetSSHPort
	I0729 18:41:57.816453 1078762 main.go:141] libmachine: (ha-344156-m03) Calling .GetSSHKeyPath
	I0729 18:41:57.816608 1078762 main.go:141] libmachine: (ha-344156-m03) Calling .GetSSHUsername
	I0729 18:41:57.816755 1078762 sshutil.go:53] new ssh client: &{IP:192.168.39.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/ha-344156-m03/id_rsa Username:docker}
	I0729 18:41:57.902345 1078762 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 18:41:57.927378 1078762 kubeconfig.go:125] found "ha-344156" server: "https://192.168.39.254:8443"
	I0729 18:41:57.927413 1078762 api_server.go:166] Checking apiserver status ...
	I0729 18:41:57.927458 1078762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:41:57.942177 1078762 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1518/cgroup
	W0729 18:41:57.952371 1078762 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1518/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0729 18:41:57.952416 1078762 ssh_runner.go:195] Run: ls
	I0729 18:41:57.956424 1078762 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0729 18:41:57.960723 1078762 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0729 18:41:57.960743 1078762 status.go:422] ha-344156-m03 apiserver status = Running (err=<nil>)
	I0729 18:41:57.960752 1078762 status.go:257] ha-344156-m03 status: &{Name:ha-344156-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 18:41:57.960767 1078762 status.go:255] checking status of ha-344156-m04 ...
	I0729 18:41:57.961083 1078762 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:41:57.961128 1078762 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:41:57.978298 1078762 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44221
	I0729 18:41:57.978734 1078762 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:41:57.979219 1078762 main.go:141] libmachine: Using API Version  1
	I0729 18:41:57.979249 1078762 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:41:57.979592 1078762 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:41:57.979794 1078762 main.go:141] libmachine: (ha-344156-m04) Calling .GetState
	I0729 18:41:57.981240 1078762 status.go:330] ha-344156-m04 host status = "Running" (err=<nil>)
	I0729 18:41:57.981258 1078762 host.go:66] Checking if "ha-344156-m04" exists ...
	I0729 18:41:57.981585 1078762 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:41:57.981624 1078762 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:41:57.995929 1078762 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40855
	I0729 18:41:57.996354 1078762 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:41:57.996802 1078762 main.go:141] libmachine: Using API Version  1
	I0729 18:41:57.996825 1078762 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:41:57.997131 1078762 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:41:57.997304 1078762 main.go:141] libmachine: (ha-344156-m04) Calling .GetIP
	I0729 18:41:58.000205 1078762 main.go:141] libmachine: (ha-344156-m04) DBG | domain ha-344156-m04 has defined MAC address 52:54:00:8a:8a:b9 in network mk-ha-344156
	I0729 18:41:58.000695 1078762 main.go:141] libmachine: (ha-344156-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:8a:b9", ip: ""} in network mk-ha-344156: {Iface:virbr1 ExpiryTime:2024-07-29 19:37:22 +0000 UTC Type:0 Mac:52:54:00:8a:8a:b9 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:ha-344156-m04 Clientid:01:52:54:00:8a:8a:b9}
	I0729 18:41:58.000729 1078762 main.go:141] libmachine: (ha-344156-m04) DBG | domain ha-344156-m04 has defined IP address 192.168.39.9 and MAC address 52:54:00:8a:8a:b9 in network mk-ha-344156
	I0729 18:41:58.000820 1078762 host.go:66] Checking if "ha-344156-m04" exists ...
	I0729 18:41:58.001107 1078762 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:41:58.001146 1078762 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:41:58.015149 1078762 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43283
	I0729 18:41:58.015577 1078762 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:41:58.016040 1078762 main.go:141] libmachine: Using API Version  1
	I0729 18:41:58.016078 1078762 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:41:58.016371 1078762 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:41:58.016547 1078762 main.go:141] libmachine: (ha-344156-m04) Calling .DriverName
	I0729 18:41:58.016722 1078762 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 18:41:58.016740 1078762 main.go:141] libmachine: (ha-344156-m04) Calling .GetSSHHostname
	I0729 18:41:58.019208 1078762 main.go:141] libmachine: (ha-344156-m04) DBG | domain ha-344156-m04 has defined MAC address 52:54:00:8a:8a:b9 in network mk-ha-344156
	I0729 18:41:58.019634 1078762 main.go:141] libmachine: (ha-344156-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:8a:b9", ip: ""} in network mk-ha-344156: {Iface:virbr1 ExpiryTime:2024-07-29 19:37:22 +0000 UTC Type:0 Mac:52:54:00:8a:8a:b9 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:ha-344156-m04 Clientid:01:52:54:00:8a:8a:b9}
	I0729 18:41:58.019653 1078762 main.go:141] libmachine: (ha-344156-m04) DBG | domain ha-344156-m04 has defined IP address 192.168.39.9 and MAC address 52:54:00:8a:8a:b9 in network mk-ha-344156
	I0729 18:41:58.019795 1078762 main.go:141] libmachine: (ha-344156-m04) Calling .GetSSHPort
	I0729 18:41:58.019961 1078762 main.go:141] libmachine: (ha-344156-m04) Calling .GetSSHKeyPath
	I0729 18:41:58.020090 1078762 main.go:141] libmachine: (ha-344156-m04) Calling .GetSSHUsername
	I0729 18:41:58.020253 1078762 sshutil.go:53] new ssh client: &{IP:192.168.39.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/ha-344156-m04/id_rsa Username:docker}
	I0729 18:41:58.107363 1078762 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 18:41:58.122740 1078762 status.go:257] ha-344156-m04 status: &{Name:ha-344156-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:432: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-344156 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-344156 -n ha-344156
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-344156 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-344156 logs -n 25: (1.339812427s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                      Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-344156 ssh -n                                                                | ha-344156 | jenkins | v1.33.1 | 29 Jul 24 18:38 UTC | 29 Jul 24 18:38 UTC |
	|         | ha-344156-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-344156 cp ha-344156-m03:/home/docker/cp-test.txt                             | ha-344156 | jenkins | v1.33.1 | 29 Jul 24 18:38 UTC | 29 Jul 24 18:38 UTC |
	|         | ha-344156:/home/docker/cp-test_ha-344156-m03_ha-344156.txt                      |           |         |         |                     |                     |
	| ssh     | ha-344156 ssh -n                                                                | ha-344156 | jenkins | v1.33.1 | 29 Jul 24 18:38 UTC | 29 Jul 24 18:38 UTC |
	|         | ha-344156-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-344156 ssh -n ha-344156 sudo cat                                             | ha-344156 | jenkins | v1.33.1 | 29 Jul 24 18:38 UTC | 29 Jul 24 18:38 UTC |
	|         | /home/docker/cp-test_ha-344156-m03_ha-344156.txt                                |           |         |         |                     |                     |
	| cp      | ha-344156 cp ha-344156-m03:/home/docker/cp-test.txt                             | ha-344156 | jenkins | v1.33.1 | 29 Jul 24 18:38 UTC | 29 Jul 24 18:38 UTC |
	|         | ha-344156-m02:/home/docker/cp-test_ha-344156-m03_ha-344156-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-344156 ssh -n                                                                | ha-344156 | jenkins | v1.33.1 | 29 Jul 24 18:38 UTC | 29 Jul 24 18:38 UTC |
	|         | ha-344156-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-344156 ssh -n ha-344156-m02 sudo cat                                         | ha-344156 | jenkins | v1.33.1 | 29 Jul 24 18:38 UTC | 29 Jul 24 18:38 UTC |
	|         | /home/docker/cp-test_ha-344156-m03_ha-344156-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-344156 cp ha-344156-m03:/home/docker/cp-test.txt                             | ha-344156 | jenkins | v1.33.1 | 29 Jul 24 18:38 UTC | 29 Jul 24 18:38 UTC |
	|         | ha-344156-m04:/home/docker/cp-test_ha-344156-m03_ha-344156-m04.txt              |           |         |         |                     |                     |
	| ssh     | ha-344156 ssh -n                                                                | ha-344156 | jenkins | v1.33.1 | 29 Jul 24 18:38 UTC | 29 Jul 24 18:38 UTC |
	|         | ha-344156-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-344156 ssh -n ha-344156-m04 sudo cat                                         | ha-344156 | jenkins | v1.33.1 | 29 Jul 24 18:38 UTC | 29 Jul 24 18:38 UTC |
	|         | /home/docker/cp-test_ha-344156-m03_ha-344156-m04.txt                            |           |         |         |                     |                     |
	| cp      | ha-344156 cp testdata/cp-test.txt                                               | ha-344156 | jenkins | v1.33.1 | 29 Jul 24 18:38 UTC | 29 Jul 24 18:38 UTC |
	|         | ha-344156-m04:/home/docker/cp-test.txt                                          |           |         |         |                     |                     |
	| ssh     | ha-344156 ssh -n                                                                | ha-344156 | jenkins | v1.33.1 | 29 Jul 24 18:38 UTC | 29 Jul 24 18:38 UTC |
	|         | ha-344156-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-344156 cp ha-344156-m04:/home/docker/cp-test.txt                             | ha-344156 | jenkins | v1.33.1 | 29 Jul 24 18:38 UTC | 29 Jul 24 18:38 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile289939917/001/cp-test_ha-344156-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-344156 ssh -n                                                                | ha-344156 | jenkins | v1.33.1 | 29 Jul 24 18:38 UTC | 29 Jul 24 18:38 UTC |
	|         | ha-344156-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-344156 cp ha-344156-m04:/home/docker/cp-test.txt                             | ha-344156 | jenkins | v1.33.1 | 29 Jul 24 18:38 UTC | 29 Jul 24 18:38 UTC |
	|         | ha-344156:/home/docker/cp-test_ha-344156-m04_ha-344156.txt                      |           |         |         |                     |                     |
	| ssh     | ha-344156 ssh -n                                                                | ha-344156 | jenkins | v1.33.1 | 29 Jul 24 18:38 UTC | 29 Jul 24 18:38 UTC |
	|         | ha-344156-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-344156 ssh -n ha-344156 sudo cat                                             | ha-344156 | jenkins | v1.33.1 | 29 Jul 24 18:38 UTC | 29 Jul 24 18:38 UTC |
	|         | /home/docker/cp-test_ha-344156-m04_ha-344156.txt                                |           |         |         |                     |                     |
	| cp      | ha-344156 cp ha-344156-m04:/home/docker/cp-test.txt                             | ha-344156 | jenkins | v1.33.1 | 29 Jul 24 18:38 UTC | 29 Jul 24 18:38 UTC |
	|         | ha-344156-m02:/home/docker/cp-test_ha-344156-m04_ha-344156-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-344156 ssh -n                                                                | ha-344156 | jenkins | v1.33.1 | 29 Jul 24 18:38 UTC | 29 Jul 24 18:38 UTC |
	|         | ha-344156-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-344156 ssh -n ha-344156-m02 sudo cat                                         | ha-344156 | jenkins | v1.33.1 | 29 Jul 24 18:38 UTC | 29 Jul 24 18:38 UTC |
	|         | /home/docker/cp-test_ha-344156-m04_ha-344156-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-344156 cp ha-344156-m04:/home/docker/cp-test.txt                             | ha-344156 | jenkins | v1.33.1 | 29 Jul 24 18:38 UTC | 29 Jul 24 18:38 UTC |
	|         | ha-344156-m03:/home/docker/cp-test_ha-344156-m04_ha-344156-m03.txt              |           |         |         |                     |                     |
	| ssh     | ha-344156 ssh -n                                                                | ha-344156 | jenkins | v1.33.1 | 29 Jul 24 18:38 UTC | 29 Jul 24 18:38 UTC |
	|         | ha-344156-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-344156 ssh -n ha-344156-m03 sudo cat                                         | ha-344156 | jenkins | v1.33.1 | 29 Jul 24 18:38 UTC | 29 Jul 24 18:38 UTC |
	|         | /home/docker/cp-test_ha-344156-m04_ha-344156-m03.txt                            |           |         |         |                     |                     |
	| node    | ha-344156 node stop m02 -v=7                                                    | ha-344156 | jenkins | v1.33.1 | 29 Jul 24 18:38 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | ha-344156 node start m02 -v=7                                                   | ha-344156 | jenkins | v1.33.1 | 29 Jul 24 18:41 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 18:33:44
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 18:33:44.956754 1073226 out.go:291] Setting OutFile to fd 1 ...
	I0729 18:33:44.956879 1073226 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 18:33:44.956890 1073226 out.go:304] Setting ErrFile to fd 2...
	I0729 18:33:44.956895 1073226 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 18:33:44.957089 1073226 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19312-1055011/.minikube/bin
	I0729 18:33:44.957689 1073226 out.go:298] Setting JSON to false
	I0729 18:33:44.958601 1073226 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":8177,"bootTime":1722269848,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 18:33:44.958664 1073226 start.go:139] virtualization: kvm guest
	I0729 18:33:44.962858 1073226 out.go:177] * [ha-344156] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 18:33:44.964191 1073226 out.go:177]   - MINIKUBE_LOCATION=19312
	I0729 18:33:44.964274 1073226 notify.go:220] Checking for updates...
	I0729 18:33:44.966653 1073226 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 18:33:44.967966 1073226 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19312-1055011/kubeconfig
	I0729 18:33:44.969178 1073226 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19312-1055011/.minikube
	I0729 18:33:44.970424 1073226 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 18:33:44.971709 1073226 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 18:33:44.973126 1073226 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 18:33:45.008222 1073226 out.go:177] * Using the kvm2 driver based on user configuration
	I0729 18:33:45.009410 1073226 start.go:297] selected driver: kvm2
	I0729 18:33:45.009421 1073226 start.go:901] validating driver "kvm2" against <nil>
	I0729 18:33:45.009431 1073226 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 18:33:45.010317 1073226 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 18:33:45.010430 1073226 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19312-1055011/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 18:33:45.025556 1073226 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0729 18:33:45.025607 1073226 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 18:33:45.025866 1073226 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 18:33:45.025894 1073226 cni.go:84] Creating CNI manager for ""
	I0729 18:33:45.025901 1073226 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0729 18:33:45.025909 1073226 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0729 18:33:45.025962 1073226 start.go:340] cluster config:
	{Name:ha-344156 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-344156 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 18:33:45.026050 1073226 iso.go:125] acquiring lock: {Name:mk0af61c0fec1fd47930e548d03010a532c687b7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 18:33:45.027789 1073226 out.go:177] * Starting "ha-344156" primary control-plane node in "ha-344156" cluster
	I0729 18:33:45.028925 1073226 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 18:33:45.028954 1073226 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0729 18:33:45.028962 1073226 cache.go:56] Caching tarball of preloaded images
	I0729 18:33:45.029048 1073226 preload.go:172] Found /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 18:33:45.029058 1073226 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0729 18:33:45.029409 1073226 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156/config.json ...
	I0729 18:33:45.029433 1073226 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156/config.json: {Name:mkf6d6544dd7aac4d55600f702d47db47308cd22 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 18:33:45.029574 1073226 start.go:360] acquireMachinesLock for ha-344156: {Name:mk0d8d947666df844b5fc2c0e0eebbfed69b4140 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 18:33:45.029602 1073226 start.go:364] duration metric: took 14.977µs to acquireMachinesLock for "ha-344156"
	I0729 18:33:45.029619 1073226 start.go:93] Provisioning new machine with config: &{Name:ha-344156 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.30.3 ClusterName:ha-344156 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 18:33:45.029673 1073226 start.go:125] createHost starting for "" (driver="kvm2")
	I0729 18:33:45.031240 1073226 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 18:33:45.031436 1073226 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:33:45.031491 1073226 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:33:45.046106 1073226 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45011
	I0729 18:33:45.046612 1073226 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:33:45.047145 1073226 main.go:141] libmachine: Using API Version  1
	I0729 18:33:45.047186 1073226 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:33:45.047512 1073226 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:33:45.047660 1073226 main.go:141] libmachine: (ha-344156) Calling .GetMachineName
	I0729 18:33:45.047814 1073226 main.go:141] libmachine: (ha-344156) Calling .DriverName
	I0729 18:33:45.047948 1073226 start.go:159] libmachine.API.Create for "ha-344156" (driver="kvm2")
	I0729 18:33:45.047977 1073226 client.go:168] LocalClient.Create starting
	I0729 18:33:45.048010 1073226 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem
	I0729 18:33:45.048044 1073226 main.go:141] libmachine: Decoding PEM data...
	I0729 18:33:45.048059 1073226 main.go:141] libmachine: Parsing certificate...
	I0729 18:33:45.048139 1073226 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/cert.pem
	I0729 18:33:45.048161 1073226 main.go:141] libmachine: Decoding PEM data...
	I0729 18:33:45.048171 1073226 main.go:141] libmachine: Parsing certificate...
	I0729 18:33:45.048193 1073226 main.go:141] libmachine: Running pre-create checks...
	I0729 18:33:45.048206 1073226 main.go:141] libmachine: (ha-344156) Calling .PreCreateCheck
	I0729 18:33:45.048544 1073226 main.go:141] libmachine: (ha-344156) Calling .GetConfigRaw
	I0729 18:33:45.048905 1073226 main.go:141] libmachine: Creating machine...
	I0729 18:33:45.048918 1073226 main.go:141] libmachine: (ha-344156) Calling .Create
	I0729 18:33:45.049032 1073226 main.go:141] libmachine: (ha-344156) Creating KVM machine...
	I0729 18:33:45.050208 1073226 main.go:141] libmachine: (ha-344156) DBG | found existing default KVM network
	I0729 18:33:45.050974 1073226 main.go:141] libmachine: (ha-344156) DBG | I0729 18:33:45.050809 1073248 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00010f1f0}
	I0729 18:33:45.051004 1073226 main.go:141] libmachine: (ha-344156) DBG | created network xml: 
	I0729 18:33:45.051022 1073226 main.go:141] libmachine: (ha-344156) DBG | <network>
	I0729 18:33:45.051032 1073226 main.go:141] libmachine: (ha-344156) DBG |   <name>mk-ha-344156</name>
	I0729 18:33:45.051049 1073226 main.go:141] libmachine: (ha-344156) DBG |   <dns enable='no'/>
	I0729 18:33:45.051057 1073226 main.go:141] libmachine: (ha-344156) DBG |   
	I0729 18:33:45.051062 1073226 main.go:141] libmachine: (ha-344156) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0729 18:33:45.051070 1073226 main.go:141] libmachine: (ha-344156) DBG |     <dhcp>
	I0729 18:33:45.051082 1073226 main.go:141] libmachine: (ha-344156) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0729 18:33:45.051090 1073226 main.go:141] libmachine: (ha-344156) DBG |     </dhcp>
	I0729 18:33:45.051095 1073226 main.go:141] libmachine: (ha-344156) DBG |   </ip>
	I0729 18:33:45.051103 1073226 main.go:141] libmachine: (ha-344156) DBG |   
	I0729 18:33:45.051113 1073226 main.go:141] libmachine: (ha-344156) DBG | </network>
	I0729 18:33:45.051125 1073226 main.go:141] libmachine: (ha-344156) DBG | 
	I0729 18:33:45.055990 1073226 main.go:141] libmachine: (ha-344156) DBG | trying to create private KVM network mk-ha-344156 192.168.39.0/24...
	I0729 18:33:45.121585 1073226 main.go:141] libmachine: (ha-344156) DBG | private KVM network mk-ha-344156 192.168.39.0/24 created
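Editor's note: the DBG lines above show the network XML the driver generated and the confirmation that the private libvirt network mk-ha-344156 (192.168.39.0/24) was created. As a rough illustration of the same operation outside minikube, here is a minimal Go sketch that defines and starts such a network by shelling out to virsh. It is not the kvm2 driver's actual code; it assumes virsh is installed and that the caller may talk to qemu:///system.

// netdefine.go - hedged sketch: define and start a libvirt NAT network from
// an XML definition similar to the one logged above. Illustrative only.
package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
)

const networkXML = `<network>
  <name>mk-ha-344156</name>
  <dns enable='no'/>
  <ip address='192.168.39.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.39.2' end='192.168.39.253'/>
    </dhcp>
  </ip>
</network>`

func main() {
	// virsh net-define takes a file path, so write the XML to a temp file first.
	f, err := os.CreateTemp("", "mk-net-*.xml")
	if err != nil {
		log.Fatal(err)
	}
	defer os.Remove(f.Name())
	if _, err := f.WriteString(networkXML); err != nil {
		log.Fatal(err)
	}
	f.Close()

	// Define, start, and autostart the network; each step is one virsh call.
	for _, args := range [][]string{
		{"net-define", f.Name()},
		{"net-start", "mk-ha-344156"},
		{"net-autostart", "mk-ha-344156"},
	} {
		cmd := exec.Command("virsh", append([]string{"--connect", "qemu:///system"}, args...)...)
		out, err := cmd.CombinedOutput()
		if err != nil {
			log.Fatalf("virsh %v: %v\n%s", args, err, out)
		}
		fmt.Printf("virsh %v: ok\n", args)
	}
}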
	I0729 18:33:45.121632 1073226 main.go:141] libmachine: (ha-344156) DBG | I0729 18:33:45.121561 1073248 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19312-1055011/.minikube
	I0729 18:33:45.121644 1073226 main.go:141] libmachine: (ha-344156) Setting up store path in /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/ha-344156 ...
	I0729 18:33:45.121665 1073226 main.go:141] libmachine: (ha-344156) Building disk image from file:///home/jenkins/minikube-integration/19312-1055011/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso
	I0729 18:33:45.121741 1073226 main.go:141] libmachine: (ha-344156) Downloading /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19312-1055011/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso...
	I0729 18:33:45.388910 1073226 main.go:141] libmachine: (ha-344156) DBG | I0729 18:33:45.388775 1073248 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/ha-344156/id_rsa...
	I0729 18:33:45.441787 1073226 main.go:141] libmachine: (ha-344156) DBG | I0729 18:33:45.441618 1073248 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/ha-344156/ha-344156.rawdisk...
	I0729 18:33:45.441820 1073226 main.go:141] libmachine: (ha-344156) DBG | Writing magic tar header
	I0729 18:33:45.441868 1073226 main.go:141] libmachine: (ha-344156) DBG | Writing SSH key tar header
	I0729 18:33:45.441930 1073226 main.go:141] libmachine: (ha-344156) DBG | I0729 18:33:45.441754 1073248 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/ha-344156 ...
	I0729 18:33:45.441949 1073226 main.go:141] libmachine: (ha-344156) Setting executable bit set on /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/ha-344156 (perms=drwx------)
	I0729 18:33:45.441967 1073226 main.go:141] libmachine: (ha-344156) Setting executable bit set on /home/jenkins/minikube-integration/19312-1055011/.minikube/machines (perms=drwxr-xr-x)
	I0729 18:33:45.441986 1073226 main.go:141] libmachine: (ha-344156) Setting executable bit set on /home/jenkins/minikube-integration/19312-1055011/.minikube (perms=drwxr-xr-x)
	I0729 18:33:45.442015 1073226 main.go:141] libmachine: (ha-344156) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/ha-344156
	I0729 18:33:45.442034 1073226 main.go:141] libmachine: (ha-344156) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19312-1055011/.minikube/machines
	I0729 18:33:45.442043 1073226 main.go:141] libmachine: (ha-344156) Setting executable bit set on /home/jenkins/minikube-integration/19312-1055011 (perms=drwxrwxr-x)
	I0729 18:33:45.442053 1073226 main.go:141] libmachine: (ha-344156) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0729 18:33:45.442059 1073226 main.go:141] libmachine: (ha-344156) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0729 18:33:45.442068 1073226 main.go:141] libmachine: (ha-344156) Creating domain...
	I0729 18:33:45.442078 1073226 main.go:141] libmachine: (ha-344156) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19312-1055011/.minikube
	I0729 18:33:45.442085 1073226 main.go:141] libmachine: (ha-344156) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19312-1055011
	I0729 18:33:45.442090 1073226 main.go:141] libmachine: (ha-344156) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0729 18:33:45.442099 1073226 main.go:141] libmachine: (ha-344156) DBG | Checking permissions on dir: /home/jenkins
	I0729 18:33:45.442104 1073226 main.go:141] libmachine: (ha-344156) DBG | Checking permissions on dir: /home
	I0729 18:33:45.442111 1073226 main.go:141] libmachine: (ha-344156) DBG | Skipping /home - not owner
	I0729 18:33:45.443198 1073226 main.go:141] libmachine: (ha-344156) define libvirt domain using xml: 
	I0729 18:33:45.443220 1073226 main.go:141] libmachine: (ha-344156) <domain type='kvm'>
	I0729 18:33:45.443228 1073226 main.go:141] libmachine: (ha-344156)   <name>ha-344156</name>
	I0729 18:33:45.443233 1073226 main.go:141] libmachine: (ha-344156)   <memory unit='MiB'>2200</memory>
	I0729 18:33:45.443246 1073226 main.go:141] libmachine: (ha-344156)   <vcpu>2</vcpu>
	I0729 18:33:45.443261 1073226 main.go:141] libmachine: (ha-344156)   <features>
	I0729 18:33:45.443272 1073226 main.go:141] libmachine: (ha-344156)     <acpi/>
	I0729 18:33:45.443278 1073226 main.go:141] libmachine: (ha-344156)     <apic/>
	I0729 18:33:45.443287 1073226 main.go:141] libmachine: (ha-344156)     <pae/>
	I0729 18:33:45.443298 1073226 main.go:141] libmachine: (ha-344156)     
	I0729 18:33:45.443307 1073226 main.go:141] libmachine: (ha-344156)   </features>
	I0729 18:33:45.443318 1073226 main.go:141] libmachine: (ha-344156)   <cpu mode='host-passthrough'>
	I0729 18:33:45.443326 1073226 main.go:141] libmachine: (ha-344156)   
	I0729 18:33:45.443333 1073226 main.go:141] libmachine: (ha-344156)   </cpu>
	I0729 18:33:45.443338 1073226 main.go:141] libmachine: (ha-344156)   <os>
	I0729 18:33:45.443343 1073226 main.go:141] libmachine: (ha-344156)     <type>hvm</type>
	I0729 18:33:45.443348 1073226 main.go:141] libmachine: (ha-344156)     <boot dev='cdrom'/>
	I0729 18:33:45.443355 1073226 main.go:141] libmachine: (ha-344156)     <boot dev='hd'/>
	I0729 18:33:45.443360 1073226 main.go:141] libmachine: (ha-344156)     <bootmenu enable='no'/>
	I0729 18:33:45.443372 1073226 main.go:141] libmachine: (ha-344156)   </os>
	I0729 18:33:45.443449 1073226 main.go:141] libmachine: (ha-344156)   <devices>
	I0729 18:33:45.443474 1073226 main.go:141] libmachine: (ha-344156)     <disk type='file' device='cdrom'>
	I0729 18:33:45.443490 1073226 main.go:141] libmachine: (ha-344156)       <source file='/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/ha-344156/boot2docker.iso'/>
	I0729 18:33:45.443503 1073226 main.go:141] libmachine: (ha-344156)       <target dev='hdc' bus='scsi'/>
	I0729 18:33:45.443513 1073226 main.go:141] libmachine: (ha-344156)       <readonly/>
	I0729 18:33:45.443524 1073226 main.go:141] libmachine: (ha-344156)     </disk>
	I0729 18:33:45.443538 1073226 main.go:141] libmachine: (ha-344156)     <disk type='file' device='disk'>
	I0729 18:33:45.443555 1073226 main.go:141] libmachine: (ha-344156)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0729 18:33:45.443572 1073226 main.go:141] libmachine: (ha-344156)       <source file='/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/ha-344156/ha-344156.rawdisk'/>
	I0729 18:33:45.443583 1073226 main.go:141] libmachine: (ha-344156)       <target dev='hda' bus='virtio'/>
	I0729 18:33:45.443592 1073226 main.go:141] libmachine: (ha-344156)     </disk>
	I0729 18:33:45.443604 1073226 main.go:141] libmachine: (ha-344156)     <interface type='network'>
	I0729 18:33:45.443616 1073226 main.go:141] libmachine: (ha-344156)       <source network='mk-ha-344156'/>
	I0729 18:33:45.443631 1073226 main.go:141] libmachine: (ha-344156)       <model type='virtio'/>
	I0729 18:33:45.443643 1073226 main.go:141] libmachine: (ha-344156)     </interface>
	I0729 18:33:45.443653 1073226 main.go:141] libmachine: (ha-344156)     <interface type='network'>
	I0729 18:33:45.443666 1073226 main.go:141] libmachine: (ha-344156)       <source network='default'/>
	I0729 18:33:45.443674 1073226 main.go:141] libmachine: (ha-344156)       <model type='virtio'/>
	I0729 18:33:45.443686 1073226 main.go:141] libmachine: (ha-344156)     </interface>
	I0729 18:33:45.443696 1073226 main.go:141] libmachine: (ha-344156)     <serial type='pty'>
	I0729 18:33:45.443709 1073226 main.go:141] libmachine: (ha-344156)       <target port='0'/>
	I0729 18:33:45.443722 1073226 main.go:141] libmachine: (ha-344156)     </serial>
	I0729 18:33:45.443733 1073226 main.go:141] libmachine: (ha-344156)     <console type='pty'>
	I0729 18:33:45.443750 1073226 main.go:141] libmachine: (ha-344156)       <target type='serial' port='0'/>
	I0729 18:33:45.443759 1073226 main.go:141] libmachine: (ha-344156)     </console>
	I0729 18:33:45.443768 1073226 main.go:141] libmachine: (ha-344156)     <rng model='virtio'>
	I0729 18:33:45.443784 1073226 main.go:141] libmachine: (ha-344156)       <backend model='random'>/dev/random</backend>
	I0729 18:33:45.443795 1073226 main.go:141] libmachine: (ha-344156)     </rng>
	I0729 18:33:45.443805 1073226 main.go:141] libmachine: (ha-344156)     
	I0729 18:33:45.443816 1073226 main.go:141] libmachine: (ha-344156)     
	I0729 18:33:45.443825 1073226 main.go:141] libmachine: (ha-344156)   </devices>
	I0729 18:33:45.443834 1073226 main.go:141] libmachine: (ha-344156) </domain>
	I0729 18:33:45.443844 1073226 main.go:141] libmachine: (ha-344156) 
	I0729 18:33:45.448111 1073226 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined MAC address 52:54:00:bd:f4:5c in network default
	I0729 18:33:45.448675 1073226 main.go:141] libmachine: (ha-344156) Ensuring networks are active...
	I0729 18:33:45.448699 1073226 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
	I0729 18:33:45.449441 1073226 main.go:141] libmachine: (ha-344156) Ensuring network default is active
	I0729 18:33:45.449740 1073226 main.go:141] libmachine: (ha-344156) Ensuring network mk-ha-344156 is active
	I0729 18:33:45.450303 1073226 main.go:141] libmachine: (ha-344156) Getting domain xml...
	I0729 18:33:45.451048 1073226 main.go:141] libmachine: (ha-344156) Creating domain...
	I0729 18:33:46.632599 1073226 main.go:141] libmachine: (ha-344156) Waiting to get IP...
	I0729 18:33:46.633501 1073226 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
	I0729 18:33:46.633943 1073226 main.go:141] libmachine: (ha-344156) DBG | unable to find current IP address of domain ha-344156 in network mk-ha-344156
	I0729 18:33:46.633985 1073226 main.go:141] libmachine: (ha-344156) DBG | I0729 18:33:46.633927 1073248 retry.go:31] will retry after 264.543199ms: waiting for machine to come up
	I0729 18:33:46.900432 1073226 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
	I0729 18:33:46.900963 1073226 main.go:141] libmachine: (ha-344156) DBG | unable to find current IP address of domain ha-344156 in network mk-ha-344156
	I0729 18:33:46.900993 1073226 main.go:141] libmachine: (ha-344156) DBG | I0729 18:33:46.900913 1073248 retry.go:31] will retry after 383.267628ms: waiting for machine to come up
	I0729 18:33:47.285434 1073226 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
	I0729 18:33:47.285878 1073226 main.go:141] libmachine: (ha-344156) DBG | unable to find current IP address of domain ha-344156 in network mk-ha-344156
	I0729 18:33:47.285906 1073226 main.go:141] libmachine: (ha-344156) DBG | I0729 18:33:47.285831 1073248 retry.go:31] will retry after 486.285941ms: waiting for machine to come up
	I0729 18:33:47.773287 1073226 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
	I0729 18:33:47.773679 1073226 main.go:141] libmachine: (ha-344156) DBG | unable to find current IP address of domain ha-344156 in network mk-ha-344156
	I0729 18:33:47.773735 1073226 main.go:141] libmachine: (ha-344156) DBG | I0729 18:33:47.773661 1073248 retry.go:31] will retry after 584.973906ms: waiting for machine to come up
	I0729 18:33:48.360407 1073226 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
	I0729 18:33:48.360792 1073226 main.go:141] libmachine: (ha-344156) DBG | unable to find current IP address of domain ha-344156 in network mk-ha-344156
	I0729 18:33:48.360815 1073226 main.go:141] libmachine: (ha-344156) DBG | I0729 18:33:48.360754 1073248 retry.go:31] will retry after 756.105052ms: waiting for machine to come up
	I0729 18:33:49.118682 1073226 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
	I0729 18:33:49.119088 1073226 main.go:141] libmachine: (ha-344156) DBG | unable to find current IP address of domain ha-344156 in network mk-ha-344156
	I0729 18:33:49.119115 1073226 main.go:141] libmachine: (ha-344156) DBG | I0729 18:33:49.119052 1073248 retry.go:31] will retry after 664.094058ms: waiting for machine to come up
	I0729 18:33:49.784908 1073226 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
	I0729 18:33:49.785276 1073226 main.go:141] libmachine: (ha-344156) DBG | unable to find current IP address of domain ha-344156 in network mk-ha-344156
	I0729 18:33:49.785308 1073226 main.go:141] libmachine: (ha-344156) DBG | I0729 18:33:49.785225 1073248 retry.go:31] will retry after 904.653048ms: waiting for machine to come up
	I0729 18:33:50.691837 1073226 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
	I0729 18:33:50.692222 1073226 main.go:141] libmachine: (ha-344156) DBG | unable to find current IP address of domain ha-344156 in network mk-ha-344156
	I0729 18:33:50.692253 1073226 main.go:141] libmachine: (ha-344156) DBG | I0729 18:33:50.692175 1073248 retry.go:31] will retry after 1.274490726s: waiting for machine to come up
	I0729 18:33:51.968520 1073226 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
	I0729 18:33:51.968880 1073226 main.go:141] libmachine: (ha-344156) DBG | unable to find current IP address of domain ha-344156 in network mk-ha-344156
	I0729 18:33:51.968921 1073226 main.go:141] libmachine: (ha-344156) DBG | I0729 18:33:51.968858 1073248 retry.go:31] will retry after 1.625342059s: waiting for machine to come up
	I0729 18:33:53.596639 1073226 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
	I0729 18:33:53.596976 1073226 main.go:141] libmachine: (ha-344156) DBG | unable to find current IP address of domain ha-344156 in network mk-ha-344156
	I0729 18:33:53.597006 1073226 main.go:141] libmachine: (ha-344156) DBG | I0729 18:33:53.596958 1073248 retry.go:31] will retry after 1.621283615s: waiting for machine to come up
	I0729 18:33:55.219632 1073226 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
	I0729 18:33:55.220126 1073226 main.go:141] libmachine: (ha-344156) DBG | unable to find current IP address of domain ha-344156 in network mk-ha-344156
	I0729 18:33:55.220156 1073226 main.go:141] libmachine: (ha-344156) DBG | I0729 18:33:55.220035 1073248 retry.go:31] will retry after 2.839272433s: waiting for machine to come up
	I0729 18:33:58.062920 1073226 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
	I0729 18:33:58.063299 1073226 main.go:141] libmachine: (ha-344156) DBG | unable to find current IP address of domain ha-344156 in network mk-ha-344156
	I0729 18:33:58.063350 1073226 main.go:141] libmachine: (ha-344156) DBG | I0729 18:33:58.063254 1073248 retry.go:31] will retry after 3.17863945s: waiting for machine to come up
	I0729 18:34:01.244084 1073226 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
	I0729 18:34:01.244458 1073226 main.go:141] libmachine: (ha-344156) DBG | unable to find current IP address of domain ha-344156 in network mk-ha-344156
	I0729 18:34:01.244503 1073226 main.go:141] libmachine: (ha-344156) DBG | I0729 18:34:01.244448 1073248 retry.go:31] will retry after 3.552012439s: waiting for machine to come up
	I0729 18:34:04.800153 1073226 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
	I0729 18:34:04.800447 1073226 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has current primary IP address 192.168.39.225 and MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
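Editor's note: the retries above are the driver's wait-for-IP loop. It keeps checking whether the domain has picked up a DHCP lease in mk-ha-344156 and, via retry.go, sleeps a little longer after each miss (264ms, 383ms, ... up to several seconds) until the lease for 52:54:00:a1:fc:98 appears. Below is a minimal sketch of that pattern, assuming a hypothetical lookupLeaseIP helper (for example, something that parses virsh net-dhcp-leases output); it is illustrative only and not minikube's retry package.

// waitip.go - hedged sketch of the "waiting for machine to come up" pattern:
// poll for a DHCP lease, sleeping a little longer each time, until a deadline.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

var errNoLease = errors.New("no lease yet")

// lookupLeaseIP is a placeholder; a real implementation might parse the
// output of `virsh net-dhcp-leases mk-ha-344156` and match on the MAC.
func lookupLeaseIP(mac string) (string, error) {
	return "", errNoLease
}

func waitForIP(mac string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 250 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupLeaseIP(mac); err == nil {
			return ip, nil
		}
		// Grow the delay with some jitter, roughly like the retries logged above.
		sleep := delay + time.Duration(rand.Int63n(int64(delay/2)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		delay = delay * 3 / 2
	}
	return "", fmt.Errorf("timed out waiting for a DHCP lease for %s", mac)
}

func main() {
	if ip, err := waitForIP("52:54:00:a1:fc:98", 5*time.Second); err != nil {
		fmt.Println(err)
	} else {
		fmt.Println("found IP:", ip)
	}
}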
	I0729 18:34:04.800466 1073226 main.go:141] libmachine: (ha-344156) Found IP for machine: 192.168.39.225
	I0729 18:34:04.800474 1073226 main.go:141] libmachine: (ha-344156) Reserving static IP address...
	I0729 18:34:04.800899 1073226 main.go:141] libmachine: (ha-344156) DBG | unable to find host DHCP lease matching {name: "ha-344156", mac: "52:54:00:a1:fc:98", ip: "192.168.39.225"} in network mk-ha-344156
	I0729 18:34:04.870193 1073226 main.go:141] libmachine: (ha-344156) DBG | Getting to WaitForSSH function...
	I0729 18:34:04.870226 1073226 main.go:141] libmachine: (ha-344156) Reserved static IP address: 192.168.39.225
	I0729 18:34:04.870239 1073226 main.go:141] libmachine: (ha-344156) Waiting for SSH to be available...
	I0729 18:34:04.872853 1073226 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
	I0729 18:34:04.873272 1073226 main.go:141] libmachine: (ha-344156) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:fc:98", ip: ""} in network mk-ha-344156: {Iface:virbr1 ExpiryTime:2024-07-29 19:33:59 +0000 UTC Type:0 Mac:52:54:00:a1:fc:98 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:minikube Clientid:01:52:54:00:a1:fc:98}
	I0729 18:34:04.873312 1073226 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined IP address 192.168.39.225 and MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
	I0729 18:34:04.873410 1073226 main.go:141] libmachine: (ha-344156) DBG | Using SSH client type: external
	I0729 18:34:04.873430 1073226 main.go:141] libmachine: (ha-344156) DBG | Using SSH private key: /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/ha-344156/id_rsa (-rw-------)
	I0729 18:34:04.873457 1073226 main.go:141] libmachine: (ha-344156) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.225 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/ha-344156/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 18:34:04.873469 1073226 main.go:141] libmachine: (ha-344156) DBG | About to run SSH command:
	I0729 18:34:04.873481 1073226 main.go:141] libmachine: (ha-344156) DBG | exit 0
	I0729 18:34:05.002955 1073226 main.go:141] libmachine: (ha-344156) DBG | SSH cmd err, output: <nil>: 
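Editor's note: once the IP is known, the driver probes for a working SSH service by running `exit 0` over SSH with the client options logged above, and the empty "SSH cmd err, output" line marks success. A minimal sketch of that probe follows; the IP address, username, and key path are copied from the log, while the 30-attempt/2-second loop is an assumption, and this is not the code minikube itself uses.

// waitssh.go - hedged sketch of the WaitForSSH step: run `exit 0` over SSH
// with the same client options shown in the log until the command succeeds.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func sshExitZero(ip, key string) error {
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectionAttempts=3",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", key,
		"-p", "22",
		"docker@" + ip,
		"exit 0",
	}
	return exec.Command("ssh", args...).Run()
}

func main() {
	ip := "192.168.39.225"
	key := "/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/ha-344156/id_rsa"
	for i := 0; i < 30; i++ {
		if err := sshExitZero(ip, key); err == nil {
			fmt.Println("SSH is available")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("gave up waiting for SSH")
}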
	I0729 18:34:05.003249 1073226 main.go:141] libmachine: (ha-344156) KVM machine creation complete!
	I0729 18:34:05.003522 1073226 main.go:141] libmachine: (ha-344156) Calling .GetConfigRaw
	I0729 18:34:05.004152 1073226 main.go:141] libmachine: (ha-344156) Calling .DriverName
	I0729 18:34:05.004340 1073226 main.go:141] libmachine: (ha-344156) Calling .DriverName
	I0729 18:34:05.004497 1073226 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0729 18:34:05.004514 1073226 main.go:141] libmachine: (ha-344156) Calling .GetState
	I0729 18:34:05.005599 1073226 main.go:141] libmachine: Detecting operating system of created instance...
	I0729 18:34:05.005610 1073226 main.go:141] libmachine: Waiting for SSH to be available...
	I0729 18:34:05.005615 1073226 main.go:141] libmachine: Getting to WaitForSSH function...
	I0729 18:34:05.005621 1073226 main.go:141] libmachine: (ha-344156) Calling .GetSSHHostname
	I0729 18:34:05.007973 1073226 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
	I0729 18:34:05.008347 1073226 main.go:141] libmachine: (ha-344156) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:fc:98", ip: ""} in network mk-ha-344156: {Iface:virbr1 ExpiryTime:2024-07-29 19:33:59 +0000 UTC Type:0 Mac:52:54:00:a1:fc:98 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-344156 Clientid:01:52:54:00:a1:fc:98}
	I0729 18:34:05.008368 1073226 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined IP address 192.168.39.225 and MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
	I0729 18:34:05.008493 1073226 main.go:141] libmachine: (ha-344156) Calling .GetSSHPort
	I0729 18:34:05.008679 1073226 main.go:141] libmachine: (ha-344156) Calling .GetSSHKeyPath
	I0729 18:34:05.008817 1073226 main.go:141] libmachine: (ha-344156) Calling .GetSSHKeyPath
	I0729 18:34:05.008940 1073226 main.go:141] libmachine: (ha-344156) Calling .GetSSHUsername
	I0729 18:34:05.009073 1073226 main.go:141] libmachine: Using SSH client type: native
	I0729 18:34:05.009308 1073226 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.225 22 <nil> <nil>}
	I0729 18:34:05.009320 1073226 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0729 18:34:05.117879 1073226 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 18:34:05.117908 1073226 main.go:141] libmachine: Detecting the provisioner...
	I0729 18:34:05.117918 1073226 main.go:141] libmachine: (ha-344156) Calling .GetSSHHostname
	I0729 18:34:05.120495 1073226 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
	I0729 18:34:05.120865 1073226 main.go:141] libmachine: (ha-344156) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:fc:98", ip: ""} in network mk-ha-344156: {Iface:virbr1 ExpiryTime:2024-07-29 19:33:59 +0000 UTC Type:0 Mac:52:54:00:a1:fc:98 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-344156 Clientid:01:52:54:00:a1:fc:98}
	I0729 18:34:05.120901 1073226 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined IP address 192.168.39.225 and MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
	I0729 18:34:05.121050 1073226 main.go:141] libmachine: (ha-344156) Calling .GetSSHPort
	I0729 18:34:05.121258 1073226 main.go:141] libmachine: (ha-344156) Calling .GetSSHKeyPath
	I0729 18:34:05.121459 1073226 main.go:141] libmachine: (ha-344156) Calling .GetSSHKeyPath
	I0729 18:34:05.121549 1073226 main.go:141] libmachine: (ha-344156) Calling .GetSSHUsername
	I0729 18:34:05.121698 1073226 main.go:141] libmachine: Using SSH client type: native
	I0729 18:34:05.121888 1073226 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.225 22 <nil> <nil>}
	I0729 18:34:05.121899 1073226 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0729 18:34:05.231446 1073226 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0729 18:34:05.231557 1073226 main.go:141] libmachine: found compatible host: buildroot
	I0729 18:34:05.231574 1073226 main.go:141] libmachine: Provisioning with buildroot...
	I0729 18:34:05.231586 1073226 main.go:141] libmachine: (ha-344156) Calling .GetMachineName
	I0729 18:34:05.231864 1073226 buildroot.go:166] provisioning hostname "ha-344156"
	I0729 18:34:05.231896 1073226 main.go:141] libmachine: (ha-344156) Calling .GetMachineName
	I0729 18:34:05.232058 1073226 main.go:141] libmachine: (ha-344156) Calling .GetSSHHostname
	I0729 18:34:05.235039 1073226 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
	I0729 18:34:05.235412 1073226 main.go:141] libmachine: (ha-344156) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:fc:98", ip: ""} in network mk-ha-344156: {Iface:virbr1 ExpiryTime:2024-07-29 19:33:59 +0000 UTC Type:0 Mac:52:54:00:a1:fc:98 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-344156 Clientid:01:52:54:00:a1:fc:98}
	I0729 18:34:05.235435 1073226 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined IP address 192.168.39.225 and MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
	I0729 18:34:05.235576 1073226 main.go:141] libmachine: (ha-344156) Calling .GetSSHPort
	I0729 18:34:05.235766 1073226 main.go:141] libmachine: (ha-344156) Calling .GetSSHKeyPath
	I0729 18:34:05.235905 1073226 main.go:141] libmachine: (ha-344156) Calling .GetSSHKeyPath
	I0729 18:34:05.236047 1073226 main.go:141] libmachine: (ha-344156) Calling .GetSSHUsername
	I0729 18:34:05.236212 1073226 main.go:141] libmachine: Using SSH client type: native
	I0729 18:34:05.236374 1073226 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.225 22 <nil> <nil>}
	I0729 18:34:05.236384 1073226 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-344156 && echo "ha-344156" | sudo tee /etc/hostname
	I0729 18:34:05.361117 1073226 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-344156
	
	I0729 18:34:05.361159 1073226 main.go:141] libmachine: (ha-344156) Calling .GetSSHHostname
	I0729 18:34:05.364342 1073226 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
	I0729 18:34:05.364752 1073226 main.go:141] libmachine: (ha-344156) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:fc:98", ip: ""} in network mk-ha-344156: {Iface:virbr1 ExpiryTime:2024-07-29 19:33:59 +0000 UTC Type:0 Mac:52:54:00:a1:fc:98 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-344156 Clientid:01:52:54:00:a1:fc:98}
	I0729 18:34:05.364777 1073226 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined IP address 192.168.39.225 and MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
	I0729 18:34:05.364946 1073226 main.go:141] libmachine: (ha-344156) Calling .GetSSHPort
	I0729 18:34:05.365118 1073226 main.go:141] libmachine: (ha-344156) Calling .GetSSHKeyPath
	I0729 18:34:05.365291 1073226 main.go:141] libmachine: (ha-344156) Calling .GetSSHKeyPath
	I0729 18:34:05.365469 1073226 main.go:141] libmachine: (ha-344156) Calling .GetSSHUsername
	I0729 18:34:05.365647 1073226 main.go:141] libmachine: Using SSH client type: native
	I0729 18:34:05.365873 1073226 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.225 22 <nil> <nil>}
	I0729 18:34:05.365898 1073226 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-344156' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-344156/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-344156' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 18:34:05.483985 1073226 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 18:34:05.484019 1073226 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19312-1055011/.minikube CaCertPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19312-1055011/.minikube}
	I0729 18:34:05.484060 1073226 buildroot.go:174] setting up certificates
	I0729 18:34:05.484075 1073226 provision.go:84] configureAuth start
	I0729 18:34:05.484086 1073226 main.go:141] libmachine: (ha-344156) Calling .GetMachineName
	I0729 18:34:05.484414 1073226 main.go:141] libmachine: (ha-344156) Calling .GetIP
	I0729 18:34:05.486738 1073226 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
	I0729 18:34:05.487103 1073226 main.go:141] libmachine: (ha-344156) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:fc:98", ip: ""} in network mk-ha-344156: {Iface:virbr1 ExpiryTime:2024-07-29 19:33:59 +0000 UTC Type:0 Mac:52:54:00:a1:fc:98 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-344156 Clientid:01:52:54:00:a1:fc:98}
	I0729 18:34:05.487131 1073226 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined IP address 192.168.39.225 and MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
	I0729 18:34:05.487226 1073226 main.go:141] libmachine: (ha-344156) Calling .GetSSHHostname
	I0729 18:34:05.489454 1073226 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
	I0729 18:34:05.489769 1073226 main.go:141] libmachine: (ha-344156) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:fc:98", ip: ""} in network mk-ha-344156: {Iface:virbr1 ExpiryTime:2024-07-29 19:33:59 +0000 UTC Type:0 Mac:52:54:00:a1:fc:98 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-344156 Clientid:01:52:54:00:a1:fc:98}
	I0729 18:34:05.489791 1073226 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined IP address 192.168.39.225 and MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
	I0729 18:34:05.489932 1073226 provision.go:143] copyHostCerts
	I0729 18:34:05.489960 1073226 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19312-1055011/.minikube/cert.pem
	I0729 18:34:05.490007 1073226 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-1055011/.minikube/cert.pem, removing ...
	I0729 18:34:05.490023 1073226 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-1055011/.minikube/cert.pem
	I0729 18:34:05.490093 1073226 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19312-1055011/.minikube/cert.pem (1123 bytes)
	I0729 18:34:05.490166 1073226 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19312-1055011/.minikube/key.pem
	I0729 18:34:05.490183 1073226 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-1055011/.minikube/key.pem, removing ...
	I0729 18:34:05.490190 1073226 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-1055011/.minikube/key.pem
	I0729 18:34:05.490212 1073226 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19312-1055011/.minikube/key.pem (1679 bytes)
	I0729 18:34:05.490250 1073226 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.pem
	I0729 18:34:05.490266 1073226 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.pem, removing ...
	I0729 18:34:05.490272 1073226 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.pem
	I0729 18:34:05.490291 1073226 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.pem (1082 bytes)
	I0729 18:34:05.490335 1073226 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca-key.pem org=jenkins.ha-344156 san=[127.0.0.1 192.168.39.225 ha-344156 localhost minikube]
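Editor's note: configureAuth generates the machine's server certificate with the SANs listed above (127.0.0.1, 192.168.39.225, ha-344156, localhost, minikube), signed with the CA key under .minikube/certs. The sketch below shows one way to produce such a SAN certificate with Go's crypto/x509; for self-containment it creates a throwaway CA in memory, which is an assumption, since the provisioner in the log signs with the existing ca.pem/ca-key.pem instead.

// servercert.go - hedged sketch: issue a server certificate whose SANs match
// the ones logged above. Illustrative only, not minikube's provision code.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA; a real setup would load an existing CA key pair instead.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate carrying the SANs from the log.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-344156"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"ha-344156", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.225")},
	}
	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}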
	I0729 18:34:05.532036 1073226 provision.go:177] copyRemoteCerts
	I0729 18:34:05.532097 1073226 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 18:34:05.532122 1073226 main.go:141] libmachine: (ha-344156) Calling .GetSSHHostname
	I0729 18:34:05.534466 1073226 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
	I0729 18:34:05.534802 1073226 main.go:141] libmachine: (ha-344156) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:fc:98", ip: ""} in network mk-ha-344156: {Iface:virbr1 ExpiryTime:2024-07-29 19:33:59 +0000 UTC Type:0 Mac:52:54:00:a1:fc:98 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-344156 Clientid:01:52:54:00:a1:fc:98}
	I0729 18:34:05.534827 1073226 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined IP address 192.168.39.225 and MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
	I0729 18:34:05.535008 1073226 main.go:141] libmachine: (ha-344156) Calling .GetSSHPort
	I0729 18:34:05.535193 1073226 main.go:141] libmachine: (ha-344156) Calling .GetSSHKeyPath
	I0729 18:34:05.535371 1073226 main.go:141] libmachine: (ha-344156) Calling .GetSSHUsername
	I0729 18:34:05.535493 1073226 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/ha-344156/id_rsa Username:docker}
	I0729 18:34:05.620611 1073226 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0729 18:34:05.620695 1073226 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0729 18:34:05.644122 1073226 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0729 18:34:05.644195 1073226 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0729 18:34:05.666545 1073226 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0729 18:34:05.666613 1073226 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0729 18:34:05.689172 1073226 provision.go:87] duration metric: took 205.084167ms to configureAuth
	I0729 18:34:05.689197 1073226 buildroot.go:189] setting minikube options for container-runtime
	I0729 18:34:05.689360 1073226 config.go:182] Loaded profile config "ha-344156": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 18:34:05.689437 1073226 main.go:141] libmachine: (ha-344156) Calling .GetSSHHostname
	I0729 18:34:05.691785 1073226 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
	I0729 18:34:05.692147 1073226 main.go:141] libmachine: (ha-344156) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:fc:98", ip: ""} in network mk-ha-344156: {Iface:virbr1 ExpiryTime:2024-07-29 19:33:59 +0000 UTC Type:0 Mac:52:54:00:a1:fc:98 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-344156 Clientid:01:52:54:00:a1:fc:98}
	I0729 18:34:05.692180 1073226 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined IP address 192.168.39.225 and MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
	I0729 18:34:05.692337 1073226 main.go:141] libmachine: (ha-344156) Calling .GetSSHPort
	I0729 18:34:05.692538 1073226 main.go:141] libmachine: (ha-344156) Calling .GetSSHKeyPath
	I0729 18:34:05.692752 1073226 main.go:141] libmachine: (ha-344156) Calling .GetSSHKeyPath
	I0729 18:34:05.692918 1073226 main.go:141] libmachine: (ha-344156) Calling .GetSSHUsername
	I0729 18:34:05.693107 1073226 main.go:141] libmachine: Using SSH client type: native
	I0729 18:34:05.693373 1073226 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.225 22 <nil> <nil>}
	I0729 18:34:05.693401 1073226 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 18:34:05.960320 1073226 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 18:34:05.960352 1073226 main.go:141] libmachine: Checking connection to Docker...
	I0729 18:34:05.960365 1073226 main.go:141] libmachine: (ha-344156) Calling .GetURL
	I0729 18:34:05.961814 1073226 main.go:141] libmachine: (ha-344156) DBG | Using libvirt version 6000000
	I0729 18:34:05.965439 1073226 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
	I0729 18:34:05.965781 1073226 main.go:141] libmachine: (ha-344156) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:fc:98", ip: ""} in network mk-ha-344156: {Iface:virbr1 ExpiryTime:2024-07-29 19:33:59 +0000 UTC Type:0 Mac:52:54:00:a1:fc:98 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-344156 Clientid:01:52:54:00:a1:fc:98}
	I0729 18:34:05.965803 1073226 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined IP address 192.168.39.225 and MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
	I0729 18:34:05.965975 1073226 main.go:141] libmachine: Docker is up and running!
	I0729 18:34:05.965992 1073226 main.go:141] libmachine: Reticulating splines...
	I0729 18:34:05.966002 1073226 client.go:171] duration metric: took 20.918013542s to LocalClient.Create
	I0729 18:34:05.966048 1073226 start.go:167] duration metric: took 20.918085573s to libmachine.API.Create "ha-344156"
	I0729 18:34:05.966060 1073226 start.go:293] postStartSetup for "ha-344156" (driver="kvm2")
	I0729 18:34:05.966074 1073226 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 18:34:05.966100 1073226 main.go:141] libmachine: (ha-344156) Calling .DriverName
	I0729 18:34:05.966359 1073226 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 18:34:05.966385 1073226 main.go:141] libmachine: (ha-344156) Calling .GetSSHHostname
	I0729 18:34:05.968664 1073226 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
	I0729 18:34:05.968985 1073226 main.go:141] libmachine: (ha-344156) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:fc:98", ip: ""} in network mk-ha-344156: {Iface:virbr1 ExpiryTime:2024-07-29 19:33:59 +0000 UTC Type:0 Mac:52:54:00:a1:fc:98 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-344156 Clientid:01:52:54:00:a1:fc:98}
	I0729 18:34:05.969010 1073226 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined IP address 192.168.39.225 and MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
	I0729 18:34:05.969120 1073226 main.go:141] libmachine: (ha-344156) Calling .GetSSHPort
	I0729 18:34:05.969285 1073226 main.go:141] libmachine: (ha-344156) Calling .GetSSHKeyPath
	I0729 18:34:05.969457 1073226 main.go:141] libmachine: (ha-344156) Calling .GetSSHUsername
	I0729 18:34:05.969573 1073226 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/ha-344156/id_rsa Username:docker}
	I0729 18:34:06.052579 1073226 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 18:34:06.056498 1073226 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 18:34:06.056521 1073226 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-1055011/.minikube/addons for local assets ...
	I0729 18:34:06.056575 1073226 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-1055011/.minikube/files for local assets ...
	I0729 18:34:06.056645 1073226 filesync.go:149] local asset: /home/jenkins/minikube-integration/19312-1055011/.minikube/files/etc/ssl/certs/10622722.pem -> 10622722.pem in /etc/ssl/certs
	I0729 18:34:06.056655 1073226 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1055011/.minikube/files/etc/ssl/certs/10622722.pem -> /etc/ssl/certs/10622722.pem
	I0729 18:34:06.056748 1073226 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 18:34:06.065426 1073226 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/files/etc/ssl/certs/10622722.pem --> /etc/ssl/certs/10622722.pem (1708 bytes)
	I0729 18:34:06.088580 1073226 start.go:296] duration metric: took 122.504862ms for postStartSetup
	I0729 18:34:06.088626 1073226 main.go:141] libmachine: (ha-344156) Calling .GetConfigRaw
	I0729 18:34:06.089205 1073226 main.go:141] libmachine: (ha-344156) Calling .GetIP
	I0729 18:34:06.091764 1073226 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
	I0729 18:34:06.092108 1073226 main.go:141] libmachine: (ha-344156) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:fc:98", ip: ""} in network mk-ha-344156: {Iface:virbr1 ExpiryTime:2024-07-29 19:33:59 +0000 UTC Type:0 Mac:52:54:00:a1:fc:98 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-344156 Clientid:01:52:54:00:a1:fc:98}
	I0729 18:34:06.092128 1073226 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined IP address 192.168.39.225 and MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
	I0729 18:34:06.092380 1073226 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156/config.json ...
	I0729 18:34:06.092592 1073226 start.go:128] duration metric: took 21.062906887s to createHost
	I0729 18:34:06.092623 1073226 main.go:141] libmachine: (ha-344156) Calling .GetSSHHostname
	I0729 18:34:06.095129 1073226 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
	I0729 18:34:06.095660 1073226 main.go:141] libmachine: (ha-344156) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:fc:98", ip: ""} in network mk-ha-344156: {Iface:virbr1 ExpiryTime:2024-07-29 19:33:59 +0000 UTC Type:0 Mac:52:54:00:a1:fc:98 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-344156 Clientid:01:52:54:00:a1:fc:98}
	I0729 18:34:06.095694 1073226 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined IP address 192.168.39.225 and MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
	I0729 18:34:06.095859 1073226 main.go:141] libmachine: (ha-344156) Calling .GetSSHPort
	I0729 18:34:06.096050 1073226 main.go:141] libmachine: (ha-344156) Calling .GetSSHKeyPath
	I0729 18:34:06.096211 1073226 main.go:141] libmachine: (ha-344156) Calling .GetSSHKeyPath
	I0729 18:34:06.096346 1073226 main.go:141] libmachine: (ha-344156) Calling .GetSSHUsername
	I0729 18:34:06.096533 1073226 main.go:141] libmachine: Using SSH client type: native
	I0729 18:34:06.096754 1073226 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.225 22 <nil> <nil>}
	I0729 18:34:06.096765 1073226 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0729 18:34:06.207454 1073226 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722278046.180028938
	
	I0729 18:34:06.207473 1073226 fix.go:216] guest clock: 1722278046.180028938
	I0729 18:34:06.207480 1073226 fix.go:229] Guest: 2024-07-29 18:34:06.180028938 +0000 UTC Remote: 2024-07-29 18:34:06.092612562 +0000 UTC m=+21.170361798 (delta=87.416376ms)
	I0729 18:34:06.207500 1073226 fix.go:200] guest clock delta is within tolerance: 87.416376ms
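Editor's note: fix.go compares the guest's `date +%s.%N` output against the host clock and, per the line above, accepts the machine because the ~87ms delta is within tolerance. A tiny sketch of that check follows, with an assumed one-second tolerance; it is illustrative rather than minikube's actual implementation.

// clockdelta.go - hedged sketch of the guest-clock sanity check logged above.
package main

import (
	"fmt"
	"time"
)

func withinTolerance(host, guest time.Time, tolerance time.Duration) (time.Duration, bool) {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tolerance
}

func main() {
	host := time.Now()
	guest := host.Add(87 * time.Millisecond) // roughly the delta observed in the log
	if delta, ok := withinTolerance(host, guest, time.Second); ok {
		fmt.Printf("guest clock delta is within tolerance: %v\n", delta)
	} else {
		fmt.Printf("guest clock delta too large: %v\n", delta)
	}
}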
	I0729 18:34:06.207506 1073226 start.go:83] releasing machines lock for "ha-344156", held for 21.177894829s
	I0729 18:34:06.207523 1073226 main.go:141] libmachine: (ha-344156) Calling .DriverName
	I0729 18:34:06.207808 1073226 main.go:141] libmachine: (ha-344156) Calling .GetIP
	I0729 18:34:06.210148 1073226 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
	I0729 18:34:06.210520 1073226 main.go:141] libmachine: (ha-344156) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:fc:98", ip: ""} in network mk-ha-344156: {Iface:virbr1 ExpiryTime:2024-07-29 19:33:59 +0000 UTC Type:0 Mac:52:54:00:a1:fc:98 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-344156 Clientid:01:52:54:00:a1:fc:98}
	I0729 18:34:06.210554 1073226 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined IP address 192.168.39.225 and MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
	I0729 18:34:06.210697 1073226 main.go:141] libmachine: (ha-344156) Calling .DriverName
	I0729 18:34:06.211222 1073226 main.go:141] libmachine: (ha-344156) Calling .DriverName
	I0729 18:34:06.211386 1073226 main.go:141] libmachine: (ha-344156) Calling .DriverName
	I0729 18:34:06.211463 1073226 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 18:34:06.211534 1073226 main.go:141] libmachine: (ha-344156) Calling .GetSSHHostname
	I0729 18:34:06.211595 1073226 ssh_runner.go:195] Run: cat /version.json
	I0729 18:34:06.211618 1073226 main.go:141] libmachine: (ha-344156) Calling .GetSSHHostname
	I0729 18:34:06.214204 1073226 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
	I0729 18:34:06.214471 1073226 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
	I0729 18:34:06.214508 1073226 main.go:141] libmachine: (ha-344156) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:fc:98", ip: ""} in network mk-ha-344156: {Iface:virbr1 ExpiryTime:2024-07-29 19:33:59 +0000 UTC Type:0 Mac:52:54:00:a1:fc:98 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-344156 Clientid:01:52:54:00:a1:fc:98}
	I0729 18:34:06.214529 1073226 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined IP address 192.168.39.225 and MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
	I0729 18:34:06.214710 1073226 main.go:141] libmachine: (ha-344156) Calling .GetSSHPort
	I0729 18:34:06.214801 1073226 main.go:141] libmachine: (ha-344156) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:fc:98", ip: ""} in network mk-ha-344156: {Iface:virbr1 ExpiryTime:2024-07-29 19:33:59 +0000 UTC Type:0 Mac:52:54:00:a1:fc:98 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-344156 Clientid:01:52:54:00:a1:fc:98}
	I0729 18:34:06.214837 1073226 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined IP address 192.168.39.225 and MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
	I0729 18:34:06.214869 1073226 main.go:141] libmachine: (ha-344156) Calling .GetSSHKeyPath
	I0729 18:34:06.215020 1073226 main.go:141] libmachine: (ha-344156) Calling .GetSSHPort
	I0729 18:34:06.215039 1073226 main.go:141] libmachine: (ha-344156) Calling .GetSSHUsername
	I0729 18:34:06.215222 1073226 main.go:141] libmachine: (ha-344156) Calling .GetSSHKeyPath
	I0729 18:34:06.215266 1073226 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/ha-344156/id_rsa Username:docker}
	I0729 18:34:06.215480 1073226 main.go:141] libmachine: (ha-344156) Calling .GetSSHUsername
	I0729 18:34:06.215621 1073226 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/ha-344156/id_rsa Username:docker}
	I0729 18:34:06.313765 1073226 ssh_runner.go:195] Run: systemctl --version
	I0729 18:34:06.319503 1073226 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 18:34:06.485096 1073226 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 18:34:06.490916 1073226 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 18:34:06.490981 1073226 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 18:34:06.506313 1073226 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 18:34:06.506334 1073226 start.go:495] detecting cgroup driver to use...
	I0729 18:34:06.506394 1073226 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 18:34:06.522531 1073226 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 18:34:06.535576 1073226 docker.go:217] disabling cri-docker service (if available) ...
	I0729 18:34:06.535636 1073226 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 18:34:06.549116 1073226 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 18:34:06.561985 1073226 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 18:34:06.671576 1073226 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 18:34:06.831892 1073226 docker.go:233] disabling docker service ...
	I0729 18:34:06.831982 1073226 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 18:34:06.845723 1073226 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 18:34:06.857876 1073226 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 18:34:06.973209 1073226 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 18:34:07.092831 1073226 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 18:34:07.106430 1073226 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 18:34:07.124072 1073226 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0729 18:34:07.124150 1073226 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:34:07.133862 1073226 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 18:34:07.133943 1073226 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:34:07.143441 1073226 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:34:07.153162 1073226 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:34:07.162566 1073226 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 18:34:07.172440 1073226 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:34:07.182024 1073226 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:34:07.198170 1073226 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
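Aside (not part of the log): the sed invocations above amount to rewriting `key = value` lines in the CRI-O drop-in /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroup manager, conmon cgroup, default sysctls). The sketch below shows the same line-rewrite idea in Go; setTOMLKey is an assumed helper, not anything from the minikube codebase.

package main

import (
	"fmt"
	"regexp"
	"strings"
)

// setTOMLKey replaces every line assigning `key` with `key = "value"`,
// mirroring the `sudo sed -i 's|^.*key = .*$|...|'` commands in the log.
func setTOMLKey(conf, key, value string) string {
	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
	return re.ReplaceAllString(conf, fmt.Sprintf("%s = %q", key, value))
}

func main() {
	conf := strings.Join([]string{
		`pause_image = "registry.k8s.io/pause:3.8"`,
		`cgroup_manager = "systemd"`,
	}, "\n")
	conf = setTOMLKey(conf, "pause_image", "registry.k8s.io/pause:3.9")
	conf = setTOMLKey(conf, "cgroup_manager", "cgroupfs")
	fmt.Println(conf)
}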
	I0729 18:34:07.207822 1073226 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 18:34:07.216435 1073226 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 18:34:07.216495 1073226 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 18:34:07.228514 1073226 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 18:34:07.237027 1073226 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 18:34:07.354002 1073226 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 18:34:07.481722 1073226 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 18:34:07.481806 1073226 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 18:34:07.486473 1073226 start.go:563] Will wait 60s for crictl version
	I0729 18:34:07.486542 1073226 ssh_runner.go:195] Run: which crictl
	I0729 18:34:07.490123 1073226 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 18:34:07.528480 1073226 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 18:34:07.528552 1073226 ssh_runner.go:195] Run: crio --version
	I0729 18:34:07.555165 1073226 ssh_runner.go:195] Run: crio --version
	I0729 18:34:07.587500 1073226 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0729 18:34:07.588706 1073226 main.go:141] libmachine: (ha-344156) Calling .GetIP
	I0729 18:34:07.591393 1073226 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
	I0729 18:34:07.591687 1073226 main.go:141] libmachine: (ha-344156) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:fc:98", ip: ""} in network mk-ha-344156: {Iface:virbr1 ExpiryTime:2024-07-29 19:33:59 +0000 UTC Type:0 Mac:52:54:00:a1:fc:98 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-344156 Clientid:01:52:54:00:a1:fc:98}
	I0729 18:34:07.591710 1073226 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined IP address 192.168.39.225 and MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
	I0729 18:34:07.591893 1073226 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0729 18:34:07.595977 1073226 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 18:34:07.608866 1073226 kubeadm.go:883] updating cluster {Name:ha-344156 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Cl
usterName:ha-344156 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.225 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 18:34:07.608987 1073226 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 18:34:07.609053 1073226 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 18:34:07.643225 1073226 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0729 18:34:07.643297 1073226 ssh_runner.go:195] Run: which lz4
	I0729 18:34:07.647020 1073226 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0729 18:34:07.647116 1073226 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0729 18:34:07.650921 1073226 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0729 18:34:07.650938 1073226 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0729 18:34:09.035778 1073226 crio.go:462] duration metric: took 1.388694553s to copy over tarball
	I0729 18:34:09.035850 1073226 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0729 18:34:11.118456 1073226 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.082571242s)
	I0729 18:34:11.118502 1073226 crio.go:469] duration metric: took 2.082695207s to extract the tarball
	I0729 18:34:11.118511 1073226 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0729 18:34:11.156422 1073226 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 18:34:11.201237 1073226 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 18:34:11.201262 1073226 cache_images.go:84] Images are preloaded, skipping loading
	I0729 18:34:11.201271 1073226 kubeadm.go:934] updating node { 192.168.39.225 8443 v1.30.3 crio true true} ...
	I0729 18:34:11.201394 1073226 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-344156 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.225
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-344156 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 18:34:11.201457 1073226 ssh_runner.go:195] Run: crio config
	I0729 18:34:11.247713 1073226 cni.go:84] Creating CNI manager for ""
	I0729 18:34:11.247735 1073226 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0729 18:34:11.247748 1073226 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 18:34:11.247772 1073226 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.225 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-344156 NodeName:ha-344156 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.225"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.225 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 18:34:11.247921 1073226 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.225
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-344156"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.225
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.225"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
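Aside (not part of the log): the kubeadm config printed above is rendered from the kubeadm options struct logged a few lines earlier. The sketch below shows the general idea of rendering such YAML from a struct with text/template; the clusterParams type and clusterTmpl string are illustrative assumptions and only cover a fragment of the real ClusterConfiguration, they are not minikube's actual template.

package main

import (
	"os"
	"text/template"
)

// clusterParams carries the handful of values substituted into the fragment below.
type clusterParams struct {
	KubernetesVersion string
	PodSubnet         string
	ServiceSubnet     string
}

const clusterTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
controlPlaneEndpoint: control-plane.minikube.internal:8443
kubernetesVersion: {{.KubernetesVersion}}
networking:
  dnsDomain: cluster.local
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceSubnet}}
`

func main() {
	t := template.Must(template.New("cluster").Parse(clusterTmpl))
	_ = t.Execute(os.Stdout, clusterParams{
		KubernetesVersion: "v1.30.3",
		PodSubnet:         "10.244.0.0/16",
		ServiceSubnet:     "10.96.0.0/12",
	})
}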
	I0729 18:34:11.247947 1073226 kube-vip.go:115] generating kube-vip config ...
	I0729 18:34:11.247988 1073226 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0729 18:34:11.265337 1073226 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0729 18:34:11.265470 1073226 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I0729 18:34:11.265533 1073226 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0729 18:34:11.275261 1073226 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 18:34:11.275332 1073226 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0729 18:34:11.284505 1073226 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0729 18:34:11.299964 1073226 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 18:34:11.315409 1073226 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0729 18:34:11.331098 1073226 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0729 18:34:11.346943 1073226 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0729 18:34:11.350618 1073226 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 18:34:11.362526 1073226 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 18:34:11.497774 1073226 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 18:34:11.515028 1073226 certs.go:68] Setting up /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156 for IP: 192.168.39.225
	I0729 18:34:11.515052 1073226 certs.go:194] generating shared ca certs ...
	I0729 18:34:11.515074 1073226 certs.go:226] acquiring lock for ca certs: {Name:mkd1f0b3d7e82ac23e713dd6b75409e103935b02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 18:34:11.515269 1073226 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.key
	I0729 18:34:11.515321 1073226 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/proxy-client-ca.key
	I0729 18:34:11.515334 1073226 certs.go:256] generating profile certs ...
	I0729 18:34:11.515399 1073226 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156/client.key
	I0729 18:34:11.515417 1073226 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156/client.crt with IP's: []
	I0729 18:34:11.629698 1073226 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156/client.crt ...
	I0729 18:34:11.629729 1073226 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156/client.crt: {Name:mkcf0c8c421e3bc745f4d659be88beb13d3c52c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 18:34:11.629896 1073226 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156/client.key ...
	I0729 18:34:11.629907 1073226 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156/client.key: {Name:mk2ae492368446d4d6f640a1412db71e679b6a4c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 18:34:11.629979 1073226 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156/apiserver.key.bf0d8a41
	I0729 18:34:11.629994 1073226 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156/apiserver.crt.bf0d8a41 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.225 192.168.39.254]
	I0729 18:34:11.780702 1073226 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156/apiserver.crt.bf0d8a41 ...
	I0729 18:34:11.780733 1073226 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156/apiserver.crt.bf0d8a41: {Name:mk991287ed1b0820e95f5e1a7369781640893f3b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 18:34:11.780919 1073226 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156/apiserver.key.bf0d8a41 ...
	I0729 18:34:11.780938 1073226 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156/apiserver.key.bf0d8a41: {Name:mkbed67947aaf2a97af660c4e19dee0b6f97094e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 18:34:11.781034 1073226 certs.go:381] copying /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156/apiserver.crt.bf0d8a41 -> /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156/apiserver.crt
	I0729 18:34:11.781171 1073226 certs.go:385] copying /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156/apiserver.key.bf0d8a41 -> /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156/apiserver.key
	I0729 18:34:11.781264 1073226 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156/proxy-client.key
	I0729 18:34:11.781286 1073226 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156/proxy-client.crt with IP's: []
	I0729 18:34:11.881219 1073226 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156/proxy-client.crt ...
	I0729 18:34:11.881249 1073226 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156/proxy-client.crt: {Name:mkb3a421c339103c151b47edbb3d670b9b496119 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 18:34:11.881438 1073226 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156/proxy-client.key ...
	I0729 18:34:11.881456 1073226 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156/proxy-client.key: {Name:mk01aa350b22815cf8b5491d5ee4dc3c4eb9ac9b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 18:34:11.881548 1073226 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0729 18:34:11.881572 1073226 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0729 18:34:11.881590 1073226 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1055011/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0729 18:34:11.881614 1073226 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1055011/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0729 18:34:11.881632 1073226 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0729 18:34:11.881649 1073226 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0729 18:34:11.881662 1073226 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0729 18:34:11.881677 1073226 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0729 18:34:11.881743 1073226 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/1062272.pem (1338 bytes)
	W0729 18:34:11.881792 1073226 certs.go:480] ignoring /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/1062272_empty.pem, impossibly tiny 0 bytes
	I0729 18:34:11.881806 1073226 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 18:34:11.881838 1073226 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem (1082 bytes)
	I0729 18:34:11.881891 1073226 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/cert.pem (1123 bytes)
	I0729 18:34:11.881936 1073226 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/key.pem (1679 bytes)
	I0729 18:34:11.881990 1073226 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/files/etc/ssl/certs/10622722.pem (1708 bytes)
	I0729 18:34:11.882036 1073226 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1055011/.minikube/files/etc/ssl/certs/10622722.pem -> /usr/share/ca-certificates/10622722.pem
	I0729 18:34:11.882057 1073226 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0729 18:34:11.882076 1073226 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/1062272.pem -> /usr/share/ca-certificates/1062272.pem
	I0729 18:34:11.882749 1073226 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 18:34:11.908029 1073226 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0729 18:34:11.931249 1073226 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 18:34:11.953266 1073226 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0729 18:34:11.975192 1073226 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0729 18:34:11.997106 1073226 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0729 18:34:12.018602 1073226 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 18:34:12.040428 1073226 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0729 18:34:12.062433 1073226 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/files/etc/ssl/certs/10622722.pem --> /usr/share/ca-certificates/10622722.pem (1708 bytes)
	I0729 18:34:12.084409 1073226 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 18:34:12.106518 1073226 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/1062272.pem --> /usr/share/ca-certificates/1062272.pem (1338 bytes)
	I0729 18:34:12.128525 1073226 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 18:34:12.144283 1073226 ssh_runner.go:195] Run: openssl version
	I0729 18:34:12.149798 1073226 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10622722.pem && ln -fs /usr/share/ca-certificates/10622722.pem /etc/ssl/certs/10622722.pem"
	I0729 18:34:12.160166 1073226 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10622722.pem
	I0729 18:34:12.164365 1073226 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 18:30 /usr/share/ca-certificates/10622722.pem
	I0729 18:34:12.164420 1073226 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10622722.pem
	I0729 18:34:12.170275 1073226 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10622722.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 18:34:12.181035 1073226 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 18:34:12.192184 1073226 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 18:34:12.196929 1073226 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 18:17 /usr/share/ca-certificates/minikubeCA.pem
	I0729 18:34:12.196984 1073226 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 18:34:12.202832 1073226 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 18:34:12.213379 1073226 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1062272.pem && ln -fs /usr/share/ca-certificates/1062272.pem /etc/ssl/certs/1062272.pem"
	I0729 18:34:12.223777 1073226 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1062272.pem
	I0729 18:34:12.228130 1073226 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 18:30 /usr/share/ca-certificates/1062272.pem
	I0729 18:34:12.228177 1073226 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1062272.pem
	I0729 18:34:12.233665 1073226 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1062272.pem /etc/ssl/certs/51391683.0"
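Aside (not part of the log): the ln -fs commands above follow the OpenSSL trust-store convention, where /etc/ssl/certs/<subject-hash>.0 points at the certificate and the hash (e.g. b5213941 for minikubeCA.pem) comes from `openssl x509 -hash -noout -in <cert>`. A Go sketch of that naming follows; linkBySubjectHash is an assumed helper written only to illustrate the convention.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkBySubjectHash creates certsDir/<subject-hash>.0 pointing at certPath,
// reproducing the symlink names seen in the log above.
func linkBySubjectHash(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // mimic `ln -fs`: replace any existing link
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}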
	I0729 18:34:12.244037 1073226 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 18:34:12.247824 1073226 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0729 18:34:12.247888 1073226 kubeadm.go:392] StartCluster: {Name:ha-344156 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Clust
erName:ha-344156 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.225 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Moun
tType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 18:34:12.247995 1073226 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 18:34:12.248053 1073226 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 18:34:12.283862 1073226 cri.go:89] found id: ""
	I0729 18:34:12.283951 1073226 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 18:34:12.296914 1073226 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 18:34:12.307181 1073226 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 18:34:12.324602 1073226 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 18:34:12.324618 1073226 kubeadm.go:157] found existing configuration files:
	
	I0729 18:34:12.324657 1073226 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 18:34:12.333200 1073226 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 18:34:12.333246 1073226 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 18:34:12.342145 1073226 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 18:34:12.355917 1073226 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 18:34:12.355962 1073226 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 18:34:12.370171 1073226 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 18:34:12.379020 1073226 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 18:34:12.379056 1073226 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 18:34:12.387989 1073226 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 18:34:12.396846 1073226 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 18:34:12.396874 1073226 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
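Aside (not part of the log): the grep/rm pairs above check whether each pre-existing kubeconfig under /etc/kubernetes references the expected control-plane endpoint and delete the ones that do not (here they simply do not exist yet), so kubeadm can write fresh copies. A simplified Go sketch of that cleanup follows; cleanStaleKubeconfigs is an assumed helper, not the real minikube function.

package main

import (
	"fmt"
	"os"
	"strings"
)

// cleanStaleKubeconfigs removes any kubeconfig that does not mention the
// expected control-plane endpoint; missing files are left alone.
func cleanStaleKubeconfigs(endpoint string, paths []string) {
	for _, p := range paths {
		data, err := os.ReadFile(p)
		if err != nil {
			continue // file absent: nothing to clean up
		}
		if !strings.Contains(string(data), endpoint) {
			fmt.Printf("removing stale config %s\n", p)
			_ = os.Remove(p)
		}
	}
}

func main() {
	cleanStaleKubeconfigs("https://control-plane.minikube.internal:8443", []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	})
}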
	I0729 18:34:12.405760 1073226 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 18:34:12.642376 1073226 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 18:34:23.597640 1073226 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0729 18:34:23.597724 1073226 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 18:34:23.597787 1073226 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 18:34:23.597867 1073226 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 18:34:23.597982 1073226 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0729 18:34:23.598060 1073226 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 18:34:23.599582 1073226 out.go:204]   - Generating certificates and keys ...
	I0729 18:34:23.599687 1073226 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 18:34:23.599784 1073226 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 18:34:23.599878 1073226 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0729 18:34:23.599960 1073226 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0729 18:34:23.600047 1073226 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0729 18:34:23.600119 1073226 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0729 18:34:23.600194 1073226 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0729 18:34:23.600359 1073226 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-344156 localhost] and IPs [192.168.39.225 127.0.0.1 ::1]
	I0729 18:34:23.600430 1073226 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0729 18:34:23.600585 1073226 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-344156 localhost] and IPs [192.168.39.225 127.0.0.1 ::1]
	I0729 18:34:23.600673 1073226 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0729 18:34:23.600760 1073226 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0729 18:34:23.600819 1073226 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0729 18:34:23.600908 1073226 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 18:34:23.601008 1073226 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 18:34:23.601098 1073226 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0729 18:34:23.601147 1073226 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 18:34:23.601205 1073226 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 18:34:23.601248 1073226 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 18:34:23.601350 1073226 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 18:34:23.601445 1073226 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 18:34:23.603392 1073226 out.go:204]   - Booting up control plane ...
	I0729 18:34:23.603489 1073226 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 18:34:23.603575 1073226 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 18:34:23.603656 1073226 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 18:34:23.603772 1073226 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 18:34:23.603909 1073226 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 18:34:23.603952 1073226 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 18:34:23.604112 1073226 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0729 18:34:23.604228 1073226 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0729 18:34:23.604289 1073226 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.400017ms
	I0729 18:34:23.604392 1073226 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0729 18:34:23.604479 1073226 kubeadm.go:310] [api-check] The API server is healthy after 5.863407237s
	I0729 18:34:23.604628 1073226 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0729 18:34:23.604806 1073226 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0729 18:34:23.604865 1073226 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0729 18:34:23.605010 1073226 kubeadm.go:310] [mark-control-plane] Marking the node ha-344156 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0729 18:34:23.605056 1073226 kubeadm.go:310] [bootstrap-token] Using token: sgseks.zyny4ici27dvxrv8
	I0729 18:34:23.606101 1073226 out.go:204]   - Configuring RBAC rules ...
	I0729 18:34:23.606191 1073226 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0729 18:34:23.606263 1073226 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0729 18:34:23.606390 1073226 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0729 18:34:23.606505 1073226 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0729 18:34:23.606642 1073226 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0729 18:34:23.606756 1073226 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0729 18:34:23.606883 1073226 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0729 18:34:23.606921 1073226 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0729 18:34:23.606964 1073226 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0729 18:34:23.606970 1073226 kubeadm.go:310] 
	I0729 18:34:23.607037 1073226 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0729 18:34:23.607052 1073226 kubeadm.go:310] 
	I0729 18:34:23.607114 1073226 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0729 18:34:23.607120 1073226 kubeadm.go:310] 
	I0729 18:34:23.607158 1073226 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0729 18:34:23.607215 1073226 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0729 18:34:23.607276 1073226 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0729 18:34:23.607282 1073226 kubeadm.go:310] 
	I0729 18:34:23.607325 1073226 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0729 18:34:23.607338 1073226 kubeadm.go:310] 
	I0729 18:34:23.607377 1073226 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0729 18:34:23.607383 1073226 kubeadm.go:310] 
	I0729 18:34:23.607444 1073226 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0729 18:34:23.607509 1073226 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0729 18:34:23.607565 1073226 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0729 18:34:23.607571 1073226 kubeadm.go:310] 
	I0729 18:34:23.607639 1073226 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0729 18:34:23.607714 1073226 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0729 18:34:23.607720 1073226 kubeadm.go:310] 
	I0729 18:34:23.607806 1073226 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token sgseks.zyny4ici27dvxrv8 \
	I0729 18:34:23.607922 1073226 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:53e118302d1159bf0acd8cfe005edb508199276288ff17d6630ff7f7307f10b1 \
	I0729 18:34:23.607947 1073226 kubeadm.go:310] 	--control-plane 
	I0729 18:34:23.607955 1073226 kubeadm.go:310] 
	I0729 18:34:23.608040 1073226 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0729 18:34:23.608049 1073226 kubeadm.go:310] 
	I0729 18:34:23.608152 1073226 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token sgseks.zyny4ici27dvxrv8 \
	I0729 18:34:23.608270 1073226 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:53e118302d1159bf0acd8cfe005edb508199276288ff17d6630ff7f7307f10b1 
	I0729 18:34:23.608289 1073226 cni.go:84] Creating CNI manager for ""
	I0729 18:34:23.608298 1073226 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0729 18:34:23.609560 1073226 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0729 18:34:23.610663 1073226 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0729 18:34:23.616220 1073226 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.3/kubectl ...
	I0729 18:34:23.616237 1073226 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0729 18:34:23.633181 1073226 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0729 18:34:23.949245 1073226 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0729 18:34:23.949335 1073226 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:34:23.949384 1073226 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-344156 minikube.k8s.io/updated_at=2024_07_29T18_34_23_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=ff26f50543a99741e8a988a34136ffea2b41dbf0 minikube.k8s.io/name=ha-344156 minikube.k8s.io/primary=true
	I0729 18:34:23.964512 1073226 ops.go:34] apiserver oom_adj: -16
	I0729 18:34:24.041830 1073226 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:34:24.542767 1073226 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:34:25.042127 1073226 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:34:25.542523 1073226 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:34:26.042274 1073226 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:34:26.541910 1073226 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:34:27.041923 1073226 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:34:27.542326 1073226 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:34:28.042650 1073226 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:34:28.541905 1073226 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:34:29.042202 1073226 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:34:29.542480 1073226 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:34:30.042223 1073226 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:34:30.542488 1073226 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:34:31.042271 1073226 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:34:31.542196 1073226 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:34:32.042332 1073226 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:34:32.542177 1073226 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:34:33.041866 1073226 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:34:33.542092 1073226 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:34:34.042559 1073226 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:34:34.542475 1073226 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:34:35.042780 1073226 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:34:35.542015 1073226 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 18:34:35.636835 1073226 kubeadm.go:1113] duration metric: took 11.687570186s to wait for elevateKubeSystemPrivileges
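Aside (not part of the log): the repeated `kubectl get sa default` invocations above are a poll, retried roughly every 500ms for about 11.7s, waiting for the "default" ServiceAccount to exist after the cluster-admin binding for kube-system is created. A Go sketch of that retry loop follows; waitForDefaultServiceAccount and the 2-minute timeout are assumptions for illustration only.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForDefaultServiceAccount polls `kubectl get sa default` until it
// succeeds or the timeout expires.
func waitForDefaultServiceAccount(kubectl, kubeconfig string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		cmd := exec.Command("sudo", kubectl, "get", "sa", "default", "--kubeconfig="+kubeconfig)
		if err := cmd.Run(); err == nil {
			return nil // the default service account exists
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("default service account not ready after %s", timeout)
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	err := waitForDefaultServiceAccount(
		"/var/lib/minikube/binaries/v1.30.3/kubectl",
		"/var/lib/minikube/kubeconfig",
		2*time.Minute,
	)
	fmt.Println(err)
}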
	I0729 18:34:35.636876 1073226 kubeadm.go:394] duration metric: took 23.388999178s to StartCluster
	I0729 18:34:35.636899 1073226 settings.go:142] acquiring lock: {Name:mk8657322241b3b1f65443d6cee1b2ccb99f315e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 18:34:35.636988 1073226 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19312-1055011/kubeconfig
	I0729 18:34:35.637720 1073226 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-1055011/kubeconfig: {Name:mkf834b33d9b214f3561db5b8f8958d26700afbb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 18:34:35.637945 1073226 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0729 18:34:35.637959 1073226 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0729 18:34:35.637999 1073226 addons.go:69] Setting storage-provisioner=true in profile "ha-344156"
	I0729 18:34:35.638027 1073226 addons.go:234] Setting addon storage-provisioner=true in "ha-344156"
	I0729 18:34:35.638035 1073226 addons.go:69] Setting default-storageclass=true in profile "ha-344156"
	I0729 18:34:35.637941 1073226 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.225 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 18:34:35.638058 1073226 host.go:66] Checking if "ha-344156" exists ...
	I0729 18:34:35.638061 1073226 start.go:241] waiting for startup goroutines ...
	I0729 18:34:35.638094 1073226 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-344156"
	I0729 18:34:35.638151 1073226 config.go:182] Loaded profile config "ha-344156": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 18:34:35.638426 1073226 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:34:35.638466 1073226 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:34:35.638545 1073226 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:34:35.638576 1073226 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:34:35.653885 1073226 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44845
	I0729 18:34:35.653893 1073226 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46327
	I0729 18:34:35.654390 1073226 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:34:35.654425 1073226 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:34:35.654907 1073226 main.go:141] libmachine: Using API Version  1
	I0729 18:34:35.654927 1073226 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:34:35.655049 1073226 main.go:141] libmachine: Using API Version  1
	I0729 18:34:35.655074 1073226 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:34:35.655286 1073226 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:34:35.655389 1073226 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:34:35.655484 1073226 main.go:141] libmachine: (ha-344156) Calling .GetState
	I0729 18:34:35.655952 1073226 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:34:35.655988 1073226 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:34:35.657694 1073226 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19312-1055011/kubeconfig
	I0729 18:34:35.658052 1073226 kapi.go:59] client config for ha-344156: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156/client.crt", KeyFile:"/home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156/client.key", CAFile:"/home/jenkins/minikube-integration/19312-1055011/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d03460), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0729 18:34:35.658604 1073226 cert_rotation.go:137] Starting client certificate rotation controller
	I0729 18:34:35.658972 1073226 addons.go:234] Setting addon default-storageclass=true in "ha-344156"
	I0729 18:34:35.659030 1073226 host.go:66] Checking if "ha-344156" exists ...
	I0729 18:34:35.659406 1073226 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:34:35.659441 1073226 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:34:35.670770 1073226 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38333
	I0729 18:34:35.671190 1073226 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:34:35.671657 1073226 main.go:141] libmachine: Using API Version  1
	I0729 18:34:35.671679 1073226 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:34:35.672006 1073226 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:34:35.672218 1073226 main.go:141] libmachine: (ha-344156) Calling .GetState
	I0729 18:34:35.674044 1073226 main.go:141] libmachine: (ha-344156) Calling .DriverName
	I0729 18:34:35.674070 1073226 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33855
	I0729 18:34:35.674454 1073226 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:34:35.674892 1073226 main.go:141] libmachine: Using API Version  1
	I0729 18:34:35.674912 1073226 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:34:35.675242 1073226 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:34:35.675794 1073226 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:34:35.675823 1073226 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:34:35.676169 1073226 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 18:34:35.677540 1073226 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 18:34:35.677564 1073226 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0729 18:34:35.677579 1073226 main.go:141] libmachine: (ha-344156) Calling .GetSSHHostname
	I0729 18:34:35.680423 1073226 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
	I0729 18:34:35.680829 1073226 main.go:141] libmachine: (ha-344156) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:fc:98", ip: ""} in network mk-ha-344156: {Iface:virbr1 ExpiryTime:2024-07-29 19:33:59 +0000 UTC Type:0 Mac:52:54:00:a1:fc:98 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-344156 Clientid:01:52:54:00:a1:fc:98}
	I0729 18:34:35.680855 1073226 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined IP address 192.168.39.225 and MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
	I0729 18:34:35.681019 1073226 main.go:141] libmachine: (ha-344156) Calling .GetSSHPort
	I0729 18:34:35.681182 1073226 main.go:141] libmachine: (ha-344156) Calling .GetSSHKeyPath
	I0729 18:34:35.681340 1073226 main.go:141] libmachine: (ha-344156) Calling .GetSSHUsername
	I0729 18:34:35.681482 1073226 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/ha-344156/id_rsa Username:docker}
	I0729 18:34:35.693141 1073226 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38233
	I0729 18:34:35.693524 1073226 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:34:35.694037 1073226 main.go:141] libmachine: Using API Version  1
	I0729 18:34:35.694063 1073226 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:34:35.694420 1073226 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:34:35.694615 1073226 main.go:141] libmachine: (ha-344156) Calling .GetState
	I0729 18:34:35.696046 1073226 main.go:141] libmachine: (ha-344156) Calling .DriverName
	I0729 18:34:35.696239 1073226 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0729 18:34:35.696252 1073226 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0729 18:34:35.696268 1073226 main.go:141] libmachine: (ha-344156) Calling .GetSSHHostname
	I0729 18:34:35.698730 1073226 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
	I0729 18:34:35.699174 1073226 main.go:141] libmachine: (ha-344156) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:fc:98", ip: ""} in network mk-ha-344156: {Iface:virbr1 ExpiryTime:2024-07-29 19:33:59 +0000 UTC Type:0 Mac:52:54:00:a1:fc:98 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-344156 Clientid:01:52:54:00:a1:fc:98}
	I0729 18:34:35.699203 1073226 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined IP address 192.168.39.225 and MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
	I0729 18:34:35.699389 1073226 main.go:141] libmachine: (ha-344156) Calling .GetSSHPort
	I0729 18:34:35.699579 1073226 main.go:141] libmachine: (ha-344156) Calling .GetSSHKeyPath
	I0729 18:34:35.699740 1073226 main.go:141] libmachine: (ha-344156) Calling .GetSSHUsername
	I0729 18:34:35.699898 1073226 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/ha-344156/id_rsa Username:docker}
	I0729 18:34:35.758239 1073226 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0729 18:34:35.797694 1073226 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 18:34:35.906121 1073226 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0729 18:34:36.202968 1073226 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
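The two commands above rewrite the kube-system/coredns ConfigMap so that host.minikube.internal resolves to the host gateway (192.168.39.1) from inside the cluster. Below is a minimal sketch of the same edit done through client-go instead of piping sed into kubectl over SSH; the function name injectHostRecord, the clientset construction, and the exact Corefile layout are assumptions for illustration, not minikube's implementation:

// Illustrative sketch only: patch kube-system/coredns so that
// host.minikube.internal resolves inside the cluster.
package example

import (
	"context"
	"fmt"
	"strings"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func injectHostRecord(kubeconfig, hostIP string) error {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return err
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		return err
	}
	cm, err := cs.CoreV1().ConfigMaps("kube-system").Get(context.TODO(), "coredns", metav1.GetOptions{})
	if err != nil {
		return err
	}
	// Insert a hosts{} stanza ahead of the forward plugin, mirroring the sed
	// expression in the log above (Corefile layout assumed).
	hosts := fmt.Sprintf("        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n", hostIP)
	cm.Data["Corefile"] = strings.Replace(cm.Data["Corefile"], "        forward .", hosts+"        forward .", 1)
	_, err = cs.CoreV1().ConfigMaps("kube-system").Update(context.TODO(), cm, metav1.UpdateOptions{})
	return err
}

The "host record injected" log line above corresponds to this kind of update succeeding and CoreDNS reloading the modified Corefile.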
	I0729 18:34:36.547904 1073226 main.go:141] libmachine: Making call to close driver server
	I0729 18:34:36.547929 1073226 main.go:141] libmachine: (ha-344156) Calling .Close
	I0729 18:34:36.547928 1073226 main.go:141] libmachine: Making call to close driver server
	I0729 18:34:36.547951 1073226 main.go:141] libmachine: (ha-344156) Calling .Close
	I0729 18:34:36.548258 1073226 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:34:36.548354 1073226 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:34:36.548368 1073226 main.go:141] libmachine: Making call to close driver server
	I0729 18:34:36.548370 1073226 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:34:36.548384 1073226 main.go:141] libmachine: (ha-344156) Calling .Close
	I0729 18:34:36.548392 1073226 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:34:36.548409 1073226 main.go:141] libmachine: Making call to close driver server
	I0729 18:34:36.548335 1073226 main.go:141] libmachine: (ha-344156) DBG | Closing plugin on server side
	I0729 18:34:36.548438 1073226 main.go:141] libmachine: (ha-344156) DBG | Closing plugin on server side
	I0729 18:34:36.548422 1073226 main.go:141] libmachine: (ha-344156) Calling .Close
	I0729 18:34:36.548658 1073226 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:34:36.548671 1073226 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:34:36.548914 1073226 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:34:36.548943 1073226 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:34:36.549087 1073226 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0729 18:34:36.549097 1073226 round_trippers.go:469] Request Headers:
	I0729 18:34:36.549107 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:34:36.549116 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:34:36.565136 1073226 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I0729 18:34:36.565928 1073226 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0729 18:34:36.565947 1073226 round_trippers.go:469] Request Headers:
	I0729 18:34:36.565954 1073226 round_trippers.go:473]     Content-Type: application/json
	I0729 18:34:36.565958 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:34:36.565961 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:34:36.579656 1073226 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0729 18:34:36.579850 1073226 main.go:141] libmachine: Making call to close driver server
	I0729 18:34:36.579864 1073226 main.go:141] libmachine: (ha-344156) Calling .Close
	I0729 18:34:36.580137 1073226 main.go:141] libmachine: Successfully made call to close driver server
	I0729 18:34:36.580157 1073226 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 18:34:36.581895 1073226 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0729 18:34:36.583107 1073226 addons.go:510] duration metric: took 945.141264ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0729 18:34:36.583153 1073226 start.go:246] waiting for cluster config update ...
	I0729 18:34:36.583168 1073226 start.go:255] writing updated cluster config ...
	I0729 18:34:36.584831 1073226 out.go:177] 
	I0729 18:34:36.586335 1073226 config.go:182] Loaded profile config "ha-344156": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 18:34:36.586439 1073226 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156/config.json ...
	I0729 18:34:36.589087 1073226 out.go:177] * Starting "ha-344156-m02" control-plane node in "ha-344156" cluster
	I0729 18:34:36.590510 1073226 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 18:34:36.590538 1073226 cache.go:56] Caching tarball of preloaded images
	I0729 18:34:36.590631 1073226 preload.go:172] Found /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 18:34:36.590648 1073226 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0729 18:34:36.590741 1073226 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156/config.json ...
	I0729 18:34:36.590953 1073226 start.go:360] acquireMachinesLock for ha-344156-m02: {Name:mk0d8d947666df844b5fc2c0e0eebbfed69b4140 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 18:34:36.591016 1073226 start.go:364] duration metric: took 36.328µs to acquireMachinesLock for "ha-344156-m02"
	I0729 18:34:36.591040 1073226 start.go:93] Provisioning new machine with config: &{Name:ha-344156 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-344156 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.225 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 18:34:36.591147 1073226 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0729 18:34:36.592716 1073226 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 18:34:36.592826 1073226 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:34:36.592861 1073226 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:34:36.608998 1073226 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33383
	I0729 18:34:36.609514 1073226 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:34:36.610045 1073226 main.go:141] libmachine: Using API Version  1
	I0729 18:34:36.610072 1073226 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:34:36.610395 1073226 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:34:36.610583 1073226 main.go:141] libmachine: (ha-344156-m02) Calling .GetMachineName
	I0729 18:34:36.610750 1073226 main.go:141] libmachine: (ha-344156-m02) Calling .DriverName
	I0729 18:34:36.610944 1073226 start.go:159] libmachine.API.Create for "ha-344156" (driver="kvm2")
	I0729 18:34:36.610967 1073226 client.go:168] LocalClient.Create starting
	I0729 18:34:36.611005 1073226 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem
	I0729 18:34:36.611043 1073226 main.go:141] libmachine: Decoding PEM data...
	I0729 18:34:36.611065 1073226 main.go:141] libmachine: Parsing certificate...
	I0729 18:34:36.611139 1073226 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/cert.pem
	I0729 18:34:36.611166 1073226 main.go:141] libmachine: Decoding PEM data...
	I0729 18:34:36.611181 1073226 main.go:141] libmachine: Parsing certificate...
	I0729 18:34:36.611207 1073226 main.go:141] libmachine: Running pre-create checks...
	I0729 18:34:36.611218 1073226 main.go:141] libmachine: (ha-344156-m02) Calling .PreCreateCheck
	I0729 18:34:36.611452 1073226 main.go:141] libmachine: (ha-344156-m02) Calling .GetConfigRaw
	I0729 18:34:36.611857 1073226 main.go:141] libmachine: Creating machine...
	I0729 18:34:36.611873 1073226 main.go:141] libmachine: (ha-344156-m02) Calling .Create
	I0729 18:34:36.612019 1073226 main.go:141] libmachine: (ha-344156-m02) Creating KVM machine...
	I0729 18:34:36.613126 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | found existing default KVM network
	I0729 18:34:36.613276 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | found existing private KVM network mk-ha-344156
	I0729 18:34:36.613436 1073226 main.go:141] libmachine: (ha-344156-m02) Setting up store path in /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/ha-344156-m02 ...
	I0729 18:34:36.613464 1073226 main.go:141] libmachine: (ha-344156-m02) Building disk image from file:///home/jenkins/minikube-integration/19312-1055011/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso
	I0729 18:34:36.613536 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | I0729 18:34:36.613428 1073621 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19312-1055011/.minikube
	I0729 18:34:36.613626 1073226 main.go:141] libmachine: (ha-344156-m02) Downloading /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19312-1055011/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso...
	I0729 18:34:36.890782 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | I0729 18:34:36.890652 1073621 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/ha-344156-m02/id_rsa...
	I0729 18:34:36.976727 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | I0729 18:34:36.976623 1073621 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/ha-344156-m02/ha-344156-m02.rawdisk...
	I0729 18:34:36.976763 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | Writing magic tar header
	I0729 18:34:36.976777 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | Writing SSH key tar header
	I0729 18:34:36.976793 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | I0729 18:34:36.976756 1073621 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/ha-344156-m02 ...
	I0729 18:34:36.976904 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/ha-344156-m02
	I0729 18:34:36.976940 1073226 main.go:141] libmachine: (ha-344156-m02) Setting executable bit set on /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/ha-344156-m02 (perms=drwx------)
	I0729 18:34:36.976952 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19312-1055011/.minikube/machines
	I0729 18:34:36.976969 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19312-1055011/.minikube
	I0729 18:34:36.976983 1073226 main.go:141] libmachine: (ha-344156-m02) Setting executable bit set on /home/jenkins/minikube-integration/19312-1055011/.minikube/machines (perms=drwxr-xr-x)
	I0729 18:34:36.976996 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19312-1055011
	I0729 18:34:36.977020 1073226 main.go:141] libmachine: (ha-344156-m02) Setting executable bit set on /home/jenkins/minikube-integration/19312-1055011/.minikube (perms=drwxr-xr-x)
	I0729 18:34:36.977035 1073226 main.go:141] libmachine: (ha-344156-m02) Setting executable bit set on /home/jenkins/minikube-integration/19312-1055011 (perms=drwxrwxr-x)
	I0729 18:34:36.977040 1073226 main.go:141] libmachine: (ha-344156-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0729 18:34:36.977050 1073226 main.go:141] libmachine: (ha-344156-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0729 18:34:36.977055 1073226 main.go:141] libmachine: (ha-344156-m02) Creating domain...
	I0729 18:34:36.977061 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0729 18:34:36.977070 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | Checking permissions on dir: /home/jenkins
	I0729 18:34:36.977076 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | Checking permissions on dir: /home
	I0729 18:34:36.977082 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | Skipping /home - not owner
	I0729 18:34:36.978014 1073226 main.go:141] libmachine: (ha-344156-m02) define libvirt domain using xml: 
	I0729 18:34:36.978033 1073226 main.go:141] libmachine: (ha-344156-m02) <domain type='kvm'>
	I0729 18:34:36.978043 1073226 main.go:141] libmachine: (ha-344156-m02)   <name>ha-344156-m02</name>
	I0729 18:34:36.978051 1073226 main.go:141] libmachine: (ha-344156-m02)   <memory unit='MiB'>2200</memory>
	I0729 18:34:36.978059 1073226 main.go:141] libmachine: (ha-344156-m02)   <vcpu>2</vcpu>
	I0729 18:34:36.978067 1073226 main.go:141] libmachine: (ha-344156-m02)   <features>
	I0729 18:34:36.978075 1073226 main.go:141] libmachine: (ha-344156-m02)     <acpi/>
	I0729 18:34:36.978080 1073226 main.go:141] libmachine: (ha-344156-m02)     <apic/>
	I0729 18:34:36.978089 1073226 main.go:141] libmachine: (ha-344156-m02)     <pae/>
	I0729 18:34:36.978093 1073226 main.go:141] libmachine: (ha-344156-m02)     
	I0729 18:34:36.978098 1073226 main.go:141] libmachine: (ha-344156-m02)   </features>
	I0729 18:34:36.978103 1073226 main.go:141] libmachine: (ha-344156-m02)   <cpu mode='host-passthrough'>
	I0729 18:34:36.978108 1073226 main.go:141] libmachine: (ha-344156-m02)   
	I0729 18:34:36.978114 1073226 main.go:141] libmachine: (ha-344156-m02)   </cpu>
	I0729 18:34:36.978122 1073226 main.go:141] libmachine: (ha-344156-m02)   <os>
	I0729 18:34:36.978131 1073226 main.go:141] libmachine: (ha-344156-m02)     <type>hvm</type>
	I0729 18:34:36.978142 1073226 main.go:141] libmachine: (ha-344156-m02)     <boot dev='cdrom'/>
	I0729 18:34:36.978154 1073226 main.go:141] libmachine: (ha-344156-m02)     <boot dev='hd'/>
	I0729 18:34:36.978162 1073226 main.go:141] libmachine: (ha-344156-m02)     <bootmenu enable='no'/>
	I0729 18:34:36.978166 1073226 main.go:141] libmachine: (ha-344156-m02)   </os>
	I0729 18:34:36.978172 1073226 main.go:141] libmachine: (ha-344156-m02)   <devices>
	I0729 18:34:36.978177 1073226 main.go:141] libmachine: (ha-344156-m02)     <disk type='file' device='cdrom'>
	I0729 18:34:36.978191 1073226 main.go:141] libmachine: (ha-344156-m02)       <source file='/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/ha-344156-m02/boot2docker.iso'/>
	I0729 18:34:36.978202 1073226 main.go:141] libmachine: (ha-344156-m02)       <target dev='hdc' bus='scsi'/>
	I0729 18:34:36.978212 1073226 main.go:141] libmachine: (ha-344156-m02)       <readonly/>
	I0729 18:34:36.978220 1073226 main.go:141] libmachine: (ha-344156-m02)     </disk>
	I0729 18:34:36.978243 1073226 main.go:141] libmachine: (ha-344156-m02)     <disk type='file' device='disk'>
	I0729 18:34:36.978257 1073226 main.go:141] libmachine: (ha-344156-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0729 18:34:36.978267 1073226 main.go:141] libmachine: (ha-344156-m02)       <source file='/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/ha-344156-m02/ha-344156-m02.rawdisk'/>
	I0729 18:34:36.978273 1073226 main.go:141] libmachine: (ha-344156-m02)       <target dev='hda' bus='virtio'/>
	I0729 18:34:36.978279 1073226 main.go:141] libmachine: (ha-344156-m02)     </disk>
	I0729 18:34:36.978285 1073226 main.go:141] libmachine: (ha-344156-m02)     <interface type='network'>
	I0729 18:34:36.978290 1073226 main.go:141] libmachine: (ha-344156-m02)       <source network='mk-ha-344156'/>
	I0729 18:34:36.978326 1073226 main.go:141] libmachine: (ha-344156-m02)       <model type='virtio'/>
	I0729 18:34:36.978348 1073226 main.go:141] libmachine: (ha-344156-m02)     </interface>
	I0729 18:34:36.978359 1073226 main.go:141] libmachine: (ha-344156-m02)     <interface type='network'>
	I0729 18:34:36.978373 1073226 main.go:141] libmachine: (ha-344156-m02)       <source network='default'/>
	I0729 18:34:36.978386 1073226 main.go:141] libmachine: (ha-344156-m02)       <model type='virtio'/>
	I0729 18:34:36.978395 1073226 main.go:141] libmachine: (ha-344156-m02)     </interface>
	I0729 18:34:36.978408 1073226 main.go:141] libmachine: (ha-344156-m02)     <serial type='pty'>
	I0729 18:34:36.978423 1073226 main.go:141] libmachine: (ha-344156-m02)       <target port='0'/>
	I0729 18:34:36.978435 1073226 main.go:141] libmachine: (ha-344156-m02)     </serial>
	I0729 18:34:36.978445 1073226 main.go:141] libmachine: (ha-344156-m02)     <console type='pty'>
	I0729 18:34:36.978457 1073226 main.go:141] libmachine: (ha-344156-m02)       <target type='serial' port='0'/>
	I0729 18:34:36.978467 1073226 main.go:141] libmachine: (ha-344156-m02)     </console>
	I0729 18:34:36.978483 1073226 main.go:141] libmachine: (ha-344156-m02)     <rng model='virtio'>
	I0729 18:34:36.978500 1073226 main.go:141] libmachine: (ha-344156-m02)       <backend model='random'>/dev/random</backend>
	I0729 18:34:36.978528 1073226 main.go:141] libmachine: (ha-344156-m02)     </rng>
	I0729 18:34:36.978546 1073226 main.go:141] libmachine: (ha-344156-m02)     
	I0729 18:34:36.978575 1073226 main.go:141] libmachine: (ha-344156-m02)     
	I0729 18:34:36.978594 1073226 main.go:141] libmachine: (ha-344156-m02)   </devices>
	I0729 18:34:36.978604 1073226 main.go:141] libmachine: (ha-344156-m02) </domain>
	I0729 18:34:36.978614 1073226 main.go:141] libmachine: (ha-344156-m02) 
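The XML dumped above is the domain definition the kvm2 driver hands to libvirt for ha-344156-m02 (boot2docker ISO as cdrom, raw disk, two virtio NICs on mk-ha-344156 and default). A minimal sketch of defining and starting such a domain with a libvirt Go binding follows; the import path libvirt.org/go/libvirt and the helper name defineAndStart are assumptions, and error handling is trimmed:

// Illustrative sketch: define the domain from the XML shown above and boot it.
package example

import (
	"libvirt.org/go/libvirt"
)

func defineAndStart(domainXML string) error {
	conn, err := libvirt.NewConnect("qemu:///system") // KVMQemuURI from the machine config above
	if err != nil {
		return err
	}
	defer conn.Close()

	dom, err := conn.DomainDefineXML(domainXML) // persistent definition, like `virsh define`
	if err != nil {
		return err
	}
	defer dom.Free()

	return dom.Create() // starts the VM, like `virsh start`
}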
	I0729 18:34:36.985387 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | domain ha-344156-m02 has defined MAC address 52:54:00:ad:7d:7c in network default
	I0729 18:34:36.985986 1073226 main.go:141] libmachine: (ha-344156-m02) Ensuring networks are active...
	I0729 18:34:36.986005 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | domain ha-344156-m02 has defined MAC address 52:54:00:99:a3:97 in network mk-ha-344156
	I0729 18:34:36.986742 1073226 main.go:141] libmachine: (ha-344156-m02) Ensuring network default is active
	I0729 18:34:36.987104 1073226 main.go:141] libmachine: (ha-344156-m02) Ensuring network mk-ha-344156 is active
	I0729 18:34:36.987489 1073226 main.go:141] libmachine: (ha-344156-m02) Getting domain xml...
	I0729 18:34:36.988159 1073226 main.go:141] libmachine: (ha-344156-m02) Creating domain...
	I0729 18:34:38.215213 1073226 main.go:141] libmachine: (ha-344156-m02) Waiting to get IP...
	I0729 18:34:38.216178 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | domain ha-344156-m02 has defined MAC address 52:54:00:99:a3:97 in network mk-ha-344156
	I0729 18:34:38.216692 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | unable to find current IP address of domain ha-344156-m02 in network mk-ha-344156
	I0729 18:34:38.216724 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | I0729 18:34:38.216663 1073621 retry.go:31] will retry after 192.743587ms: waiting for machine to come up
	I0729 18:34:38.411270 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | domain ha-344156-m02 has defined MAC address 52:54:00:99:a3:97 in network mk-ha-344156
	I0729 18:34:38.411730 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | unable to find current IP address of domain ha-344156-m02 in network mk-ha-344156
	I0729 18:34:38.411758 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | I0729 18:34:38.411691 1073621 retry.go:31] will retry after 325.808277ms: waiting for machine to come up
	I0729 18:34:38.739389 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | domain ha-344156-m02 has defined MAC address 52:54:00:99:a3:97 in network mk-ha-344156
	I0729 18:34:38.739828 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | unable to find current IP address of domain ha-344156-m02 in network mk-ha-344156
	I0729 18:34:38.739855 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | I0729 18:34:38.739792 1073621 retry.go:31] will retry after 424.809383ms: waiting for machine to come up
	I0729 18:34:39.165984 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | domain ha-344156-m02 has defined MAC address 52:54:00:99:a3:97 in network mk-ha-344156
	I0729 18:34:39.166362 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | unable to find current IP address of domain ha-344156-m02 in network mk-ha-344156
	I0729 18:34:39.166397 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | I0729 18:34:39.166326 1073621 retry.go:31] will retry after 605.465441ms: waiting for machine to come up
	I0729 18:34:39.773004 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | domain ha-344156-m02 has defined MAC address 52:54:00:99:a3:97 in network mk-ha-344156
	I0729 18:34:39.773530 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | unable to find current IP address of domain ha-344156-m02 in network mk-ha-344156
	I0729 18:34:39.773562 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | I0729 18:34:39.773460 1073621 retry.go:31] will retry after 703.376547ms: waiting for machine to come up
	I0729 18:34:40.478241 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | domain ha-344156-m02 has defined MAC address 52:54:00:99:a3:97 in network mk-ha-344156
	I0729 18:34:40.478719 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | unable to find current IP address of domain ha-344156-m02 in network mk-ha-344156
	I0729 18:34:40.478750 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | I0729 18:34:40.478660 1073621 retry.go:31] will retry after 880.682621ms: waiting for machine to come up
	I0729 18:34:41.360556 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | domain ha-344156-m02 has defined MAC address 52:54:00:99:a3:97 in network mk-ha-344156
	I0729 18:34:41.360958 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | unable to find current IP address of domain ha-344156-m02 in network mk-ha-344156
	I0729 18:34:41.360987 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | I0729 18:34:41.360915 1073621 retry.go:31] will retry after 995.983878ms: waiting for machine to come up
	I0729 18:34:42.358221 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | domain ha-344156-m02 has defined MAC address 52:54:00:99:a3:97 in network mk-ha-344156
	I0729 18:34:42.358641 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | unable to find current IP address of domain ha-344156-m02 in network mk-ha-344156
	I0729 18:34:42.358662 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | I0729 18:34:42.358599 1073621 retry.go:31] will retry after 1.181830881s: waiting for machine to come up
	I0729 18:34:43.541916 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | domain ha-344156-m02 has defined MAC address 52:54:00:99:a3:97 in network mk-ha-344156
	I0729 18:34:43.542421 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | unable to find current IP address of domain ha-344156-m02 in network mk-ha-344156
	I0729 18:34:43.542481 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | I0729 18:34:43.542305 1073621 retry.go:31] will retry after 1.736643534s: waiting for machine to come up
	I0729 18:34:45.281194 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | domain ha-344156-m02 has defined MAC address 52:54:00:99:a3:97 in network mk-ha-344156
	I0729 18:34:45.281674 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | unable to find current IP address of domain ha-344156-m02 in network mk-ha-344156
	I0729 18:34:45.281705 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | I0729 18:34:45.281608 1073621 retry.go:31] will retry after 2.275726311s: waiting for machine to come up
	I0729 18:34:47.558887 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | domain ha-344156-m02 has defined MAC address 52:54:00:99:a3:97 in network mk-ha-344156
	I0729 18:34:47.559306 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | unable to find current IP address of domain ha-344156-m02 in network mk-ha-344156
	I0729 18:34:47.559329 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | I0729 18:34:47.559257 1073621 retry.go:31] will retry after 2.748225942s: waiting for machine to come up
	I0729 18:34:50.308738 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | domain ha-344156-m02 has defined MAC address 52:54:00:99:a3:97 in network mk-ha-344156
	I0729 18:34:50.309228 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | unable to find current IP address of domain ha-344156-m02 in network mk-ha-344156
	I0729 18:34:50.309259 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | I0729 18:34:50.309176 1073621 retry.go:31] will retry after 2.570592713s: waiting for machine to come up
	I0729 18:34:52.882040 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | domain ha-344156-m02 has defined MAC address 52:54:00:99:a3:97 in network mk-ha-344156
	I0729 18:34:52.882452 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | unable to find current IP address of domain ha-344156-m02 in network mk-ha-344156
	I0729 18:34:52.882481 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | I0729 18:34:52.882399 1073621 retry.go:31] will retry after 4.385805767s: waiting for machine to come up
	I0729 18:34:57.269448 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | domain ha-344156-m02 has defined MAC address 52:54:00:99:a3:97 in network mk-ha-344156
	I0729 18:34:57.269863 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | domain ha-344156-m02 has current primary IP address 192.168.39.249 and MAC address 52:54:00:99:a3:97 in network mk-ha-344156
	I0729 18:34:57.269886 1073226 main.go:141] libmachine: (ha-344156-m02) Found IP for machine: 192.168.39.249
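The repeated "will retry after ..." lines above come from a backoff loop that keeps checking the libvirt DHCP leases until the new domain reports an IP address. A generic sketch of that wait-for-IP pattern is shown below; this is not minikube's retry.go, and the timings, jitter, and function name waitForIP are assumptions chosen to roughly match the spacing seen in the log:

// Illustrative sketch: poll with an increasing, jittered delay until the
// lookup succeeds or a deadline passes.
package example

import (
	"errors"
	"math/rand"
	"time"
)

func waitForIP(lookup func() (string, error), timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 200 * time.Millisecond
	for {
		ip, err := lookup()
		if err == nil && ip != "" {
			return ip, nil
		}
		if time.Now().After(deadline) {
			return "", errors.New("timed out waiting for machine to come up")
		}
		// Grow the delay and add jitter before the next attempt.
		time.Sleep(delay + time.Duration(rand.Int63n(int64(delay)/2+1)))
		if delay < 5*time.Second {
			delay = delay * 3 / 2
		}
	}
}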
	I0729 18:34:57.269900 1073226 main.go:141] libmachine: (ha-344156-m02) Reserving static IP address...
	I0729 18:34:57.270257 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | unable to find host DHCP lease matching {name: "ha-344156-m02", mac: "52:54:00:99:a3:97", ip: "192.168.39.249"} in network mk-ha-344156
	I0729 18:34:57.341185 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | Getting to WaitForSSH function...
	I0729 18:34:57.341216 1073226 main.go:141] libmachine: (ha-344156-m02) Reserved static IP address: 192.168.39.249
	I0729 18:34:57.341229 1073226 main.go:141] libmachine: (ha-344156-m02) Waiting for SSH to be available...
	I0729 18:34:57.343817 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | domain ha-344156-m02 has defined MAC address 52:54:00:99:a3:97 in network mk-ha-344156
	I0729 18:34:57.344238 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:a3:97", ip: ""} in network mk-ha-344156: {Iface:virbr1 ExpiryTime:2024-07-29 19:34:51 +0000 UTC Type:0 Mac:52:54:00:99:a3:97 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:minikube Clientid:01:52:54:00:99:a3:97}
	I0729 18:34:57.344263 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | domain ha-344156-m02 has defined IP address 192.168.39.249 and MAC address 52:54:00:99:a3:97 in network mk-ha-344156
	I0729 18:34:57.344302 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | Using SSH client type: external
	I0729 18:34:57.344318 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/ha-344156-m02/id_rsa (-rw-------)
	I0729 18:34:57.344433 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.249 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/ha-344156-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 18:34:57.344453 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | About to run SSH command:
	I0729 18:34:57.344471 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | exit 0
	I0729 18:34:57.467216 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | SSH cmd err, output: <nil>: 
	I0729 18:34:57.467478 1073226 main.go:141] libmachine: (ha-344156-m02) KVM machine creation complete!
	I0729 18:34:57.467831 1073226 main.go:141] libmachine: (ha-344156-m02) Calling .GetConfigRaw
	I0729 18:34:57.468411 1073226 main.go:141] libmachine: (ha-344156-m02) Calling .DriverName
	I0729 18:34:57.468614 1073226 main.go:141] libmachine: (ha-344156-m02) Calling .DriverName
	I0729 18:34:57.468782 1073226 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0729 18:34:57.468798 1073226 main.go:141] libmachine: (ha-344156-m02) Calling .GetState
	I0729 18:34:57.470034 1073226 main.go:141] libmachine: Detecting operating system of created instance...
	I0729 18:34:57.470047 1073226 main.go:141] libmachine: Waiting for SSH to be available...
	I0729 18:34:57.470052 1073226 main.go:141] libmachine: Getting to WaitForSSH function...
	I0729 18:34:57.470058 1073226 main.go:141] libmachine: (ha-344156-m02) Calling .GetSSHHostname
	I0729 18:34:57.472308 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | domain ha-344156-m02 has defined MAC address 52:54:00:99:a3:97 in network mk-ha-344156
	I0729 18:34:57.472707 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:a3:97", ip: ""} in network mk-ha-344156: {Iface:virbr1 ExpiryTime:2024-07-29 19:34:51 +0000 UTC Type:0 Mac:52:54:00:99:a3:97 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-344156-m02 Clientid:01:52:54:00:99:a3:97}
	I0729 18:34:57.472737 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | domain ha-344156-m02 has defined IP address 192.168.39.249 and MAC address 52:54:00:99:a3:97 in network mk-ha-344156
	I0729 18:34:57.472874 1073226 main.go:141] libmachine: (ha-344156-m02) Calling .GetSSHPort
	I0729 18:34:57.473052 1073226 main.go:141] libmachine: (ha-344156-m02) Calling .GetSSHKeyPath
	I0729 18:34:57.473226 1073226 main.go:141] libmachine: (ha-344156-m02) Calling .GetSSHKeyPath
	I0729 18:34:57.473376 1073226 main.go:141] libmachine: (ha-344156-m02) Calling .GetSSHUsername
	I0729 18:34:57.473544 1073226 main.go:141] libmachine: Using SSH client type: native
	I0729 18:34:57.473850 1073226 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.249 22 <nil> <nil>}
	I0729 18:34:57.473870 1073226 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0729 18:34:57.574390 1073226 main.go:141] libmachine: SSH cmd err, output: <nil>: 
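"Waiting for SSH to be available" is satisfied by repeatedly running "exit 0" over SSH with the generated machine key, first through the external ssh binary and then through the native Go client, as the log above shows. A sketch of the same reachability probe using golang.org/x/crypto/ssh follows; the address, user, and key path are the ones in the log, while the helper name sshAlive and the timeout are illustrative:

// Illustrative sketch: run "exit 0" over SSH to confirm the machine is reachable.
package example

import (
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

func sshAlive(addr, user, keyPath string) error {
	keyPEM, err := os.ReadFile(keyPath) // e.g. .../machines/ha-344156-m02/id_rsa
	if err != nil {
		return err
	}
	signer, err := ssh.ParsePrivateKey(keyPEM)
	if err != nil {
		return err
	}
	cfg := &ssh.ClientConfig{
		User:            user, // "docker" in the log above
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // mirrors StrictHostKeyChecking=no
		Timeout:         10 * time.Second,
	}
	client, err := ssh.Dial("tcp", addr, cfg) // e.g. "192.168.39.249:22"
	if err != nil {
		return err
	}
	defer client.Close()

	sess, err := client.NewSession()
	if err != nil {
		return err
	}
	defer sess.Close()
	return sess.Run("exit 0")
}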
	I0729 18:34:57.574414 1073226 main.go:141] libmachine: Detecting the provisioner...
	I0729 18:34:57.574422 1073226 main.go:141] libmachine: (ha-344156-m02) Calling .GetSSHHostname
	I0729 18:34:57.577605 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | domain ha-344156-m02 has defined MAC address 52:54:00:99:a3:97 in network mk-ha-344156
	I0729 18:34:57.578081 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:a3:97", ip: ""} in network mk-ha-344156: {Iface:virbr1 ExpiryTime:2024-07-29 19:34:51 +0000 UTC Type:0 Mac:52:54:00:99:a3:97 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-344156-m02 Clientid:01:52:54:00:99:a3:97}
	I0729 18:34:57.578112 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | domain ha-344156-m02 has defined IP address 192.168.39.249 and MAC address 52:54:00:99:a3:97 in network mk-ha-344156
	I0729 18:34:57.578295 1073226 main.go:141] libmachine: (ha-344156-m02) Calling .GetSSHPort
	I0729 18:34:57.578503 1073226 main.go:141] libmachine: (ha-344156-m02) Calling .GetSSHKeyPath
	I0729 18:34:57.578666 1073226 main.go:141] libmachine: (ha-344156-m02) Calling .GetSSHKeyPath
	I0729 18:34:57.578882 1073226 main.go:141] libmachine: (ha-344156-m02) Calling .GetSSHUsername
	I0729 18:34:57.579067 1073226 main.go:141] libmachine: Using SSH client type: native
	I0729 18:34:57.579283 1073226 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.249 22 <nil> <nil>}
	I0729 18:34:57.579301 1073226 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0729 18:34:57.679800 1073226 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0729 18:34:57.679911 1073226 main.go:141] libmachine: found compatible host: buildroot
	I0729 18:34:57.679927 1073226 main.go:141] libmachine: Provisioning with buildroot...
	I0729 18:34:57.679939 1073226 main.go:141] libmachine: (ha-344156-m02) Calling .GetMachineName
	I0729 18:34:57.680207 1073226 buildroot.go:166] provisioning hostname "ha-344156-m02"
	I0729 18:34:57.680236 1073226 main.go:141] libmachine: (ha-344156-m02) Calling .GetMachineName
	I0729 18:34:57.680413 1073226 main.go:141] libmachine: (ha-344156-m02) Calling .GetSSHHostname
	I0729 18:34:57.683173 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | domain ha-344156-m02 has defined MAC address 52:54:00:99:a3:97 in network mk-ha-344156
	I0729 18:34:57.683473 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:a3:97", ip: ""} in network mk-ha-344156: {Iface:virbr1 ExpiryTime:2024-07-29 19:34:51 +0000 UTC Type:0 Mac:52:54:00:99:a3:97 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-344156-m02 Clientid:01:52:54:00:99:a3:97}
	I0729 18:34:57.683505 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | domain ha-344156-m02 has defined IP address 192.168.39.249 and MAC address 52:54:00:99:a3:97 in network mk-ha-344156
	I0729 18:34:57.683632 1073226 main.go:141] libmachine: (ha-344156-m02) Calling .GetSSHPort
	I0729 18:34:57.683814 1073226 main.go:141] libmachine: (ha-344156-m02) Calling .GetSSHKeyPath
	I0729 18:34:57.683983 1073226 main.go:141] libmachine: (ha-344156-m02) Calling .GetSSHKeyPath
	I0729 18:34:57.684140 1073226 main.go:141] libmachine: (ha-344156-m02) Calling .GetSSHUsername
	I0729 18:34:57.684304 1073226 main.go:141] libmachine: Using SSH client type: native
	I0729 18:34:57.684506 1073226 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.249 22 <nil> <nil>}
	I0729 18:34:57.684522 1073226 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-344156-m02 && echo "ha-344156-m02" | sudo tee /etc/hostname
	I0729 18:34:57.801847 1073226 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-344156-m02
	
	I0729 18:34:57.801875 1073226 main.go:141] libmachine: (ha-344156-m02) Calling .GetSSHHostname
	I0729 18:34:57.804836 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | domain ha-344156-m02 has defined MAC address 52:54:00:99:a3:97 in network mk-ha-344156
	I0729 18:34:57.805144 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:a3:97", ip: ""} in network mk-ha-344156: {Iface:virbr1 ExpiryTime:2024-07-29 19:34:51 +0000 UTC Type:0 Mac:52:54:00:99:a3:97 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-344156-m02 Clientid:01:52:54:00:99:a3:97}
	I0729 18:34:57.805174 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | domain ha-344156-m02 has defined IP address 192.168.39.249 and MAC address 52:54:00:99:a3:97 in network mk-ha-344156
	I0729 18:34:57.805372 1073226 main.go:141] libmachine: (ha-344156-m02) Calling .GetSSHPort
	I0729 18:34:57.805580 1073226 main.go:141] libmachine: (ha-344156-m02) Calling .GetSSHKeyPath
	I0729 18:34:57.805744 1073226 main.go:141] libmachine: (ha-344156-m02) Calling .GetSSHKeyPath
	I0729 18:34:57.805899 1073226 main.go:141] libmachine: (ha-344156-m02) Calling .GetSSHUsername
	I0729 18:34:57.806074 1073226 main.go:141] libmachine: Using SSH client type: native
	I0729 18:34:57.806247 1073226 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.249 22 <nil> <nil>}
	I0729 18:34:57.806263 1073226 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-344156-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-344156-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-344156-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 18:34:57.916595 1073226 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 18:34:57.916639 1073226 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19312-1055011/.minikube CaCertPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19312-1055011/.minikube}
	I0729 18:34:57.916666 1073226 buildroot.go:174] setting up certificates
	I0729 18:34:57.916682 1073226 provision.go:84] configureAuth start
	I0729 18:34:57.916700 1073226 main.go:141] libmachine: (ha-344156-m02) Calling .GetMachineName
	I0729 18:34:57.916987 1073226 main.go:141] libmachine: (ha-344156-m02) Calling .GetIP
	I0729 18:34:57.919519 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | domain ha-344156-m02 has defined MAC address 52:54:00:99:a3:97 in network mk-ha-344156
	I0729 18:34:57.919905 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:a3:97", ip: ""} in network mk-ha-344156: {Iface:virbr1 ExpiryTime:2024-07-29 19:34:51 +0000 UTC Type:0 Mac:52:54:00:99:a3:97 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-344156-m02 Clientid:01:52:54:00:99:a3:97}
	I0729 18:34:57.919934 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | domain ha-344156-m02 has defined IP address 192.168.39.249 and MAC address 52:54:00:99:a3:97 in network mk-ha-344156
	I0729 18:34:57.920094 1073226 main.go:141] libmachine: (ha-344156-m02) Calling .GetSSHHostname
	I0729 18:34:57.923248 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | domain ha-344156-m02 has defined MAC address 52:54:00:99:a3:97 in network mk-ha-344156
	I0729 18:34:57.923583 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:a3:97", ip: ""} in network mk-ha-344156: {Iface:virbr1 ExpiryTime:2024-07-29 19:34:51 +0000 UTC Type:0 Mac:52:54:00:99:a3:97 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-344156-m02 Clientid:01:52:54:00:99:a3:97}
	I0729 18:34:57.923611 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | domain ha-344156-m02 has defined IP address 192.168.39.249 and MAC address 52:54:00:99:a3:97 in network mk-ha-344156
	I0729 18:34:57.923753 1073226 provision.go:143] copyHostCerts
	I0729 18:34:57.923793 1073226 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.pem
	I0729 18:34:57.923826 1073226 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.pem, removing ...
	I0729 18:34:57.923835 1073226 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.pem
	I0729 18:34:57.923893 1073226 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.pem (1082 bytes)
	I0729 18:34:57.923963 1073226 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19312-1055011/.minikube/cert.pem
	I0729 18:34:57.923981 1073226 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-1055011/.minikube/cert.pem, removing ...
	I0729 18:34:57.923987 1073226 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-1055011/.minikube/cert.pem
	I0729 18:34:57.924010 1073226 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19312-1055011/.minikube/cert.pem (1123 bytes)
	I0729 18:34:57.924061 1073226 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19312-1055011/.minikube/key.pem
	I0729 18:34:57.924078 1073226 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-1055011/.minikube/key.pem, removing ...
	I0729 18:34:57.924084 1073226 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-1055011/.minikube/key.pem
	I0729 18:34:57.924106 1073226 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19312-1055011/.minikube/key.pem (1679 bytes)
	I0729 18:34:57.924151 1073226 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca-key.pem org=jenkins.ha-344156-m02 san=[127.0.0.1 192.168.39.249 ha-344156-m02 localhost minikube]
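The provision.go line above generates a server certificate for ha-344156-m02, signed by the existing minikube CA, with the listed SANs (127.0.0.1, 192.168.39.249, ha-344156-m02, localhost, minikube). A standard-library sketch of producing such a certificate is shown below; it is not minikube's provisioning code, and the helper name newServerCert, the key size, and the reuse of CertExpiration for NotAfter are assumptions:

// Illustrative sketch: issue a server cert with DNS and IP SANs, signed by a CA.
package example

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"math/big"
	"net"
	"time"
)

func newServerCert(ca *x509.Certificate, caKey *rsa.PrivateKey, org string) ([]byte, *rsa.PrivateKey, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{org}}, // e.g. "jenkins.ha-344156-m02"
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config above
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"ha-344156-m02", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.249")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
	if err != nil {
		return nil, nil, err
	}
	return der, key, nil
}

The DER bytes and key would then be PEM-encoded and copied to /etc/docker/server.pem and server-key.pem, which is what the copyRemoteCerts step below does.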
	I0729 18:34:58.007732 1073226 provision.go:177] copyRemoteCerts
	I0729 18:34:58.007794 1073226 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 18:34:58.007818 1073226 main.go:141] libmachine: (ha-344156-m02) Calling .GetSSHHostname
	I0729 18:34:58.010265 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | domain ha-344156-m02 has defined MAC address 52:54:00:99:a3:97 in network mk-ha-344156
	I0729 18:34:58.010569 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:a3:97", ip: ""} in network mk-ha-344156: {Iface:virbr1 ExpiryTime:2024-07-29 19:34:51 +0000 UTC Type:0 Mac:52:54:00:99:a3:97 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-344156-m02 Clientid:01:52:54:00:99:a3:97}
	I0729 18:34:58.010600 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | domain ha-344156-m02 has defined IP address 192.168.39.249 and MAC address 52:54:00:99:a3:97 in network mk-ha-344156
	I0729 18:34:58.010743 1073226 main.go:141] libmachine: (ha-344156-m02) Calling .GetSSHPort
	I0729 18:34:58.010919 1073226 main.go:141] libmachine: (ha-344156-m02) Calling .GetSSHKeyPath
	I0729 18:34:58.011057 1073226 main.go:141] libmachine: (ha-344156-m02) Calling .GetSSHUsername
	I0729 18:34:58.011162 1073226 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/ha-344156-m02/id_rsa Username:docker}
	I0729 18:34:58.093105 1073226 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0729 18:34:58.093165 1073226 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0729 18:34:58.120080 1073226 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0729 18:34:58.120142 1073226 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0729 18:34:58.142767 1073226 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0729 18:34:58.142841 1073226 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0729 18:34:58.167082 1073226 provision.go:87] duration metric: took 250.381441ms to configureAuth
	I0729 18:34:58.167113 1073226 buildroot.go:189] setting minikube options for container-runtime
	I0729 18:34:58.167317 1073226 config.go:182] Loaded profile config "ha-344156": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 18:34:58.167404 1073226 main.go:141] libmachine: (ha-344156-m02) Calling .GetSSHHostname
	I0729 18:34:58.170147 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | domain ha-344156-m02 has defined MAC address 52:54:00:99:a3:97 in network mk-ha-344156
	I0729 18:34:58.170599 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:a3:97", ip: ""} in network mk-ha-344156: {Iface:virbr1 ExpiryTime:2024-07-29 19:34:51 +0000 UTC Type:0 Mac:52:54:00:99:a3:97 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-344156-m02 Clientid:01:52:54:00:99:a3:97}
	I0729 18:34:58.170629 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | domain ha-344156-m02 has defined IP address 192.168.39.249 and MAC address 52:54:00:99:a3:97 in network mk-ha-344156
	I0729 18:34:58.170790 1073226 main.go:141] libmachine: (ha-344156-m02) Calling .GetSSHPort
	I0729 18:34:58.170976 1073226 main.go:141] libmachine: (ha-344156-m02) Calling .GetSSHKeyPath
	I0729 18:34:58.171123 1073226 main.go:141] libmachine: (ha-344156-m02) Calling .GetSSHKeyPath
	I0729 18:34:58.171278 1073226 main.go:141] libmachine: (ha-344156-m02) Calling .GetSSHUsername
	I0729 18:34:58.171436 1073226 main.go:141] libmachine: Using SSH client type: native
	I0729 18:34:58.171657 1073226 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.249 22 <nil> <nil>}
	I0729 18:34:58.171677 1073226 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 18:34:58.450547 1073226 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 18:34:58.450578 1073226 main.go:141] libmachine: Checking connection to Docker...
	I0729 18:34:58.450594 1073226 main.go:141] libmachine: (ha-344156-m02) Calling .GetURL
	I0729 18:34:58.451880 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | Using libvirt version 6000000
	I0729 18:34:58.453891 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | domain ha-344156-m02 has defined MAC address 52:54:00:99:a3:97 in network mk-ha-344156
	I0729 18:34:58.454185 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:a3:97", ip: ""} in network mk-ha-344156: {Iface:virbr1 ExpiryTime:2024-07-29 19:34:51 +0000 UTC Type:0 Mac:52:54:00:99:a3:97 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-344156-m02 Clientid:01:52:54:00:99:a3:97}
	I0729 18:34:58.454209 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | domain ha-344156-m02 has defined IP address 192.168.39.249 and MAC address 52:54:00:99:a3:97 in network mk-ha-344156
	I0729 18:34:58.454431 1073226 main.go:141] libmachine: Docker is up and running!
	I0729 18:34:58.454446 1073226 main.go:141] libmachine: Reticulating splines...
	I0729 18:34:58.454455 1073226 client.go:171] duration metric: took 21.843478371s to LocalClient.Create
	I0729 18:34:58.454477 1073226 start.go:167] duration metric: took 21.843534449s to libmachine.API.Create "ha-344156"
	I0729 18:34:58.454487 1073226 start.go:293] postStartSetup for "ha-344156-m02" (driver="kvm2")
	I0729 18:34:58.454521 1073226 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 18:34:58.454545 1073226 main.go:141] libmachine: (ha-344156-m02) Calling .DriverName
	I0729 18:34:58.454878 1073226 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 18:34:58.454912 1073226 main.go:141] libmachine: (ha-344156-m02) Calling .GetSSHHostname
	I0729 18:34:58.457207 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | domain ha-344156-m02 has defined MAC address 52:54:00:99:a3:97 in network mk-ha-344156
	I0729 18:34:58.457533 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:a3:97", ip: ""} in network mk-ha-344156: {Iface:virbr1 ExpiryTime:2024-07-29 19:34:51 +0000 UTC Type:0 Mac:52:54:00:99:a3:97 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-344156-m02 Clientid:01:52:54:00:99:a3:97}
	I0729 18:34:58.457561 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | domain ha-344156-m02 has defined IP address 192.168.39.249 and MAC address 52:54:00:99:a3:97 in network mk-ha-344156
	I0729 18:34:58.457762 1073226 main.go:141] libmachine: (ha-344156-m02) Calling .GetSSHPort
	I0729 18:34:58.457941 1073226 main.go:141] libmachine: (ha-344156-m02) Calling .GetSSHKeyPath
	I0729 18:34:58.458086 1073226 main.go:141] libmachine: (ha-344156-m02) Calling .GetSSHUsername
	I0729 18:34:58.458217 1073226 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/ha-344156-m02/id_rsa Username:docker}
	I0729 18:34:58.536704 1073226 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 18:34:58.540832 1073226 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 18:34:58.540865 1073226 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-1055011/.minikube/addons for local assets ...
	I0729 18:34:58.540932 1073226 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-1055011/.minikube/files for local assets ...
	I0729 18:34:58.541027 1073226 filesync.go:149] local asset: /home/jenkins/minikube-integration/19312-1055011/.minikube/files/etc/ssl/certs/10622722.pem -> 10622722.pem in /etc/ssl/certs
	I0729 18:34:58.541042 1073226 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1055011/.minikube/files/etc/ssl/certs/10622722.pem -> /etc/ssl/certs/10622722.pem
	I0729 18:34:58.541164 1073226 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 18:34:58.550386 1073226 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/files/etc/ssl/certs/10622722.pem --> /etc/ssl/certs/10622722.pem (1708 bytes)
	I0729 18:34:58.576937 1073226 start.go:296] duration metric: took 122.422943ms for postStartSetup
	I0729 18:34:58.576983 1073226 main.go:141] libmachine: (ha-344156-m02) Calling .GetConfigRaw
	I0729 18:34:58.577572 1073226 main.go:141] libmachine: (ha-344156-m02) Calling .GetIP
	I0729 18:34:58.580120 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | domain ha-344156-m02 has defined MAC address 52:54:00:99:a3:97 in network mk-ha-344156
	I0729 18:34:58.580438 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:a3:97", ip: ""} in network mk-ha-344156: {Iface:virbr1 ExpiryTime:2024-07-29 19:34:51 +0000 UTC Type:0 Mac:52:54:00:99:a3:97 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-344156-m02 Clientid:01:52:54:00:99:a3:97}
	I0729 18:34:58.580458 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | domain ha-344156-m02 has defined IP address 192.168.39.249 and MAC address 52:54:00:99:a3:97 in network mk-ha-344156
	I0729 18:34:58.580741 1073226 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156/config.json ...
	I0729 18:34:58.580948 1073226 start.go:128] duration metric: took 21.98978895s to createHost
	I0729 18:34:58.580973 1073226 main.go:141] libmachine: (ha-344156-m02) Calling .GetSSHHostname
	I0729 18:34:58.582978 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | domain ha-344156-m02 has defined MAC address 52:54:00:99:a3:97 in network mk-ha-344156
	I0729 18:34:58.583259 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:a3:97", ip: ""} in network mk-ha-344156: {Iface:virbr1 ExpiryTime:2024-07-29 19:34:51 +0000 UTC Type:0 Mac:52:54:00:99:a3:97 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-344156-m02 Clientid:01:52:54:00:99:a3:97}
	I0729 18:34:58.583289 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | domain ha-344156-m02 has defined IP address 192.168.39.249 and MAC address 52:54:00:99:a3:97 in network mk-ha-344156
	I0729 18:34:58.583392 1073226 main.go:141] libmachine: (ha-344156-m02) Calling .GetSSHPort
	I0729 18:34:58.583573 1073226 main.go:141] libmachine: (ha-344156-m02) Calling .GetSSHKeyPath
	I0729 18:34:58.583741 1073226 main.go:141] libmachine: (ha-344156-m02) Calling .GetSSHKeyPath
	I0729 18:34:58.583904 1073226 main.go:141] libmachine: (ha-344156-m02) Calling .GetSSHUsername
	I0729 18:34:58.584047 1073226 main.go:141] libmachine: Using SSH client type: native
	I0729 18:34:58.584220 1073226 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.249 22 <nil> <nil>}
	I0729 18:34:58.584231 1073226 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0729 18:34:58.683224 1073226 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722278098.640970006
	
	I0729 18:34:58.683248 1073226 fix.go:216] guest clock: 1722278098.640970006
	I0729 18:34:58.683257 1073226 fix.go:229] Guest: 2024-07-29 18:34:58.640970006 +0000 UTC Remote: 2024-07-29 18:34:58.580960916 +0000 UTC m=+73.658710151 (delta=60.00909ms)
	I0729 18:34:58.683277 1073226 fix.go:200] guest clock delta is within tolerance: 60.00909ms
	I0729 18:34:58.683284 1073226 start.go:83] releasing machines lock for "ha-344156-m02", held for 22.092255822s
	I0729 18:34:58.683307 1073226 main.go:141] libmachine: (ha-344156-m02) Calling .DriverName
	I0729 18:34:58.683587 1073226 main.go:141] libmachine: (ha-344156-m02) Calling .GetIP
	I0729 18:34:58.685992 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | domain ha-344156-m02 has defined MAC address 52:54:00:99:a3:97 in network mk-ha-344156
	I0729 18:34:58.686308 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:a3:97", ip: ""} in network mk-ha-344156: {Iface:virbr1 ExpiryTime:2024-07-29 19:34:51 +0000 UTC Type:0 Mac:52:54:00:99:a3:97 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-344156-m02 Clientid:01:52:54:00:99:a3:97}
	I0729 18:34:58.686328 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | domain ha-344156-m02 has defined IP address 192.168.39.249 and MAC address 52:54:00:99:a3:97 in network mk-ha-344156
	I0729 18:34:58.688553 1073226 out.go:177] * Found network options:
	I0729 18:34:58.689750 1073226 out.go:177]   - NO_PROXY=192.168.39.225
	W0729 18:34:58.690882 1073226 proxy.go:119] fail to check proxy env: Error ip not in block
	I0729 18:34:58.690931 1073226 main.go:141] libmachine: (ha-344156-m02) Calling .DriverName
	I0729 18:34:58.691396 1073226 main.go:141] libmachine: (ha-344156-m02) Calling .DriverName
	I0729 18:34:58.691579 1073226 main.go:141] libmachine: (ha-344156-m02) Calling .DriverName
	I0729 18:34:58.691685 1073226 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 18:34:58.691733 1073226 main.go:141] libmachine: (ha-344156-m02) Calling .GetSSHHostname
	W0729 18:34:58.691808 1073226 proxy.go:119] fail to check proxy env: Error ip not in block
	I0729 18:34:58.691871 1073226 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 18:34:58.691888 1073226 main.go:141] libmachine: (ha-344156-m02) Calling .GetSSHHostname
	I0729 18:34:58.694434 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | domain ha-344156-m02 has defined MAC address 52:54:00:99:a3:97 in network mk-ha-344156
	I0729 18:34:58.694727 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | domain ha-344156-m02 has defined MAC address 52:54:00:99:a3:97 in network mk-ha-344156
	I0729 18:34:58.694770 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:a3:97", ip: ""} in network mk-ha-344156: {Iface:virbr1 ExpiryTime:2024-07-29 19:34:51 +0000 UTC Type:0 Mac:52:54:00:99:a3:97 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-344156-m02 Clientid:01:52:54:00:99:a3:97}
	I0729 18:34:58.694795 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | domain ha-344156-m02 has defined IP address 192.168.39.249 and MAC address 52:54:00:99:a3:97 in network mk-ha-344156
	I0729 18:34:58.694952 1073226 main.go:141] libmachine: (ha-344156-m02) Calling .GetSSHPort
	I0729 18:34:58.695143 1073226 main.go:141] libmachine: (ha-344156-m02) Calling .GetSSHKeyPath
	I0729 18:34:58.695171 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:a3:97", ip: ""} in network mk-ha-344156: {Iface:virbr1 ExpiryTime:2024-07-29 19:34:51 +0000 UTC Type:0 Mac:52:54:00:99:a3:97 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-344156-m02 Clientid:01:52:54:00:99:a3:97}
	I0729 18:34:58.695191 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | domain ha-344156-m02 has defined IP address 192.168.39.249 and MAC address 52:54:00:99:a3:97 in network mk-ha-344156
	I0729 18:34:58.695329 1073226 main.go:141] libmachine: (ha-344156-m02) Calling .GetSSHUsername
	I0729 18:34:58.695337 1073226 main.go:141] libmachine: (ha-344156-m02) Calling .GetSSHPort
	I0729 18:34:58.695502 1073226 main.go:141] libmachine: (ha-344156-m02) Calling .GetSSHKeyPath
	I0729 18:34:58.695521 1073226 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/ha-344156-m02/id_rsa Username:docker}
	I0729 18:34:58.695656 1073226 main.go:141] libmachine: (ha-344156-m02) Calling .GetSSHUsername
	I0729 18:34:58.695807 1073226 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/ha-344156-m02/id_rsa Username:docker}
	I0729 18:34:59.219465 1073226 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 18:34:59.225441 1073226 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 18:34:59.225515 1073226 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 18:34:59.241148 1073226 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 18:34:59.241169 1073226 start.go:495] detecting cgroup driver to use...
	I0729 18:34:59.241232 1073226 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 18:34:59.256557 1073226 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 18:34:59.269484 1073226 docker.go:217] disabling cri-docker service (if available) ...
	I0729 18:34:59.269540 1073226 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 18:34:59.282006 1073226 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 18:34:59.294554 1073226 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 18:34:59.400871 1073226 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 18:34:59.564683 1073226 docker.go:233] disabling docker service ...
	I0729 18:34:59.564767 1073226 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 18:34:59.579222 1073226 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 18:34:59.591663 1073226 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 18:34:59.704596 1073226 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 18:34:59.821936 1073226 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 18:34:59.835475 1073226 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 18:34:59.853364 1073226 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0729 18:34:59.853431 1073226 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:34:59.863444 1073226 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 18:34:59.863517 1073226 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:34:59.873630 1073226 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:34:59.883352 1073226 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:34:59.893186 1073226 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 18:34:59.903308 1073226 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:34:59.913184 1073226 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:34:59.929630 1073226 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:34:59.939504 1073226 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 18:34:59.948482 1073226 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 18:34:59.948533 1073226 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 18:34:59.961412 1073226 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 18:34:59.970429 1073226 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 18:35:00.077766 1073226 ssh_runner.go:195] Run: sudo systemctl restart crio
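For reference, the sed edits and sysctl/module steps above should leave the guest's container runtime configured roughly as sketched below; the commands are an illustrative way to confirm it on the node, and the commented values are inferred from the commands in the log, not captured output.

    # Illustrative check of what the configuration steps above produce (assumed, not logged):
    cat /etc/crictl.yaml
    #   runtime-endpoint: unix:///var/run/crio/crio.sock
    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
        /etc/crio/crio.conf.d/02-crio.conf
    #   pause_image = "registry.k8s.io/pause:3.9"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",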
	I0729 18:35:00.211541 1073226 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 18:35:00.211641 1073226 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 18:35:00.216910 1073226 start.go:563] Will wait 60s for crictl version
	I0729 18:35:00.216973 1073226 ssh_runner.go:195] Run: which crictl
	I0729 18:35:00.221022 1073226 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 18:35:00.261287 1073226 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 18:35:00.261381 1073226 ssh_runner.go:195] Run: crio --version
	I0729 18:35:00.289328 1073226 ssh_runner.go:195] Run: crio --version
	I0729 18:35:00.319680 1073226 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0729 18:35:00.321020 1073226 out.go:177]   - env NO_PROXY=192.168.39.225
	I0729 18:35:00.322170 1073226 main.go:141] libmachine: (ha-344156-m02) Calling .GetIP
	I0729 18:35:00.324901 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | domain ha-344156-m02 has defined MAC address 52:54:00:99:a3:97 in network mk-ha-344156
	I0729 18:35:00.325237 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:a3:97", ip: ""} in network mk-ha-344156: {Iface:virbr1 ExpiryTime:2024-07-29 19:34:51 +0000 UTC Type:0 Mac:52:54:00:99:a3:97 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-344156-m02 Clientid:01:52:54:00:99:a3:97}
	I0729 18:35:00.325266 1073226 main.go:141] libmachine: (ha-344156-m02) DBG | domain ha-344156-m02 has defined IP address 192.168.39.249 and MAC address 52:54:00:99:a3:97 in network mk-ha-344156
	I0729 18:35:00.325473 1073226 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0729 18:35:00.329978 1073226 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 18:35:00.342977 1073226 mustload.go:65] Loading cluster: ha-344156
	I0729 18:35:00.343196 1073226 config.go:182] Loaded profile config "ha-344156": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 18:35:00.343471 1073226 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:35:00.343503 1073226 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:35:00.358513 1073226 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36029
	I0729 18:35:00.359020 1073226 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:35:00.359515 1073226 main.go:141] libmachine: Using API Version  1
	I0729 18:35:00.359539 1073226 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:35:00.359846 1073226 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:35:00.360066 1073226 main.go:141] libmachine: (ha-344156) Calling .GetState
	I0729 18:35:00.361930 1073226 host.go:66] Checking if "ha-344156" exists ...
	I0729 18:35:00.362253 1073226 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:35:00.362280 1073226 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:35:00.377532 1073226 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32895
	I0729 18:35:00.377996 1073226 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:35:00.378492 1073226 main.go:141] libmachine: Using API Version  1
	I0729 18:35:00.378515 1073226 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:35:00.378843 1073226 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:35:00.379084 1073226 main.go:141] libmachine: (ha-344156) Calling .DriverName
	I0729 18:35:00.379273 1073226 certs.go:68] Setting up /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156 for IP: 192.168.39.249
	I0729 18:35:00.379286 1073226 certs.go:194] generating shared ca certs ...
	I0729 18:35:00.379301 1073226 certs.go:226] acquiring lock for ca certs: {Name:mkd1f0b3d7e82ac23e713dd6b75409e103935b02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 18:35:00.379451 1073226 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.key
	I0729 18:35:00.379491 1073226 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/proxy-client-ca.key
	I0729 18:35:00.379500 1073226 certs.go:256] generating profile certs ...
	I0729 18:35:00.379570 1073226 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156/client.key
	I0729 18:35:00.379593 1073226 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156/apiserver.key.192ac660
	I0729 18:35:00.379610 1073226 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156/apiserver.crt.192ac660 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.225 192.168.39.249 192.168.39.254]
	I0729 18:35:00.774632 1073226 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156/apiserver.crt.192ac660 ...
	I0729 18:35:00.774668 1073226 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156/apiserver.crt.192ac660: {Name:mka4379faa9808b62524de326fea26654f0e9584 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 18:35:00.774866 1073226 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156/apiserver.key.192ac660 ...
	I0729 18:35:00.774890 1073226 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156/apiserver.key.192ac660: {Name:mk873a2dbb09106f128745397e9a40b735c7faaa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 18:35:00.774974 1073226 certs.go:381] copying /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156/apiserver.crt.192ac660 -> /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156/apiserver.crt
	I0729 18:35:00.775111 1073226 certs.go:385] copying /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156/apiserver.key.192ac660 -> /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156/apiserver.key
	I0729 18:35:00.775243 1073226 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156/proxy-client.key
	I0729 18:35:00.775260 1073226 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0729 18:35:00.775274 1073226 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0729 18:35:00.775287 1073226 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1055011/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0729 18:35:00.775299 1073226 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1055011/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0729 18:35:00.775312 1073226 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0729 18:35:00.775324 1073226 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0729 18:35:00.775336 1073226 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0729 18:35:00.775347 1073226 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0729 18:35:00.775395 1073226 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/1062272.pem (1338 bytes)
	W0729 18:35:00.775431 1073226 certs.go:480] ignoring /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/1062272_empty.pem, impossibly tiny 0 bytes
	I0729 18:35:00.775440 1073226 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 18:35:00.775460 1073226 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem (1082 bytes)
	I0729 18:35:00.775486 1073226 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/cert.pem (1123 bytes)
	I0729 18:35:00.775509 1073226 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/key.pem (1679 bytes)
	I0729 18:35:00.775546 1073226 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/files/etc/ssl/certs/10622722.pem (1708 bytes)
	I0729 18:35:00.775570 1073226 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0729 18:35:00.775584 1073226 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/1062272.pem -> /usr/share/ca-certificates/1062272.pem
	I0729 18:35:00.775596 1073226 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1055011/.minikube/files/etc/ssl/certs/10622722.pem -> /usr/share/ca-certificates/10622722.pem
	I0729 18:35:00.775631 1073226 main.go:141] libmachine: (ha-344156) Calling .GetSSHHostname
	I0729 18:35:00.778502 1073226 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
	I0729 18:35:00.778934 1073226 main.go:141] libmachine: (ha-344156) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:fc:98", ip: ""} in network mk-ha-344156: {Iface:virbr1 ExpiryTime:2024-07-29 19:33:59 +0000 UTC Type:0 Mac:52:54:00:a1:fc:98 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-344156 Clientid:01:52:54:00:a1:fc:98}
	I0729 18:35:00.778966 1073226 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined IP address 192.168.39.225 and MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
	I0729 18:35:00.779160 1073226 main.go:141] libmachine: (ha-344156) Calling .GetSSHPort
	I0729 18:35:00.779424 1073226 main.go:141] libmachine: (ha-344156) Calling .GetSSHKeyPath
	I0729 18:35:00.779604 1073226 main.go:141] libmachine: (ha-344156) Calling .GetSSHUsername
	I0729 18:35:00.779753 1073226 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/ha-344156/id_rsa Username:docker}
	I0729 18:35:00.859326 1073226 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0729 18:35:00.865414 1073226 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0729 18:35:00.880001 1073226 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0729 18:35:00.885188 1073226 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0729 18:35:00.899058 1073226 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0729 18:35:00.904149 1073226 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0729 18:35:00.916897 1073226 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0729 18:35:00.921712 1073226 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0729 18:35:00.933138 1073226 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0729 18:35:00.937472 1073226 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0729 18:35:00.952585 1073226 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0729 18:35:00.960661 1073226 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0729 18:35:00.972033 1073226 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 18:35:00.998207 1073226 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0729 18:35:01.023652 1073226 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 18:35:01.048448 1073226 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0729 18:35:01.072810 1073226 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0729 18:35:01.097863 1073226 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0729 18:35:01.122937 1073226 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 18:35:01.148356 1073226 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0729 18:35:01.173581 1073226 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 18:35:01.198244 1073226 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/1062272.pem --> /usr/share/ca-certificates/1062272.pem (1338 bytes)
	I0729 18:35:01.222479 1073226 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/files/etc/ssl/certs/10622722.pem --> /usr/share/ca-certificates/10622722.pem (1708 bytes)
	I0729 18:35:01.247656 1073226 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0729 18:35:01.264416 1073226 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0729 18:35:01.280733 1073226 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0729 18:35:01.297192 1073226 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0729 18:35:01.314487 1073226 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0729 18:35:01.331413 1073226 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0729 18:35:01.348616 1073226 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
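With the certificates copied under /var/lib/minikube/certs, one way to sanity-check the result (purely illustrative, not something the test runs) is to confirm that the distributed apiserver certificate carries the SANs generated a few lines earlier, including the HA VIP 192.168.39.254:

    # Illustrative: list the SANs baked into the distributed apiserver certificate
    sudo openssl x509 -in /var/lib/minikube/certs/apiserver.crt -noout -text \
        | grep -A1 'Subject Alternative Name'
    # expected IPs per the generation step above: 10.96.0.1 127.0.0.1 10.0.0.1
    # 192.168.39.225 192.168.39.249 192.168.39.254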
	I0729 18:35:01.365867 1073226 ssh_runner.go:195] Run: openssl version
	I0729 18:35:01.372062 1073226 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 18:35:01.383291 1073226 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 18:35:01.388060 1073226 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 18:17 /usr/share/ca-certificates/minikubeCA.pem
	I0729 18:35:01.388146 1073226 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 18:35:01.394248 1073226 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 18:35:01.406105 1073226 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1062272.pem && ln -fs /usr/share/ca-certificates/1062272.pem /etc/ssl/certs/1062272.pem"
	I0729 18:35:01.417866 1073226 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1062272.pem
	I0729 18:35:01.422767 1073226 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 18:30 /usr/share/ca-certificates/1062272.pem
	I0729 18:35:01.422840 1073226 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1062272.pem
	I0729 18:35:01.428728 1073226 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1062272.pem /etc/ssl/certs/51391683.0"
	I0729 18:35:01.439670 1073226 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10622722.pem && ln -fs /usr/share/ca-certificates/10622722.pem /etc/ssl/certs/10622722.pem"
	I0729 18:35:01.450764 1073226 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10622722.pem
	I0729 18:35:01.455529 1073226 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 18:30 /usr/share/ca-certificates/10622722.pem
	I0729 18:35:01.455604 1073226 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10622722.pem
	I0729 18:35:01.461465 1073226 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10622722.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 18:35:01.472517 1073226 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 18:35:01.476780 1073226 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0729 18:35:01.476847 1073226 kubeadm.go:934] updating node {m02 192.168.39.249 8443 v1.30.3 crio true true} ...
	I0729 18:35:01.476979 1073226 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-344156-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.249
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-344156 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 18:35:01.477014 1073226 kube-vip.go:115] generating kube-vip config ...
	I0729 18:35:01.477057 1073226 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0729 18:35:01.495211 1073226 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0729 18:35:01.495314 1073226 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
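The manifest above is written to /etc/kubernetes/manifests/kube-vip.yaml further down, and the modprobe step just before it loads the IPVS modules that kube-vip's control-plane load-balancing relies on. A hedged set of checks one could run on the guest afterwards (illustrative only, not part of the captured run):

    # Illustrative checks, not part of the test log:
    lsmod | grep -E '^ip_vs|^nf_conntrack'       # IPVS/conntrack modules from the modprobe step
    ls /etc/kubernetes/manifests/kube-vip.yaml   # static-pod manifest for kubelet to pick up
    ip addr show dev eth0 | grep 192.168.39.254  # VIP appears on eth0 once kube-vip holds the lease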
	I0729 18:35:01.495379 1073226 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0729 18:35:01.505858 1073226 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.3': No such file or directory
	
	Initiating transfer...
	I0729 18:35:01.505928 1073226 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.3
	I0729 18:35:01.515830 1073226 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl.sha256
	I0729 18:35:01.515865 1073226 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/linux/amd64/v1.30.3/kubectl -> /var/lib/minikube/binaries/v1.30.3/kubectl
	I0729 18:35:01.515931 1073226 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/linux/amd64/v1.30.3/kubelet
	I0729 18:35:01.515944 1073226 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/linux/amd64/v1.30.3/kubeadm
	I0729 18:35:01.515955 1073226 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubectl
	I0729 18:35:01.520622 1073226 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubectl': No such file or directory
	I0729 18:35:01.520652 1073226 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/linux/amd64/v1.30.3/kubectl --> /var/lib/minikube/binaries/v1.30.3/kubectl (51454104 bytes)
	I0729 18:35:02.120951 1073226 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/linux/amd64/v1.30.3/kubeadm -> /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0729 18:35:02.121045 1073226 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0729 18:35:02.126785 1073226 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubeadm': No such file or directory
	I0729 18:35:02.126826 1073226 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/linux/amd64/v1.30.3/kubeadm --> /var/lib/minikube/binaries/v1.30.3/kubeadm (50249880 bytes)
	I0729 18:35:02.540965 1073226 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 18:35:02.557247 1073226 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/linux/amd64/v1.30.3/kubelet -> /var/lib/minikube/binaries/v1.30.3/kubelet
	I0729 18:35:02.557381 1073226 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubelet
	I0729 18:35:02.561896 1073226 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubelet': No such file or directory
	I0729 18:35:02.561940 1073226 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/linux/amd64/v1.30.3/kubelet --> /var/lib/minikube/binaries/v1.30.3/kubelet (100125080 bytes)
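The kubectl/kubeadm/kubelet binaries are fetched with a checksum file alongside each URL; a minimal sketch of the same download-and-verify step done by hand against the URLs shown above (local file names and the install path are assumptions for illustration):

    # Illustrative manual equivalent of the cached download above:
    curl -fsSLo kubelet "https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet"
    curl -fsSLo kubelet.sha256 "https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet.sha256"
    echo "$(cat kubelet.sha256)  kubelet" | sha256sum --check -
    sudo install -m 0755 kubelet /var/lib/minikube/binaries/v1.30.3/kubelet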
	I0729 18:35:02.986983 1073226 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0729 18:35:02.997179 1073226 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0729 18:35:03.016053 1073226 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 18:35:03.033710 1073226 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0729 18:35:03.050569 1073226 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0729 18:35:03.054444 1073226 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 18:35:03.068192 1073226 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 18:35:03.189566 1073226 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 18:35:03.206700 1073226 host.go:66] Checking if "ha-344156" exists ...
	I0729 18:35:03.207246 1073226 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:35:03.207305 1073226 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:35:03.223238 1073226 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37783
	I0729 18:35:03.223774 1073226 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:35:03.224246 1073226 main.go:141] libmachine: Using API Version  1
	I0729 18:35:03.224272 1073226 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:35:03.224584 1073226 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:35:03.224749 1073226 main.go:141] libmachine: (ha-344156) Calling .DriverName
	I0729 18:35:03.224900 1073226 start.go:317] joinCluster: &{Name:ha-344156 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-344156 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.225 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.249 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 18:35:03.225007 1073226 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0729 18:35:03.225026 1073226 main.go:141] libmachine: (ha-344156) Calling .GetSSHHostname
	I0729 18:35:03.227742 1073226 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
	I0729 18:35:03.228194 1073226 main.go:141] libmachine: (ha-344156) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:fc:98", ip: ""} in network mk-ha-344156: {Iface:virbr1 ExpiryTime:2024-07-29 19:33:59 +0000 UTC Type:0 Mac:52:54:00:a1:fc:98 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-344156 Clientid:01:52:54:00:a1:fc:98}
	I0729 18:35:03.228222 1073226 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined IP address 192.168.39.225 and MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
	I0729 18:35:03.228394 1073226 main.go:141] libmachine: (ha-344156) Calling .GetSSHPort
	I0729 18:35:03.228577 1073226 main.go:141] libmachine: (ha-344156) Calling .GetSSHKeyPath
	I0729 18:35:03.228726 1073226 main.go:141] libmachine: (ha-344156) Calling .GetSSHUsername
	I0729 18:35:03.228880 1073226 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/ha-344156/id_rsa Username:docker}
	I0729 18:35:03.396212 1073226 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.249 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 18:35:03.396278 1073226 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 51hnye.n9le5n5q8s277ze6 --discovery-token-ca-cert-hash sha256:53e118302d1159bf0acd8cfe005edb508199276288ff17d6630ff7f7307f10b1 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-344156-m02 --control-plane --apiserver-advertise-address=192.168.39.249 --apiserver-bind-port=8443"
	I0729 18:35:26.187581 1073226 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 51hnye.n9le5n5q8s277ze6 --discovery-token-ca-cert-hash sha256:53e118302d1159bf0acd8cfe005edb508199276288ff17d6630ff7f7307f10b1 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-344156-m02 --control-plane --apiserver-advertise-address=192.168.39.249 --apiserver-bind-port=8443": (22.791267702s)
	I0729 18:35:26.187627 1073226 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0729 18:35:26.767659 1073226 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-344156-m02 minikube.k8s.io/updated_at=2024_07_29T18_35_26_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=ff26f50543a99741e8a988a34136ffea2b41dbf0 minikube.k8s.io/name=ha-344156 minikube.k8s.io/primary=false
	I0729 18:35:26.897710 1073226 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-344156-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0729 18:35:27.031283 1073226 start.go:319] duration metric: took 23.806377074s to joinCluster
	I0729 18:35:27.031379 1073226 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.249 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 18:35:27.031691 1073226 config.go:182] Loaded profile config "ha-344156": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 18:35:27.033027 1073226 out.go:177] * Verifying Kubernetes components...
	I0729 18:35:27.034317 1073226 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 18:35:27.279073 1073226 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 18:35:27.342407 1073226 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19312-1055011/kubeconfig
	I0729 18:35:27.342687 1073226 kapi.go:59] client config for ha-344156: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156/client.crt", KeyFile:"/home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156/client.key", CAFile:"/home/jenkins/minikube-integration/19312-1055011/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d03460), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0729 18:35:27.342759 1073226 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.225:8443
	I0729 18:35:27.343031 1073226 node_ready.go:35] waiting up to 6m0s for node "ha-344156-m02" to be "Ready" ...
	I0729 18:35:27.343138 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/nodes/ha-344156-m02
	I0729 18:35:27.343148 1073226 round_trippers.go:469] Request Headers:
	I0729 18:35:27.343158 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:35:27.343163 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:35:27.360994 1073226 round_trippers.go:574] Response Status: 200 OK in 17 milliseconds
	I0729 18:35:27.843923 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/nodes/ha-344156-m02
	I0729 18:35:27.843956 1073226 round_trippers.go:469] Request Headers:
	I0729 18:35:27.843969 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:35:27.843974 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:35:27.847623 1073226 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 18:35:28.343997 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/nodes/ha-344156-m02
	I0729 18:35:28.344026 1073226 round_trippers.go:469] Request Headers:
	I0729 18:35:28.344040 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:35:28.344045 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:35:28.352801 1073226 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0729 18:35:28.844215 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/nodes/ha-344156-m02
	I0729 18:35:28.844247 1073226 round_trippers.go:469] Request Headers:
	I0729 18:35:28.844259 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:35:28.844266 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:35:28.850339 1073226 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0729 18:35:29.343591 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/nodes/ha-344156-m02
	I0729 18:35:29.343618 1073226 round_trippers.go:469] Request Headers:
	I0729 18:35:29.343630 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:35:29.343637 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:35:29.357995 1073226 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0729 18:35:29.358690 1073226 node_ready.go:53] node "ha-344156-m02" has status "Ready":"False"
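The repeated GETs against /api/v1/nodes/ha-344156-m02 are minikube's own readiness poll for the newly joined control-plane node. An equivalent check from a workstation using the same kubeconfig might look like the following (a sketch, not what minikube executes):

    # Illustrative readiness poll equivalent to the requests above:
    until [ "$(kubectl get node ha-344156-m02 \
          -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}')" = "True" ]; do
      sleep 1
    done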
	I0729 18:35:29.843971 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/nodes/ha-344156-m02
	I0729 18:35:29.844002 1073226 round_trippers.go:469] Request Headers:
	I0729 18:35:29.844014 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:35:29.844022 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:35:29.847440 1073226 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 18:35:30.344219 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/nodes/ha-344156-m02
	I0729 18:35:30.344257 1073226 round_trippers.go:469] Request Headers:
	I0729 18:35:30.344278 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:35:30.344283 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:35:30.348051 1073226 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 18:35:30.844120 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/nodes/ha-344156-m02
	I0729 18:35:30.844148 1073226 round_trippers.go:469] Request Headers:
	I0729 18:35:30.844159 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:35:30.844165 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:35:30.847471 1073226 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 18:35:31.343614 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/nodes/ha-344156-m02
	I0729 18:35:31.343641 1073226 round_trippers.go:469] Request Headers:
	I0729 18:35:31.343653 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:35:31.343659 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:35:31.346467 1073226 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 18:35:31.844235 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/nodes/ha-344156-m02
	I0729 18:35:31.844265 1073226 round_trippers.go:469] Request Headers:
	I0729 18:35:31.844274 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:35:31.844277 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:35:31.848190 1073226 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 18:35:31.848997 1073226 node_ready.go:53] node "ha-344156-m02" has status "Ready":"False"
	I0729 18:35:32.343292 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/nodes/ha-344156-m02
	I0729 18:35:32.343316 1073226 round_trippers.go:469] Request Headers:
	I0729 18:35:32.343325 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:35:32.343328 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:35:32.346412 1073226 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 18:35:32.843249 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/nodes/ha-344156-m02
	I0729 18:35:32.843273 1073226 round_trippers.go:469] Request Headers:
	I0729 18:35:32.843281 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:35:32.843285 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:35:32.846391 1073226 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 18:35:33.344047 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/nodes/ha-344156-m02
	I0729 18:35:33.344071 1073226 round_trippers.go:469] Request Headers:
	I0729 18:35:33.344079 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:35:33.344083 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:35:33.347588 1073226 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 18:35:33.843957 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/nodes/ha-344156-m02
	I0729 18:35:33.843981 1073226 round_trippers.go:469] Request Headers:
	I0729 18:35:33.844021 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:35:33.844038 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:35:33.847199 1073226 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 18:35:34.344104 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/nodes/ha-344156-m02
	I0729 18:35:34.344129 1073226 round_trippers.go:469] Request Headers:
	I0729 18:35:34.344138 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:35:34.344141 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:35:34.347276 1073226 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 18:35:34.347888 1073226 node_ready.go:53] node "ha-344156-m02" has status "Ready":"False"
	I0729 18:35:34.844224 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/nodes/ha-344156-m02
	I0729 18:35:34.844251 1073226 round_trippers.go:469] Request Headers:
	I0729 18:35:34.844263 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:35:34.844268 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:35:34.849189 1073226 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 18:35:35.343947 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/nodes/ha-344156-m02
	I0729 18:35:35.343972 1073226 round_trippers.go:469] Request Headers:
	I0729 18:35:35.343981 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:35:35.343985 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:35:35.347216 1073226 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 18:35:35.844337 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/nodes/ha-344156-m02
	I0729 18:35:35.844367 1073226 round_trippers.go:469] Request Headers:
	I0729 18:35:35.844379 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:35:35.844385 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:35:35.847686 1073226 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 18:35:36.343620 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/nodes/ha-344156-m02
	I0729 18:35:36.343645 1073226 round_trippers.go:469] Request Headers:
	I0729 18:35:36.343653 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:35:36.343657 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:35:36.346934 1073226 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 18:35:36.844112 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/nodes/ha-344156-m02
	I0729 18:35:36.844135 1073226 round_trippers.go:469] Request Headers:
	I0729 18:35:36.844143 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:35:36.844147 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:35:36.847798 1073226 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 18:35:36.848524 1073226 node_ready.go:53] node "ha-344156-m02" has status "Ready":"False"
	I0729 18:35:37.343666 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/nodes/ha-344156-m02
	I0729 18:35:37.343690 1073226 round_trippers.go:469] Request Headers:
	I0729 18:35:37.343724 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:35:37.343731 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:35:37.346376 1073226 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 18:35:37.843326 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/nodes/ha-344156-m02
	I0729 18:35:37.843359 1073226 round_trippers.go:469] Request Headers:
	I0729 18:35:37.843368 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:35:37.843375 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:35:37.846754 1073226 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 18:35:38.343602 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/nodes/ha-344156-m02
	I0729 18:35:38.343628 1073226 round_trippers.go:469] Request Headers:
	I0729 18:35:38.343637 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:35:38.343641 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:35:38.347271 1073226 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 18:35:38.844070 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/nodes/ha-344156-m02
	I0729 18:35:38.844092 1073226 round_trippers.go:469] Request Headers:
	I0729 18:35:38.844100 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:35:38.844104 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:35:38.847449 1073226 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 18:35:39.343623 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/nodes/ha-344156-m02
	I0729 18:35:39.343650 1073226 round_trippers.go:469] Request Headers:
	I0729 18:35:39.343661 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:35:39.343665 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:35:39.347688 1073226 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 18:35:39.348544 1073226 node_ready.go:53] node "ha-344156-m02" has status "Ready":"False"
	I0729 18:35:39.844024 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/nodes/ha-344156-m02
	I0729 18:35:39.844051 1073226 round_trippers.go:469] Request Headers:
	I0729 18:35:39.844060 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:35:39.844064 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:35:39.850989 1073226 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0729 18:35:40.343820 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/nodes/ha-344156-m02
	I0729 18:35:40.343852 1073226 round_trippers.go:469] Request Headers:
	I0729 18:35:40.343860 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:35:40.343866 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:35:40.347050 1073226 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 18:35:40.844142 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/nodes/ha-344156-m02
	I0729 18:35:40.844162 1073226 round_trippers.go:469] Request Headers:
	I0729 18:35:40.844170 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:35:40.844176 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:35:40.846892 1073226 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 18:35:40.847741 1073226 node_ready.go:49] node "ha-344156-m02" has status "Ready":"True"
	I0729 18:35:40.847770 1073226 node_ready.go:38] duration metric: took 13.504712108s for node "ha-344156-m02" to be "Ready" ...
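The block above is minikube polling GET /api/v1/nodes/ha-344156-m02 roughly every 500ms until the node reports Ready. A minimal client-go sketch of the same loop (illustrative only, not minikube's implementation; the kubeconfig path and node name are placeholders):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Placeholder kubeconfig path; minikube builds its client from the profile config.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for {
		// Same request as the log: GET /api/v1/nodes/ha-344156-m02
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), "ha-344156-m02", metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					fmt.Println("node is Ready")
					return
				}
			}
		}
		time.Sleep(500 * time.Millisecond) // matches the ~500ms polling cadence above
	}
}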
	I0729 18:35:40.847783 1073226 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 18:35:40.847869 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/namespaces/kube-system/pods
	I0729 18:35:40.847881 1073226 round_trippers.go:469] Request Headers:
	I0729 18:35:40.847892 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:35:40.847903 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:35:40.852480 1073226 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 18:35:40.859042 1073226 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-5slmg" in "kube-system" namespace to be "Ready" ...
	I0729 18:35:40.859112 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-5slmg
	I0729 18:35:40.859120 1073226 round_trippers.go:469] Request Headers:
	I0729 18:35:40.859127 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:35:40.859133 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:35:40.861457 1073226 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 18:35:40.862134 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/nodes/ha-344156
	I0729 18:35:40.862149 1073226 round_trippers.go:469] Request Headers:
	I0729 18:35:40.862156 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:35:40.862160 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:35:40.864509 1073226 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 18:35:40.865097 1073226 pod_ready.go:92] pod "coredns-7db6d8ff4d-5slmg" in "kube-system" namespace has status "Ready":"True"
	I0729 18:35:40.865114 1073226 pod_ready.go:81] duration metric: took 6.050845ms for pod "coredns-7db6d8ff4d-5slmg" in "kube-system" namespace to be "Ready" ...
	I0729 18:35:40.865123 1073226 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-h5h7v" in "kube-system" namespace to be "Ready" ...
	I0729 18:35:40.865167 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-h5h7v
	I0729 18:35:40.865175 1073226 round_trippers.go:469] Request Headers:
	I0729 18:35:40.865182 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:35:40.865187 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:35:40.867315 1073226 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 18:35:40.868152 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/nodes/ha-344156
	I0729 18:35:40.868169 1073226 round_trippers.go:469] Request Headers:
	I0729 18:35:40.868178 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:35:40.868182 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:35:40.870428 1073226 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 18:35:40.870932 1073226 pod_ready.go:92] pod "coredns-7db6d8ff4d-h5h7v" in "kube-system" namespace has status "Ready":"True"
	I0729 18:35:40.870953 1073226 pod_ready.go:81] duration metric: took 5.82246ms for pod "coredns-7db6d8ff4d-h5h7v" in "kube-system" namespace to be "Ready" ...
	I0729 18:35:40.870963 1073226 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-344156" in "kube-system" namespace to be "Ready" ...
	I0729 18:35:40.871021 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/namespaces/kube-system/pods/etcd-ha-344156
	I0729 18:35:40.871029 1073226 round_trippers.go:469] Request Headers:
	I0729 18:35:40.871035 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:35:40.871039 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:35:40.872985 1073226 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0729 18:35:40.873632 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/nodes/ha-344156
	I0729 18:35:40.873649 1073226 round_trippers.go:469] Request Headers:
	I0729 18:35:40.873659 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:35:40.873664 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:35:40.875725 1073226 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 18:35:40.876220 1073226 pod_ready.go:92] pod "etcd-ha-344156" in "kube-system" namespace has status "Ready":"True"
	I0729 18:35:40.876240 1073226 pod_ready.go:81] duration metric: took 5.266086ms for pod "etcd-ha-344156" in "kube-system" namespace to be "Ready" ...
	I0729 18:35:40.876250 1073226 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-344156-m02" in "kube-system" namespace to be "Ready" ...
	I0729 18:35:40.876312 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/namespaces/kube-system/pods/etcd-ha-344156-m02
	I0729 18:35:40.876322 1073226 round_trippers.go:469] Request Headers:
	I0729 18:35:40.876340 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:35:40.876350 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:35:40.878425 1073226 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 18:35:40.878911 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/nodes/ha-344156-m02
	I0729 18:35:40.878925 1073226 round_trippers.go:469] Request Headers:
	I0729 18:35:40.878932 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:35:40.878936 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:35:40.880783 1073226 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0729 18:35:40.881347 1073226 pod_ready.go:92] pod "etcd-ha-344156-m02" in "kube-system" namespace has status "Ready":"True"
	I0729 18:35:40.881367 1073226 pod_ready.go:81] duration metric: took 5.106573ms for pod "etcd-ha-344156-m02" in "kube-system" namespace to be "Ready" ...
	I0729 18:35:40.881384 1073226 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-344156" in "kube-system" namespace to be "Ready" ...
	I0729 18:35:41.044787 1073226 request.go:629] Waited for 163.326535ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.225:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-344156
	I0729 18:35:41.044902 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-344156
	I0729 18:35:41.044914 1073226 round_trippers.go:469] Request Headers:
	I0729 18:35:41.044925 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:35:41.044936 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:35:41.048287 1073226 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 18:35:41.244564 1073226 request.go:629] Waited for 195.455065ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.225:8443/api/v1/nodes/ha-344156
	I0729 18:35:41.244639 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/nodes/ha-344156
	I0729 18:35:41.244645 1073226 round_trippers.go:469] Request Headers:
	I0729 18:35:41.244654 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:35:41.244663 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:35:41.247467 1073226 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 18:35:41.248033 1073226 pod_ready.go:92] pod "kube-apiserver-ha-344156" in "kube-system" namespace has status "Ready":"True"
	I0729 18:35:41.248054 1073226 pod_ready.go:81] duration metric: took 366.658924ms for pod "kube-apiserver-ha-344156" in "kube-system" namespace to be "Ready" ...
	I0729 18:35:41.248063 1073226 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-344156-m02" in "kube-system" namespace to be "Ready" ...
	I0729 18:35:41.444227 1073226 request.go:629] Waited for 196.048674ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.225:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-344156-m02
	I0729 18:35:41.444340 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-344156-m02
	I0729 18:35:41.444355 1073226 round_trippers.go:469] Request Headers:
	I0729 18:35:41.444366 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:35:41.444373 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:35:41.448042 1073226 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 18:35:41.645125 1073226 request.go:629] Waited for 196.090606ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.225:8443/api/v1/nodes/ha-344156-m02
	I0729 18:35:41.645227 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/nodes/ha-344156-m02
	I0729 18:35:41.645244 1073226 round_trippers.go:469] Request Headers:
	I0729 18:35:41.645252 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:35:41.645257 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:35:41.648585 1073226 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 18:35:41.649210 1073226 pod_ready.go:92] pod "kube-apiserver-ha-344156-m02" in "kube-system" namespace has status "Ready":"True"
	I0729 18:35:41.649232 1073226 pod_ready.go:81] duration metric: took 401.16141ms for pod "kube-apiserver-ha-344156-m02" in "kube-system" namespace to be "Ready" ...
	I0729 18:35:41.649244 1073226 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-344156" in "kube-system" namespace to be "Ready" ...
	I0729 18:35:41.845236 1073226 request.go:629] Waited for 195.912886ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.225:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-344156
	I0729 18:35:41.845322 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-344156
	I0729 18:35:41.845329 1073226 round_trippers.go:469] Request Headers:
	I0729 18:35:41.845340 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:35:41.845352 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:35:41.848685 1073226 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 18:35:42.044838 1073226 request.go:629] Waited for 195.409222ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.225:8443/api/v1/nodes/ha-344156
	I0729 18:35:42.044932 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/nodes/ha-344156
	I0729 18:35:42.044941 1073226 round_trippers.go:469] Request Headers:
	I0729 18:35:42.044953 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:35:42.044961 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:35:42.048095 1073226 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 18:35:42.048836 1073226 pod_ready.go:92] pod "kube-controller-manager-ha-344156" in "kube-system" namespace has status "Ready":"True"
	I0729 18:35:42.048854 1073226 pod_ready.go:81] duration metric: took 399.601811ms for pod "kube-controller-manager-ha-344156" in "kube-system" namespace to be "Ready" ...
	I0729 18:35:42.048864 1073226 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-344156-m02" in "kube-system" namespace to be "Ready" ...
	I0729 18:35:42.244952 1073226 request.go:629] Waited for 196.01651ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.225:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-344156-m02
	I0729 18:35:42.245027 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-344156-m02
	I0729 18:35:42.245035 1073226 round_trippers.go:469] Request Headers:
	I0729 18:35:42.245045 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:35:42.245077 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:35:42.247990 1073226 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 18:35:42.445074 1073226 request.go:629] Waited for 196.360333ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.225:8443/api/v1/nodes/ha-344156-m02
	I0729 18:35:42.445158 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/nodes/ha-344156-m02
	I0729 18:35:42.445171 1073226 round_trippers.go:469] Request Headers:
	I0729 18:35:42.445181 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:35:42.445187 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:35:42.448481 1073226 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 18:35:42.644285 1073226 request.go:629] Waited for 95.207061ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.225:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-344156-m02
	I0729 18:35:42.644352 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-344156-m02
	I0729 18:35:42.644358 1073226 round_trippers.go:469] Request Headers:
	I0729 18:35:42.644375 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:35:42.644381 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:35:42.647859 1073226 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 18:35:42.844975 1073226 request.go:629] Waited for 196.404374ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.225:8443/api/v1/nodes/ha-344156-m02
	I0729 18:35:42.845055 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/nodes/ha-344156-m02
	I0729 18:35:42.845062 1073226 round_trippers.go:469] Request Headers:
	I0729 18:35:42.845072 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:35:42.845081 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:35:42.848369 1073226 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 18:35:43.049049 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-344156-m02
	I0729 18:35:43.049072 1073226 round_trippers.go:469] Request Headers:
	I0729 18:35:43.049080 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:35:43.049085 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:35:43.052032 1073226 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 18:35:43.245238 1073226 request.go:629] Waited for 192.410971ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.225:8443/api/v1/nodes/ha-344156-m02
	I0729 18:35:43.245341 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/nodes/ha-344156-m02
	I0729 18:35:43.245350 1073226 round_trippers.go:469] Request Headers:
	I0729 18:35:43.245357 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:35:43.245365 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:35:43.248043 1073226 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 18:35:43.248780 1073226 pod_ready.go:92] pod "kube-controller-manager-ha-344156-m02" in "kube-system" namespace has status "Ready":"True"
	I0729 18:35:43.248800 1073226 pod_ready.go:81] duration metric: took 1.19992974s for pod "kube-controller-manager-ha-344156-m02" in "kube-system" namespace to be "Ready" ...
	I0729 18:35:43.248813 1073226 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-4p5r9" in "kube-system" namespace to be "Ready" ...
	I0729 18:35:43.445071 1073226 request.go:629] Waited for 196.164201ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.225:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4p5r9
	I0729 18:35:43.445130 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4p5r9
	I0729 18:35:43.445136 1073226 round_trippers.go:469] Request Headers:
	I0729 18:35:43.445143 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:35:43.445149 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:35:43.448125 1073226 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 18:35:43.644916 1073226 request.go:629] Waited for 196.090624ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.225:8443/api/v1/nodes/ha-344156-m02
	I0729 18:35:43.644977 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/nodes/ha-344156-m02
	I0729 18:35:43.644984 1073226 round_trippers.go:469] Request Headers:
	I0729 18:35:43.644995 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:35:43.645005 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:35:43.648537 1073226 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 18:35:43.648973 1073226 pod_ready.go:92] pod "kube-proxy-4p5r9" in "kube-system" namespace has status "Ready":"True"
	I0729 18:35:43.648990 1073226 pod_ready.go:81] duration metric: took 400.168446ms for pod "kube-proxy-4p5r9" in "kube-system" namespace to be "Ready" ...
	I0729 18:35:43.648999 1073226 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-gp282" in "kube-system" namespace to be "Ready" ...
	I0729 18:35:43.845152 1073226 request.go:629] Waited for 196.062448ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.225:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gp282
	I0729 18:35:43.845216 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gp282
	I0729 18:35:43.845223 1073226 round_trippers.go:469] Request Headers:
	I0729 18:35:43.845233 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:35:43.845238 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:35:43.848381 1073226 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 18:35:44.044571 1073226 request.go:629] Waited for 195.363564ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.225:8443/api/v1/nodes/ha-344156
	I0729 18:35:44.044665 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/nodes/ha-344156
	I0729 18:35:44.044670 1073226 round_trippers.go:469] Request Headers:
	I0729 18:35:44.044678 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:35:44.044683 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:35:44.048099 1073226 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 18:35:44.048940 1073226 pod_ready.go:92] pod "kube-proxy-gp282" in "kube-system" namespace has status "Ready":"True"
	I0729 18:35:44.048959 1073226 pod_ready.go:81] duration metric: took 399.953692ms for pod "kube-proxy-gp282" in "kube-system" namespace to be "Ready" ...
	I0729 18:35:44.048969 1073226 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-344156" in "kube-system" namespace to be "Ready" ...
	I0729 18:35:44.245199 1073226 request.go:629] Waited for 196.135922ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.225:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-344156
	I0729 18:35:44.245280 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-344156
	I0729 18:35:44.245289 1073226 round_trippers.go:469] Request Headers:
	I0729 18:35:44.245298 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:35:44.245303 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:35:44.248683 1073226 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 18:35:44.444676 1073226 request.go:629] Waited for 195.372268ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.225:8443/api/v1/nodes/ha-344156
	I0729 18:35:44.444739 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/nodes/ha-344156
	I0729 18:35:44.444744 1073226 round_trippers.go:469] Request Headers:
	I0729 18:35:44.444753 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:35:44.444757 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:35:44.447828 1073226 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 18:35:44.448490 1073226 pod_ready.go:92] pod "kube-scheduler-ha-344156" in "kube-system" namespace has status "Ready":"True"
	I0729 18:35:44.448512 1073226 pod_ready.go:81] duration metric: took 399.537008ms for pod "kube-scheduler-ha-344156" in "kube-system" namespace to be "Ready" ...
	I0729 18:35:44.448523 1073226 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-344156-m02" in "kube-system" namespace to be "Ready" ...
	I0729 18:35:44.644604 1073226 request.go:629] Waited for 195.98334ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.225:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-344156-m02
	I0729 18:35:44.644666 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-344156-m02
	I0729 18:35:44.644673 1073226 round_trippers.go:469] Request Headers:
	I0729 18:35:44.644683 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:35:44.644689 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:35:44.648755 1073226 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 18:35:44.844787 1073226 request.go:629] Waited for 195.371689ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.225:8443/api/v1/nodes/ha-344156-m02
	I0729 18:35:44.844876 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/nodes/ha-344156-m02
	I0729 18:35:44.844884 1073226 round_trippers.go:469] Request Headers:
	I0729 18:35:44.844919 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:35:44.844936 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:35:44.848291 1073226 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 18:35:44.848940 1073226 pod_ready.go:92] pod "kube-scheduler-ha-344156-m02" in "kube-system" namespace has status "Ready":"True"
	I0729 18:35:44.848962 1073226 pod_ready.go:81] duration metric: took 400.431043ms for pod "kube-scheduler-ha-344156-m02" in "kube-system" namespace to be "Ready" ...
	I0729 18:35:44.848976 1073226 pod_ready.go:38] duration metric: took 4.001172836s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
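The repeated "Waited ... due to client-side throttling, not priority and fairness" lines above come from client-go's client-side rate limiter, not from the API server. A hedged sketch of how a caller can raise that limit (client-go's usual defaults are roughly QPS 5 / Burst 10; the helper name is hypothetical, not minikube code):

package main

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// newFastClient builds a clientset with a higher client-side rate limit so
// short bursts of GETs are not queued the way the request.go:629 lines show.
func newFastClient(kubeconfig string) (*kubernetes.Clientset, error) {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return nil, err
	}
	cfg.QPS = 50   // average requests per second allowed by the client
	cfg.Burst = 100 // short bursts allowed above QPS
	return kubernetes.NewForConfig(cfg)
}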
	I0729 18:35:44.848999 1073226 api_server.go:52] waiting for apiserver process to appear ...
	I0729 18:35:44.849071 1073226 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:35:44.865607 1073226 api_server.go:72] duration metric: took 17.834187388s to wait for apiserver process to appear ...
	I0729 18:35:44.865631 1073226 api_server.go:88] waiting for apiserver healthz status ...
	I0729 18:35:44.865654 1073226 api_server.go:253] Checking apiserver healthz at https://192.168.39.225:8443/healthz ...
	I0729 18:35:44.870139 1073226 api_server.go:279] https://192.168.39.225:8443/healthz returned 200:
	ok
	I0729 18:35:44.870279 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/version
	I0729 18:35:44.870292 1073226 round_trippers.go:469] Request Headers:
	I0729 18:35:44.870303 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:35:44.870311 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:35:44.871142 1073226 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0729 18:35:44.871254 1073226 api_server.go:141] control plane version: v1.30.3
	I0729 18:35:44.871270 1073226 api_server.go:131] duration metric: took 5.634016ms to wait for apiserver health ...
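The apiserver health check above is two probes: GET /healthz, which should return the literal body "ok", and GET /version, which yields the control plane version (v1.30.3 here). A small client-go sketch of the same checks (illustrative, not minikube's code; assumes a rest.Config is already available):

package main

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

// checkAPIServer probes /healthz and /version the way the log above does.
func checkAPIServer(ctx context.Context, cfg *rest.Config) error {
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		return err
	}
	body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").Do(ctx).Raw()
	if err != nil {
		return err
	}
	fmt.Printf("healthz: %s\n", body) // "ok" when the apiserver is healthy
	v, err := cs.Discovery().ServerVersion()
	if err != nil {
		return err
	}
	fmt.Printf("control plane version: %s\n", v.GitVersion) // e.g. v1.30.3
	return nil
}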
	I0729 18:35:44.871278 1073226 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 18:35:45.044849 1073226 request.go:629] Waited for 173.431592ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.225:8443/api/v1/namespaces/kube-system/pods
	I0729 18:35:45.044908 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/namespaces/kube-system/pods
	I0729 18:35:45.044913 1073226 round_trippers.go:469] Request Headers:
	I0729 18:35:45.044921 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:35:45.044925 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:35:45.050279 1073226 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0729 18:35:45.054829 1073226 system_pods.go:59] 17 kube-system pods found
	I0729 18:35:45.054873 1073226 system_pods.go:61] "coredns-7db6d8ff4d-5slmg" [f2aca93c-209e-48b6-a9a5-692bdf185129] Running
	I0729 18:35:45.054880 1073226 system_pods.go:61] "coredns-7db6d8ff4d-h5h7v" [b2b09553-dd59-44ab-a738-41e872defd34] Running
	I0729 18:35:45.054886 1073226 system_pods.go:61] "etcd-ha-344156" [2e8b83d5-7017-4608-800a-47e3400d7202] Running
	I0729 18:35:45.054892 1073226 system_pods.go:61] "etcd-ha-344156-m02" [b5f24011-5d19-4d79-9ce3-512d04f85f7b] Running
	I0729 18:35:45.054896 1073226 system_pods.go:61] "kindnet-84nqp" [f4e18e53-1c72-440f-82b2-bd1b4306af12] Running
	I0729 18:35:45.054903 1073226 system_pods.go:61] "kindnet-b85cc" [f441d276-e90f-447c-add8-ca3ff1cfe1b7] Running
	I0729 18:35:45.054906 1073226 system_pods.go:61] "kube-apiserver-ha-344156" [21dabe32-a355-40dd-a5fa-07799c64e9c8] Running
	I0729 18:35:45.054913 1073226 system_pods.go:61] "kube-apiserver-ha-344156-m02" [1b4acc44-23c7-4357-aa12-1b8c334ee75b] Running
	I0729 18:35:45.054916 1073226 system_pods.go:61] "kube-controller-manager-ha-344156" [f978182c-8550-4c1f-9bd2-2472243bcff3] Running
	I0729 18:35:45.054920 1073226 system_pods.go:61] "kube-controller-manager-ha-344156-m02" [64231ae8-189e-4209-b17f-ebc54671ae12] Running
	I0729 18:35:45.054924 1073226 system_pods.go:61] "kube-proxy-4p5r9" [de6a7e19-b62d-4fb8-80f1-91f95f682925] Running
	I0729 18:35:45.054930 1073226 system_pods.go:61] "kube-proxy-gp282" [abf94303-b608-45b5-ae8b-9288be614a8f] Running
	I0729 18:35:45.054933 1073226 system_pods.go:61] "kube-scheduler-ha-344156" [f553855a-6964-49d8-81e3-da002793db58] Running
	I0729 18:35:45.054939 1073226 system_pods.go:61] "kube-scheduler-ha-344156-m02" [18eb83e2-8567-4b2d-a205-711e500cedca] Running
	I0729 18:35:45.054942 1073226 system_pods.go:61] "kube-vip-ha-344156" [586052c5-c670-4957-b052-e2a7bf8bafb2] Running
	I0729 18:35:45.054945 1073226 system_pods.go:61] "kube-vip-ha-344156-m02" [a7d6e797-e7c1-457f-820e-a08d50f0a954] Running
	I0729 18:35:45.054948 1073226 system_pods.go:61] "storage-provisioner" [3ea00f25-122f-4a18-9d69-3606cfddf4d9] Running
	I0729 18:35:45.054954 1073226 system_pods.go:74] duration metric: took 183.670778ms to wait for pod list to return data ...
	I0729 18:35:45.054964 1073226 default_sa.go:34] waiting for default service account to be created ...
	I0729 18:35:45.244256 1073226 request.go:629] Waited for 189.211461ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.225:8443/api/v1/namespaces/default/serviceaccounts
	I0729 18:35:45.244362 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/namespaces/default/serviceaccounts
	I0729 18:35:45.244370 1073226 round_trippers.go:469] Request Headers:
	I0729 18:35:45.244382 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:35:45.244390 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:35:45.247495 1073226 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 18:35:45.247800 1073226 default_sa.go:45] found service account: "default"
	I0729 18:35:45.247820 1073226 default_sa.go:55] duration metric: took 192.849189ms for default service account to be created ...
	I0729 18:35:45.247832 1073226 system_pods.go:116] waiting for k8s-apps to be running ...
	I0729 18:35:45.444710 1073226 request.go:629] Waited for 196.788818ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.225:8443/api/v1/namespaces/kube-system/pods
	I0729 18:35:45.444776 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/namespaces/kube-system/pods
	I0729 18:35:45.444781 1073226 round_trippers.go:469] Request Headers:
	I0729 18:35:45.444789 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:35:45.444793 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:35:45.450315 1073226 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0729 18:35:45.454802 1073226 system_pods.go:86] 17 kube-system pods found
	I0729 18:35:45.454833 1073226 system_pods.go:89] "coredns-7db6d8ff4d-5slmg" [f2aca93c-209e-48b6-a9a5-692bdf185129] Running
	I0729 18:35:45.454841 1073226 system_pods.go:89] "coredns-7db6d8ff4d-h5h7v" [b2b09553-dd59-44ab-a738-41e872defd34] Running
	I0729 18:35:45.454862 1073226 system_pods.go:89] "etcd-ha-344156" [2e8b83d5-7017-4608-800a-47e3400d7202] Running
	I0729 18:35:45.454868 1073226 system_pods.go:89] "etcd-ha-344156-m02" [b5f24011-5d19-4d79-9ce3-512d04f85f7b] Running
	I0729 18:35:45.454874 1073226 system_pods.go:89] "kindnet-84nqp" [f4e18e53-1c72-440f-82b2-bd1b4306af12] Running
	I0729 18:35:45.454880 1073226 system_pods.go:89] "kindnet-b85cc" [f441d276-e90f-447c-add8-ca3ff1cfe1b7] Running
	I0729 18:35:45.454887 1073226 system_pods.go:89] "kube-apiserver-ha-344156" [21dabe32-a355-40dd-a5fa-07799c64e9c8] Running
	I0729 18:35:45.454894 1073226 system_pods.go:89] "kube-apiserver-ha-344156-m02" [1b4acc44-23c7-4357-aa12-1b8c334ee75b] Running
	I0729 18:35:45.454905 1073226 system_pods.go:89] "kube-controller-manager-ha-344156" [f978182c-8550-4c1f-9bd2-2472243bcff3] Running
	I0729 18:35:45.454917 1073226 system_pods.go:89] "kube-controller-manager-ha-344156-m02" [64231ae8-189e-4209-b17f-ebc54671ae12] Running
	I0729 18:35:45.454924 1073226 system_pods.go:89] "kube-proxy-4p5r9" [de6a7e19-b62d-4fb8-80f1-91f95f682925] Running
	I0729 18:35:45.454931 1073226 system_pods.go:89] "kube-proxy-gp282" [abf94303-b608-45b5-ae8b-9288be614a8f] Running
	I0729 18:35:45.454941 1073226 system_pods.go:89] "kube-scheduler-ha-344156" [f553855a-6964-49d8-81e3-da002793db58] Running
	I0729 18:35:45.454951 1073226 system_pods.go:89] "kube-scheduler-ha-344156-m02" [18eb83e2-8567-4b2d-a205-711e500cedca] Running
	I0729 18:35:45.454959 1073226 system_pods.go:89] "kube-vip-ha-344156" [586052c5-c670-4957-b052-e2a7bf8bafb2] Running
	I0729 18:35:45.454964 1073226 system_pods.go:89] "kube-vip-ha-344156-m02" [a7d6e797-e7c1-457f-820e-a08d50f0a954] Running
	I0729 18:35:45.454970 1073226 system_pods.go:89] "storage-provisioner" [3ea00f25-122f-4a18-9d69-3606cfddf4d9] Running
	I0729 18:35:45.454981 1073226 system_pods.go:126] duration metric: took 207.141096ms to wait for k8s-apps to be running ...
	I0729 18:35:45.454994 1073226 system_svc.go:44] waiting for kubelet service to be running ....
	I0729 18:35:45.455050 1073226 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 18:35:45.470678 1073226 system_svc.go:56] duration metric: took 15.673314ms WaitForService to wait for kubelet
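The kubelet check above is a systemd query run over SSH: the command exits 0 when the unit is active. Run locally, the equivalent check reduces to a plain exit-code test (sketch only, not minikube's ssh_runner path):

package main

import (
	"fmt"
	"os/exec"
)

// kubeletActive reports whether the kubelet systemd unit is active.
// `systemctl is-active --quiet kubelet` exits 0 only when the unit is active.
func kubeletActive() bool {
	err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run()
	return err == nil
}

func main() {
	fmt.Println("kubelet running:", kubeletActive())
}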
	I0729 18:35:45.470713 1073226 kubeadm.go:582] duration metric: took 18.439296601s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 18:35:45.470743 1073226 node_conditions.go:102] verifying NodePressure condition ...
	I0729 18:35:45.645135 1073226 request.go:629] Waited for 174.314253ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.225:8443/api/v1/nodes
	I0729 18:35:45.645218 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/nodes
	I0729 18:35:45.645224 1073226 round_trippers.go:469] Request Headers:
	I0729 18:35:45.645232 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:35:45.645237 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:35:45.648752 1073226 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 18:35:45.649446 1073226 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 18:35:45.649469 1073226 node_conditions.go:123] node cpu capacity is 2
	I0729 18:35:45.649483 1073226 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 18:35:45.649487 1073226 node_conditions.go:123] node cpu capacity is 2
	I0729 18:35:45.649491 1073226 node_conditions.go:105] duration metric: took 178.742302ms to run NodePressure ...
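The NodePressure step above reads each node's capacity from the API (here: 17734596Ki of ephemeral storage and 2 CPUs per node). A client-go sketch of the same read (illustrative; assumes an already-built clientset):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// printNodeCapacity lists all nodes and prints the two capacity figures
// that appear in the node_conditions lines above.
func printNodeCapacity(ctx context.Context, cs *kubernetes.Clientset) error {
	nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, n := range nodes.Items {
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), storage.String())
	}
	return nil
}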
	I0729 18:35:45.649505 1073226 start.go:241] waiting for startup goroutines ...
	I0729 18:35:45.649535 1073226 start.go:255] writing updated cluster config ...
	I0729 18:35:45.651570 1073226 out.go:177] 
	I0729 18:35:45.653014 1073226 config.go:182] Loaded profile config "ha-344156": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 18:35:45.653099 1073226 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156/config.json ...
	I0729 18:35:45.657777 1073226 out.go:177] * Starting "ha-344156-m03" control-plane node in "ha-344156" cluster
	I0729 18:35:45.658705 1073226 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 18:35:45.658726 1073226 cache.go:56] Caching tarball of preloaded images
	I0729 18:35:45.658821 1073226 preload.go:172] Found /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 18:35:45.658832 1073226 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0729 18:35:45.658936 1073226 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156/config.json ...
	I0729 18:35:45.659094 1073226 start.go:360] acquireMachinesLock for ha-344156-m03: {Name:mk0d8d947666df844b5fc2c0e0eebbfed69b4140 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 18:35:45.659140 1073226 start.go:364] duration metric: took 26.086µs to acquireMachinesLock for "ha-344156-m03"
	I0729 18:35:45.659164 1073226 start.go:93] Provisioning new machine with config: &{Name:ha-344156 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-344156 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.225 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.249 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 18:35:45.659253 1073226 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0729 18:35:45.660493 1073226 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 18:35:45.660595 1073226 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:35:45.660635 1073226 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:35:45.675860 1073226 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34877
	I0729 18:35:45.676276 1073226 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:35:45.676811 1073226 main.go:141] libmachine: Using API Version  1
	I0729 18:35:45.676834 1073226 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:35:45.677106 1073226 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:35:45.677277 1073226 main.go:141] libmachine: (ha-344156-m03) Calling .GetMachineName
	I0729 18:35:45.677391 1073226 main.go:141] libmachine: (ha-344156-m03) Calling .DriverName
	I0729 18:35:45.677523 1073226 start.go:159] libmachine.API.Create for "ha-344156" (driver="kvm2")
	I0729 18:35:45.677552 1073226 client.go:168] LocalClient.Create starting
	I0729 18:35:45.677583 1073226 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem
	I0729 18:35:45.677621 1073226 main.go:141] libmachine: Decoding PEM data...
	I0729 18:35:45.677636 1073226 main.go:141] libmachine: Parsing certificate...
	I0729 18:35:45.677689 1073226 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/cert.pem
	I0729 18:35:45.677706 1073226 main.go:141] libmachine: Decoding PEM data...
	I0729 18:35:45.677716 1073226 main.go:141] libmachine: Parsing certificate...
	I0729 18:35:45.677730 1073226 main.go:141] libmachine: Running pre-create checks...
	I0729 18:35:45.677738 1073226 main.go:141] libmachine: (ha-344156-m03) Calling .PreCreateCheck
	I0729 18:35:45.677911 1073226 main.go:141] libmachine: (ha-344156-m03) Calling .GetConfigRaw
	I0729 18:35:45.678294 1073226 main.go:141] libmachine: Creating machine...
	I0729 18:35:45.678308 1073226 main.go:141] libmachine: (ha-344156-m03) Calling .Create
	I0729 18:35:45.678425 1073226 main.go:141] libmachine: (ha-344156-m03) Creating KVM machine...
	I0729 18:35:45.679748 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | found existing default KVM network
	I0729 18:35:45.679836 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | found existing private KVM network mk-ha-344156
	I0729 18:35:45.679964 1073226 main.go:141] libmachine: (ha-344156-m03) Setting up store path in /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/ha-344156-m03 ...
	I0729 18:35:45.679987 1073226 main.go:141] libmachine: (ha-344156-m03) Building disk image from file:///home/jenkins/minikube-integration/19312-1055011/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso
	I0729 18:35:45.680076 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | I0729 18:35:45.679964 1073996 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19312-1055011/.minikube
	I0729 18:35:45.680223 1073226 main.go:141] libmachine: (ha-344156-m03) Downloading /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19312-1055011/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso...
	I0729 18:35:45.953826 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | I0729 18:35:45.953719 1073996 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/ha-344156-m03/id_rsa...
	I0729 18:35:46.074158 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | I0729 18:35:46.074026 1073996 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/ha-344156-m03/ha-344156-m03.rawdisk...
	I0729 18:35:46.074200 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | Writing magic tar header
	I0729 18:35:46.074215 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | Writing SSH key tar header
	I0729 18:35:46.074227 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | I0729 18:35:46.074138 1073996 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/ha-344156-m03 ...
	I0729 18:35:46.074244 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/ha-344156-m03
	I0729 18:35:46.074344 1073226 main.go:141] libmachine: (ha-344156-m03) Setting executable bit set on /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/ha-344156-m03 (perms=drwx------)
	I0729 18:35:46.074374 1073226 main.go:141] libmachine: (ha-344156-m03) Setting executable bit set on /home/jenkins/minikube-integration/19312-1055011/.minikube/machines (perms=drwxr-xr-x)
	I0729 18:35:46.074389 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19312-1055011/.minikube/machines
	I0729 18:35:46.074404 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19312-1055011/.minikube
	I0729 18:35:46.074414 1073226 main.go:141] libmachine: (ha-344156-m03) Setting executable bit set on /home/jenkins/minikube-integration/19312-1055011/.minikube (perms=drwxr-xr-x)
	I0729 18:35:46.074426 1073226 main.go:141] libmachine: (ha-344156-m03) Setting executable bit set on /home/jenkins/minikube-integration/19312-1055011 (perms=drwxrwxr-x)
	I0729 18:35:46.074434 1073226 main.go:141] libmachine: (ha-344156-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0729 18:35:46.074442 1073226 main.go:141] libmachine: (ha-344156-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0729 18:35:46.074450 1073226 main.go:141] libmachine: (ha-344156-m03) Creating domain...
	I0729 18:35:46.074461 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19312-1055011
	I0729 18:35:46.074469 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0729 18:35:46.074477 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | Checking permissions on dir: /home/jenkins
	I0729 18:35:46.074488 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | Checking permissions on dir: /home
	I0729 18:35:46.074520 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | Skipping /home - not owner
	I0729 18:35:46.075370 1073226 main.go:141] libmachine: (ha-344156-m03) define libvirt domain using xml: 
	I0729 18:35:46.075390 1073226 main.go:141] libmachine: (ha-344156-m03) <domain type='kvm'>
	I0729 18:35:46.075396 1073226 main.go:141] libmachine: (ha-344156-m03)   <name>ha-344156-m03</name>
	I0729 18:35:46.075402 1073226 main.go:141] libmachine: (ha-344156-m03)   <memory unit='MiB'>2200</memory>
	I0729 18:35:46.075410 1073226 main.go:141] libmachine: (ha-344156-m03)   <vcpu>2</vcpu>
	I0729 18:35:46.075421 1073226 main.go:141] libmachine: (ha-344156-m03)   <features>
	I0729 18:35:46.075429 1073226 main.go:141] libmachine: (ha-344156-m03)     <acpi/>
	I0729 18:35:46.075435 1073226 main.go:141] libmachine: (ha-344156-m03)     <apic/>
	I0729 18:35:46.075442 1073226 main.go:141] libmachine: (ha-344156-m03)     <pae/>
	I0729 18:35:46.075448 1073226 main.go:141] libmachine: (ha-344156-m03)     
	I0729 18:35:46.075456 1073226 main.go:141] libmachine: (ha-344156-m03)   </features>
	I0729 18:35:46.075464 1073226 main.go:141] libmachine: (ha-344156-m03)   <cpu mode='host-passthrough'>
	I0729 18:35:46.075471 1073226 main.go:141] libmachine: (ha-344156-m03)   
	I0729 18:35:46.075477 1073226 main.go:141] libmachine: (ha-344156-m03)   </cpu>
	I0729 18:35:46.075483 1073226 main.go:141] libmachine: (ha-344156-m03)   <os>
	I0729 18:35:46.075494 1073226 main.go:141] libmachine: (ha-344156-m03)     <type>hvm</type>
	I0729 18:35:46.075503 1073226 main.go:141] libmachine: (ha-344156-m03)     <boot dev='cdrom'/>
	I0729 18:35:46.075512 1073226 main.go:141] libmachine: (ha-344156-m03)     <boot dev='hd'/>
	I0729 18:35:46.075525 1073226 main.go:141] libmachine: (ha-344156-m03)     <bootmenu enable='no'/>
	I0729 18:35:46.075535 1073226 main.go:141] libmachine: (ha-344156-m03)   </os>
	I0729 18:35:46.075540 1073226 main.go:141] libmachine: (ha-344156-m03)   <devices>
	I0729 18:35:46.075556 1073226 main.go:141] libmachine: (ha-344156-m03)     <disk type='file' device='cdrom'>
	I0729 18:35:46.075566 1073226 main.go:141] libmachine: (ha-344156-m03)       <source file='/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/ha-344156-m03/boot2docker.iso'/>
	I0729 18:35:46.075596 1073226 main.go:141] libmachine: (ha-344156-m03)       <target dev='hdc' bus='scsi'/>
	I0729 18:35:46.075618 1073226 main.go:141] libmachine: (ha-344156-m03)       <readonly/>
	I0729 18:35:46.075629 1073226 main.go:141] libmachine: (ha-344156-m03)     </disk>
	I0729 18:35:46.075638 1073226 main.go:141] libmachine: (ha-344156-m03)     <disk type='file' device='disk'>
	I0729 18:35:46.075654 1073226 main.go:141] libmachine: (ha-344156-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0729 18:35:46.075666 1073226 main.go:141] libmachine: (ha-344156-m03)       <source file='/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/ha-344156-m03/ha-344156-m03.rawdisk'/>
	I0729 18:35:46.075678 1073226 main.go:141] libmachine: (ha-344156-m03)       <target dev='hda' bus='virtio'/>
	I0729 18:35:46.075685 1073226 main.go:141] libmachine: (ha-344156-m03)     </disk>
	I0729 18:35:46.075706 1073226 main.go:141] libmachine: (ha-344156-m03)     <interface type='network'>
	I0729 18:35:46.075722 1073226 main.go:141] libmachine: (ha-344156-m03)       <source network='mk-ha-344156'/>
	I0729 18:35:46.075738 1073226 main.go:141] libmachine: (ha-344156-m03)       <model type='virtio'/>
	I0729 18:35:46.075754 1073226 main.go:141] libmachine: (ha-344156-m03)     </interface>
	I0729 18:35:46.075764 1073226 main.go:141] libmachine: (ha-344156-m03)     <interface type='network'>
	I0729 18:35:46.075775 1073226 main.go:141] libmachine: (ha-344156-m03)       <source network='default'/>
	I0729 18:35:46.075783 1073226 main.go:141] libmachine: (ha-344156-m03)       <model type='virtio'/>
	I0729 18:35:46.075793 1073226 main.go:141] libmachine: (ha-344156-m03)     </interface>
	I0729 18:35:46.075802 1073226 main.go:141] libmachine: (ha-344156-m03)     <serial type='pty'>
	I0729 18:35:46.075812 1073226 main.go:141] libmachine: (ha-344156-m03)       <target port='0'/>
	I0729 18:35:46.075821 1073226 main.go:141] libmachine: (ha-344156-m03)     </serial>
	I0729 18:35:46.075831 1073226 main.go:141] libmachine: (ha-344156-m03)     <console type='pty'>
	I0729 18:35:46.075853 1073226 main.go:141] libmachine: (ha-344156-m03)       <target type='serial' port='0'/>
	I0729 18:35:46.075869 1073226 main.go:141] libmachine: (ha-344156-m03)     </console>
	I0729 18:35:46.075883 1073226 main.go:141] libmachine: (ha-344156-m03)     <rng model='virtio'>
	I0729 18:35:46.075895 1073226 main.go:141] libmachine: (ha-344156-m03)       <backend model='random'>/dev/random</backend>
	I0729 18:35:46.075903 1073226 main.go:141] libmachine: (ha-344156-m03)     </rng>
	I0729 18:35:46.075913 1073226 main.go:141] libmachine: (ha-344156-m03)     
	I0729 18:35:46.075921 1073226 main.go:141] libmachine: (ha-344156-m03)     
	I0729 18:35:46.075929 1073226 main.go:141] libmachine: (ha-344156-m03)   </devices>
	I0729 18:35:46.075946 1073226 main.go:141] libmachine: (ha-344156-m03) </domain>
	I0729 18:35:46.075961 1073226 main.go:141] libmachine: (ha-344156-m03) 
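The block above is the complete domain XML that the kvm2 driver defines through libvirt for the new node. For reference, the same definition can be registered and booted by hand with virsh; a minimal sketch, assuming the XML above has been saved to a file (the /tmp path is only a placeholder):

# Rough manual equivalent of the driver's "define libvirt domain using xml" step.
virsh define /tmp/ha-344156-m03.xml    # register the domain definition with libvirtd
virsh start ha-344156-m03              # boot it (the driver does the same via the API)
virsh dominfo ha-344156-m03            # confirm state, vCPU count and memory match the XML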
	I0729 18:35:46.082480 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | domain ha-344156-m03 has defined MAC address 52:54:00:9c:10:53 in network default
	I0729 18:35:46.083122 1073226 main.go:141] libmachine: (ha-344156-m03) Ensuring networks are active...
	I0729 18:35:46.083141 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | domain ha-344156-m03 has defined MAC address 52:54:00:49:5c:73 in network mk-ha-344156
	I0729 18:35:46.083953 1073226 main.go:141] libmachine: (ha-344156-m03) Ensuring network default is active
	I0729 18:35:46.084255 1073226 main.go:141] libmachine: (ha-344156-m03) Ensuring network mk-ha-344156 is active
	I0729 18:35:46.084607 1073226 main.go:141] libmachine: (ha-344156-m03) Getting domain xml...
	I0729 18:35:46.085275 1073226 main.go:141] libmachine: (ha-344156-m03) Creating domain...
	I0729 18:35:47.305641 1073226 main.go:141] libmachine: (ha-344156-m03) Waiting to get IP...
	I0729 18:35:47.306359 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | domain ha-344156-m03 has defined MAC address 52:54:00:49:5c:73 in network mk-ha-344156
	I0729 18:35:47.306773 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | unable to find current IP address of domain ha-344156-m03 in network mk-ha-344156
	I0729 18:35:47.306809 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | I0729 18:35:47.306750 1073996 retry.go:31] will retry after 290.792301ms: waiting for machine to come up
	I0729 18:35:47.599494 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | domain ha-344156-m03 has defined MAC address 52:54:00:49:5c:73 in network mk-ha-344156
	I0729 18:35:47.599929 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | unable to find current IP address of domain ha-344156-m03 in network mk-ha-344156
	I0729 18:35:47.599979 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | I0729 18:35:47.599871 1073996 retry.go:31] will retry after 323.451262ms: waiting for machine to come up
	I0729 18:35:47.925368 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | domain ha-344156-m03 has defined MAC address 52:54:00:49:5c:73 in network mk-ha-344156
	I0729 18:35:47.925857 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | unable to find current IP address of domain ha-344156-m03 in network mk-ha-344156
	I0729 18:35:47.925884 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | I0729 18:35:47.925816 1073996 retry.go:31] will retry after 397.336676ms: waiting for machine to come up
	I0729 18:35:48.325126 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | domain ha-344156-m03 has defined MAC address 52:54:00:49:5c:73 in network mk-ha-344156
	I0729 18:35:48.325651 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | unable to find current IP address of domain ha-344156-m03 in network mk-ha-344156
	I0729 18:35:48.325681 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | I0729 18:35:48.325604 1073996 retry.go:31] will retry after 378.992466ms: waiting for machine to come up
	I0729 18:35:48.706215 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | domain ha-344156-m03 has defined MAC address 52:54:00:49:5c:73 in network mk-ha-344156
	I0729 18:35:48.706597 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | unable to find current IP address of domain ha-344156-m03 in network mk-ha-344156
	I0729 18:35:48.706649 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | I0729 18:35:48.706565 1073996 retry.go:31] will retry after 709.195134ms: waiting for machine to come up
	I0729 18:35:49.417593 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | domain ha-344156-m03 has defined MAC address 52:54:00:49:5c:73 in network mk-ha-344156
	I0729 18:35:49.418035 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | unable to find current IP address of domain ha-344156-m03 in network mk-ha-344156
	I0729 18:35:49.418061 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | I0729 18:35:49.417987 1073996 retry.go:31] will retry after 695.222412ms: waiting for machine to come up
	I0729 18:35:50.114890 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | domain ha-344156-m03 has defined MAC address 52:54:00:49:5c:73 in network mk-ha-344156
	I0729 18:35:50.115433 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | unable to find current IP address of domain ha-344156-m03 in network mk-ha-344156
	I0729 18:35:50.115489 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | I0729 18:35:50.115401 1073996 retry.go:31] will retry after 1.162350407s: waiting for machine to come up
	I0729 18:35:51.278969 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | domain ha-344156-m03 has defined MAC address 52:54:00:49:5c:73 in network mk-ha-344156
	I0729 18:35:51.279365 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | unable to find current IP address of domain ha-344156-m03 in network mk-ha-344156
	I0729 18:35:51.279395 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | I0729 18:35:51.279308 1073996 retry.go:31] will retry after 1.192041574s: waiting for machine to come up
	I0729 18:35:52.473632 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | domain ha-344156-m03 has defined MAC address 52:54:00:49:5c:73 in network mk-ha-344156
	I0729 18:35:52.474049 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | unable to find current IP address of domain ha-344156-m03 in network mk-ha-344156
	I0729 18:35:52.474073 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | I0729 18:35:52.474007 1073996 retry.go:31] will retry after 1.569107876s: waiting for machine to come up
	I0729 18:35:54.045735 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | domain ha-344156-m03 has defined MAC address 52:54:00:49:5c:73 in network mk-ha-344156
	I0729 18:35:54.046153 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | unable to find current IP address of domain ha-344156-m03 in network mk-ha-344156
	I0729 18:35:54.046178 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | I0729 18:35:54.046098 1073996 retry.go:31] will retry after 1.434983344s: waiting for machine to come up
	I0729 18:35:55.483034 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | domain ha-344156-m03 has defined MAC address 52:54:00:49:5c:73 in network mk-ha-344156
	I0729 18:35:55.483461 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | unable to find current IP address of domain ha-344156-m03 in network mk-ha-344156
	I0729 18:35:55.483487 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | I0729 18:35:55.483412 1073996 retry.go:31] will retry after 2.844985256s: waiting for machine to come up
	I0729 18:35:58.331917 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | domain ha-344156-m03 has defined MAC address 52:54:00:49:5c:73 in network mk-ha-344156
	I0729 18:35:58.332323 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | unable to find current IP address of domain ha-344156-m03 in network mk-ha-344156
	I0729 18:35:58.332346 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | I0729 18:35:58.332285 1073996 retry.go:31] will retry after 2.425853936s: waiting for machine to come up
	I0729 18:36:00.759858 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | domain ha-344156-m03 has defined MAC address 52:54:00:49:5c:73 in network mk-ha-344156
	I0729 18:36:00.760325 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | unable to find current IP address of domain ha-344156-m03 in network mk-ha-344156
	I0729 18:36:00.760390 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | I0729 18:36:00.760321 1073996 retry.go:31] will retry after 3.160933834s: waiting for machine to come up
	I0729 18:36:03.924027 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | domain ha-344156-m03 has defined MAC address 52:54:00:49:5c:73 in network mk-ha-344156
	I0729 18:36:03.924524 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | unable to find current IP address of domain ha-344156-m03 in network mk-ha-344156
	I0729 18:36:03.924557 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | I0729 18:36:03.924459 1073996 retry.go:31] will retry after 5.464362473s: waiting for machine to come up
	I0729 18:36:09.392030 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | domain ha-344156-m03 has defined MAC address 52:54:00:49:5c:73 in network mk-ha-344156
	I0729 18:36:09.392593 1073226 main.go:141] libmachine: (ha-344156-m03) Found IP for machine: 192.168.39.148
	I0729 18:36:09.392627 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | domain ha-344156-m03 has current primary IP address 192.168.39.148 and MAC address 52:54:00:49:5c:73 in network mk-ha-344156
	I0729 18:36:09.392636 1073226 main.go:141] libmachine: (ha-344156-m03) Reserving static IP address...
	I0729 18:36:09.393026 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | unable to find host DHCP lease matching {name: "ha-344156-m03", mac: "52:54:00:49:5c:73", ip: "192.168.39.148"} in network mk-ha-344156
	I0729 18:36:09.465204 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | Getting to WaitForSSH function...
	I0729 18:36:09.465241 1073226 main.go:141] libmachine: (ha-344156-m03) Reserved static IP address: 192.168.39.148
	I0729 18:36:09.465292 1073226 main.go:141] libmachine: (ha-344156-m03) Waiting for SSH to be available...
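The retry loop above (starting around 300 ms and backing off to several seconds) is simply polling libvirt until the new MAC address 52:54:00:49:5c:73 shows up in the DHCP lease table of the mk-ha-344156 network. The same lookup can be done directly on the host with virsh:

virsh net-dhcp-leases mk-ha-344156             # all leases handed out on the cluster network
virsh domifaddr ha-344156-m03 --source lease   # addresses recorded for this domain only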
	I0729 18:36:09.468097 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | domain ha-344156-m03 has defined MAC address 52:54:00:49:5c:73 in network mk-ha-344156
	I0729 18:36:09.468632 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:5c:73", ip: ""} in network mk-ha-344156: {Iface:virbr1 ExpiryTime:2024-07-29 19:36:00 +0000 UTC Type:0 Mac:52:54:00:49:5c:73 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:minikube Clientid:01:52:54:00:49:5c:73}
	I0729 18:36:09.468659 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | domain ha-344156-m03 has defined IP address 192.168.39.148 and MAC address 52:54:00:49:5c:73 in network mk-ha-344156
	I0729 18:36:09.468820 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | Using SSH client type: external
	I0729 18:36:09.468844 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/ha-344156-m03/id_rsa (-rw-------)
	I0729 18:36:09.468869 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.148 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/ha-344156-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 18:36:09.468880 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | About to run SSH command:
	I0729 18:36:09.468901 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | exit 0
	I0729 18:36:09.590954 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | SSH cmd err, output: <nil>: 
	I0729 18:36:09.591259 1073226 main.go:141] libmachine: (ha-344156-m03) KVM machine creation complete!
	I0729 18:36:09.591534 1073226 main.go:141] libmachine: (ha-344156-m03) Calling .GetConfigRaw
	I0729 18:36:09.592111 1073226 main.go:141] libmachine: (ha-344156-m03) Calling .DriverName
	I0729 18:36:09.592340 1073226 main.go:141] libmachine: (ha-344156-m03) Calling .DriverName
	I0729 18:36:09.592485 1073226 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0729 18:36:09.592495 1073226 main.go:141] libmachine: (ha-344156-m03) Calling .GetState
	I0729 18:36:09.593696 1073226 main.go:141] libmachine: Detecting operating system of created instance...
	I0729 18:36:09.593707 1073226 main.go:141] libmachine: Waiting for SSH to be available...
	I0729 18:36:09.593713 1073226 main.go:141] libmachine: Getting to WaitForSSH function...
	I0729 18:36:09.593719 1073226 main.go:141] libmachine: (ha-344156-m03) Calling .GetSSHHostname
	I0729 18:36:09.595771 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | domain ha-344156-m03 has defined MAC address 52:54:00:49:5c:73 in network mk-ha-344156
	I0729 18:36:09.596139 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:5c:73", ip: ""} in network mk-ha-344156: {Iface:virbr1 ExpiryTime:2024-07-29 19:36:00 +0000 UTC Type:0 Mac:52:54:00:49:5c:73 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:ha-344156-m03 Clientid:01:52:54:00:49:5c:73}
	I0729 18:36:09.596170 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | domain ha-344156-m03 has defined IP address 192.168.39.148 and MAC address 52:54:00:49:5c:73 in network mk-ha-344156
	I0729 18:36:09.596321 1073226 main.go:141] libmachine: (ha-344156-m03) Calling .GetSSHPort
	I0729 18:36:09.596472 1073226 main.go:141] libmachine: (ha-344156-m03) Calling .GetSSHKeyPath
	I0729 18:36:09.596613 1073226 main.go:141] libmachine: (ha-344156-m03) Calling .GetSSHKeyPath
	I0729 18:36:09.596753 1073226 main.go:141] libmachine: (ha-344156-m03) Calling .GetSSHUsername
	I0729 18:36:09.596893 1073226 main.go:141] libmachine: Using SSH client type: native
	I0729 18:36:09.597152 1073226 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.148 22 <nil> <nil>}
	I0729 18:36:09.597166 1073226 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0729 18:36:09.698186 1073226 main.go:141] libmachine: SSH cmd err, output: <nil>: 
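Both SSH probes above (first the external ssh binary, then the native Go client) do nothing more than run `exit 0` until the guest answers. A shell equivalent of that wait, using the key path and address from the log:

until ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o ConnectTimeout=10 \
      -i /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/ha-344156-m03/id_rsa \
      docker@192.168.39.148 exit 0; do
  sleep 2   # keep retrying until sshd inside the guest is up
done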
	I0729 18:36:09.698210 1073226 main.go:141] libmachine: Detecting the provisioner...
	I0729 18:36:09.698220 1073226 main.go:141] libmachine: (ha-344156-m03) Calling .GetSSHHostname
	I0729 18:36:09.701105 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | domain ha-344156-m03 has defined MAC address 52:54:00:49:5c:73 in network mk-ha-344156
	I0729 18:36:09.701524 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:5c:73", ip: ""} in network mk-ha-344156: {Iface:virbr1 ExpiryTime:2024-07-29 19:36:00 +0000 UTC Type:0 Mac:52:54:00:49:5c:73 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:ha-344156-m03 Clientid:01:52:54:00:49:5c:73}
	I0729 18:36:09.701553 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | domain ha-344156-m03 has defined IP address 192.168.39.148 and MAC address 52:54:00:49:5c:73 in network mk-ha-344156
	I0729 18:36:09.701787 1073226 main.go:141] libmachine: (ha-344156-m03) Calling .GetSSHPort
	I0729 18:36:09.702005 1073226 main.go:141] libmachine: (ha-344156-m03) Calling .GetSSHKeyPath
	I0729 18:36:09.702201 1073226 main.go:141] libmachine: (ha-344156-m03) Calling .GetSSHKeyPath
	I0729 18:36:09.702371 1073226 main.go:141] libmachine: (ha-344156-m03) Calling .GetSSHUsername
	I0729 18:36:09.702553 1073226 main.go:141] libmachine: Using SSH client type: native
	I0729 18:36:09.702766 1073226 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.148 22 <nil> <nil>}
	I0729 18:36:09.702782 1073226 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0729 18:36:09.811747 1073226 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0729 18:36:09.811828 1073226 main.go:141] libmachine: found compatible host: buildroot
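Provisioner detection is nothing more than parsing the guest's /etc/os-release: the ID=buildroot line is what selects the buildroot provisioner here. On the guest the same check is just:

. /etc/os-release            # os-release is shell-sourceable; sets ID, VERSION_ID, PRETTY_NAME
echo "$ID $VERSION_ID"       # -> buildroot 2023.02.9 on this image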
	I0729 18:36:09.811838 1073226 main.go:141] libmachine: Provisioning with buildroot...
	I0729 18:36:09.811850 1073226 main.go:141] libmachine: (ha-344156-m03) Calling .GetMachineName
	I0729 18:36:09.812116 1073226 buildroot.go:166] provisioning hostname "ha-344156-m03"
	I0729 18:36:09.812151 1073226 main.go:141] libmachine: (ha-344156-m03) Calling .GetMachineName
	I0729 18:36:09.812379 1073226 main.go:141] libmachine: (ha-344156-m03) Calling .GetSSHHostname
	I0729 18:36:09.815003 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | domain ha-344156-m03 has defined MAC address 52:54:00:49:5c:73 in network mk-ha-344156
	I0729 18:36:09.815396 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:5c:73", ip: ""} in network mk-ha-344156: {Iface:virbr1 ExpiryTime:2024-07-29 19:36:00 +0000 UTC Type:0 Mac:52:54:00:49:5c:73 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:ha-344156-m03 Clientid:01:52:54:00:49:5c:73}
	I0729 18:36:09.815418 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | domain ha-344156-m03 has defined IP address 192.168.39.148 and MAC address 52:54:00:49:5c:73 in network mk-ha-344156
	I0729 18:36:09.815610 1073226 main.go:141] libmachine: (ha-344156-m03) Calling .GetSSHPort
	I0729 18:36:09.815800 1073226 main.go:141] libmachine: (ha-344156-m03) Calling .GetSSHKeyPath
	I0729 18:36:09.815959 1073226 main.go:141] libmachine: (ha-344156-m03) Calling .GetSSHKeyPath
	I0729 18:36:09.816102 1073226 main.go:141] libmachine: (ha-344156-m03) Calling .GetSSHUsername
	I0729 18:36:09.816247 1073226 main.go:141] libmachine: Using SSH client type: native
	I0729 18:36:09.816425 1073226 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.148 22 <nil> <nil>}
	I0729 18:36:09.816437 1073226 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-344156-m03 && echo "ha-344156-m03" | sudo tee /etc/hostname
	I0729 18:36:09.935241 1073226 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-344156-m03
	
	I0729 18:36:09.935276 1073226 main.go:141] libmachine: (ha-344156-m03) Calling .GetSSHHostname
	I0729 18:36:09.937946 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | domain ha-344156-m03 has defined MAC address 52:54:00:49:5c:73 in network mk-ha-344156
	I0729 18:36:09.938321 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:5c:73", ip: ""} in network mk-ha-344156: {Iface:virbr1 ExpiryTime:2024-07-29 19:36:00 +0000 UTC Type:0 Mac:52:54:00:49:5c:73 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:ha-344156-m03 Clientid:01:52:54:00:49:5c:73}
	I0729 18:36:09.938354 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | domain ha-344156-m03 has defined IP address 192.168.39.148 and MAC address 52:54:00:49:5c:73 in network mk-ha-344156
	I0729 18:36:09.938619 1073226 main.go:141] libmachine: (ha-344156-m03) Calling .GetSSHPort
	I0729 18:36:09.938833 1073226 main.go:141] libmachine: (ha-344156-m03) Calling .GetSSHKeyPath
	I0729 18:36:09.939058 1073226 main.go:141] libmachine: (ha-344156-m03) Calling .GetSSHKeyPath
	I0729 18:36:09.939246 1073226 main.go:141] libmachine: (ha-344156-m03) Calling .GetSSHUsername
	I0729 18:36:09.939470 1073226 main.go:141] libmachine: Using SSH client type: native
	I0729 18:36:09.939710 1073226 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.148 22 <nil> <nil>}
	I0729 18:36:09.939736 1073226 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-344156-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-344156-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-344156-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 18:36:10.051411 1073226 main.go:141] libmachine: SSH cmd err, output: <nil>: 
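The embedded script above keeps /etc/hosts consistent with the new hostname: if some line already ends in ha-344156-m03 nothing happens, otherwise an existing 127.0.1.1 entry is rewritten or a fresh one is appended. A commented, slightly generalized version of the same idea (NODE stands for whichever hostname is being provisioned):

NODE=ha-344156-m03
if ! grep -q "[[:space:]]${NODE}\$" /etc/hosts; then        # already present? then do nothing
  if grep -q '^127\.0\.1\.1[[:space:]]' /etc/hosts; then    # rewrite an existing 127.0.1.1 line
    sudo sed -i "s/^127\.0\.1\.1[[:space:]].*/127.0.1.1 ${NODE}/" /etc/hosts
  else                                                      # or append a new one
    echo "127.0.1.1 ${NODE}" | sudo tee -a /etc/hosts
  fi
fi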
	I0729 18:36:10.051453 1073226 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19312-1055011/.minikube CaCertPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19312-1055011/.minikube}
	I0729 18:36:10.051489 1073226 buildroot.go:174] setting up certificates
	I0729 18:36:10.051502 1073226 provision.go:84] configureAuth start
	I0729 18:36:10.051511 1073226 main.go:141] libmachine: (ha-344156-m03) Calling .GetMachineName
	I0729 18:36:10.051848 1073226 main.go:141] libmachine: (ha-344156-m03) Calling .GetIP
	I0729 18:36:10.054684 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | domain ha-344156-m03 has defined MAC address 52:54:00:49:5c:73 in network mk-ha-344156
	I0729 18:36:10.055016 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:5c:73", ip: ""} in network mk-ha-344156: {Iface:virbr1 ExpiryTime:2024-07-29 19:36:00 +0000 UTC Type:0 Mac:52:54:00:49:5c:73 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:ha-344156-m03 Clientid:01:52:54:00:49:5c:73}
	I0729 18:36:10.055054 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | domain ha-344156-m03 has defined IP address 192.168.39.148 and MAC address 52:54:00:49:5c:73 in network mk-ha-344156
	I0729 18:36:10.055217 1073226 main.go:141] libmachine: (ha-344156-m03) Calling .GetSSHHostname
	I0729 18:36:10.057137 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | domain ha-344156-m03 has defined MAC address 52:54:00:49:5c:73 in network mk-ha-344156
	I0729 18:36:10.057527 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:5c:73", ip: ""} in network mk-ha-344156: {Iface:virbr1 ExpiryTime:2024-07-29 19:36:00 +0000 UTC Type:0 Mac:52:54:00:49:5c:73 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:ha-344156-m03 Clientid:01:52:54:00:49:5c:73}
	I0729 18:36:10.057556 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | domain ha-344156-m03 has defined IP address 192.168.39.148 and MAC address 52:54:00:49:5c:73 in network mk-ha-344156
	I0729 18:36:10.057680 1073226 provision.go:143] copyHostCerts
	I0729 18:36:10.057707 1073226 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.pem
	I0729 18:36:10.057744 1073226 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.pem, removing ...
	I0729 18:36:10.057753 1073226 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.pem
	I0729 18:36:10.057813 1073226 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.pem (1082 bytes)
	I0729 18:36:10.057889 1073226 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19312-1055011/.minikube/cert.pem
	I0729 18:36:10.057906 1073226 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-1055011/.minikube/cert.pem, removing ...
	I0729 18:36:10.057913 1073226 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-1055011/.minikube/cert.pem
	I0729 18:36:10.057936 1073226 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19312-1055011/.minikube/cert.pem (1123 bytes)
	I0729 18:36:10.057983 1073226 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19312-1055011/.minikube/key.pem
	I0729 18:36:10.058002 1073226 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-1055011/.minikube/key.pem, removing ...
	I0729 18:36:10.058008 1073226 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-1055011/.minikube/key.pem
	I0729 18:36:10.058028 1073226 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19312-1055011/.minikube/key.pem (1679 bytes)
	I0729 18:36:10.058076 1073226 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca-key.pem org=jenkins.ha-344156-m03 san=[127.0.0.1 192.168.39.148 ha-344156-m03 localhost minikube]
	I0729 18:36:10.121659 1073226 provision.go:177] copyRemoteCerts
	I0729 18:36:10.121734 1073226 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 18:36:10.121766 1073226 main.go:141] libmachine: (ha-344156-m03) Calling .GetSSHHostname
	I0729 18:36:10.124568 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | domain ha-344156-m03 has defined MAC address 52:54:00:49:5c:73 in network mk-ha-344156
	I0729 18:36:10.124870 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:5c:73", ip: ""} in network mk-ha-344156: {Iface:virbr1 ExpiryTime:2024-07-29 19:36:00 +0000 UTC Type:0 Mac:52:54:00:49:5c:73 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:ha-344156-m03 Clientid:01:52:54:00:49:5c:73}
	I0729 18:36:10.124901 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | domain ha-344156-m03 has defined IP address 192.168.39.148 and MAC address 52:54:00:49:5c:73 in network mk-ha-344156
	I0729 18:36:10.125037 1073226 main.go:141] libmachine: (ha-344156-m03) Calling .GetSSHPort
	I0729 18:36:10.125238 1073226 main.go:141] libmachine: (ha-344156-m03) Calling .GetSSHKeyPath
	I0729 18:36:10.125421 1073226 main.go:141] libmachine: (ha-344156-m03) Calling .GetSSHUsername
	I0729 18:36:10.125560 1073226 sshutil.go:53] new ssh client: &{IP:192.168.39.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/ha-344156-m03/id_rsa Username:docker}
	I0729 18:36:10.206092 1073226 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0729 18:36:10.206175 1073226 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0729 18:36:10.230639 1073226 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0729 18:36:10.230705 1073226 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0729 18:36:10.255830 1073226 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0729 18:36:10.255913 1073226 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0729 18:36:10.278579 1073226 provision.go:87] duration metric: took 227.063106ms to configureAuth
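configureAuth generates a fresh server certificate for the node, signed by the machine CA and carrying the SANs listed above (127.0.0.1, 192.168.39.148, ha-344156-m03, localhost, minikube), then copies it with the CA cert into /etc/docker on the guest. minikube does this in Go; a purely illustrative openssl stand-in that would produce an equivalent CA-signed cert (file names follow the certs directory from the log, OpenSSL with -extfile support assumed) looks roughly like:

openssl req -new -newkey rsa:2048 -nodes -subj "/O=jenkins.ha-344156-m03" \
  -keyout server-key.pem -out server.csr
openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -days 365 \
  -out server.pem \
  -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.39.148,DNS:ha-344156-m03,DNS:localhost,DNS:minikube')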
	I0729 18:36:10.278610 1073226 buildroot.go:189] setting minikube options for container-runtime
	I0729 18:36:10.278843 1073226 config.go:182] Loaded profile config "ha-344156": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 18:36:10.278959 1073226 main.go:141] libmachine: (ha-344156-m03) Calling .GetSSHHostname
	I0729 18:36:10.281588 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | domain ha-344156-m03 has defined MAC address 52:54:00:49:5c:73 in network mk-ha-344156
	I0729 18:36:10.281999 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:5c:73", ip: ""} in network mk-ha-344156: {Iface:virbr1 ExpiryTime:2024-07-29 19:36:00 +0000 UTC Type:0 Mac:52:54:00:49:5c:73 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:ha-344156-m03 Clientid:01:52:54:00:49:5c:73}
	I0729 18:36:10.282031 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | domain ha-344156-m03 has defined IP address 192.168.39.148 and MAC address 52:54:00:49:5c:73 in network mk-ha-344156
	I0729 18:36:10.282252 1073226 main.go:141] libmachine: (ha-344156-m03) Calling .GetSSHPort
	I0729 18:36:10.282454 1073226 main.go:141] libmachine: (ha-344156-m03) Calling .GetSSHKeyPath
	I0729 18:36:10.282599 1073226 main.go:141] libmachine: (ha-344156-m03) Calling .GetSSHKeyPath
	I0729 18:36:10.282721 1073226 main.go:141] libmachine: (ha-344156-m03) Calling .GetSSHUsername
	I0729 18:36:10.282898 1073226 main.go:141] libmachine: Using SSH client type: native
	I0729 18:36:10.283078 1073226 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.148 22 <nil> <nil>}
	I0729 18:36:10.283093 1073226 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 18:36:10.565817 1073226 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 18:36:10.565847 1073226 main.go:141] libmachine: Checking connection to Docker...
	I0729 18:36:10.565860 1073226 main.go:141] libmachine: (ha-344156-m03) Calling .GetURL
	I0729 18:36:10.567278 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | Using libvirt version 6000000
	I0729 18:36:10.569461 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | domain ha-344156-m03 has defined MAC address 52:54:00:49:5c:73 in network mk-ha-344156
	I0729 18:36:10.569803 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:5c:73", ip: ""} in network mk-ha-344156: {Iface:virbr1 ExpiryTime:2024-07-29 19:36:00 +0000 UTC Type:0 Mac:52:54:00:49:5c:73 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:ha-344156-m03 Clientid:01:52:54:00:49:5c:73}
	I0729 18:36:10.569828 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | domain ha-344156-m03 has defined IP address 192.168.39.148 and MAC address 52:54:00:49:5c:73 in network mk-ha-344156
	I0729 18:36:10.569973 1073226 main.go:141] libmachine: Docker is up and running!
	I0729 18:36:10.569988 1073226 main.go:141] libmachine: Reticulating splines...
	I0729 18:36:10.570002 1073226 client.go:171] duration metric: took 24.892435886s to LocalClient.Create
	I0729 18:36:10.570028 1073226 start.go:167] duration metric: took 24.89250719s to libmachine.API.Create "ha-344156"
	I0729 18:36:10.570039 1073226 start.go:293] postStartSetup for "ha-344156-m03" (driver="kvm2")
	I0729 18:36:10.570048 1073226 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 18:36:10.570062 1073226 main.go:141] libmachine: (ha-344156-m03) Calling .DriverName
	I0729 18:36:10.570303 1073226 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 18:36:10.570338 1073226 main.go:141] libmachine: (ha-344156-m03) Calling .GetSSHHostname
	I0729 18:36:10.572305 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | domain ha-344156-m03 has defined MAC address 52:54:00:49:5c:73 in network mk-ha-344156
	I0729 18:36:10.572601 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:5c:73", ip: ""} in network mk-ha-344156: {Iface:virbr1 ExpiryTime:2024-07-29 19:36:00 +0000 UTC Type:0 Mac:52:54:00:49:5c:73 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:ha-344156-m03 Clientid:01:52:54:00:49:5c:73}
	I0729 18:36:10.572628 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | domain ha-344156-m03 has defined IP address 192.168.39.148 and MAC address 52:54:00:49:5c:73 in network mk-ha-344156
	I0729 18:36:10.572765 1073226 main.go:141] libmachine: (ha-344156-m03) Calling .GetSSHPort
	I0729 18:36:10.572954 1073226 main.go:141] libmachine: (ha-344156-m03) Calling .GetSSHKeyPath
	I0729 18:36:10.573107 1073226 main.go:141] libmachine: (ha-344156-m03) Calling .GetSSHUsername
	I0729 18:36:10.573249 1073226 sshutil.go:53] new ssh client: &{IP:192.168.39.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/ha-344156-m03/id_rsa Username:docker}
	I0729 18:36:10.654173 1073226 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 18:36:10.658770 1073226 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 18:36:10.658797 1073226 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-1055011/.minikube/addons for local assets ...
	I0729 18:36:10.658889 1073226 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-1055011/.minikube/files for local assets ...
	I0729 18:36:10.658983 1073226 filesync.go:149] local asset: /home/jenkins/minikube-integration/19312-1055011/.minikube/files/etc/ssl/certs/10622722.pem -> 10622722.pem in /etc/ssl/certs
	I0729 18:36:10.658999 1073226 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1055011/.minikube/files/etc/ssl/certs/10622722.pem -> /etc/ssl/certs/10622722.pem
	I0729 18:36:10.659116 1073226 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 18:36:10.668794 1073226 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/files/etc/ssl/certs/10622722.pem --> /etc/ssl/certs/10622722.pem (1708 bytes)
	I0729 18:36:10.693362 1073226 start.go:296] duration metric: took 123.306572ms for postStartSetup
	I0729 18:36:10.693429 1073226 main.go:141] libmachine: (ha-344156-m03) Calling .GetConfigRaw
	I0729 18:36:10.694016 1073226 main.go:141] libmachine: (ha-344156-m03) Calling .GetIP
	I0729 18:36:10.696549 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | domain ha-344156-m03 has defined MAC address 52:54:00:49:5c:73 in network mk-ha-344156
	I0729 18:36:10.696902 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:5c:73", ip: ""} in network mk-ha-344156: {Iface:virbr1 ExpiryTime:2024-07-29 19:36:00 +0000 UTC Type:0 Mac:52:54:00:49:5c:73 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:ha-344156-m03 Clientid:01:52:54:00:49:5c:73}
	I0729 18:36:10.696930 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | domain ha-344156-m03 has defined IP address 192.168.39.148 and MAC address 52:54:00:49:5c:73 in network mk-ha-344156
	I0729 18:36:10.697319 1073226 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156/config.json ...
	I0729 18:36:10.697577 1073226 start.go:128] duration metric: took 25.038311393s to createHost
	I0729 18:36:10.697610 1073226 main.go:141] libmachine: (ha-344156-m03) Calling .GetSSHHostname
	I0729 18:36:10.700158 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | domain ha-344156-m03 has defined MAC address 52:54:00:49:5c:73 in network mk-ha-344156
	I0729 18:36:10.700583 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:5c:73", ip: ""} in network mk-ha-344156: {Iface:virbr1 ExpiryTime:2024-07-29 19:36:00 +0000 UTC Type:0 Mac:52:54:00:49:5c:73 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:ha-344156-m03 Clientid:01:52:54:00:49:5c:73}
	I0729 18:36:10.700619 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | domain ha-344156-m03 has defined IP address 192.168.39.148 and MAC address 52:54:00:49:5c:73 in network mk-ha-344156
	I0729 18:36:10.700744 1073226 main.go:141] libmachine: (ha-344156-m03) Calling .GetSSHPort
	I0729 18:36:10.700911 1073226 main.go:141] libmachine: (ha-344156-m03) Calling .GetSSHKeyPath
	I0729 18:36:10.701081 1073226 main.go:141] libmachine: (ha-344156-m03) Calling .GetSSHKeyPath
	I0729 18:36:10.701185 1073226 main.go:141] libmachine: (ha-344156-m03) Calling .GetSSHUsername
	I0729 18:36:10.701326 1073226 main.go:141] libmachine: Using SSH client type: native
	I0729 18:36:10.701553 1073226 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.148 22 <nil> <nil>}
	I0729 18:36:10.701569 1073226 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0729 18:36:10.804004 1073226 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722278170.780732198
	
	I0729 18:36:10.804029 1073226 fix.go:216] guest clock: 1722278170.780732198
	I0729 18:36:10.804037 1073226 fix.go:229] Guest: 2024-07-29 18:36:10.780732198 +0000 UTC Remote: 2024-07-29 18:36:10.69759403 +0000 UTC m=+145.775343277 (delta=83.138168ms)
	I0729 18:36:10.804055 1073226 fix.go:200] guest clock delta is within tolerance: 83.138168ms
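The clock check above asks the guest for its wall clock with `date +%s.%N` (the %!s(MISSING) rendering appears to be a logging artifact of the literal % signs in the command) and compares it with the host's; the 83 ms delta is well within tolerance. A sketch of that comparison, reusing the machine key and address from the log:

host_now=$(date +%s.%N)                  # host wall clock, seconds.nanoseconds
guest_now=$(ssh -i /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/ha-344156-m03/id_rsa \
            docker@192.168.39.148 date +%s.%N)
echo "guest-host delta: $(echo "$guest_now - $host_now" | bc) s"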
	I0729 18:36:10.804060 1073226 start.go:83] releasing machines lock for "ha-344156-m03", held for 25.144909226s
	I0729 18:36:10.804081 1073226 main.go:141] libmachine: (ha-344156-m03) Calling .DriverName
	I0729 18:36:10.804326 1073226 main.go:141] libmachine: (ha-344156-m03) Calling .GetIP
	I0729 18:36:10.806889 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | domain ha-344156-m03 has defined MAC address 52:54:00:49:5c:73 in network mk-ha-344156
	I0729 18:36:10.807208 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:5c:73", ip: ""} in network mk-ha-344156: {Iface:virbr1 ExpiryTime:2024-07-29 19:36:00 +0000 UTC Type:0 Mac:52:54:00:49:5c:73 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:ha-344156-m03 Clientid:01:52:54:00:49:5c:73}
	I0729 18:36:10.807251 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | domain ha-344156-m03 has defined IP address 192.168.39.148 and MAC address 52:54:00:49:5c:73 in network mk-ha-344156
	I0729 18:36:10.809192 1073226 out.go:177] * Found network options:
	I0729 18:36:10.810285 1073226 out.go:177]   - NO_PROXY=192.168.39.225,192.168.39.249
	W0729 18:36:10.811261 1073226 proxy.go:119] fail to check proxy env: Error ip not in block
	W0729 18:36:10.811290 1073226 proxy.go:119] fail to check proxy env: Error ip not in block
	I0729 18:36:10.811309 1073226 main.go:141] libmachine: (ha-344156-m03) Calling .DriverName
	I0729 18:36:10.811934 1073226 main.go:141] libmachine: (ha-344156-m03) Calling .DriverName
	I0729 18:36:10.812130 1073226 main.go:141] libmachine: (ha-344156-m03) Calling .DriverName
	I0729 18:36:10.812232 1073226 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 18:36:10.812293 1073226 main.go:141] libmachine: (ha-344156-m03) Calling .GetSSHHostname
	W0729 18:36:10.812384 1073226 proxy.go:119] fail to check proxy env: Error ip not in block
	W0729 18:36:10.812413 1073226 proxy.go:119] fail to check proxy env: Error ip not in block
	I0729 18:36:10.812491 1073226 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 18:36:10.812516 1073226 main.go:141] libmachine: (ha-344156-m03) Calling .GetSSHHostname
	I0729 18:36:10.815303 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | domain ha-344156-m03 has defined MAC address 52:54:00:49:5c:73 in network mk-ha-344156
	I0729 18:36:10.815554 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | domain ha-344156-m03 has defined MAC address 52:54:00:49:5c:73 in network mk-ha-344156
	I0729 18:36:10.815791 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:5c:73", ip: ""} in network mk-ha-344156: {Iface:virbr1 ExpiryTime:2024-07-29 19:36:00 +0000 UTC Type:0 Mac:52:54:00:49:5c:73 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:ha-344156-m03 Clientid:01:52:54:00:49:5c:73}
	I0729 18:36:10.815816 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | domain ha-344156-m03 has defined IP address 192.168.39.148 and MAC address 52:54:00:49:5c:73 in network mk-ha-344156
	I0729 18:36:10.815982 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:5c:73", ip: ""} in network mk-ha-344156: {Iface:virbr1 ExpiryTime:2024-07-29 19:36:00 +0000 UTC Type:0 Mac:52:54:00:49:5c:73 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:ha-344156-m03 Clientid:01:52:54:00:49:5c:73}
	I0729 18:36:10.815989 1073226 main.go:141] libmachine: (ha-344156-m03) Calling .GetSSHPort
	I0729 18:36:10.816009 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | domain ha-344156-m03 has defined IP address 192.168.39.148 and MAC address 52:54:00:49:5c:73 in network mk-ha-344156
	I0729 18:36:10.816174 1073226 main.go:141] libmachine: (ha-344156-m03) Calling .GetSSHKeyPath
	I0729 18:36:10.816278 1073226 main.go:141] libmachine: (ha-344156-m03) Calling .GetSSHPort
	I0729 18:36:10.816419 1073226 main.go:141] libmachine: (ha-344156-m03) Calling .GetSSHKeyPath
	I0729 18:36:10.816427 1073226 main.go:141] libmachine: (ha-344156-m03) Calling .GetSSHUsername
	I0729 18:36:10.816657 1073226 main.go:141] libmachine: (ha-344156-m03) Calling .GetSSHUsername
	I0729 18:36:10.816670 1073226 sshutil.go:53] new ssh client: &{IP:192.168.39.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/ha-344156-m03/id_rsa Username:docker}
	I0729 18:36:10.816823 1073226 sshutil.go:53] new ssh client: &{IP:192.168.39.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/ha-344156-m03/id_rsa Username:docker}
	I0729 18:36:11.047712 1073226 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 18:36:11.054416 1073226 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 18:36:11.054489 1073226 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 18:36:11.070311 1073226 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
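Because the cluster brings its own CNI, any pre-installed bridge/podman CNI configs are renamed out of the way (suffix .mk_disabled) so CRI-O will not load them; a loopback config, had one existed, would have been left alone. A commented, slightly simplified equivalent of the find invocation above:

sudo find /etc/cni/net.d -maxdepth 1 -type f \
  \( \( -name '*bridge*' -o -name '*podman*' \) -a ! -name '*.mk_disabled' \) \
  -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;   # rename each matching config in place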
	I0729 18:36:11.070334 1073226 start.go:495] detecting cgroup driver to use...
	I0729 18:36:11.070392 1073226 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 18:36:11.086440 1073226 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 18:36:11.100405 1073226 docker.go:217] disabling cri-docker service (if available) ...
	I0729 18:36:11.100463 1073226 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 18:36:11.114617 1073226 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 18:36:11.128823 1073226 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 18:36:11.254976 1073226 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 18:36:11.394761 1073226 docker.go:233] disabling docker service ...
	I0729 18:36:11.394843 1073226 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 18:36:11.410240 1073226 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 18:36:11.423477 1073226 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 18:36:11.575383 1073226 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 18:36:11.698095 1073226 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 18:36:11.712681 1073226 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 18:36:11.734684 1073226 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0729 18:36:11.734746 1073226 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:36:11.746693 1073226 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 18:36:11.746769 1073226 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:36:11.759354 1073226 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:36:11.770916 1073226 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:36:11.782464 1073226 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 18:36:11.794360 1073226 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:36:11.805862 1073226 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:36:11.824497 1073226 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:36:11.835395 1073226 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 18:36:11.847483 1073226 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 18:36:11.847553 1073226 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 18:36:11.863665 1073226 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 18:36:11.875512 1073226 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 18:36:12.012691 1073226 ssh_runner.go:195] Run: sudo systemctl restart crio
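Taken together, the sed edits above pin the pause image, switch CRI-O to the cgroupfs cgroup manager, move conmon into the pod cgroup and let pods bind low ports unprivileged; br_netfilter, IP forwarding and the CRI-O restart then make the node ready for kubeadm. A sketch of the end state those edits converge on (an approximation of the resulting settings, not the literal drop-in file):

sudo tee /etc/crio/crio.conf.d/02-crio.conf >/dev/null <<'EOF'
[crio.image]
pause_image = "registry.k8s.io/pause:3.9"

[crio.runtime]
cgroup_manager = "cgroupfs"
conmon_cgroup = "pod"
default_sysctls = [
  "net.ipv4.ip_unprivileged_port_start=0",
]
EOF
sudo modprobe br_netfilter                           # provides bridge-nf-call-iptables
sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'  # kube-proxy needs forwarding enabled
sudo systemctl restart crio                          # pick the new configuration up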
	I0729 18:36:12.151992 1073226 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 18:36:12.152061 1073226 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 18:36:12.157551 1073226 start.go:563] Will wait 60s for crictl version
	I0729 18:36:12.157617 1073226 ssh_runner.go:195] Run: which crictl
	I0729 18:36:12.161416 1073226 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 18:36:12.208108 1073226 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
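The version probe above goes through the CRI socket that the earlier crictl.yaml step pointed at /var/run/crio/crio.sock; it can be reproduced on the node with:

sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version   # CRI-level check of the runtime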
	I0729 18:36:12.208196 1073226 ssh_runner.go:195] Run: crio --version
	I0729 18:36:12.240111 1073226 ssh_runner.go:195] Run: crio --version
	I0729 18:36:12.273439 1073226 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0729 18:36:12.274581 1073226 out.go:177]   - env NO_PROXY=192.168.39.225
	I0729 18:36:12.275678 1073226 out.go:177]   - env NO_PROXY=192.168.39.225,192.168.39.249
	I0729 18:36:12.276772 1073226 main.go:141] libmachine: (ha-344156-m03) Calling .GetIP
	I0729 18:36:12.279346 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | domain ha-344156-m03 has defined MAC address 52:54:00:49:5c:73 in network mk-ha-344156
	I0729 18:36:12.279694 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:5c:73", ip: ""} in network mk-ha-344156: {Iface:virbr1 ExpiryTime:2024-07-29 19:36:00 +0000 UTC Type:0 Mac:52:54:00:49:5c:73 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:ha-344156-m03 Clientid:01:52:54:00:49:5c:73}
	I0729 18:36:12.279727 1073226 main.go:141] libmachine: (ha-344156-m03) DBG | domain ha-344156-m03 has defined IP address 192.168.39.148 and MAC address 52:54:00:49:5c:73 in network mk-ha-344156
	I0729 18:36:12.279895 1073226 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0729 18:36:12.284123 1073226 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
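The one-liner above makes host.minikube.internal resolve to the host side of the libvirt network (192.168.39.1 on this network) from inside the guest. Spelled out with comments:

{ grep -v $'\thost.minikube.internal$' /etc/hosts       # drop any stale entry
  echo $'192.168.39.1\thost.minikube.internal'          # add the current gateway address
} > /tmp/hosts.$$ && sudo cp /tmp/hosts.$$ /etc/hosts   # replace the file in a single copy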
	I0729 18:36:12.296562 1073226 mustload.go:65] Loading cluster: ha-344156
	I0729 18:36:12.296800 1073226 config.go:182] Loaded profile config "ha-344156": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 18:36:12.297053 1073226 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:36:12.297092 1073226 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:36:12.312266 1073226 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43427
	I0729 18:36:12.312652 1073226 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:36:12.313117 1073226 main.go:141] libmachine: Using API Version  1
	I0729 18:36:12.313140 1073226 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:36:12.313430 1073226 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:36:12.313633 1073226 main.go:141] libmachine: (ha-344156) Calling .GetState
	I0729 18:36:12.315177 1073226 host.go:66] Checking if "ha-344156" exists ...
	I0729 18:36:12.315475 1073226 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:36:12.315518 1073226 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:36:12.329524 1073226 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39017
	I0729 18:36:12.329994 1073226 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:36:12.330435 1073226 main.go:141] libmachine: Using API Version  1
	I0729 18:36:12.330458 1073226 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:36:12.330724 1073226 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:36:12.330898 1073226 main.go:141] libmachine: (ha-344156) Calling .DriverName
	I0729 18:36:12.331048 1073226 certs.go:68] Setting up /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156 for IP: 192.168.39.148
	I0729 18:36:12.331059 1073226 certs.go:194] generating shared ca certs ...
	I0729 18:36:12.331080 1073226 certs.go:226] acquiring lock for ca certs: {Name:mkd1f0b3d7e82ac23e713dd6b75409e103935b02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 18:36:12.331224 1073226 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.key
	I0729 18:36:12.331281 1073226 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/proxy-client-ca.key
	I0729 18:36:12.331296 1073226 certs.go:256] generating profile certs ...
	I0729 18:36:12.331393 1073226 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156/client.key
	I0729 18:36:12.331425 1073226 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156/apiserver.key.b23b6418
	I0729 18:36:12.331447 1073226 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156/apiserver.crt.b23b6418 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.225 192.168.39.249 192.168.39.148 192.168.39.254]
	I0729 18:36:12.502377 1073226 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156/apiserver.crt.b23b6418 ...
	I0729 18:36:12.502414 1073226 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156/apiserver.crt.b23b6418: {Name:mkf64b75a70f03795bfd6d7a96d4523858ab030a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 18:36:12.502635 1073226 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156/apiserver.key.b23b6418 ...
	I0729 18:36:12.502654 1073226 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156/apiserver.key.b23b6418: {Name:mk3458c01cde65378f904989ec6841bd16a376ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 18:36:12.502768 1073226 certs.go:381] copying /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156/apiserver.crt.b23b6418 -> /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156/apiserver.crt
	I0729 18:36:12.502980 1073226 certs.go:385] copying /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156/apiserver.key.b23b6418 -> /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156/apiserver.key
	I0729 18:36:12.503199 1073226 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156/proxy-client.key
	I0729 18:36:12.503220 1073226 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0729 18:36:12.503248 1073226 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0729 18:36:12.503275 1073226 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1055011/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0729 18:36:12.503296 1073226 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1055011/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0729 18:36:12.503316 1073226 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0729 18:36:12.503341 1073226 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0729 18:36:12.503362 1073226 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0729 18:36:12.503386 1073226 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0729 18:36:12.503470 1073226 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/1062272.pem (1338 bytes)
	W0729 18:36:12.503516 1073226 certs.go:480] ignoring /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/1062272_empty.pem, impossibly tiny 0 bytes
	I0729 18:36:12.503532 1073226 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 18:36:12.503573 1073226 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem (1082 bytes)
	I0729 18:36:12.503611 1073226 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/cert.pem (1123 bytes)
	I0729 18:36:12.503647 1073226 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/key.pem (1679 bytes)
	I0729 18:36:12.503710 1073226 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/files/etc/ssl/certs/10622722.pem (1708 bytes)
	I0729 18:36:12.503754 1073226 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0729 18:36:12.503777 1073226 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/1062272.pem -> /usr/share/ca-certificates/1062272.pem
	I0729 18:36:12.503799 1073226 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1055011/.minikube/files/etc/ssl/certs/10622722.pem -> /usr/share/ca-certificates/10622722.pem
	I0729 18:36:12.503846 1073226 main.go:141] libmachine: (ha-344156) Calling .GetSSHHostname
	I0729 18:36:12.506959 1073226 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
	I0729 18:36:12.507450 1073226 main.go:141] libmachine: (ha-344156) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:fc:98", ip: ""} in network mk-ha-344156: {Iface:virbr1 ExpiryTime:2024-07-29 19:33:59 +0000 UTC Type:0 Mac:52:54:00:a1:fc:98 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-344156 Clientid:01:52:54:00:a1:fc:98}
	I0729 18:36:12.507476 1073226 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined IP address 192.168.39.225 and MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
	I0729 18:36:12.507686 1073226 main.go:141] libmachine: (ha-344156) Calling .GetSSHPort
	I0729 18:36:12.507911 1073226 main.go:141] libmachine: (ha-344156) Calling .GetSSHKeyPath
	I0729 18:36:12.508121 1073226 main.go:141] libmachine: (ha-344156) Calling .GetSSHUsername
	I0729 18:36:12.508282 1073226 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/ha-344156/id_rsa Username:docker}
	I0729 18:36:12.587185 1073226 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0729 18:36:12.593237 1073226 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0729 18:36:12.604403 1073226 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0729 18:36:12.608634 1073226 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0729 18:36:12.618678 1073226 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0729 18:36:12.622753 1073226 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0729 18:36:12.632519 1073226 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0729 18:36:12.637031 1073226 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0729 18:36:12.647215 1073226 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0729 18:36:12.651997 1073226 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0729 18:36:12.662738 1073226 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0729 18:36:12.667087 1073226 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0729 18:36:12.677919 1073226 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 18:36:12.703036 1073226 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0729 18:36:12.728336 1073226 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 18:36:12.753542 1073226 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0729 18:36:12.778132 1073226 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0729 18:36:12.801928 1073226 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0729 18:36:12.828678 1073226 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 18:36:12.852387 1073226 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0729 18:36:12.877138 1073226 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 18:36:12.900044 1073226 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/1062272.pem --> /usr/share/ca-certificates/1062272.pem (1338 bytes)
	I0729 18:36:12.924662 1073226 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/files/etc/ssl/certs/10622722.pem --> /usr/share/ca-certificates/10622722.pem (1708 bytes)
	I0729 18:36:12.948505 1073226 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0729 18:36:12.967205 1073226 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0729 18:36:12.984557 1073226 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0729 18:36:13.003990 1073226 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0729 18:36:13.020955 1073226 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0729 18:36:13.036776 1073226 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0729 18:36:13.052677 1073226 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0729 18:36:13.068319 1073226 ssh_runner.go:195] Run: openssl version
	I0729 18:36:13.073864 1073226 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 18:36:13.083887 1073226 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 18:36:13.088395 1073226 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 18:17 /usr/share/ca-certificates/minikubeCA.pem
	I0729 18:36:13.088444 1073226 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 18:36:13.094253 1073226 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 18:36:13.104695 1073226 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1062272.pem && ln -fs /usr/share/ca-certificates/1062272.pem /etc/ssl/certs/1062272.pem"
	I0729 18:36:13.114896 1073226 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1062272.pem
	I0729 18:36:13.119342 1073226 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 18:30 /usr/share/ca-certificates/1062272.pem
	I0729 18:36:13.119381 1073226 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1062272.pem
	I0729 18:36:13.124857 1073226 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1062272.pem /etc/ssl/certs/51391683.0"
	I0729 18:36:13.135324 1073226 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10622722.pem && ln -fs /usr/share/ca-certificates/10622722.pem /etc/ssl/certs/10622722.pem"
	I0729 18:36:13.145980 1073226 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10622722.pem
	I0729 18:36:13.150321 1073226 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 18:30 /usr/share/ca-certificates/10622722.pem
	I0729 18:36:13.150366 1073226 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10622722.pem
	I0729 18:36:13.155994 1073226 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10622722.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 18:36:13.166865 1073226 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 18:36:13.170725 1073226 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0729 18:36:13.170773 1073226 kubeadm.go:934] updating node {m03 192.168.39.148 8443 v1.30.3 crio true true} ...
	I0729 18:36:13.170893 1073226 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-344156-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.148
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-344156 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 18:36:13.170928 1073226 kube-vip.go:115] generating kube-vip config ...
	I0729 18:36:13.170960 1073226 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0729 18:36:13.185436 1073226 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0729 18:36:13.185502 1073226 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0729 18:36:13.185557 1073226 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0729 18:36:13.195349 1073226 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.3': No such file or directory
	
	Initiating transfer...
	I0729 18:36:13.195391 1073226 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.3
	I0729 18:36:13.205698 1073226 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl.sha256
	I0729 18:36:13.205710 1073226 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet.sha256
	I0729 18:36:13.205723 1073226 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/linux/amd64/v1.30.3/kubectl -> /var/lib/minikube/binaries/v1.30.3/kubectl
	I0729 18:36:13.205747 1073226 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm.sha256
	I0729 18:36:13.205774 1073226 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/linux/amd64/v1.30.3/kubeadm -> /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0729 18:36:13.205791 1073226 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubectl
	I0729 18:36:13.205753 1073226 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 18:36:13.205850 1073226 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0729 18:36:13.213576 1073226 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubeadm': No such file or directory
	I0729 18:36:13.213601 1073226 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/linux/amd64/v1.30.3/kubeadm --> /var/lib/minikube/binaries/v1.30.3/kubeadm (50249880 bytes)
	I0729 18:36:13.244698 1073226 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/linux/amd64/v1.30.3/kubelet -> /var/lib/minikube/binaries/v1.30.3/kubelet
	I0729 18:36:13.244711 1073226 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubectl': No such file or directory
	I0729 18:36:13.244737 1073226 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/linux/amd64/v1.30.3/kubectl --> /var/lib/minikube/binaries/v1.30.3/kubectl (51454104 bytes)
	I0729 18:36:13.244820 1073226 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubelet
	I0729 18:36:13.307051 1073226 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubelet': No such file or directory
	I0729 18:36:13.307095 1073226 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/linux/amd64/v1.30.3/kubelet --> /var/lib/minikube/binaries/v1.30.3/kubelet (100125080 bytes)
	I0729 18:36:14.109979 1073226 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0729 18:36:14.119412 1073226 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0729 18:36:14.135869 1073226 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 18:36:14.152492 1073226 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0729 18:36:14.169680 1073226 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0729 18:36:14.173965 1073226 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 18:36:14.186500 1073226 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 18:36:14.321621 1073226 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 18:36:14.339993 1073226 host.go:66] Checking if "ha-344156" exists ...
	I0729 18:36:14.340454 1073226 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:36:14.340500 1073226 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:36:14.358705 1073226 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46797
	I0729 18:36:14.359207 1073226 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:36:14.359749 1073226 main.go:141] libmachine: Using API Version  1
	I0729 18:36:14.359773 1073226 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:36:14.360063 1073226 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:36:14.360273 1073226 main.go:141] libmachine: (ha-344156) Calling .DriverName
	I0729 18:36:14.360441 1073226 start.go:317] joinCluster: &{Name:ha-344156 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-344156 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.225 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.249 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.148 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 18:36:14.360567 1073226 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0729 18:36:14.360593 1073226 main.go:141] libmachine: (ha-344156) Calling .GetSSHHostname
	I0729 18:36:14.363197 1073226 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
	I0729 18:36:14.363585 1073226 main.go:141] libmachine: (ha-344156) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:fc:98", ip: ""} in network mk-ha-344156: {Iface:virbr1 ExpiryTime:2024-07-29 19:33:59 +0000 UTC Type:0 Mac:52:54:00:a1:fc:98 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-344156 Clientid:01:52:54:00:a1:fc:98}
	I0729 18:36:14.363617 1073226 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined IP address 192.168.39.225 and MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
	I0729 18:36:14.363746 1073226 main.go:141] libmachine: (ha-344156) Calling .GetSSHPort
	I0729 18:36:14.363924 1073226 main.go:141] libmachine: (ha-344156) Calling .GetSSHKeyPath
	I0729 18:36:14.364081 1073226 main.go:141] libmachine: (ha-344156) Calling .GetSSHUsername
	I0729 18:36:14.364235 1073226 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/ha-344156/id_rsa Username:docker}
	I0729 18:36:14.527295 1073226 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.148 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 18:36:14.527373 1073226 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 9asq8d.g7xnumn0cs26swoe --discovery-token-ca-cert-hash sha256:53e118302d1159bf0acd8cfe005edb508199276288ff17d6630ff7f7307f10b1 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-344156-m03 --control-plane --apiserver-advertise-address=192.168.39.148 --apiserver-bind-port=8443"
	I0729 18:36:39.774964 1073226 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 9asq8d.g7xnumn0cs26swoe --discovery-token-ca-cert-hash sha256:53e118302d1159bf0acd8cfe005edb508199276288ff17d6630ff7f7307f10b1 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-344156-m03 --control-plane --apiserver-advertise-address=192.168.39.148 --apiserver-bind-port=8443": (25.24755456s)
	I0729 18:36:39.775010 1073226 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0729 18:36:40.493199 1073226 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-344156-m03 minikube.k8s.io/updated_at=2024_07_29T18_36_40_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=ff26f50543a99741e8a988a34136ffea2b41dbf0 minikube.k8s.io/name=ha-344156 minikube.k8s.io/primary=false
	I0729 18:36:40.617592 1073226 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-344156-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0729 18:36:40.723210 1073226 start.go:319] duration metric: took 26.362761282s to joinCluster
	I0729 18:36:40.723310 1073226 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.148 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 18:36:40.723661 1073226 config.go:182] Loaded profile config "ha-344156": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 18:36:40.724574 1073226 out.go:177] * Verifying Kubernetes components...
	I0729 18:36:40.725585 1073226 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 18:36:40.998715 1073226 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 18:36:41.022884 1073226 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19312-1055011/kubeconfig
	I0729 18:36:41.023279 1073226 kapi.go:59] client config for ha-344156: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156/client.crt", KeyFile:"/home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156/client.key", CAFile:"/home/jenkins/minikube-integration/19312-1055011/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d03460), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0729 18:36:41.023393 1073226 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.225:8443
	I0729 18:36:41.023674 1073226 node_ready.go:35] waiting up to 6m0s for node "ha-344156-m03" to be "Ready" ...
	I0729 18:36:41.023775 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/nodes/ha-344156-m03
	I0729 18:36:41.023787 1073226 round_trippers.go:469] Request Headers:
	I0729 18:36:41.023798 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:36:41.023807 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:36:41.028502 1073226 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 18:36:41.524721 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/nodes/ha-344156-m03
	I0729 18:36:41.524747 1073226 round_trippers.go:469] Request Headers:
	I0729 18:36:41.524758 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:36:41.524763 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:36:41.527803 1073226 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 18:36:42.024049 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/nodes/ha-344156-m03
	I0729 18:36:42.024080 1073226 round_trippers.go:469] Request Headers:
	I0729 18:36:42.024093 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:36:42.024098 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:36:42.032252 1073226 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0729 18:36:42.524126 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/nodes/ha-344156-m03
	I0729 18:36:42.524149 1073226 round_trippers.go:469] Request Headers:
	I0729 18:36:42.524160 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:36:42.524166 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:36:42.527284 1073226 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 18:36:43.024296 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/nodes/ha-344156-m03
	I0729 18:36:43.024319 1073226 round_trippers.go:469] Request Headers:
	I0729 18:36:43.024328 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:36:43.024332 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:36:43.028052 1073226 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 18:36:43.028658 1073226 node_ready.go:53] node "ha-344156-m03" has status "Ready":"False"
	I0729 18:36:43.524150 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/nodes/ha-344156-m03
	I0729 18:36:43.524179 1073226 round_trippers.go:469] Request Headers:
	I0729 18:36:43.524191 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:36:43.524197 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:36:43.528017 1073226 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 18:36:44.023860 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/nodes/ha-344156-m03
	I0729 18:36:44.023882 1073226 round_trippers.go:469] Request Headers:
	I0729 18:36:44.023891 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:36:44.023895 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:36:44.026868 1073226 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 18:36:44.524198 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/nodes/ha-344156-m03
	I0729 18:36:44.524226 1073226 round_trippers.go:469] Request Headers:
	I0729 18:36:44.524236 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:36:44.524242 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:36:44.527518 1073226 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 18:36:45.024852 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/nodes/ha-344156-m03
	I0729 18:36:45.024873 1073226 round_trippers.go:469] Request Headers:
	I0729 18:36:45.024882 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:36:45.024885 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:36:45.028084 1073226 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 18:36:45.524279 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/nodes/ha-344156-m03
	I0729 18:36:45.524302 1073226 round_trippers.go:469] Request Headers:
	I0729 18:36:45.524310 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:36:45.524314 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:36:45.528299 1073226 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 18:36:45.528935 1073226 node_ready.go:53] node "ha-344156-m03" has status "Ready":"False"
	I0729 18:36:46.024817 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/nodes/ha-344156-m03
	I0729 18:36:46.024839 1073226 round_trippers.go:469] Request Headers:
	I0729 18:36:46.024847 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:36:46.024852 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:36:46.027741 1073226 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 18:36:46.524767 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/nodes/ha-344156-m03
	I0729 18:36:46.524790 1073226 round_trippers.go:469] Request Headers:
	I0729 18:36:46.524798 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:36:46.524802 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:36:46.528346 1073226 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 18:36:47.023858 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/nodes/ha-344156-m03
	I0729 18:36:47.023879 1073226 round_trippers.go:469] Request Headers:
	I0729 18:36:47.023887 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:36:47.023891 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:36:47.027151 1073226 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 18:36:47.524914 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/nodes/ha-344156-m03
	I0729 18:36:47.524940 1073226 round_trippers.go:469] Request Headers:
	I0729 18:36:47.524950 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:36:47.524954 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:36:47.528915 1073226 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 18:36:47.530213 1073226 node_ready.go:53] node "ha-344156-m03" has status "Ready":"False"
	I0729 18:36:48.024626 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/nodes/ha-344156-m03
	I0729 18:36:48.024649 1073226 round_trippers.go:469] Request Headers:
	I0729 18:36:48.024658 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:36:48.024661 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:36:48.027707 1073226 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 18:36:48.524875 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/nodes/ha-344156-m03
	I0729 18:36:48.524899 1073226 round_trippers.go:469] Request Headers:
	I0729 18:36:48.524911 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:36:48.524917 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:36:48.528718 1073226 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 18:36:49.024685 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/nodes/ha-344156-m03
	I0729 18:36:49.024709 1073226 round_trippers.go:469] Request Headers:
	I0729 18:36:49.024717 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:36:49.024721 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:36:49.028074 1073226 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 18:36:49.524342 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/nodes/ha-344156-m03
	I0729 18:36:49.524367 1073226 round_trippers.go:469] Request Headers:
	I0729 18:36:49.524376 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:36:49.524379 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:36:49.528239 1073226 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 18:36:50.023936 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/nodes/ha-344156-m03
	I0729 18:36:50.023975 1073226 round_trippers.go:469] Request Headers:
	I0729 18:36:50.023984 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:36:50.023989 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:36:50.027451 1073226 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 18:36:50.028057 1073226 node_ready.go:53] node "ha-344156-m03" has status "Ready":"False"
	I0729 18:36:50.524667 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/nodes/ha-344156-m03
	I0729 18:36:50.524691 1073226 round_trippers.go:469] Request Headers:
	I0729 18:36:50.524700 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:36:50.524705 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:36:50.527868 1073226 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 18:36:51.024138 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/nodes/ha-344156-m03
	I0729 18:36:51.024162 1073226 round_trippers.go:469] Request Headers:
	I0729 18:36:51.024169 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:36:51.024175 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:36:51.027542 1073226 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 18:36:51.524134 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/nodes/ha-344156-m03
	I0729 18:36:51.524160 1073226 round_trippers.go:469] Request Headers:
	I0729 18:36:51.524170 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:36:51.524176 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:36:51.527707 1073226 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 18:36:52.024014 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/nodes/ha-344156-m03
	I0729 18:36:52.024038 1073226 round_trippers.go:469] Request Headers:
	I0729 18:36:52.024047 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:36:52.024050 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:36:52.027157 1073226 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 18:36:52.523892 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/nodes/ha-344156-m03
	I0729 18:36:52.523915 1073226 round_trippers.go:469] Request Headers:
	I0729 18:36:52.523922 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:36:52.523928 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:36:52.527406 1073226 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 18:36:52.528154 1073226 node_ready.go:53] node "ha-344156-m03" has status "Ready":"False"
	I0729 18:36:53.024200 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/nodes/ha-344156-m03
	I0729 18:36:53.024226 1073226 round_trippers.go:469] Request Headers:
	I0729 18:36:53.024237 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:36:53.024243 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:36:53.027644 1073226 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 18:36:53.524023 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/nodes/ha-344156-m03
	I0729 18:36:53.524046 1073226 round_trippers.go:469] Request Headers:
	I0729 18:36:53.524054 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:36:53.524059 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:36:53.527700 1073226 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 18:36:54.024789 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/nodes/ha-344156-m03
	I0729 18:36:54.024821 1073226 round_trippers.go:469] Request Headers:
	I0729 18:36:54.024833 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:36:54.024847 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:36:54.027739 1073226 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 18:36:54.028294 1073226 node_ready.go:49] node "ha-344156-m03" has status "Ready":"True"
	I0729 18:36:54.028310 1073226 node_ready.go:38] duration metric: took 13.004619418s for node "ha-344156-m03" to be "Ready" ...
	I0729 18:36:54.028320 1073226 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 18:36:54.028379 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/namespaces/kube-system/pods
	I0729 18:36:54.028387 1073226 round_trippers.go:469] Request Headers:
	I0729 18:36:54.028393 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:36:54.028398 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:36:54.035148 1073226 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0729 18:36:54.042767 1073226 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-5slmg" in "kube-system" namespace to be "Ready" ...
	I0729 18:36:54.042866 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-5slmg
	I0729 18:36:54.042876 1073226 round_trippers.go:469] Request Headers:
	I0729 18:36:54.042883 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:36:54.042888 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:36:54.045742 1073226 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 18:36:54.046573 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/nodes/ha-344156
	I0729 18:36:54.046590 1073226 round_trippers.go:469] Request Headers:
	I0729 18:36:54.046603 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:36:54.046609 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:36:54.050141 1073226 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 18:36:54.051147 1073226 pod_ready.go:92] pod "coredns-7db6d8ff4d-5slmg" in "kube-system" namespace has status "Ready":"True"
	I0729 18:36:54.051168 1073226 pod_ready.go:81] duration metric: took 8.377145ms for pod "coredns-7db6d8ff4d-5slmg" in "kube-system" namespace to be "Ready" ...
	I0729 18:36:54.051177 1073226 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-h5h7v" in "kube-system" namespace to be "Ready" ...
	I0729 18:36:54.051241 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-h5h7v
	I0729 18:36:54.051247 1073226 round_trippers.go:469] Request Headers:
	I0729 18:36:54.051256 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:36:54.051262 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:36:54.054101 1073226 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 18:36:54.054919 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/nodes/ha-344156
	I0729 18:36:54.054934 1073226 round_trippers.go:469] Request Headers:
	I0729 18:36:54.054943 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:36:54.054947 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:36:54.057383 1073226 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 18:36:54.058279 1073226 pod_ready.go:92] pod "coredns-7db6d8ff4d-h5h7v" in "kube-system" namespace has status "Ready":"True"
	I0729 18:36:54.058300 1073226 pod_ready.go:81] duration metric: took 7.114199ms for pod "coredns-7db6d8ff4d-h5h7v" in "kube-system" namespace to be "Ready" ...
	I0729 18:36:54.058312 1073226 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-344156" in "kube-system" namespace to be "Ready" ...
	I0729 18:36:54.058375 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/namespaces/kube-system/pods/etcd-ha-344156
	I0729 18:36:54.058384 1073226 round_trippers.go:469] Request Headers:
	I0729 18:36:54.058395 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:36:54.058402 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:36:54.060796 1073226 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 18:36:54.061367 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/nodes/ha-344156
	I0729 18:36:54.061381 1073226 round_trippers.go:469] Request Headers:
	I0729 18:36:54.061391 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:36:54.061396 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:36:54.063578 1073226 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 18:36:54.064054 1073226 pod_ready.go:92] pod "etcd-ha-344156" in "kube-system" namespace has status "Ready":"True"
	I0729 18:36:54.064073 1073226 pod_ready.go:81] duration metric: took 5.750702ms for pod "etcd-ha-344156" in "kube-system" namespace to be "Ready" ...
	I0729 18:36:54.064085 1073226 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-344156-m02" in "kube-system" namespace to be "Ready" ...
	I0729 18:36:54.064142 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/namespaces/kube-system/pods/etcd-ha-344156-m02
	I0729 18:36:54.064152 1073226 round_trippers.go:469] Request Headers:
	I0729 18:36:54.064162 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:36:54.064171 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:36:54.066454 1073226 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 18:36:54.066989 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/nodes/ha-344156-m02
	I0729 18:36:54.067002 1073226 round_trippers.go:469] Request Headers:
	I0729 18:36:54.067015 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:36:54.067021 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:36:54.068946 1073226 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0729 18:36:54.069311 1073226 pod_ready.go:92] pod "etcd-ha-344156-m02" in "kube-system" namespace has status "Ready":"True"
	I0729 18:36:54.069326 1073226 pod_ready.go:81] duration metric: took 5.234599ms for pod "etcd-ha-344156-m02" in "kube-system" namespace to be "Ready" ...
	I0729 18:36:54.069333 1073226 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-344156-m03" in "kube-system" namespace to be "Ready" ...
	I0729 18:36:54.225738 1073226 request.go:629] Waited for 156.312151ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.225:8443/api/v1/namespaces/kube-system/pods/etcd-ha-344156-m03
	I0729 18:36:54.225839 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/namespaces/kube-system/pods/etcd-ha-344156-m03
	I0729 18:36:54.225851 1073226 round_trippers.go:469] Request Headers:
	I0729 18:36:54.225861 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:36:54.225869 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:36:54.228817 1073226 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 18:36:54.425782 1073226 request.go:629] Waited for 196.398328ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.225:8443/api/v1/nodes/ha-344156-m03
	I0729 18:36:54.425865 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/nodes/ha-344156-m03
	I0729 18:36:54.425876 1073226 round_trippers.go:469] Request Headers:
	I0729 18:36:54.425889 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:36:54.425899 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:36:54.429350 1073226 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 18:36:54.430043 1073226 pod_ready.go:92] pod "etcd-ha-344156-m03" in "kube-system" namespace has status "Ready":"True"
	I0729 18:36:54.430067 1073226 pod_ready.go:81] duration metric: took 360.728595ms for pod "etcd-ha-344156-m03" in "kube-system" namespace to be "Ready" ...
	I0729 18:36:54.430084 1073226 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-344156" in "kube-system" namespace to be "Ready" ...
	I0729 18:36:54.625120 1073226 request.go:629] Waited for 194.95698ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.225:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-344156
	I0729 18:36:54.625196 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-344156
	I0729 18:36:54.625201 1073226 round_trippers.go:469] Request Headers:
	I0729 18:36:54.625208 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:36:54.625216 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:36:54.628252 1073226 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 18:36:54.825611 1073226 request.go:629] Waited for 196.408626ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.225:8443/api/v1/nodes/ha-344156
	I0729 18:36:54.825690 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/nodes/ha-344156
	I0729 18:36:54.825698 1073226 round_trippers.go:469] Request Headers:
	I0729 18:36:54.825709 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:36:54.825719 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:36:54.830702 1073226 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 18:36:54.831596 1073226 pod_ready.go:92] pod "kube-apiserver-ha-344156" in "kube-system" namespace has status "Ready":"True"
	I0729 18:36:54.831628 1073226 pod_ready.go:81] duration metric: took 401.527636ms for pod "kube-apiserver-ha-344156" in "kube-system" namespace to be "Ready" ...
	I0729 18:36:54.831641 1073226 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-344156-m02" in "kube-system" namespace to be "Ready" ...
	I0729 18:36:55.024789 1073226 request.go:629] Waited for 193.056819ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.225:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-344156-m02
	I0729 18:36:55.024885 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-344156-m02
	I0729 18:36:55.024896 1073226 round_trippers.go:469] Request Headers:
	I0729 18:36:55.024908 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:36:55.024918 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:36:55.027861 1073226 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 18:36:55.224911 1073226 request.go:629] Waited for 196.289833ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.225:8443/api/v1/nodes/ha-344156-m02
	I0729 18:36:55.225029 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/nodes/ha-344156-m02
	I0729 18:36:55.225042 1073226 round_trippers.go:469] Request Headers:
	I0729 18:36:55.225067 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:36:55.225077 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:36:55.229002 1073226 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 18:36:55.229440 1073226 pod_ready.go:92] pod "kube-apiserver-ha-344156-m02" in "kube-system" namespace has status "Ready":"True"
	I0729 18:36:55.229460 1073226 pod_ready.go:81] duration metric: took 397.811151ms for pod "kube-apiserver-ha-344156-m02" in "kube-system" namespace to be "Ready" ...
	I0729 18:36:55.229471 1073226 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-344156-m03" in "kube-system" namespace to be "Ready" ...
	I0729 18:36:55.425422 1073226 request.go:629] Waited for 195.866711ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.225:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-344156-m03
	I0729 18:36:55.425558 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-344156-m03
	I0729 18:36:55.425571 1073226 round_trippers.go:469] Request Headers:
	I0729 18:36:55.425590 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:36:55.425596 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:36:55.428925 1073226 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 18:36:55.624863 1073226 request.go:629] Waited for 195.279751ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.225:8443/api/v1/nodes/ha-344156-m03
	I0729 18:36:55.624939 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/nodes/ha-344156-m03
	I0729 18:36:55.624947 1073226 round_trippers.go:469] Request Headers:
	I0729 18:36:55.624954 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:36:55.624961 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:36:55.629037 1073226 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 18:36:55.630287 1073226 pod_ready.go:92] pod "kube-apiserver-ha-344156-m03" in "kube-system" namespace has status "Ready":"True"
	I0729 18:36:55.630306 1073226 pod_ready.go:81] duration metric: took 400.826411ms for pod "kube-apiserver-ha-344156-m03" in "kube-system" namespace to be "Ready" ...
	I0729 18:36:55.630319 1073226 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-344156" in "kube-system" namespace to be "Ready" ...
	I0729 18:36:55.824875 1073226 request.go:629] Waited for 194.476725ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.225:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-344156
	I0729 18:36:55.824974 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-344156
	I0729 18:36:55.824981 1073226 round_trippers.go:469] Request Headers:
	I0729 18:36:55.824992 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:36:55.825001 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:36:55.828768 1073226 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 18:36:56.025751 1073226 request.go:629] Waited for 196.356887ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.225:8443/api/v1/nodes/ha-344156
	I0729 18:36:56.025847 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/nodes/ha-344156
	I0729 18:36:56.025858 1073226 round_trippers.go:469] Request Headers:
	I0729 18:36:56.025869 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:36:56.025879 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:36:56.029102 1073226 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 18:36:56.029728 1073226 pod_ready.go:92] pod "kube-controller-manager-ha-344156" in "kube-system" namespace has status "Ready":"True"
	I0729 18:36:56.029748 1073226 pod_ready.go:81] duration metric: took 399.418924ms for pod "kube-controller-manager-ha-344156" in "kube-system" namespace to be "Ready" ...
	I0729 18:36:56.029760 1073226 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-344156-m02" in "kube-system" namespace to be "Ready" ...
	I0729 18:36:56.224852 1073226 request.go:629] Waited for 194.999375ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.225:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-344156-m02
	I0729 18:36:56.224944 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-344156-m02
	I0729 18:36:56.224952 1073226 round_trippers.go:469] Request Headers:
	I0729 18:36:56.224972 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:36:56.224997 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:36:56.229090 1073226 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 18:36:56.425629 1073226 request.go:629] Waited for 195.360713ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.225:8443/api/v1/nodes/ha-344156-m02
	I0729 18:36:56.425719 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/nodes/ha-344156-m02
	I0729 18:36:56.425727 1073226 round_trippers.go:469] Request Headers:
	I0729 18:36:56.425735 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:36:56.425743 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:36:56.429462 1073226 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 18:36:56.430009 1073226 pod_ready.go:92] pod "kube-controller-manager-ha-344156-m02" in "kube-system" namespace has status "Ready":"True"
	I0729 18:36:56.430029 1073226 pod_ready.go:81] duration metric: took 400.261416ms for pod "kube-controller-manager-ha-344156-m02" in "kube-system" namespace to be "Ready" ...
	I0729 18:36:56.430039 1073226 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-344156-m03" in "kube-system" namespace to be "Ready" ...
	I0729 18:36:56.625157 1073226 request.go:629] Waited for 195.039979ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.225:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-344156-m03
	I0729 18:36:56.625236 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-344156-m03
	I0729 18:36:56.625241 1073226 round_trippers.go:469] Request Headers:
	I0729 18:36:56.625248 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:36:56.625253 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:36:56.628682 1073226 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 18:36:56.825729 1073226 request.go:629] Waited for 196.33857ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.225:8443/api/v1/nodes/ha-344156-m03
	I0729 18:36:56.825825 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/nodes/ha-344156-m03
	I0729 18:36:56.825836 1073226 round_trippers.go:469] Request Headers:
	I0729 18:36:56.825847 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:36:56.825858 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:36:56.829208 1073226 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 18:36:56.829806 1073226 pod_ready.go:92] pod "kube-controller-manager-ha-344156-m03" in "kube-system" namespace has status "Ready":"True"
	I0729 18:36:56.829830 1073226 pod_ready.go:81] duration metric: took 399.784132ms for pod "kube-controller-manager-ha-344156-m03" in "kube-system" namespace to be "Ready" ...
	I0729 18:36:56.829844 1073226 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-4p5r9" in "kube-system" namespace to be "Ready" ...
	I0729 18:36:57.024857 1073226 request.go:629] Waited for 194.932413ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.225:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4p5r9
	I0729 18:36:57.024944 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4p5r9
	I0729 18:36:57.024952 1073226 round_trippers.go:469] Request Headers:
	I0729 18:36:57.024960 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:36:57.024964 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:36:57.028894 1073226 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 18:36:57.224851 1073226 request.go:629] Waited for 195.30286ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.225:8443/api/v1/nodes/ha-344156-m02
	I0729 18:36:57.224909 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/nodes/ha-344156-m02
	I0729 18:36:57.224914 1073226 round_trippers.go:469] Request Headers:
	I0729 18:36:57.224921 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:36:57.224927 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:36:57.228320 1073226 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 18:36:57.229288 1073226 pod_ready.go:92] pod "kube-proxy-4p5r9" in "kube-system" namespace has status "Ready":"True"
	I0729 18:36:57.229310 1073226 pod_ready.go:81] duration metric: took 399.458197ms for pod "kube-proxy-4p5r9" in "kube-system" namespace to be "Ready" ...
	I0729 18:36:57.229324 1073226 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-gp282" in "kube-system" namespace to be "Ready" ...
	I0729 18:36:57.425101 1073226 request.go:629] Waited for 195.687697ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.225:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gp282
	I0729 18:36:57.425186 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gp282
	I0729 18:36:57.425194 1073226 round_trippers.go:469] Request Headers:
	I0729 18:36:57.425202 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:36:57.425210 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:36:57.429043 1073226 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 18:36:57.625314 1073226 request.go:629] Waited for 195.379021ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.225:8443/api/v1/nodes/ha-344156
	I0729 18:36:57.625391 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/nodes/ha-344156
	I0729 18:36:57.625398 1073226 round_trippers.go:469] Request Headers:
	I0729 18:36:57.625407 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:36:57.625414 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:36:57.628918 1073226 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 18:36:57.629479 1073226 pod_ready.go:92] pod "kube-proxy-gp282" in "kube-system" namespace has status "Ready":"True"
	I0729 18:36:57.629502 1073226 pod_ready.go:81] duration metric: took 400.16774ms for pod "kube-proxy-gp282" in "kube-system" namespace to be "Ready" ...
	I0729 18:36:57.629512 1073226 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-w68jl" in "kube-system" namespace to be "Ready" ...
	I0729 18:36:57.825577 1073226 request.go:629] Waited for 195.979776ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.225:8443/api/v1/namespaces/kube-system/pods/kube-proxy-w68jl
	I0729 18:36:57.825644 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/namespaces/kube-system/pods/kube-proxy-w68jl
	I0729 18:36:57.825649 1073226 round_trippers.go:469] Request Headers:
	I0729 18:36:57.825657 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:36:57.825664 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:36:57.829084 1073226 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 18:36:58.025115 1073226 request.go:629] Waited for 195.341791ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.225:8443/api/v1/nodes/ha-344156-m03
	I0729 18:36:58.025190 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/nodes/ha-344156-m03
	I0729 18:36:58.025196 1073226 round_trippers.go:469] Request Headers:
	I0729 18:36:58.025204 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:36:58.025212 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:36:58.029074 1073226 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 18:36:58.029735 1073226 pod_ready.go:92] pod "kube-proxy-w68jl" in "kube-system" namespace has status "Ready":"True"
	I0729 18:36:58.029756 1073226 pod_ready.go:81] duration metric: took 400.236648ms for pod "kube-proxy-w68jl" in "kube-system" namespace to be "Ready" ...
	I0729 18:36:58.029766 1073226 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-344156" in "kube-system" namespace to be "Ready" ...
	I0729 18:36:58.224827 1073226 request.go:629] Waited for 194.944952ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.225:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-344156
	I0729 18:36:58.224991 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-344156
	I0729 18:36:58.225011 1073226 round_trippers.go:469] Request Headers:
	I0729 18:36:58.225039 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:36:58.225064 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:36:58.228220 1073226 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 18:36:58.425657 1073226 request.go:629] Waited for 196.363001ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.225:8443/api/v1/nodes/ha-344156
	I0729 18:36:58.425718 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/nodes/ha-344156
	I0729 18:36:58.425723 1073226 round_trippers.go:469] Request Headers:
	I0729 18:36:58.425731 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:36:58.425738 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:36:58.429029 1073226 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 18:36:58.429594 1073226 pod_ready.go:92] pod "kube-scheduler-ha-344156" in "kube-system" namespace has status "Ready":"True"
	I0729 18:36:58.429613 1073226 pod_ready.go:81] duration metric: took 399.839055ms for pod "kube-scheduler-ha-344156" in "kube-system" namespace to be "Ready" ...
	I0729 18:36:58.429623 1073226 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-344156-m02" in "kube-system" namespace to be "Ready" ...
	I0729 18:36:58.625772 1073226 request.go:629] Waited for 196.067134ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.225:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-344156-m02
	I0729 18:36:58.625847 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-344156-m02
	I0729 18:36:58.625852 1073226 round_trippers.go:469] Request Headers:
	I0729 18:36:58.625859 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:36:58.625864 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:36:58.629267 1073226 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 18:36:58.825657 1073226 request.go:629] Waited for 195.355459ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.225:8443/api/v1/nodes/ha-344156-m02
	I0729 18:36:58.825720 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/nodes/ha-344156-m02
	I0729 18:36:58.825725 1073226 round_trippers.go:469] Request Headers:
	I0729 18:36:58.825732 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:36:58.825738 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:36:58.829198 1073226 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 18:36:58.829863 1073226 pod_ready.go:92] pod "kube-scheduler-ha-344156-m02" in "kube-system" namespace has status "Ready":"True"
	I0729 18:36:58.829882 1073226 pod_ready.go:81] duration metric: took 400.250514ms for pod "kube-scheduler-ha-344156-m02" in "kube-system" namespace to be "Ready" ...
	I0729 18:36:58.829892 1073226 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-344156-m03" in "kube-system" namespace to be "Ready" ...
	I0729 18:36:59.024864 1073226 request.go:629] Waited for 194.90098ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.225:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-344156-m03
	I0729 18:36:59.024942 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-344156-m03
	I0729 18:36:59.024949 1073226 round_trippers.go:469] Request Headers:
	I0729 18:36:59.024981 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:36:59.024991 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:36:59.028464 1073226 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 18:36:59.225571 1073226 request.go:629] Waited for 196.360643ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.225:8443/api/v1/nodes/ha-344156-m03
	I0729 18:36:59.225649 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/nodes/ha-344156-m03
	I0729 18:36:59.225655 1073226 round_trippers.go:469] Request Headers:
	I0729 18:36:59.225662 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:36:59.225666 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:36:59.229072 1073226 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 18:36:59.229751 1073226 pod_ready.go:92] pod "kube-scheduler-ha-344156-m03" in "kube-system" namespace has status "Ready":"True"
	I0729 18:36:59.229778 1073226 pod_ready.go:81] duration metric: took 399.879356ms for pod "kube-scheduler-ha-344156-m03" in "kube-system" namespace to be "Ready" ...
	I0729 18:36:59.229790 1073226 pod_ready.go:38] duration metric: took 5.201458046s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 18:36:59.229810 1073226 api_server.go:52] waiting for apiserver process to appear ...
	I0729 18:36:59.229867 1073226 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:36:59.246295 1073226 api_server.go:72] duration metric: took 18.522942026s to wait for apiserver process to appear ...
	I0729 18:36:59.246316 1073226 api_server.go:88] waiting for apiserver healthz status ...
	I0729 18:36:59.246338 1073226 api_server.go:253] Checking apiserver healthz at https://192.168.39.225:8443/healthz ...
	I0729 18:36:59.252593 1073226 api_server.go:279] https://192.168.39.225:8443/healthz returned 200:
	ok
	I0729 18:36:59.252662 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/version
	I0729 18:36:59.252672 1073226 round_trippers.go:469] Request Headers:
	I0729 18:36:59.252683 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:36:59.252691 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:36:59.253560 1073226 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0729 18:36:59.253633 1073226 api_server.go:141] control plane version: v1.30.3
	I0729 18:36:59.253652 1073226 api_server.go:131] duration metric: took 7.327939ms to wait for apiserver health ...
	I0729 18:36:59.253661 1073226 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 18:36:59.425463 1073226 request.go:629] Waited for 171.694263ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.225:8443/api/v1/namespaces/kube-system/pods
	I0729 18:36:59.425569 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/namespaces/kube-system/pods
	I0729 18:36:59.425581 1073226 round_trippers.go:469] Request Headers:
	I0729 18:36:59.425594 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:36:59.425598 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:36:59.432633 1073226 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0729 18:36:59.438546 1073226 system_pods.go:59] 24 kube-system pods found
	I0729 18:36:59.438574 1073226 system_pods.go:61] "coredns-7db6d8ff4d-5slmg" [f2aca93c-209e-48b6-a9a5-692bdf185129] Running
	I0729 18:36:59.438579 1073226 system_pods.go:61] "coredns-7db6d8ff4d-h5h7v" [b2b09553-dd59-44ab-a738-41e872defd34] Running
	I0729 18:36:59.438583 1073226 system_pods.go:61] "etcd-ha-344156" [2e8b83d5-7017-4608-800a-47e3400d7202] Running
	I0729 18:36:59.438586 1073226 system_pods.go:61] "etcd-ha-344156-m02" [b5f24011-5d19-4d79-9ce3-512d04f85f7b] Running
	I0729 18:36:59.438592 1073226 system_pods.go:61] "etcd-ha-344156-m03" [708c9812-8669-44a2-8045-abfee39173b6] Running
	I0729 18:36:59.438595 1073226 system_pods.go:61] "kindnet-84nqp" [f4e18e53-1c72-440f-82b2-bd1b4306af12] Running
	I0729 18:36:59.438598 1073226 system_pods.go:61] "kindnet-b85cc" [f441d276-e90f-447c-add8-ca3ff1cfe1b7] Running
	I0729 18:36:59.438603 1073226 system_pods.go:61] "kindnet-ks57n" [81bef3d8-fc4e-459e-a7d1-bb6406706ffc] Running
	I0729 18:36:59.438607 1073226 system_pods.go:61] "kube-apiserver-ha-344156" [21dabe32-a355-40dd-a5fa-07799c64e9c8] Running
	I0729 18:36:59.438613 1073226 system_pods.go:61] "kube-apiserver-ha-344156-m02" [1b4acc44-23c7-4357-aa12-1b8c334ee75b] Running
	I0729 18:36:59.438616 1073226 system_pods.go:61] "kube-apiserver-ha-344156-m03" [caa0c4ad-7c27-4b32-9b27-8c31b698ff94] Running
	I0729 18:36:59.438621 1073226 system_pods.go:61] "kube-controller-manager-ha-344156" [f978182c-8550-4c1f-9bd2-2472243bcff3] Running
	I0729 18:36:59.438628 1073226 system_pods.go:61] "kube-controller-manager-ha-344156-m02" [64231ae8-189e-4209-b17f-ebc54671ae12] Running
	I0729 18:36:59.438631 1073226 system_pods.go:61] "kube-controller-manager-ha-344156-m03" [c51f5210-8b7f-40b6-beef-07116362f52b] Running
	I0729 18:36:59.438634 1073226 system_pods.go:61] "kube-proxy-4p5r9" [de6a7e19-b62d-4fb8-80f1-91f95f682925] Running
	I0729 18:36:59.438638 1073226 system_pods.go:61] "kube-proxy-gp282" [abf94303-b608-45b5-ae8b-9288be614a8f] Running
	I0729 18:36:59.438642 1073226 system_pods.go:61] "kube-proxy-w68jl" [973b384e-931f-462f-b46b-fb2b28400627] Running
	I0729 18:36:59.438645 1073226 system_pods.go:61] "kube-scheduler-ha-344156" [f553855a-6964-49d8-81e3-da002793db58] Running
	I0729 18:36:59.438649 1073226 system_pods.go:61] "kube-scheduler-ha-344156-m02" [18eb83e2-8567-4b2d-a205-711e500cedca] Running
	I0729 18:36:59.438652 1073226 system_pods.go:61] "kube-scheduler-ha-344156-m03" [3ea0d519-3b7c-4d22-a442-9d58d43876c3] Running
	I0729 18:36:59.438655 1073226 system_pods.go:61] "kube-vip-ha-344156" [586052c5-c670-4957-b052-e2a7bf8bafb2] Running
	I0729 18:36:59.438657 1073226 system_pods.go:61] "kube-vip-ha-344156-m02" [a7d6e797-e7c1-457f-820e-a08d50f0a954] Running
	I0729 18:36:59.438660 1073226 system_pods.go:61] "kube-vip-ha-344156-m03" [7deb3adf-e964-4206-a768-380b5425bb9e] Running
	I0729 18:36:59.438663 1073226 system_pods.go:61] "storage-provisioner" [3ea00f25-122f-4a18-9d69-3606cfddf4d9] Running
	I0729 18:36:59.438668 1073226 system_pods.go:74] duration metric: took 184.998775ms to wait for pod list to return data ...
	I0729 18:36:59.438678 1073226 default_sa.go:34] waiting for default service account to be created ...
	I0729 18:36:59.625117 1073226 request.go:629] Waited for 186.346422ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.225:8443/api/v1/namespaces/default/serviceaccounts
	I0729 18:36:59.625195 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/namespaces/default/serviceaccounts
	I0729 18:36:59.625202 1073226 round_trippers.go:469] Request Headers:
	I0729 18:36:59.625212 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:36:59.625217 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:36:59.628921 1073226 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 18:36:59.629064 1073226 default_sa.go:45] found service account: "default"
	I0729 18:36:59.629082 1073226 default_sa.go:55] duration metric: took 190.396612ms for default service account to be created ...
	I0729 18:36:59.629095 1073226 system_pods.go:116] waiting for k8s-apps to be running ...
	I0729 18:36:59.825557 1073226 request.go:629] Waited for 196.368467ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.225:8443/api/v1/namespaces/kube-system/pods
	I0729 18:36:59.825621 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/namespaces/kube-system/pods
	I0729 18:36:59.825626 1073226 round_trippers.go:469] Request Headers:
	I0729 18:36:59.825634 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:36:59.825640 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:36:59.833031 1073226 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0729 18:36:59.839260 1073226 system_pods.go:86] 24 kube-system pods found
	I0729 18:36:59.839284 1073226 system_pods.go:89] "coredns-7db6d8ff4d-5slmg" [f2aca93c-209e-48b6-a9a5-692bdf185129] Running
	I0729 18:36:59.839290 1073226 system_pods.go:89] "coredns-7db6d8ff4d-h5h7v" [b2b09553-dd59-44ab-a738-41e872defd34] Running
	I0729 18:36:59.839294 1073226 system_pods.go:89] "etcd-ha-344156" [2e8b83d5-7017-4608-800a-47e3400d7202] Running
	I0729 18:36:59.839298 1073226 system_pods.go:89] "etcd-ha-344156-m02" [b5f24011-5d19-4d79-9ce3-512d04f85f7b] Running
	I0729 18:36:59.839305 1073226 system_pods.go:89] "etcd-ha-344156-m03" [708c9812-8669-44a2-8045-abfee39173b6] Running
	I0729 18:36:59.839311 1073226 system_pods.go:89] "kindnet-84nqp" [f4e18e53-1c72-440f-82b2-bd1b4306af12] Running
	I0729 18:36:59.839320 1073226 system_pods.go:89] "kindnet-b85cc" [f441d276-e90f-447c-add8-ca3ff1cfe1b7] Running
	I0729 18:36:59.839330 1073226 system_pods.go:89] "kindnet-ks57n" [81bef3d8-fc4e-459e-a7d1-bb6406706ffc] Running
	I0729 18:36:59.839336 1073226 system_pods.go:89] "kube-apiserver-ha-344156" [21dabe32-a355-40dd-a5fa-07799c64e9c8] Running
	I0729 18:36:59.839347 1073226 system_pods.go:89] "kube-apiserver-ha-344156-m02" [1b4acc44-23c7-4357-aa12-1b8c334ee75b] Running
	I0729 18:36:59.839354 1073226 system_pods.go:89] "kube-apiserver-ha-344156-m03" [caa0c4ad-7c27-4b32-9b27-8c31b698ff94] Running
	I0729 18:36:59.839359 1073226 system_pods.go:89] "kube-controller-manager-ha-344156" [f978182c-8550-4c1f-9bd2-2472243bcff3] Running
	I0729 18:36:59.839365 1073226 system_pods.go:89] "kube-controller-manager-ha-344156-m02" [64231ae8-189e-4209-b17f-ebc54671ae12] Running
	I0729 18:36:59.839370 1073226 system_pods.go:89] "kube-controller-manager-ha-344156-m03" [c51f5210-8b7f-40b6-beef-07116362f52b] Running
	I0729 18:36:59.839378 1073226 system_pods.go:89] "kube-proxy-4p5r9" [de6a7e19-b62d-4fb8-80f1-91f95f682925] Running
	I0729 18:36:59.839382 1073226 system_pods.go:89] "kube-proxy-gp282" [abf94303-b608-45b5-ae8b-9288be614a8f] Running
	I0729 18:36:59.839389 1073226 system_pods.go:89] "kube-proxy-w68jl" [973b384e-931f-462f-b46b-fb2b28400627] Running
	I0729 18:36:59.839392 1073226 system_pods.go:89] "kube-scheduler-ha-344156" [f553855a-6964-49d8-81e3-da002793db58] Running
	I0729 18:36:59.839396 1073226 system_pods.go:89] "kube-scheduler-ha-344156-m02" [18eb83e2-8567-4b2d-a205-711e500cedca] Running
	I0729 18:36:59.839400 1073226 system_pods.go:89] "kube-scheduler-ha-344156-m03" [3ea0d519-3b7c-4d22-a442-9d58d43876c3] Running
	I0729 18:36:59.839406 1073226 system_pods.go:89] "kube-vip-ha-344156" [586052c5-c670-4957-b052-e2a7bf8bafb2] Running
	I0729 18:36:59.839412 1073226 system_pods.go:89] "kube-vip-ha-344156-m02" [a7d6e797-e7c1-457f-820e-a08d50f0a954] Running
	I0729 18:36:59.839417 1073226 system_pods.go:89] "kube-vip-ha-344156-m03" [7deb3adf-e964-4206-a768-380b5425bb9e] Running
	I0729 18:36:59.839427 1073226 system_pods.go:89] "storage-provisioner" [3ea00f25-122f-4a18-9d69-3606cfddf4d9] Running
	I0729 18:36:59.839436 1073226 system_pods.go:126] duration metric: took 210.333714ms to wait for k8s-apps to be running ...
	I0729 18:36:59.839449 1073226 system_svc.go:44] waiting for kubelet service to be running ....
	I0729 18:36:59.839501 1073226 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 18:36:59.854781 1073226 system_svc.go:56] duration metric: took 15.326891ms WaitForService to wait for kubelet
	I0729 18:36:59.854808 1073226 kubeadm.go:582] duration metric: took 19.131460744s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 18:36:59.854832 1073226 node_conditions.go:102] verifying NodePressure condition ...
	I0729 18:37:00.025220 1073226 request.go:629] Waited for 170.267627ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.225:8443/api/v1/nodes
	I0729 18:37:00.025299 1073226 round_trippers.go:463] GET https://192.168.39.225:8443/api/v1/nodes
	I0729 18:37:00.025306 1073226 round_trippers.go:469] Request Headers:
	I0729 18:37:00.025316 1073226 round_trippers.go:473]     Accept: application/json, */*
	I0729 18:37:00.025322 1073226 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 18:37:00.030361 1073226 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0729 18:37:00.031361 1073226 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 18:37:00.031383 1073226 node_conditions.go:123] node cpu capacity is 2
	I0729 18:37:00.031395 1073226 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 18:37:00.031400 1073226 node_conditions.go:123] node cpu capacity is 2
	I0729 18:37:00.031405 1073226 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 18:37:00.031410 1073226 node_conditions.go:123] node cpu capacity is 2
	I0729 18:37:00.031418 1073226 node_conditions.go:105] duration metric: took 176.580777ms to run NodePressure ...
	I0729 18:37:00.031436 1073226 start.go:241] waiting for startup goroutines ...
	I0729 18:37:00.031460 1073226 start.go:255] writing updated cluster config ...
	I0729 18:37:00.031782 1073226 ssh_runner.go:195] Run: rm -f paused
	I0729 18:37:00.086434 1073226 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0729 18:37:00.088481 1073226 out.go:177] * Done! kubectl is now configured to use "ha-344156" cluster and "default" namespace by default
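
The run log above repeatedly pairs a GET on each control-plane pod with a GET on the node it is scheduled to, spaced roughly 200ms apart by client-side throttling, and only reports a pod "Ready" once its PodReady condition is True; it then probes the apiserver's /healthz and /version endpoints before declaring the cluster up. The sketch below is a minimal client-go illustration of that readiness check, not minikube's actual implementation: the podReady helper, the kubeconfig loading via clientcmd.RecommendedHomeFile, and the hard-coded pod name are assumptions for the example (the namespace and pod name are copied from the log).

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady (hypothetical helper) mirrors the paired GET /pods/<name> and
// GET /nodes/<node> requests seen in the log: fetch the pod, fetch the node
// it is scheduled to, then report the pod's PodReady condition.
func podReady(ctx context.Context, cs kubernetes.Interface, namespace, name string) (bool, error) {
	pod, err := cs.CoreV1().Pods(namespace).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	if _, err := cs.CoreV1().Nodes().Get(ctx, pod.Spec.NodeName, metav1.GetOptions{}); err != nil {
		return false, err
	}
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	// Load the local kubeconfig (assumed default path); minikube writes its context here.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ready, err := podReady(context.Background(), cs, "kube-system", "kube-apiserver-ha-344156")
	fmt.Println("ready:", ready, "err:", err)
}

The /healthz and /version probes that follow in the log are, as far as the trace shows, plain HTTPS GETs against the same apiserver endpoint, which is what produces the "ok" body and the "control plane version: v1.30.3" line above.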
	
	
	==> CRI-O <==
	Jul 29 18:41:58 ha-344156 crio[680]: time="2024-07-29 18:41:58.874690282Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b4d8cc79-ae00-418b-8a24-b70d830b2776 name=/runtime.v1.RuntimeService/Version
	Jul 29 18:41:58 ha-344156 crio[680]: time="2024-07-29 18:41:58.876760379Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=23de7e55-14bc-4777-a630-1e26fdc7ba7c name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 18:41:58 ha-344156 crio[680]: time="2024-07-29 18:41:58.877215767Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722278518877193774,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=23de7e55-14bc-4777-a630-1e26fdc7ba7c name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 18:41:58 ha-344156 crio[680]: time="2024-07-29 18:41:58.877979614Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3a836f31-1f6d-4ef7-a350-4f70c61f0fa3 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:41:58 ha-344156 crio[680]: time="2024-07-29 18:41:58.878054514Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3a836f31-1f6d-4ef7-a350-4f70c61f0fa3 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:41:58 ha-344156 crio[680]: time="2024-07-29 18:41:58.878354725Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d152449ddedd3a52cbbb9d3acfb3bf85c0e5fa9f81a0c0359f4148d4c603d783,PodSandboxId:98fcabecdf16c058b2c9b2d5b67a175d4427e2426d8c8ecad90fe5e7e61c7166,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722278222484902442,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-9sbfq,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f11563c5-3507-44f0-a103-1e8462494e13,},Annotations:map[string]string{io.kubernetes.container.hash: fb54a535,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a4d13ace439ff6db0bd224c5959b2f1de0aca9190251438b96b230bd76dad67,PodSandboxId:331a36b1d7af6a03c1de960f2f92f9e567bb8d9a89fef7342712caae96969f2c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722278090682794373,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-h5h7v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2b09553-dd59-44ab-a738-41e872defd34,},Annotations:map[string]string{io.kubernetes.container.hash: 59c68fb6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0420967445f9211bb2a8fcd8373564a68efa30847b800b0baa219266c006cc72,PodSandboxId:aee8f75d6b1bbb3fb9c1d5339f35d5df5cf4d72ba4fc03e063c97a60693b2321,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722278090665637409,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: 3ea00f25-122f-4a18-9d69-3606cfddf4d9,},Annotations:map[string]string{io.kubernetes.container.hash: 70731b68,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d0acef755a4a9cf64d3fa80a06a2fb7cd2c2f24d851c814a12dbfd69b8c8ae6,PodSandboxId:3bc8a1c2175a3fcdce5b369132d086e20e9843f84b0af2dec1acd2dc3f598cb2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722278090616073402,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-5slmg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2aca93c-20
9e-48b6-a9a5-692bdf185129,},Annotations:map[string]string{io.kubernetes.container.hash: 48049156,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88c61cb99966582064c98436dabbb6247148296145067505f732961e9dafcf62,PodSandboxId:5312fee5fcd07548b5a87233879d29cd884fb0a7e49ffeffe66817b71a7b2ac9,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CO
NTAINER_RUNNING,CreatedAt:1722278078648045390,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-84nqp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4e18e53-1c72-440f-82b2-bd1b4306af12,},Annotations:map[string]string{io.kubernetes.container.hash: 16293ddd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea6501e2c6d48c68182f6d966404f0d58013e7ee6b2d05e6e8a8de079a01e50b,PodSandboxId:f041673054c6d8c2cbbc857f62b73eafbb56f1089f1a1937ee91d2e3cdb89df9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:172227807
6564427868,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gp282,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abf94303-b608-45b5-ae8b-9288be614a8f,},Annotations:map[string]string{io.kubernetes.container.hash: 6e0cc5f5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df682abbd97678618dabe8275a57ffd1f327de1e734e117a59fd4f520eaf1b79,PodSandboxId:9d199e4c3c06ca4ceb4ada9b80a1fff0ef24acdcf1fc9899060d41b381f9d867,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17222780595
62262847,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-344156,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 70bafc7f0ed9afe903828ea70a6c8bbb,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cea7dd8ee7d180192a5a6562a72a56f86a9a432553225602839d9657f42f95a4,PodSandboxId:ec39a320a672eea9866c1f830b546dc2e1fc8f0a3093acc13b1acd6b5d008317,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722278056834785047,Labels:map[string]string
{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-344156,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d17047d55559cfd90852a780672fb93,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc27c145e7b72db405baaf295995d274d557ba7dbce383424c6297461d859b29,PodSandboxId:5e0320966c0af472e5e166dc8244abd4707674553da0aef0c877b9db5c6b053c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722278056768210507,Labels:map[string]string{io.kubernetes.container.name:
etcd,io.kubernetes.pod.name: etcd-ha-344156,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67610b75999e06603675bc1a64d5ef7d,},Annotations:map[string]string{io.kubernetes.container.hash: 4f9376d5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15f9d79f9c9682c7273de711cee53f9f833182ceb7abdd39bb612f44066ac6f4,PodSandboxId:54990a7607809732d80dbb19df04598ee30197286b1d0daf1deaa436f2b03d03,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722278056772614281,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kuber
netes.pod.name: kube-controller-manager-ha-344156,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 30243da5f1a98e23c72326dd278a562e,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24d097bf3e16a2c4b74c82ba78ce7e6eb19b3461d66b573a3d5ba23c5df6a472,PodSandboxId:60907e40ccbbf42ef085bf897b5855fd240e5105657171fe08cadbcd811bcf86,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722278056748446079,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.na
me: kube-apiserver-ha-344156,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d61da37ea38b5727b5710cdad0fc95fd,},Annotations:map[string]string{io.kubernetes.container.hash: c06782b3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3a836f31-1f6d-4ef7-a350-4f70c61f0fa3 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:41:58 ha-344156 crio[680]: time="2024-07-29 18:41:58.899372135Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=ae6a0c46-4211-4d7e-a3e0-7228bbe0abb8 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 29 18:41:58 ha-344156 crio[680]: time="2024-07-29 18:41:58.900558257Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:98fcabecdf16c058b2c9b2d5b67a175d4427e2426d8c8ecad90fe5e7e61c7166,Metadata:&PodSandboxMetadata{Name:busybox-fc5497c4f-9sbfq,Uid:f11563c5-3507-44f0-a103-1e8462494e13,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722278221317798975,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-fc5497c4f-9sbfq,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f11563c5-3507-44f0-a103-1e8462494e13,pod-template-hash: fc5497c4f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-29T18:37:01.004554243Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:331a36b1d7af6a03c1de960f2f92f9e567bb8d9a89fef7342712caae96969f2c,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-h5h7v,Uid:b2b09553-dd59-44ab-a738-41e872defd34,Namespace:kube-system,Attempt:0,},State:S
ANDBOX_READY,CreatedAt:1722278090417915698,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-h5h7v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2b09553-dd59-44ab-a738-41e872defd34,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-29T18:34:50.091754249Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:aee8f75d6b1bbb3fb9c1d5339f35d5df5cf4d72ba4fc03e063c97a60693b2321,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:3ea00f25-122f-4a18-9d69-3606cfddf4d9,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722278090397601719,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ea00f25-122f-4a18-9d69-3606cfddf4d9,},Annotations:map[string]string{kubec
tl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-07-29T18:34:50.089494513Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:3bc8a1c2175a3fcdce5b369132d086e20e9843f84b0af2dec1acd2dc3f598cb2,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-5slmg,Uid:f2aca93c-209e-48b6-a9a5-692bdf185129,Namespace:kube-system,Atte
mpt:0,},State:SANDBOX_READY,CreatedAt:1722278090390442126,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-5slmg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2aca93c-209e-48b6-a9a5-692bdf185129,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-29T18:34:50.082932184Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:5312fee5fcd07548b5a87233879d29cd884fb0a7e49ffeffe66817b71a7b2ac9,Metadata:&PodSandboxMetadata{Name:kindnet-84nqp,Uid:f4e18e53-1c72-440f-82b2-bd1b4306af12,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722278076425487658,Labels:map[string]string{app: kindnet,controller-revision-hash: 549967b474,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-84nqp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4e18e53-1c72-440f-82b2-bd1b4306af12,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotati
ons:map[string]string{kubernetes.io/config.seen: 2024-07-29T18:34:36.111643087Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:f041673054c6d8c2cbbc857f62b73eafbb56f1089f1a1937ee91d2e3cdb89df9,Metadata:&PodSandboxMetadata{Name:kube-proxy-gp282,Uid:abf94303-b608-45b5-ae8b-9288be614a8f,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722278076391754673,Labels:map[string]string{controller-revision-hash: 5bbc78d4f8,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-gp282,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abf94303-b608-45b5-ae8b-9288be614a8f,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-29T18:34:36.082951745Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:5e0320966c0af472e5e166dc8244abd4707674553da0aef0c877b9db5c6b053c,Metadata:&PodSandboxMetadata{Name:etcd-ha-344156,Uid:67610b75999e06603675bc1a64d5ef7d,Namespace:kube-system,Attempt:0,},State:S
ANDBOX_READY,CreatedAt:1722278056565120490,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-ha-344156,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67610b75999e06603675bc1a64d5ef7d,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.225:2379,kubernetes.io/config.hash: 67610b75999e06603675bc1a64d5ef7d,kubernetes.io/config.seen: 2024-07-29T18:34:16.075600528Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:9d199e4c3c06ca4ceb4ada9b80a1fff0ef24acdcf1fc9899060d41b381f9d867,Metadata:&PodSandboxMetadata{Name:kube-vip-ha-344156,Uid:70bafc7f0ed9afe903828ea70a6c8bbb,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722278056555341094,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-vip-ha-344156,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 70bafc7f0ed9afe903828ea70a6c8bbb,},Annotations:m
ap[string]string{kubernetes.io/config.hash: 70bafc7f0ed9afe903828ea70a6c8bbb,kubernetes.io/config.seen: 2024-07-29T18:34:16.075599613Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:ec39a320a672eea9866c1f830b546dc2e1fc8f0a3093acc13b1acd6b5d008317,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-344156,Uid:5d17047d55559cfd90852a780672fb93,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722278056554194069,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-344156,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d17047d55559cfd90852a780672fb93,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 5d17047d55559cfd90852a780672fb93,kubernetes.io/config.seen: 2024-07-29T18:34:16.075598803Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:60907e40ccbbf42ef085bf897b5855fd240e5105657171fe08cadbcd811bcf86,Metadata:&PodSandboxMetadata{Name:kube-a
piserver-ha-344156,Uid:d61da37ea38b5727b5710cdad0fc95fd,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722278056534928321,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-ha-344156,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d61da37ea38b5727b5710cdad0fc95fd,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.225:8443,kubernetes.io/config.hash: d61da37ea38b5727b5710cdad0fc95fd,kubernetes.io/config.seen: 2024-07-29T18:34:16.075593783Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:54990a7607809732d80dbb19df04598ee30197286b1d0daf1deaa436f2b03d03,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-ha-344156,Uid:30243da5f1a98e23c72326dd278a562e,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722278056530144998,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.c
ontainer.name: POD,io.kubernetes.pod.name: kube-controller-manager-ha-344156,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 30243da5f1a98e23c72326dd278a562e,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 30243da5f1a98e23c72326dd278a562e,kubernetes.io/config.seen: 2024-07-29T18:34:16.075597470Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=ae6a0c46-4211-4d7e-a3e0-7228bbe0abb8 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 29 18:41:58 ha-344156 crio[680]: time="2024-07-29 18:41:58.901145395Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=cf4759b7-9afd-45e2-a335-7485470529f4 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:41:58 ha-344156 crio[680]: time="2024-07-29 18:41:58.901200864Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=cf4759b7-9afd-45e2-a335-7485470529f4 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:41:58 ha-344156 crio[680]: time="2024-07-29 18:41:58.901489071Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d152449ddedd3a52cbbb9d3acfb3bf85c0e5fa9f81a0c0359f4148d4c603d783,PodSandboxId:98fcabecdf16c058b2c9b2d5b67a175d4427e2426d8c8ecad90fe5e7e61c7166,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722278222484902442,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-9sbfq,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f11563c5-3507-44f0-a103-1e8462494e13,},Annotations:map[string]string{io.kubernetes.container.hash: fb54a535,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a4d13ace439ff6db0bd224c5959b2f1de0aca9190251438b96b230bd76dad67,PodSandboxId:331a36b1d7af6a03c1de960f2f92f9e567bb8d9a89fef7342712caae96969f2c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722278090682794373,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-h5h7v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2b09553-dd59-44ab-a738-41e872defd34,},Annotations:map[string]string{io.kubernetes.container.hash: 59c68fb6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0420967445f9211bb2a8fcd8373564a68efa30847b800b0baa219266c006cc72,PodSandboxId:aee8f75d6b1bbb3fb9c1d5339f35d5df5cf4d72ba4fc03e063c97a60693b2321,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722278090665637409,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: 3ea00f25-122f-4a18-9d69-3606cfddf4d9,},Annotations:map[string]string{io.kubernetes.container.hash: 70731b68,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d0acef755a4a9cf64d3fa80a06a2fb7cd2c2f24d851c814a12dbfd69b8c8ae6,PodSandboxId:3bc8a1c2175a3fcdce5b369132d086e20e9843f84b0af2dec1acd2dc3f598cb2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722278090616073402,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-5slmg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2aca93c-20
9e-48b6-a9a5-692bdf185129,},Annotations:map[string]string{io.kubernetes.container.hash: 48049156,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88c61cb99966582064c98436dabbb6247148296145067505f732961e9dafcf62,PodSandboxId:5312fee5fcd07548b5a87233879d29cd884fb0a7e49ffeffe66817b71a7b2ac9,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CO
NTAINER_RUNNING,CreatedAt:1722278078648045390,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-84nqp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4e18e53-1c72-440f-82b2-bd1b4306af12,},Annotations:map[string]string{io.kubernetes.container.hash: 16293ddd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea6501e2c6d48c68182f6d966404f0d58013e7ee6b2d05e6e8a8de079a01e50b,PodSandboxId:f041673054c6d8c2cbbc857f62b73eafbb56f1089f1a1937ee91d2e3cdb89df9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:172227807
6564427868,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gp282,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abf94303-b608-45b5-ae8b-9288be614a8f,},Annotations:map[string]string{io.kubernetes.container.hash: 6e0cc5f5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df682abbd97678618dabe8275a57ffd1f327de1e734e117a59fd4f520eaf1b79,PodSandboxId:9d199e4c3c06ca4ceb4ada9b80a1fff0ef24acdcf1fc9899060d41b381f9d867,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17222780595
62262847,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-344156,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 70bafc7f0ed9afe903828ea70a6c8bbb,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cea7dd8ee7d180192a5a6562a72a56f86a9a432553225602839d9657f42f95a4,PodSandboxId:ec39a320a672eea9866c1f830b546dc2e1fc8f0a3093acc13b1acd6b5d008317,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722278056834785047,Labels:map[string]string
{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-344156,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d17047d55559cfd90852a780672fb93,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc27c145e7b72db405baaf295995d274d557ba7dbce383424c6297461d859b29,PodSandboxId:5e0320966c0af472e5e166dc8244abd4707674553da0aef0c877b9db5c6b053c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722278056768210507,Labels:map[string]string{io.kubernetes.container.name:
etcd,io.kubernetes.pod.name: etcd-ha-344156,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67610b75999e06603675bc1a64d5ef7d,},Annotations:map[string]string{io.kubernetes.container.hash: 4f9376d5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15f9d79f9c9682c7273de711cee53f9f833182ceb7abdd39bb612f44066ac6f4,PodSandboxId:54990a7607809732d80dbb19df04598ee30197286b1d0daf1deaa436f2b03d03,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722278056772614281,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kuber
netes.pod.name: kube-controller-manager-ha-344156,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 30243da5f1a98e23c72326dd278a562e,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24d097bf3e16a2c4b74c82ba78ce7e6eb19b3461d66b573a3d5ba23c5df6a472,PodSandboxId:60907e40ccbbf42ef085bf897b5855fd240e5105657171fe08cadbcd811bcf86,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722278056748446079,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.na
me: kube-apiserver-ha-344156,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d61da37ea38b5727b5710cdad0fc95fd,},Annotations:map[string]string{io.kubernetes.container.hash: c06782b3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=cf4759b7-9afd-45e2-a335-7485470529f4 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:41:58 ha-344156 crio[680]: time="2024-07-29 18:41:58.920888672Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c710cf8e-7df6-4c67-8734-5e7d3c4bb8a0 name=/runtime.v1.RuntimeService/Version
	Jul 29 18:41:58 ha-344156 crio[680]: time="2024-07-29 18:41:58.920963684Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c710cf8e-7df6-4c67-8734-5e7d3c4bb8a0 name=/runtime.v1.RuntimeService/Version
	Jul 29 18:41:58 ha-344156 crio[680]: time="2024-07-29 18:41:58.921917790Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8690c9ab-2e74-4692-baee-ad94a3d08594 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 18:41:58 ha-344156 crio[680]: time="2024-07-29 18:41:58.922843275Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722278518922821917,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8690c9ab-2e74-4692-baee-ad94a3d08594 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 18:41:58 ha-344156 crio[680]: time="2024-07-29 18:41:58.923537582Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=60ee3c00-db5f-4af7-ba72-c5bec5ee99a6 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:41:58 ha-344156 crio[680]: time="2024-07-29 18:41:58.923595246Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=60ee3c00-db5f-4af7-ba72-c5bec5ee99a6 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:41:58 ha-344156 crio[680]: time="2024-07-29 18:41:58.923821713Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d152449ddedd3a52cbbb9d3acfb3bf85c0e5fa9f81a0c0359f4148d4c603d783,PodSandboxId:98fcabecdf16c058b2c9b2d5b67a175d4427e2426d8c8ecad90fe5e7e61c7166,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722278222484902442,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-9sbfq,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f11563c5-3507-44f0-a103-1e8462494e13,},Annotations:map[string]string{io.kubernetes.container.hash: fb54a535,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a4d13ace439ff6db0bd224c5959b2f1de0aca9190251438b96b230bd76dad67,PodSandboxId:331a36b1d7af6a03c1de960f2f92f9e567bb8d9a89fef7342712caae96969f2c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722278090682794373,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-h5h7v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2b09553-dd59-44ab-a738-41e872defd34,},Annotations:map[string]string{io.kubernetes.container.hash: 59c68fb6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0420967445f9211bb2a8fcd8373564a68efa30847b800b0baa219266c006cc72,PodSandboxId:aee8f75d6b1bbb3fb9c1d5339f35d5df5cf4d72ba4fc03e063c97a60693b2321,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722278090665637409,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: 3ea00f25-122f-4a18-9d69-3606cfddf4d9,},Annotations:map[string]string{io.kubernetes.container.hash: 70731b68,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d0acef755a4a9cf64d3fa80a06a2fb7cd2c2f24d851c814a12dbfd69b8c8ae6,PodSandboxId:3bc8a1c2175a3fcdce5b369132d086e20e9843f84b0af2dec1acd2dc3f598cb2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722278090616073402,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-5slmg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2aca93c-20
9e-48b6-a9a5-692bdf185129,},Annotations:map[string]string{io.kubernetes.container.hash: 48049156,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88c61cb99966582064c98436dabbb6247148296145067505f732961e9dafcf62,PodSandboxId:5312fee5fcd07548b5a87233879d29cd884fb0a7e49ffeffe66817b71a7b2ac9,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CO
NTAINER_RUNNING,CreatedAt:1722278078648045390,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-84nqp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4e18e53-1c72-440f-82b2-bd1b4306af12,},Annotations:map[string]string{io.kubernetes.container.hash: 16293ddd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea6501e2c6d48c68182f6d966404f0d58013e7ee6b2d05e6e8a8de079a01e50b,PodSandboxId:f041673054c6d8c2cbbc857f62b73eafbb56f1089f1a1937ee91d2e3cdb89df9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:172227807
6564427868,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gp282,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abf94303-b608-45b5-ae8b-9288be614a8f,},Annotations:map[string]string{io.kubernetes.container.hash: 6e0cc5f5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df682abbd97678618dabe8275a57ffd1f327de1e734e117a59fd4f520eaf1b79,PodSandboxId:9d199e4c3c06ca4ceb4ada9b80a1fff0ef24acdcf1fc9899060d41b381f9d867,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17222780595
62262847,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-344156,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 70bafc7f0ed9afe903828ea70a6c8bbb,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cea7dd8ee7d180192a5a6562a72a56f86a9a432553225602839d9657f42f95a4,PodSandboxId:ec39a320a672eea9866c1f830b546dc2e1fc8f0a3093acc13b1acd6b5d008317,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722278056834785047,Labels:map[string]string
{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-344156,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d17047d55559cfd90852a780672fb93,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc27c145e7b72db405baaf295995d274d557ba7dbce383424c6297461d859b29,PodSandboxId:5e0320966c0af472e5e166dc8244abd4707674553da0aef0c877b9db5c6b053c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722278056768210507,Labels:map[string]string{io.kubernetes.container.name:
etcd,io.kubernetes.pod.name: etcd-ha-344156,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67610b75999e06603675bc1a64d5ef7d,},Annotations:map[string]string{io.kubernetes.container.hash: 4f9376d5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15f9d79f9c9682c7273de711cee53f9f833182ceb7abdd39bb612f44066ac6f4,PodSandboxId:54990a7607809732d80dbb19df04598ee30197286b1d0daf1deaa436f2b03d03,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722278056772614281,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kuber
netes.pod.name: kube-controller-manager-ha-344156,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 30243da5f1a98e23c72326dd278a562e,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24d097bf3e16a2c4b74c82ba78ce7e6eb19b3461d66b573a3d5ba23c5df6a472,PodSandboxId:60907e40ccbbf42ef085bf897b5855fd240e5105657171fe08cadbcd811bcf86,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722278056748446079,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.na
me: kube-apiserver-ha-344156,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d61da37ea38b5727b5710cdad0fc95fd,},Annotations:map[string]string{io.kubernetes.container.hash: c06782b3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=60ee3c00-db5f-4af7-ba72-c5bec5ee99a6 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:41:58 ha-344156 crio[680]: time="2024-07-29 18:41:58.960005240Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=aa1eff7b-9198-431e-9762-410f5f2910e1 name=/runtime.v1.RuntimeService/Version
	Jul 29 18:41:58 ha-344156 crio[680]: time="2024-07-29 18:41:58.960093733Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=aa1eff7b-9198-431e-9762-410f5f2910e1 name=/runtime.v1.RuntimeService/Version
	Jul 29 18:41:58 ha-344156 crio[680]: time="2024-07-29 18:41:58.962025844Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ae016c8d-24ab-4051-ad0d-44c9b451559c name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 18:41:58 ha-344156 crio[680]: time="2024-07-29 18:41:58.963762615Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722278518963738238,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ae016c8d-24ab-4051-ad0d-44c9b451559c name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 18:41:58 ha-344156 crio[680]: time="2024-07-29 18:41:58.964358554Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=64e4bc96-bed2-433d-9918-e54beb088f51 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:41:58 ha-344156 crio[680]: time="2024-07-29 18:41:58.964461108Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=64e4bc96-bed2-433d-9918-e54beb088f51 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:41:58 ha-344156 crio[680]: time="2024-07-29 18:41:58.964879433Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d152449ddedd3a52cbbb9d3acfb3bf85c0e5fa9f81a0c0359f4148d4c603d783,PodSandboxId:98fcabecdf16c058b2c9b2d5b67a175d4427e2426d8c8ecad90fe5e7e61c7166,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722278222484902442,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-9sbfq,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f11563c5-3507-44f0-a103-1e8462494e13,},Annotations:map[string]string{io.kubernetes.container.hash: fb54a535,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a4d13ace439ff6db0bd224c5959b2f1de0aca9190251438b96b230bd76dad67,PodSandboxId:331a36b1d7af6a03c1de960f2f92f9e567bb8d9a89fef7342712caae96969f2c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722278090682794373,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-h5h7v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2b09553-dd59-44ab-a738-41e872defd34,},Annotations:map[string]string{io.kubernetes.container.hash: 59c68fb6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0420967445f9211bb2a8fcd8373564a68efa30847b800b0baa219266c006cc72,PodSandboxId:aee8f75d6b1bbb3fb9c1d5339f35d5df5cf4d72ba4fc03e063c97a60693b2321,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722278090665637409,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: 3ea00f25-122f-4a18-9d69-3606cfddf4d9,},Annotations:map[string]string{io.kubernetes.container.hash: 70731b68,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d0acef755a4a9cf64d3fa80a06a2fb7cd2c2f24d851c814a12dbfd69b8c8ae6,PodSandboxId:3bc8a1c2175a3fcdce5b369132d086e20e9843f84b0af2dec1acd2dc3f598cb2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722278090616073402,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-5slmg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2aca93c-20
9e-48b6-a9a5-692bdf185129,},Annotations:map[string]string{io.kubernetes.container.hash: 48049156,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88c61cb99966582064c98436dabbb6247148296145067505f732961e9dafcf62,PodSandboxId:5312fee5fcd07548b5a87233879d29cd884fb0a7e49ffeffe66817b71a7b2ac9,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CO
NTAINER_RUNNING,CreatedAt:1722278078648045390,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-84nqp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4e18e53-1c72-440f-82b2-bd1b4306af12,},Annotations:map[string]string{io.kubernetes.container.hash: 16293ddd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea6501e2c6d48c68182f6d966404f0d58013e7ee6b2d05e6e8a8de079a01e50b,PodSandboxId:f041673054c6d8c2cbbc857f62b73eafbb56f1089f1a1937ee91d2e3cdb89df9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:172227807
6564427868,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gp282,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abf94303-b608-45b5-ae8b-9288be614a8f,},Annotations:map[string]string{io.kubernetes.container.hash: 6e0cc5f5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df682abbd97678618dabe8275a57ffd1f327de1e734e117a59fd4f520eaf1b79,PodSandboxId:9d199e4c3c06ca4ceb4ada9b80a1fff0ef24acdcf1fc9899060d41b381f9d867,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17222780595
62262847,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-344156,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 70bafc7f0ed9afe903828ea70a6c8bbb,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cea7dd8ee7d180192a5a6562a72a56f86a9a432553225602839d9657f42f95a4,PodSandboxId:ec39a320a672eea9866c1f830b546dc2e1fc8f0a3093acc13b1acd6b5d008317,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722278056834785047,Labels:map[string]string
{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-344156,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d17047d55559cfd90852a780672fb93,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc27c145e7b72db405baaf295995d274d557ba7dbce383424c6297461d859b29,PodSandboxId:5e0320966c0af472e5e166dc8244abd4707674553da0aef0c877b9db5c6b053c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722278056768210507,Labels:map[string]string{io.kubernetes.container.name:
etcd,io.kubernetes.pod.name: etcd-ha-344156,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67610b75999e06603675bc1a64d5ef7d,},Annotations:map[string]string{io.kubernetes.container.hash: 4f9376d5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15f9d79f9c9682c7273de711cee53f9f833182ceb7abdd39bb612f44066ac6f4,PodSandboxId:54990a7607809732d80dbb19df04598ee30197286b1d0daf1deaa436f2b03d03,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722278056772614281,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kuber
netes.pod.name: kube-controller-manager-ha-344156,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 30243da5f1a98e23c72326dd278a562e,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24d097bf3e16a2c4b74c82ba78ce7e6eb19b3461d66b573a3d5ba23c5df6a472,PodSandboxId:60907e40ccbbf42ef085bf897b5855fd240e5105657171fe08cadbcd811bcf86,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722278056748446079,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.na
me: kube-apiserver-ha-344156,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d61da37ea38b5727b5710cdad0fc95fd,},Annotations:map[string]string{io.kubernetes.container.hash: c06782b3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=64e4bc96-bed2-433d-9918-e54beb088f51 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	d152449ddedd3       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   4 minutes ago       Running             busybox                   0                   98fcabecdf16c       busybox-fc5497c4f-9sbfq
	1a4d13ace439f       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      7 minutes ago       Running             coredns                   0                   331a36b1d7af6       coredns-7db6d8ff4d-h5h7v
	0420967445f92       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      7 minutes ago       Running             storage-provisioner       0                   aee8f75d6b1bb       storage-provisioner
	7d0acef755a4a       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      7 minutes ago       Running             coredns                   0                   3bc8a1c2175a3       coredns-7db6d8ff4d-5slmg
	88c61cb999665       docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9    7 minutes ago       Running             kindnet-cni               0                   5312fee5fcd07       kindnet-84nqp
	ea6501e2c6d48       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      7 minutes ago       Running             kube-proxy                0                   f041673054c6d       kube-proxy-gp282
	df682abbd9767       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     7 minutes ago       Running             kube-vip                  0                   9d199e4c3c06c       kube-vip-ha-344156
	cea7dd8ee7d18       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      7 minutes ago       Running             kube-scheduler            0                   ec39a320a672e       kube-scheduler-ha-344156
	15f9d79f9c968       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      7 minutes ago       Running             kube-controller-manager   0                   54990a7607809       kube-controller-manager-ha-344156
	fc27c145e7b72       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      7 minutes ago       Running             etcd                      0                   5e0320966c0af       etcd-ha-344156
	24d097bf3e16a       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      7 minutes ago       Running             kube-apiserver            0                   60907e40ccbbf       kube-apiserver-ha-344156
	
	
	==> coredns [1a4d13ace439ff6db0bd224c5959b2f1de0aca9190251438b96b230bd76dad67] <==
	[INFO] 10.244.0.4:46352 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000127712s
	[INFO] 10.244.0.4:46368 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000105717s
	[INFO] 10.244.1.2:52208 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0002451s
	[INFO] 10.244.1.2:41217 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000133343s
	[INFO] 10.244.1.2:49751 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001332799s
	[INFO] 10.244.1.2:41663 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000101756s
	[INFO] 10.244.2.2:42699 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000103084s
	[INFO] 10.244.2.2:43982 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000096471s
	[INFO] 10.244.2.2:48234 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000064109s
	[INFO] 10.244.2.2:58544 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000127531s
	[INFO] 10.244.2.2:43646 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000097904s
	[INFO] 10.244.0.4:41454 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00007042s
	[INFO] 10.244.1.2:56019 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000130286s
	[INFO] 10.244.1.2:49552 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000419229s
	[INFO] 10.244.1.2:42570 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00019871s
	[INFO] 10.244.1.2:35841 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000085394s
	[INFO] 10.244.2.2:38179 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000154252s
	[INFO] 10.244.2.2:54595 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000095931s
	[INFO] 10.244.0.4:52521 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000102943s
	[INFO] 10.244.0.4:41421 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000122912s
	[INFO] 10.244.1.2:51311 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000262883s
	[INFO] 10.244.1.2:51083 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000108384s
	[INFO] 10.244.2.2:49034 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000138814s
	[INFO] 10.244.2.2:33015 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000141033s
	[INFO] 10.244.2.2:33854 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000124542s
	
	
	==> coredns [7d0acef755a4a9cf64d3fa80a06a2fb7cd2c2f24d851c814a12dbfd69b8c8ae6] <==
	[INFO] 10.244.0.4:48527 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.009667897s
	[INFO] 10.244.1.2:39280 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.000586469s
	[INFO] 10.244.1.2:47729 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001573362s
	[INFO] 10.244.2.2:32959 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001674804s
	[INFO] 10.244.0.4:44607 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000137454s
	[INFO] 10.244.0.4:45474 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003415625s
	[INFO] 10.244.0.4:42044 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000293336s
	[INFO] 10.244.0.4:42246 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000257435s
	[INFO] 10.244.1.2:53039 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001621784s
	[INFO] 10.244.1.2:47789 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000179788s
	[INFO] 10.244.1.2:51271 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000115306s
	[INFO] 10.244.1.2:60584 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000160548s
	[INFO] 10.244.2.2:39080 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000143675s
	[INFO] 10.244.2.2:57667 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001587169s
	[INFO] 10.244.2.2:36002 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.000958528s
	[INFO] 10.244.0.4:46689 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001122s
	[INFO] 10.244.0.4:53528 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000068803s
	[INFO] 10.244.0.4:58879 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00007922s
	[INFO] 10.244.2.2:40671 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000165257s
	[INFO] 10.244.2.2:52385 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000072909s
	[INFO] 10.244.0.4:40200 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000101268s
	[INFO] 10.244.0.4:60214 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000092204s
	[INFO] 10.244.1.2:45394 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000209017s
	[INFO] 10.244.1.2:53252 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000072648s
	[INFO] 10.244.2.2:37567 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000168035s
	
	
	==> describe nodes <==
	Name:               ha-344156
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-344156
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ff26f50543a99741e8a988a34136ffea2b41dbf0
	                    minikube.k8s.io/name=ha-344156
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_29T18_34_23_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 18:34:22 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-344156
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 18:41:51 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 18:37:26 +0000   Mon, 29 Jul 2024 18:34:22 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 18:37:26 +0000   Mon, 29 Jul 2024 18:34:22 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 18:37:26 +0000   Mon, 29 Jul 2024 18:34:22 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 18:37:26 +0000   Mon, 29 Jul 2024 18:34:50 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.225
	  Hostname:    ha-344156
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 be7f4c1228de4ae58c65b2a0531270c4
	  System UUID:                be7f4c12-28de-4ae5-8c65-b2a0531270c4
	  Boot ID:                    14c798b1-a7f8-4045-a5cc-f99e886c885f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-9sbfq              0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m59s
	  kube-system                 coredns-7db6d8ff4d-5slmg             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m23s
	  kube-system                 coredns-7db6d8ff4d-h5h7v             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m23s
	  kube-system                 etcd-ha-344156                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         7m36s
	  kube-system                 kindnet-84nqp                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m23s
	  kube-system                 kube-apiserver-ha-344156             250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m36s
	  kube-system                 kube-controller-manager-ha-344156    200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m36s
	  kube-system                 kube-proxy-gp282                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m23s
	  kube-system                 kube-scheduler-ha-344156             100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m36s
	  kube-system                 kube-vip-ha-344156                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m36s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m23s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 7m22s  kube-proxy       
	  Normal  Starting                 7m36s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  7m36s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m36s  kubelet          Node ha-344156 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m36s  kubelet          Node ha-344156 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m36s  kubelet          Node ha-344156 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           7m24s  node-controller  Node ha-344156 event: Registered Node ha-344156 in Controller
	  Normal  NodeReady                7m9s   kubelet          Node ha-344156 status is now: NodeReady
	  Normal  RegisteredNode           6m18s  node-controller  Node ha-344156 event: Registered Node ha-344156 in Controller
	  Normal  RegisteredNode           5m5s   node-controller  Node ha-344156 event: Registered Node ha-344156 in Controller
	
	
	Name:               ha-344156-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-344156-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ff26f50543a99741e8a988a34136ffea2b41dbf0
	                    minikube.k8s.io/name=ha-344156
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_29T18_35_26_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 18:35:23 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-344156-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 18:38:37 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 29 Jul 2024 18:37:25 +0000   Mon, 29 Jul 2024 18:39:20 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 29 Jul 2024 18:37:25 +0000   Mon, 29 Jul 2024 18:39:20 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 29 Jul 2024 18:37:25 +0000   Mon, 29 Jul 2024 18:39:20 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 29 Jul 2024 18:37:25 +0000   Mon, 29 Jul 2024 18:39:20 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.249
	  Hostname:    ha-344156-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 ae271825042248168626e86031e0e80b
	  System UUID:                ae271825-0422-4816-8626-e86031e0e80b
	  Boot ID:                    a5673abc-82e9-4e7a-95fa-3067a351f12f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-np547                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m59s
	  kube-system                 etcd-ha-344156-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m34s
	  kube-system                 kindnet-b85cc                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m36s
	  kube-system                 kube-apiserver-ha-344156-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m35s
	  kube-system                 kube-controller-manager-ha-344156-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m34s
	  kube-system                 kube-proxy-4p5r9                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m36s
	  kube-system                 kube-scheduler-ha-344156-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m35s
	  kube-system                 kube-vip-ha-344156-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m32s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m31s                  kube-proxy       
	  Normal  NodeAllocatableEnforced  6m37s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m36s (x8 over 6m37s)  kubelet          Node ha-344156-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m36s (x8 over 6m37s)  kubelet          Node ha-344156-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m36s (x7 over 6m37s)  kubelet          Node ha-344156-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m34s                  node-controller  Node ha-344156-m02 event: Registered Node ha-344156-m02 in Controller
	  Normal  RegisteredNode           6m18s                  node-controller  Node ha-344156-m02 event: Registered Node ha-344156-m02 in Controller
	  Normal  RegisteredNode           5m5s                   node-controller  Node ha-344156-m02 event: Registered Node ha-344156-m02 in Controller
	  Normal  NodeNotReady             2m39s                  node-controller  Node ha-344156-m02 status is now: NodeNotReady
	
	
	Name:               ha-344156-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-344156-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ff26f50543a99741e8a988a34136ffea2b41dbf0
	                    minikube.k8s.io/name=ha-344156
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_29T18_36_40_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 18:36:34 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-344156-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 18:41:50 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 18:37:05 +0000   Mon, 29 Jul 2024 18:36:34 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 18:37:05 +0000   Mon, 29 Jul 2024 18:36:34 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 18:37:05 +0000   Mon, 29 Jul 2024 18:36:34 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 18:37:05 +0000   Mon, 29 Jul 2024 18:36:53 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.148
	  Hostname:    ha-344156-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 009a6c7b1b2049db970288d43db02f16
	  System UUID:                009a6c7b-1b20-49db-9702-88d43db02f16
	  Boot ID:                    78078a70-f452-4e76-8a2f-cc9a62ee6c44
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-q7sxh                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m59s
	  kube-system                 etcd-ha-344156-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m23s
	  kube-system                 kindnet-ks57n                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m25s
	  kube-system                 kube-apiserver-ha-344156-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m24s
	  kube-system                 kube-controller-manager-ha-344156-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m24s
	  kube-system                 kube-proxy-w68jl                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m25s
	  kube-system                 kube-scheduler-ha-344156-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m24s
	  kube-system                 kube-vip-ha-344156-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m20s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m17s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  5m25s (x8 over 5m25s)  kubelet          Node ha-344156-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m25s (x8 over 5m25s)  kubelet          Node ha-344156-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m25s (x7 over 5m25s)  kubelet          Node ha-344156-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m25s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m24s                  node-controller  Node ha-344156-m03 event: Registered Node ha-344156-m03 in Controller
	  Normal  RegisteredNode           5m23s                  node-controller  Node ha-344156-m03 event: Registered Node ha-344156-m03 in Controller
	  Normal  RegisteredNode           5m5s                   node-controller  Node ha-344156-m03 event: Registered Node ha-344156-m03 in Controller
	
	
	Name:               ha-344156-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-344156-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ff26f50543a99741e8a988a34136ffea2b41dbf0
	                    minikube.k8s.io/name=ha-344156
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_29T18_37_36_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 18:37:35 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-344156-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 18:41:50 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 18:38:23 +0000   Mon, 29 Jul 2024 18:37:35 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 18:38:23 +0000   Mon, 29 Jul 2024 18:37:35 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 18:38:23 +0000   Mon, 29 Jul 2024 18:37:35 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 18:38:23 +0000   Mon, 29 Jul 2024 18:38:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.9
	  Hostname:    ha-344156-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 cd3c9a6740fc4ec3a7f2c8b9b2357693
	  System UUID:                cd3c9a67-40fc-4ec3-a7f2-c8b9b2357693
	  Boot ID:                    feaae67d-1b81-44aa-891a-7ad9026e22d0
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-c84jp       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m22s
	  kube-system                 kube-proxy-qjzd6    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m17s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m14s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  4m24s (x2 over 4m24s)  kubelet          Node ha-344156-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m24s (x2 over 4m24s)  kubelet          Node ha-344156-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m24s (x2 over 4m24s)  kubelet          Node ha-344156-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m24s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m23s                  node-controller  Node ha-344156-m04 event: Registered Node ha-344156-m04 in Controller
	  Normal  RegisteredNode           4m20s                  node-controller  Node ha-344156-m04 event: Registered Node ha-344156-m04 in Controller
	  Normal  RegisteredNode           4m19s                  node-controller  Node ha-344156-m04 event: Registered Node ha-344156-m04 in Controller
	  Normal  NodeReady                3m36s                  kubelet          Node ha-344156-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Jul29 18:33] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050664] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040228] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.762758] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.350772] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.587329] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Jul29 18:34] systemd-fstab-generator[596]: Ignoring "noauto" option for root device
	[  +0.055622] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.058895] systemd-fstab-generator[608]: Ignoring "noauto" option for root device
	[  +0.187111] systemd-fstab-generator[622]: Ignoring "noauto" option for root device
	[  +0.118732] systemd-fstab-generator[634]: Ignoring "noauto" option for root device
	[  +0.257910] systemd-fstab-generator[664]: Ignoring "noauto" option for root device
	[  +4.135704] systemd-fstab-generator[764]: Ignoring "noauto" option for root device
	[  +4.319915] systemd-fstab-generator[945]: Ignoring "noauto" option for root device
	[  +0.063539] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.051986] systemd-fstab-generator[1361]: Ignoring "noauto" option for root device
	[  +0.074788] kauditd_printk_skb: 79 callbacks suppressed
	[  +6.534370] kauditd_printk_skb: 18 callbacks suppressed
	[ +21.052219] kauditd_printk_skb: 38 callbacks suppressed
	[Jul29 18:35] kauditd_printk_skb: 24 callbacks suppressed
	
	
	==> etcd [fc27c145e7b72db405baaf295995d274d557ba7dbce383424c6297461d859b29] <==
	{"level":"warn","ts":"2024-07-29T18:41:58.836859Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fb0a52f06b768c2d","from":"fb0a52f06b768c2d","remote-peer-id":"9d70d498f3feaf66","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T18:41:58.935866Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fb0a52f06b768c2d","from":"fb0a52f06b768c2d","remote-peer-id":"9d70d498f3feaf66","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T18:41:59.215502Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fb0a52f06b768c2d","from":"fb0a52f06b768c2d","remote-peer-id":"9d70d498f3feaf66","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T18:41:59.222004Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fb0a52f06b768c2d","from":"fb0a52f06b768c2d","remote-peer-id":"9d70d498f3feaf66","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T18:41:59.225508Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fb0a52f06b768c2d","from":"fb0a52f06b768c2d","remote-peer-id":"9d70d498f3feaf66","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T18:41:59.237444Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fb0a52f06b768c2d","from":"fb0a52f06b768c2d","remote-peer-id":"9d70d498f3feaf66","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T18:41:59.238859Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fb0a52f06b768c2d","from":"fb0a52f06b768c2d","remote-peer-id":"9d70d498f3feaf66","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T18:41:59.248612Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fb0a52f06b768c2d","from":"fb0a52f06b768c2d","remote-peer-id":"9d70d498f3feaf66","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T18:41:59.255229Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fb0a52f06b768c2d","from":"fb0a52f06b768c2d","remote-peer-id":"9d70d498f3feaf66","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T18:41:59.258983Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fb0a52f06b768c2d","from":"fb0a52f06b768c2d","remote-peer-id":"9d70d498f3feaf66","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T18:41:59.262407Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fb0a52f06b768c2d","from":"fb0a52f06b768c2d","remote-peer-id":"9d70d498f3feaf66","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T18:41:59.273806Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fb0a52f06b768c2d","from":"fb0a52f06b768c2d","remote-peer-id":"9d70d498f3feaf66","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T18:41:59.283237Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fb0a52f06b768c2d","from":"fb0a52f06b768c2d","remote-peer-id":"9d70d498f3feaf66","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T18:41:59.2897Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fb0a52f06b768c2d","from":"fb0a52f06b768c2d","remote-peer-id":"9d70d498f3feaf66","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T18:41:59.293547Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fb0a52f06b768c2d","from":"fb0a52f06b768c2d","remote-peer-id":"9d70d498f3feaf66","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T18:41:59.296248Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fb0a52f06b768c2d","from":"fb0a52f06b768c2d","remote-peer-id":"9d70d498f3feaf66","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T18:41:59.302804Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fb0a52f06b768c2d","from":"fb0a52f06b768c2d","remote-peer-id":"9d70d498f3feaf66","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T18:41:59.309414Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fb0a52f06b768c2d","from":"fb0a52f06b768c2d","remote-peer-id":"9d70d498f3feaf66","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T18:41:59.315271Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fb0a52f06b768c2d","from":"fb0a52f06b768c2d","remote-peer-id":"9d70d498f3feaf66","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T18:41:59.318147Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fb0a52f06b768c2d","from":"fb0a52f06b768c2d","remote-peer-id":"9d70d498f3feaf66","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T18:41:59.320403Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fb0a52f06b768c2d","from":"fb0a52f06b768c2d","remote-peer-id":"9d70d498f3feaf66","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T18:41:59.325128Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fb0a52f06b768c2d","from":"fb0a52f06b768c2d","remote-peer-id":"9d70d498f3feaf66","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T18:41:59.331247Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fb0a52f06b768c2d","from":"fb0a52f06b768c2d","remote-peer-id":"9d70d498f3feaf66","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T18:41:59.336454Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fb0a52f06b768c2d","from":"fb0a52f06b768c2d","remote-peer-id":"9d70d498f3feaf66","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T18:41:59.337196Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fb0a52f06b768c2d","from":"fb0a52f06b768c2d","remote-peer-id":"9d70d498f3feaf66","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 18:41:59 up 8 min,  0 users,  load average: 0.28, 0.34, 0.18
	Linux ha-344156 5.10.207 #1 SMP Tue Jul 23 04:25:44 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [88c61cb99966582064c98436dabbb6247148296145067505f732961e9dafcf62] <==
	I0729 18:41:19.807163       1 main.go:322] Node ha-344156-m04 has CIDR [10.244.3.0/24] 
	I0729 18:41:29.807835       1 main.go:295] Handling node with IPs: map[192.168.39.225:{}]
	I0729 18:41:29.807918       1 main.go:299] handling current node
	I0729 18:41:29.807942       1 main.go:295] Handling node with IPs: map[192.168.39.249:{}]
	I0729 18:41:29.808008       1 main.go:322] Node ha-344156-m02 has CIDR [10.244.1.0/24] 
	I0729 18:41:29.808248       1 main.go:295] Handling node with IPs: map[192.168.39.148:{}]
	I0729 18:41:29.808373       1 main.go:322] Node ha-344156-m03 has CIDR [10.244.2.0/24] 
	I0729 18:41:29.808501       1 main.go:295] Handling node with IPs: map[192.168.39.9:{}]
	I0729 18:41:29.808531       1 main.go:322] Node ha-344156-m04 has CIDR [10.244.3.0/24] 
	I0729 18:41:39.799775       1 main.go:295] Handling node with IPs: map[192.168.39.225:{}]
	I0729 18:41:39.799867       1 main.go:299] handling current node
	I0729 18:41:39.799917       1 main.go:295] Handling node with IPs: map[192.168.39.249:{}]
	I0729 18:41:39.799934       1 main.go:322] Node ha-344156-m02 has CIDR [10.244.1.0/24] 
	I0729 18:41:39.800115       1 main.go:295] Handling node with IPs: map[192.168.39.148:{}]
	I0729 18:41:39.800140       1 main.go:322] Node ha-344156-m03 has CIDR [10.244.2.0/24] 
	I0729 18:41:39.800198       1 main.go:295] Handling node with IPs: map[192.168.39.9:{}]
	I0729 18:41:39.800217       1 main.go:322] Node ha-344156-m04 has CIDR [10.244.3.0/24] 
	I0729 18:41:49.799726       1 main.go:295] Handling node with IPs: map[192.168.39.225:{}]
	I0729 18:41:49.799866       1 main.go:299] handling current node
	I0729 18:41:49.799930       1 main.go:295] Handling node with IPs: map[192.168.39.249:{}]
	I0729 18:41:49.799968       1 main.go:322] Node ha-344156-m02 has CIDR [10.244.1.0/24] 
	I0729 18:41:49.800142       1 main.go:295] Handling node with IPs: map[192.168.39.148:{}]
	I0729 18:41:49.800164       1 main.go:322] Node ha-344156-m03 has CIDR [10.244.2.0/24] 
	I0729 18:41:49.800233       1 main.go:295] Handling node with IPs: map[192.168.39.9:{}]
	I0729 18:41:49.800252       1 main.go:322] Node ha-344156-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [24d097bf3e16a2c4b74c82ba78ce7e6eb19b3461d66b573a3d5ba23c5df6a472] <==
	I0729 18:34:22.992198       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0729 18:34:23.013687       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0729 18:34:23.203550       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0729 18:34:36.052177       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0729 18:34:36.129994       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E0729 18:35:23.876523       1 writers.go:122] apiserver was unable to write a JSON response: http: Handler timeout
	E0729 18:35:23.876591       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	E0729 18:35:23.876632       1 finisher.go:175] FinishRequest: post-timeout activity - time-elapsed: 8.407µs, panicked: false, err: context canceled, panic-reason: <nil>
	E0729 18:35:23.878428       1 writers.go:135] apiserver was unable to write a fallback JSON response: http: Handler timeout
	E0729 18:35:23.878578       1 timeout.go:142] post-timeout activity - time-elapsed: 2.219047ms, POST "/api/v1/namespaces/kube-system/events" result: <nil>
	E0729 18:37:03.871470       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:37342: use of closed network connection
	E0729 18:37:04.057833       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:37356: use of closed network connection
	E0729 18:37:04.246466       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:37382: use of closed network connection
	E0729 18:37:04.432648       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:37402: use of closed network connection
	E0729 18:37:04.620133       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:37426: use of closed network connection
	E0729 18:37:04.808658       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:37448: use of closed network connection
	E0729 18:37:04.980084       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:37468: use of closed network connection
	E0729 18:37:05.158588       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:37488: use of closed network connection
	E0729 18:37:05.354557       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:37510: use of closed network connection
	E0729 18:37:05.641690       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:37528: use of closed network connection
	E0729 18:37:05.820252       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:37552: use of closed network connection
	E0729 18:37:06.009065       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:37580: use of closed network connection
	E0729 18:37:06.380684       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:37632: use of closed network connection
	E0729 18:37:06.570438       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:37652: use of closed network connection
	W0729 18:39:01.543884       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.148 192.168.39.225]
	
	
	==> kube-controller-manager [15f9d79f9c9682c7273de711cee53f9f833182ceb7abdd39bb612f44066ac6f4] <==
	I0729 18:36:35.300078       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-344156-m03"
	I0729 18:37:01.026156       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="109.740018ms"
	I0729 18:37:01.059485       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="33.260292ms"
	I0729 18:37:01.256428       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="196.727077ms"
	I0729 18:37:01.342248       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="83.174449ms"
	I0729 18:37:01.370811       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="28.436309ms"
	I0729 18:37:01.370936       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="82.485µs"
	I0729 18:37:01.456236       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="28.75739ms"
	I0729 18:37:01.456678       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="111.365µs"
	I0729 18:37:01.513939       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="16.934945ms"
	I0729 18:37:01.515211       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="186.88µs"
	I0729 18:37:02.720924       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="13.996987ms"
	I0729 18:37:02.721436       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="70.992µs"
	I0729 18:37:02.779607       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="12.03376ms"
	I0729 18:37:02.779677       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="43.309µs"
	I0729 18:37:03.388871       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="10.372464ms"
	I0729 18:37:03.389954       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="98.586µs"
	I0729 18:37:36.003494       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-344156-m04\" does not exist"
	E0729 18:37:36.016579       1 certificate_controller.go:146] Sync csr-82bwg failed with : error updating signature for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io "csr-82bwg": the object has been modified; please apply your changes to the latest version and try again
	I0729 18:37:36.050956       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-344156-m04" podCIDRs=["10.244.3.0/24"]
	I0729 18:37:40.311641       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-344156-m04"
	I0729 18:38:23.223359       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-344156-m04"
	I0729 18:39:20.223021       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-344156-m04"
	I0729 18:39:20.274448       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="13.549805ms"
	I0729 18:39:20.274546       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="48.441µs"
	
	
	==> kube-proxy [ea6501e2c6d48c68182f6d966404f0d58013e7ee6b2d05e6e8a8de079a01e50b] <==
	I0729 18:34:36.768861       1 server_linux.go:69] "Using iptables proxy"
	I0729 18:34:36.822886       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.225"]
	I0729 18:34:36.879013       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0729 18:34:36.879048       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0729 18:34:36.879064       1 server_linux.go:165] "Using iptables Proxier"
	I0729 18:34:36.882396       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0729 18:34:36.883763       1 server.go:872] "Version info" version="v1.30.3"
	I0729 18:34:36.883815       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 18:34:36.886898       1 config.go:192] "Starting service config controller"
	I0729 18:34:36.889509       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0729 18:34:36.889619       1 config.go:101] "Starting endpoint slice config controller"
	I0729 18:34:36.889655       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0729 18:34:36.891706       1 config.go:319] "Starting node config controller"
	I0729 18:34:36.891740       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0729 18:34:36.989964       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0729 18:34:36.990034       1 shared_informer.go:320] Caches are synced for service config
	I0729 18:34:36.991946       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [cea7dd8ee7d180192a5a6562a72a56f86a9a432553225602839d9657f42f95a4] <==
	I0729 18:37:01.014413       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-9sbfq" node="ha-344156"
	E0729 18:37:01.014649       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-np547\": pod busybox-fc5497c4f-np547 is already assigned to node \"ha-344156-m02\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-np547" node="ha-344156-m02"
	E0729 18:37:01.014689       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 362a4dc2-ca83-4e79-a3a8-58d174f4c6c9(default/busybox-fc5497c4f-np547) wasn't assumed so cannot be forgotten" pod="default/busybox-fc5497c4f-np547"
	E0729 18:37:01.014706       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-np547\": pod busybox-fc5497c4f-np547 is already assigned to node \"ha-344156-m02\"" pod="default/busybox-fc5497c4f-np547"
	I0729 18:37:01.014885       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-np547" node="ha-344156-m02"
	E0729 18:37:36.090019       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-wd88n\": pod kube-proxy-wd88n is already assigned to node \"ha-344156-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-wd88n" node="ha-344156-m04"
	E0729 18:37:36.090421       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-qb94z\": pod kindnet-qb94z is already assigned to node \"ha-344156-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-qb94z" node="ha-344156-m04"
	E0729 18:37:36.092638       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod b96b5fac-f230-4c67-a7a5-bdf3591ca949(kube-system/kube-proxy-wd88n) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-wd88n"
	E0729 18:37:36.092741       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod bd8366ab-746c-4ca4-b11c-bf9081fbcf7c(kube-system/kindnet-qb94z) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-qb94z"
	E0729 18:37:36.092958       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-qb94z\": pod kindnet-qb94z is already assigned to node \"ha-344156-m04\"" pod="kube-system/kindnet-qb94z"
	I0729 18:37:36.093026       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-qb94z" node="ha-344156-m04"
	E0729 18:37:36.092848       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-wd88n\": pod kube-proxy-wd88n is already assigned to node \"ha-344156-m04\"" pod="kube-system/kube-proxy-wd88n"
	I0729 18:37:36.097444       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-wd88n" node="ha-344156-m04"
	E0729 18:37:36.231093       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-4q27q\": pod kindnet-4q27q is already assigned to node \"ha-344156-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-4q27q" node="ha-344156-m04"
	E0729 18:37:36.231689       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 5aa608fd-1380-4d1f-94ca-56974da8d2c9(kube-system/kindnet-4q27q) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-4q27q"
	E0729 18:37:36.231884       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-4q27q\": pod kindnet-4q27q is already assigned to node \"ha-344156-m04\"" pod="kube-system/kindnet-4q27q"
	I0729 18:37:36.232096       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-4q27q" node="ha-344156-m04"
	E0729 18:37:36.231370       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-rdbxm\": pod kube-proxy-rdbxm is already assigned to node \"ha-344156-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-rdbxm" node="ha-344156-m04"
	E0729 18:37:36.232528       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod bb08b275-293b-47c4-91ac-2281cd4eee08(kube-system/kube-proxy-rdbxm) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-rdbxm"
	E0729 18:37:36.232576       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-rdbxm\": pod kube-proxy-rdbxm is already assigned to node \"ha-344156-m04\"" pod="kube-system/kube-proxy-rdbxm"
	I0729 18:37:36.232602       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-rdbxm" node="ha-344156-m04"
	E0729 18:37:38.016658       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-mg7rg\": pod kindnet-mg7rg is already assigned to node \"ha-344156-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-mg7rg" node="ha-344156-m04"
	E0729 18:37:38.018543       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod f6aa64ee-b737-4975-9d11-00d78dbc3fe6(kube-system/kindnet-mg7rg) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-mg7rg"
	E0729 18:37:38.020350       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-mg7rg\": pod kindnet-mg7rg is already assigned to node \"ha-344156-m04\"" pod="kube-system/kindnet-mg7rg"
	I0729 18:37:38.020440       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-mg7rg" node="ha-344156-m04"
	
	
	==> kubelet <==
	Jul 29 18:37:23 ha-344156 kubelet[1368]: E0729 18:37:23.118090    1368 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 18:37:23 ha-344156 kubelet[1368]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 18:37:23 ha-344156 kubelet[1368]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 18:37:23 ha-344156 kubelet[1368]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 18:37:23 ha-344156 kubelet[1368]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 18:38:23 ha-344156 kubelet[1368]: E0729 18:38:23.120861    1368 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 18:38:23 ha-344156 kubelet[1368]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 18:38:23 ha-344156 kubelet[1368]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 18:38:23 ha-344156 kubelet[1368]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 18:38:23 ha-344156 kubelet[1368]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 18:39:23 ha-344156 kubelet[1368]: E0729 18:39:23.117779    1368 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 18:39:23 ha-344156 kubelet[1368]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 18:39:23 ha-344156 kubelet[1368]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 18:39:23 ha-344156 kubelet[1368]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 18:39:23 ha-344156 kubelet[1368]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 18:40:23 ha-344156 kubelet[1368]: E0729 18:40:23.119914    1368 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 18:40:23 ha-344156 kubelet[1368]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 18:40:23 ha-344156 kubelet[1368]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 18:40:23 ha-344156 kubelet[1368]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 18:40:23 ha-344156 kubelet[1368]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 18:41:23 ha-344156 kubelet[1368]: E0729 18:41:23.118050    1368 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 18:41:23 ha-344156 kubelet[1368]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 18:41:23 ha-344156 kubelet[1368]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 18:41:23 ha-344156 kubelet[1368]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 18:41:23 ha-344156 kubelet[1368]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-344156 -n ha-344156
helpers_test.go:261: (dbg) Run:  kubectl --context ha-344156 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (51.78s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartClusterKeepsNodes (378.52s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-344156 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-344156 -v=7 --alsologtostderr
E0729 18:43:00.968760 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/functional-728029/client.crt: no such file or directory
E0729 18:43:28.650813 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/functional-728029/client.crt: no such file or directory
ha_test.go:462: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p ha-344156 -v=7 --alsologtostderr: exit status 82 (2m1.89239247s)

                                                
                                                
-- stdout --
	* Stopping node "ha-344156-m04"  ...
	* Stopping node "ha-344156-m03"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 18:42:00.833918 1078992 out.go:291] Setting OutFile to fd 1 ...
	I0729 18:42:00.834049 1078992 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 18:42:00.834060 1078992 out.go:304] Setting ErrFile to fd 2...
	I0729 18:42:00.834064 1078992 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 18:42:00.834254 1078992 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19312-1055011/.minikube/bin
	I0729 18:42:00.834544 1078992 out.go:298] Setting JSON to false
	I0729 18:42:00.834645 1078992 mustload.go:65] Loading cluster: ha-344156
	I0729 18:42:00.835094 1078992 config.go:182] Loaded profile config "ha-344156": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 18:42:00.835190 1078992 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156/config.json ...
	I0729 18:42:00.835390 1078992 mustload.go:65] Loading cluster: ha-344156
	I0729 18:42:00.835575 1078992 config.go:182] Loaded profile config "ha-344156": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 18:42:00.835606 1078992 stop.go:39] StopHost: ha-344156-m04
	I0729 18:42:00.836159 1078992 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:42:00.836224 1078992 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:42:00.851384 1078992 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33313
	I0729 18:42:00.851904 1078992 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:42:00.852536 1078992 main.go:141] libmachine: Using API Version  1
	I0729 18:42:00.852565 1078992 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:42:00.852874 1078992 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:42:00.855225 1078992 out.go:177] * Stopping node "ha-344156-m04"  ...
	I0729 18:42:00.856513 1078992 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0729 18:42:00.856541 1078992 main.go:141] libmachine: (ha-344156-m04) Calling .DriverName
	I0729 18:42:00.856777 1078992 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0729 18:42:00.856808 1078992 main.go:141] libmachine: (ha-344156-m04) Calling .GetSSHHostname
	I0729 18:42:00.859385 1078992 main.go:141] libmachine: (ha-344156-m04) DBG | domain ha-344156-m04 has defined MAC address 52:54:00:8a:8a:b9 in network mk-ha-344156
	I0729 18:42:00.859865 1078992 main.go:141] libmachine: (ha-344156-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:8a:b9", ip: ""} in network mk-ha-344156: {Iface:virbr1 ExpiryTime:2024-07-29 19:37:22 +0000 UTC Type:0 Mac:52:54:00:8a:8a:b9 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:ha-344156-m04 Clientid:01:52:54:00:8a:8a:b9}
	I0729 18:42:00.859890 1078992 main.go:141] libmachine: (ha-344156-m04) DBG | domain ha-344156-m04 has defined IP address 192.168.39.9 and MAC address 52:54:00:8a:8a:b9 in network mk-ha-344156
	I0729 18:42:00.860148 1078992 main.go:141] libmachine: (ha-344156-m04) Calling .GetSSHPort
	I0729 18:42:00.860326 1078992 main.go:141] libmachine: (ha-344156-m04) Calling .GetSSHKeyPath
	I0729 18:42:00.860472 1078992 main.go:141] libmachine: (ha-344156-m04) Calling .GetSSHUsername
	I0729 18:42:00.860618 1078992 sshutil.go:53] new ssh client: &{IP:192.168.39.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/ha-344156-m04/id_rsa Username:docker}
	I0729 18:42:00.950928 1078992 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0729 18:42:01.003915 1078992 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0729 18:42:01.058585 1078992 main.go:141] libmachine: Stopping "ha-344156-m04"...
	I0729 18:42:01.058613 1078992 main.go:141] libmachine: (ha-344156-m04) Calling .GetState
	I0729 18:42:01.060241 1078992 main.go:141] libmachine: (ha-344156-m04) Calling .Stop
	I0729 18:42:01.063593 1078992 main.go:141] libmachine: (ha-344156-m04) Waiting for machine to stop 0/120
	I0729 18:42:02.272511 1078992 main.go:141] libmachine: (ha-344156-m04) Calling .GetState
	I0729 18:42:02.273891 1078992 main.go:141] libmachine: Machine "ha-344156-m04" was stopped.
	I0729 18:42:02.273939 1078992 stop.go:75] duration metric: took 1.417401492s to stop
	I0729 18:42:02.273971 1078992 stop.go:39] StopHost: ha-344156-m03
	I0729 18:42:02.274250 1078992 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:42:02.274286 1078992 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:42:02.289659 1078992 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38745
	I0729 18:42:02.290108 1078992 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:42:02.290641 1078992 main.go:141] libmachine: Using API Version  1
	I0729 18:42:02.290674 1078992 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:42:02.291037 1078992 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:42:02.292892 1078992 out.go:177] * Stopping node "ha-344156-m03"  ...
	I0729 18:42:02.293992 1078992 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0729 18:42:02.294012 1078992 main.go:141] libmachine: (ha-344156-m03) Calling .DriverName
	I0729 18:42:02.294224 1078992 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0729 18:42:02.294249 1078992 main.go:141] libmachine: (ha-344156-m03) Calling .GetSSHHostname
	I0729 18:42:02.296797 1078992 main.go:141] libmachine: (ha-344156-m03) DBG | domain ha-344156-m03 has defined MAC address 52:54:00:49:5c:73 in network mk-ha-344156
	I0729 18:42:02.297213 1078992 main.go:141] libmachine: (ha-344156-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:5c:73", ip: ""} in network mk-ha-344156: {Iface:virbr1 ExpiryTime:2024-07-29 19:36:00 +0000 UTC Type:0 Mac:52:54:00:49:5c:73 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:ha-344156-m03 Clientid:01:52:54:00:49:5c:73}
	I0729 18:42:02.297243 1078992 main.go:141] libmachine: (ha-344156-m03) DBG | domain ha-344156-m03 has defined IP address 192.168.39.148 and MAC address 52:54:00:49:5c:73 in network mk-ha-344156
	I0729 18:42:02.297371 1078992 main.go:141] libmachine: (ha-344156-m03) Calling .GetSSHPort
	I0729 18:42:02.297525 1078992 main.go:141] libmachine: (ha-344156-m03) Calling .GetSSHKeyPath
	I0729 18:42:02.297691 1078992 main.go:141] libmachine: (ha-344156-m03) Calling .GetSSHUsername
	I0729 18:42:02.297811 1078992 sshutil.go:53] new ssh client: &{IP:192.168.39.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/ha-344156-m03/id_rsa Username:docker}
	I0729 18:42:02.382122 1078992 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0729 18:42:02.434693 1078992 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0729 18:42:02.490357 1078992 main.go:141] libmachine: Stopping "ha-344156-m03"...
	I0729 18:42:02.490390 1078992 main.go:141] libmachine: (ha-344156-m03) Calling .GetState
	I0729 18:42:02.491997 1078992 main.go:141] libmachine: (ha-344156-m03) Calling .Stop
	I0729 18:42:02.495604 1078992 main.go:141] libmachine: (ha-344156-m03) Waiting for machine to stop 0/120
	I0729 18:42:03.496831 1078992 main.go:141] libmachine: (ha-344156-m03) Waiting for machine to stop 1/120
	I0729 18:42:04.498242 1078992 main.go:141] libmachine: (ha-344156-m03) Waiting for machine to stop 2/120
	I0729 18:42:05.499585 1078992 main.go:141] libmachine: (ha-344156-m03) Waiting for machine to stop 3/120
	I0729 18:42:06.501109 1078992 main.go:141] libmachine: (ha-344156-m03) Waiting for machine to stop 4/120
	I0729 18:42:07.503063 1078992 main.go:141] libmachine: (ha-344156-m03) Waiting for machine to stop 5/120
	I0729 18:42:08.504240 1078992 main.go:141] libmachine: (ha-344156-m03) Waiting for machine to stop 6/120
	I0729 18:42:09.505689 1078992 main.go:141] libmachine: (ha-344156-m03) Waiting for machine to stop 7/120
	I0729 18:42:10.506886 1078992 main.go:141] libmachine: (ha-344156-m03) Waiting for machine to stop 8/120
	I0729 18:42:11.508452 1078992 main.go:141] libmachine: (ha-344156-m03) Waiting for machine to stop 9/120
	I0729 18:42:12.510529 1078992 main.go:141] libmachine: (ha-344156-m03) Waiting for machine to stop 10/120
	I0729 18:42:13.512108 1078992 main.go:141] libmachine: (ha-344156-m03) Waiting for machine to stop 11/120
	I0729 18:42:14.513411 1078992 main.go:141] libmachine: (ha-344156-m03) Waiting for machine to stop 12/120
	I0729 18:42:15.514873 1078992 main.go:141] libmachine: (ha-344156-m03) Waiting for machine to stop 13/120
	I0729 18:42:16.516469 1078992 main.go:141] libmachine: (ha-344156-m03) Waiting for machine to stop 14/120
	I0729 18:42:17.518042 1078992 main.go:141] libmachine: (ha-344156-m03) Waiting for machine to stop 15/120
	I0729 18:42:18.519344 1078992 main.go:141] libmachine: (ha-344156-m03) Waiting for machine to stop 16/120
	I0729 18:42:19.520628 1078992 main.go:141] libmachine: (ha-344156-m03) Waiting for machine to stop 17/120
	I0729 18:42:20.522234 1078992 main.go:141] libmachine: (ha-344156-m03) Waiting for machine to stop 18/120
	I0729 18:42:21.523925 1078992 main.go:141] libmachine: (ha-344156-m03) Waiting for machine to stop 19/120
	I0729 18:42:22.526240 1078992 main.go:141] libmachine: (ha-344156-m03) Waiting for machine to stop 20/120
	I0729 18:42:23.527577 1078992 main.go:141] libmachine: (ha-344156-m03) Waiting for machine to stop 21/120
	I0729 18:42:24.529191 1078992 main.go:141] libmachine: (ha-344156-m03) Waiting for machine to stop 22/120
	I0729 18:42:25.530626 1078992 main.go:141] libmachine: (ha-344156-m03) Waiting for machine to stop 23/120
	I0729 18:42:26.532026 1078992 main.go:141] libmachine: (ha-344156-m03) Waiting for machine to stop 24/120
	I0729 18:42:27.533523 1078992 main.go:141] libmachine: (ha-344156-m03) Waiting for machine to stop 25/120
	I0729 18:42:28.535088 1078992 main.go:141] libmachine: (ha-344156-m03) Waiting for machine to stop 26/120
	I0729 18:42:29.536425 1078992 main.go:141] libmachine: (ha-344156-m03) Waiting for machine to stop 27/120
	I0729 18:42:30.537995 1078992 main.go:141] libmachine: (ha-344156-m03) Waiting for machine to stop 28/120
	I0729 18:42:31.539843 1078992 main.go:141] libmachine: (ha-344156-m03) Waiting for machine to stop 29/120
	I0729 18:42:32.542261 1078992 main.go:141] libmachine: (ha-344156-m03) Waiting for machine to stop 30/120
	I0729 18:42:33.543891 1078992 main.go:141] libmachine: (ha-344156-m03) Waiting for machine to stop 31/120
	I0729 18:42:34.545341 1078992 main.go:141] libmachine: (ha-344156-m03) Waiting for machine to stop 32/120
	I0729 18:42:35.547384 1078992 main.go:141] libmachine: (ha-344156-m03) Waiting for machine to stop 33/120
	I0729 18:42:36.548702 1078992 main.go:141] libmachine: (ha-344156-m03) Waiting for machine to stop 34/120
	I0729 18:42:37.550783 1078992 main.go:141] libmachine: (ha-344156-m03) Waiting for machine to stop 35/120
	I0729 18:42:38.552064 1078992 main.go:141] libmachine: (ha-344156-m03) Waiting for machine to stop 36/120
	I0729 18:42:39.553460 1078992 main.go:141] libmachine: (ha-344156-m03) Waiting for machine to stop 37/120
	I0729 18:42:40.554801 1078992 main.go:141] libmachine: (ha-344156-m03) Waiting for machine to stop 38/120
	I0729 18:42:41.556150 1078992 main.go:141] libmachine: (ha-344156-m03) Waiting for machine to stop 39/120
	I0729 18:42:42.557783 1078992 main.go:141] libmachine: (ha-344156-m03) Waiting for machine to stop 40/120
	I0729 18:42:43.559091 1078992 main.go:141] libmachine: (ha-344156-m03) Waiting for machine to stop 41/120
	I0729 18:42:44.560516 1078992 main.go:141] libmachine: (ha-344156-m03) Waiting for machine to stop 42/120
	I0729 18:42:45.561789 1078992 main.go:141] libmachine: (ha-344156-m03) Waiting for machine to stop 43/120
	I0729 18:42:46.563284 1078992 main.go:141] libmachine: (ha-344156-m03) Waiting for machine to stop 44/120
	I0729 18:42:47.565116 1078992 main.go:141] libmachine: (ha-344156-m03) Waiting for machine to stop 45/120
	I0729 18:42:48.566564 1078992 main.go:141] libmachine: (ha-344156-m03) Waiting for machine to stop 46/120
	I0729 18:42:49.567992 1078992 main.go:141] libmachine: (ha-344156-m03) Waiting for machine to stop 47/120
	I0729 18:42:50.569475 1078992 main.go:141] libmachine: (ha-344156-m03) Waiting for machine to stop 48/120
	I0729 18:42:51.570915 1078992 main.go:141] libmachine: (ha-344156-m03) Waiting for machine to stop 49/120
	I0729 18:42:52.572645 1078992 main.go:141] libmachine: (ha-344156-m03) Waiting for machine to stop 50/120
	I0729 18:42:53.574390 1078992 main.go:141] libmachine: (ha-344156-m03) Waiting for machine to stop 51/120
	I0729 18:42:54.575670 1078992 main.go:141] libmachine: (ha-344156-m03) Waiting for machine to stop 52/120
	I0729 18:42:55.577754 1078992 main.go:141] libmachine: (ha-344156-m03) Waiting for machine to stop 53/120
	I0729 18:42:56.579299 1078992 main.go:141] libmachine: (ha-344156-m03) Waiting for machine to stop 54/120
	I0729 18:42:57.580942 1078992 main.go:141] libmachine: (ha-344156-m03) Waiting for machine to stop 55/120
	I0729 18:42:58.582329 1078992 main.go:141] libmachine: (ha-344156-m03) Waiting for machine to stop 56/120
	I0729 18:42:59.583670 1078992 main.go:141] libmachine: (ha-344156-m03) Waiting for machine to stop 57/120
	I0729 18:43:00.584924 1078992 main.go:141] libmachine: (ha-344156-m03) Waiting for machine to stop 58/120
	I0729 18:43:01.586110 1078992 main.go:141] libmachine: (ha-344156-m03) Waiting for machine to stop 59/120
	I0729 18:43:02.588034 1078992 main.go:141] libmachine: (ha-344156-m03) Waiting for machine to stop 60/120
	I0729 18:43:03.589328 1078992 main.go:141] libmachine: (ha-344156-m03) Waiting for machine to stop 61/120
	I0729 18:43:04.590541 1078992 main.go:141] libmachine: (ha-344156-m03) Waiting for machine to stop 62/120
	I0729 18:43:05.591877 1078992 main.go:141] libmachine: (ha-344156-m03) Waiting for machine to stop 63/120
	I0729 18:43:06.593330 1078992 main.go:141] libmachine: (ha-344156-m03) Waiting for machine to stop 64/120
	I0729 18:43:07.594865 1078992 main.go:141] libmachine: (ha-344156-m03) Waiting for machine to stop 65/120
	I0729 18:43:08.596327 1078992 main.go:141] libmachine: (ha-344156-m03) Waiting for machine to stop 66/120
	I0729 18:43:09.597573 1078992 main.go:141] libmachine: (ha-344156-m03) Waiting for machine to stop 67/120
	I0729 18:43:10.598943 1078992 main.go:141] libmachine: (ha-344156-m03) Waiting for machine to stop 68/120
	I0729 18:43:11.600303 1078992 main.go:141] libmachine: (ha-344156-m03) Waiting for machine to stop 69/120
	I0729 18:43:12.601849 1078992 main.go:141] libmachine: (ha-344156-m03) Waiting for machine to stop 70/120
	I0729 18:43:13.603083 1078992 main.go:141] libmachine: (ha-344156-m03) Waiting for machine to stop 71/120
	I0729 18:43:14.605203 1078992 main.go:141] libmachine: (ha-344156-m03) Waiting for machine to stop 72/120
	I0729 18:43:15.606397 1078992 main.go:141] libmachine: (ha-344156-m03) Waiting for machine to stop 73/120
	I0729 18:43:16.607786 1078992 main.go:141] libmachine: (ha-344156-m03) Waiting for machine to stop 74/120
	I0729 18:43:17.609642 1078992 main.go:141] libmachine: (ha-344156-m03) Waiting for machine to stop 75/120
	I0729 18:43:18.610956 1078992 main.go:141] libmachine: (ha-344156-m03) Waiting for machine to stop 76/120
	I0729 18:43:19.612292 1078992 main.go:141] libmachine: (ha-344156-m03) Waiting for machine to stop 77/120
	I0729 18:43:20.613531 1078992 main.go:141] libmachine: (ha-344156-m03) Waiting for machine to stop 78/120
	I0729 18:43:21.615857 1078992 main.go:141] libmachine: (ha-344156-m03) Waiting for machine to stop 79/120
	I0729 18:43:22.617318 1078992 main.go:141] libmachine: (ha-344156-m03) Waiting for machine to stop 80/120
	I0729 18:43:23.618509 1078992 main.go:141] libmachine: (ha-344156-m03) Waiting for machine to stop 81/120
	I0729 18:43:24.619919 1078992 main.go:141] libmachine: (ha-344156-m03) Waiting for machine to stop 82/120
	I0729 18:43:25.621098 1078992 main.go:141] libmachine: (ha-344156-m03) Waiting for machine to stop 83/120
	I0729 18:43:26.622541 1078992 main.go:141] libmachine: (ha-344156-m03) Waiting for machine to stop 84/120
	I0729 18:43:27.624342 1078992 main.go:141] libmachine: (ha-344156-m03) Waiting for machine to stop 85/120
	I0729 18:43:28.625506 1078992 main.go:141] libmachine: (ha-344156-m03) Waiting for machine to stop 86/120
	I0729 18:43:29.626817 1078992 main.go:141] libmachine: (ha-344156-m03) Waiting for machine to stop 87/120
	I0729 18:43:30.628270 1078992 main.go:141] libmachine: (ha-344156-m03) Waiting for machine to stop 88/120
	I0729 18:43:31.629862 1078992 main.go:141] libmachine: (ha-344156-m03) Waiting for machine to stop 89/120
	I0729 18:43:32.631797 1078992 main.go:141] libmachine: (ha-344156-m03) Waiting for machine to stop 90/120
	I0729 18:43:33.633234 1078992 main.go:141] libmachine: (ha-344156-m03) Waiting for machine to stop 91/120
	I0729 18:43:34.634564 1078992 main.go:141] libmachine: (ha-344156-m03) Waiting for machine to stop 92/120
	I0729 18:43:35.635914 1078992 main.go:141] libmachine: (ha-344156-m03) Waiting for machine to stop 93/120
	I0729 18:43:36.637289 1078992 main.go:141] libmachine: (ha-344156-m03) Waiting for machine to stop 94/120
	I0729 18:43:37.638952 1078992 main.go:141] libmachine: (ha-344156-m03) Waiting for machine to stop 95/120
	I0729 18:43:38.640298 1078992 main.go:141] libmachine: (ha-344156-m03) Waiting for machine to stop 96/120
	I0729 18:43:39.641589 1078992 main.go:141] libmachine: (ha-344156-m03) Waiting for machine to stop 97/120
	I0729 18:43:40.643509 1078992 main.go:141] libmachine: (ha-344156-m03) Waiting for machine to stop 98/120
	I0729 18:43:41.644668 1078992 main.go:141] libmachine: (ha-344156-m03) Waiting for machine to stop 99/120
	I0729 18:43:42.646220 1078992 main.go:141] libmachine: (ha-344156-m03) Waiting for machine to stop 100/120
	I0729 18:43:43.647709 1078992 main.go:141] libmachine: (ha-344156-m03) Waiting for machine to stop 101/120
	I0729 18:43:44.648811 1078992 main.go:141] libmachine: (ha-344156-m03) Waiting for machine to stop 102/120
	I0729 18:43:45.650257 1078992 main.go:141] libmachine: (ha-344156-m03) Waiting for machine to stop 103/120
	I0729 18:43:46.651408 1078992 main.go:141] libmachine: (ha-344156-m03) Waiting for machine to stop 104/120
	I0729 18:43:47.652957 1078992 main.go:141] libmachine: (ha-344156-m03) Waiting for machine to stop 105/120
	I0729 18:43:48.654488 1078992 main.go:141] libmachine: (ha-344156-m03) Waiting for machine to stop 106/120
	I0729 18:43:49.655646 1078992 main.go:141] libmachine: (ha-344156-m03) Waiting for machine to stop 107/120
	I0729 18:43:50.657718 1078992 main.go:141] libmachine: (ha-344156-m03) Waiting for machine to stop 108/120
	I0729 18:43:51.658791 1078992 main.go:141] libmachine: (ha-344156-m03) Waiting for machine to stop 109/120
	I0729 18:43:52.660418 1078992 main.go:141] libmachine: (ha-344156-m03) Waiting for machine to stop 110/120
	I0729 18:43:53.661669 1078992 main.go:141] libmachine: (ha-344156-m03) Waiting for machine to stop 111/120
	I0729 18:43:54.662882 1078992 main.go:141] libmachine: (ha-344156-m03) Waiting for machine to stop 112/120
	I0729 18:43:55.664168 1078992 main.go:141] libmachine: (ha-344156-m03) Waiting for machine to stop 113/120
	I0729 18:43:56.665494 1078992 main.go:141] libmachine: (ha-344156-m03) Waiting for machine to stop 114/120
	I0729 18:43:57.666903 1078992 main.go:141] libmachine: (ha-344156-m03) Waiting for machine to stop 115/120
	I0729 18:43:58.668175 1078992 main.go:141] libmachine: (ha-344156-m03) Waiting for machine to stop 116/120
	I0729 18:43:59.669241 1078992 main.go:141] libmachine: (ha-344156-m03) Waiting for machine to stop 117/120
	I0729 18:44:00.670699 1078992 main.go:141] libmachine: (ha-344156-m03) Waiting for machine to stop 118/120
	I0729 18:44:01.672080 1078992 main.go:141] libmachine: (ha-344156-m03) Waiting for machine to stop 119/120
	I0729 18:44:02.673010 1078992 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0729 18:44:02.673085 1078992 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0729 18:44:02.674948 1078992 out.go:177] 
	W0729 18:44:02.676078 1078992 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0729 18:44:02.676108 1078992 out.go:239] * 
	* 
	W0729 18:44:02.680110 1078992 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 18:44:02.681417 1078992 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:464: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p ha-344156 -v=7 --alsologtostderr" : exit status 82
ha_test.go:467: (dbg) Run:  out/minikube-linux-amd64 start -p ha-344156 --wait=true -v=7 --alsologtostderr
E0729 18:45:34.135070 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/addons-685520/client.crt: no such file or directory
E0729 18:46:57.182320 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/addons-685520/client.crt: no such file or directory
E0729 18:48:00.968395 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/functional-728029/client.crt: no such file or directory
ha_test.go:467: (dbg) Done: out/minikube-linux-amd64 start -p ha-344156 --wait=true -v=7 --alsologtostderr: (4m14.174411715s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-344156
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-344156 -n ha-344156
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-344156 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-344156 logs -n 25: (1.668158111s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartClusterKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                      Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-344156 cp ha-344156-m03:/home/docker/cp-test.txt                             | ha-344156 | jenkins | v1.33.1 | 29 Jul 24 18:38 UTC | 29 Jul 24 18:38 UTC |
	|         | ha-344156-m02:/home/docker/cp-test_ha-344156-m03_ha-344156-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-344156 ssh -n                                                                | ha-344156 | jenkins | v1.33.1 | 29 Jul 24 18:38 UTC | 29 Jul 24 18:38 UTC |
	|         | ha-344156-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-344156 ssh -n ha-344156-m02 sudo cat                                         | ha-344156 | jenkins | v1.33.1 | 29 Jul 24 18:38 UTC | 29 Jul 24 18:38 UTC |
	|         | /home/docker/cp-test_ha-344156-m03_ha-344156-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-344156 cp ha-344156-m03:/home/docker/cp-test.txt                             | ha-344156 | jenkins | v1.33.1 | 29 Jul 24 18:38 UTC | 29 Jul 24 18:38 UTC |
	|         | ha-344156-m04:/home/docker/cp-test_ha-344156-m03_ha-344156-m04.txt              |           |         |         |                     |                     |
	| ssh     | ha-344156 ssh -n                                                                | ha-344156 | jenkins | v1.33.1 | 29 Jul 24 18:38 UTC | 29 Jul 24 18:38 UTC |
	|         | ha-344156-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-344156 ssh -n ha-344156-m04 sudo cat                                         | ha-344156 | jenkins | v1.33.1 | 29 Jul 24 18:38 UTC | 29 Jul 24 18:38 UTC |
	|         | /home/docker/cp-test_ha-344156-m03_ha-344156-m04.txt                            |           |         |         |                     |                     |
	| cp      | ha-344156 cp testdata/cp-test.txt                                               | ha-344156 | jenkins | v1.33.1 | 29 Jul 24 18:38 UTC | 29 Jul 24 18:38 UTC |
	|         | ha-344156-m04:/home/docker/cp-test.txt                                          |           |         |         |                     |                     |
	| ssh     | ha-344156 ssh -n                                                                | ha-344156 | jenkins | v1.33.1 | 29 Jul 24 18:38 UTC | 29 Jul 24 18:38 UTC |
	|         | ha-344156-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-344156 cp ha-344156-m04:/home/docker/cp-test.txt                             | ha-344156 | jenkins | v1.33.1 | 29 Jul 24 18:38 UTC | 29 Jul 24 18:38 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile289939917/001/cp-test_ha-344156-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-344156 ssh -n                                                                | ha-344156 | jenkins | v1.33.1 | 29 Jul 24 18:38 UTC | 29 Jul 24 18:38 UTC |
	|         | ha-344156-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-344156 cp ha-344156-m04:/home/docker/cp-test.txt                             | ha-344156 | jenkins | v1.33.1 | 29 Jul 24 18:38 UTC | 29 Jul 24 18:38 UTC |
	|         | ha-344156:/home/docker/cp-test_ha-344156-m04_ha-344156.txt                      |           |         |         |                     |                     |
	| ssh     | ha-344156 ssh -n                                                                | ha-344156 | jenkins | v1.33.1 | 29 Jul 24 18:38 UTC | 29 Jul 24 18:38 UTC |
	|         | ha-344156-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-344156 ssh -n ha-344156 sudo cat                                             | ha-344156 | jenkins | v1.33.1 | 29 Jul 24 18:38 UTC | 29 Jul 24 18:38 UTC |
	|         | /home/docker/cp-test_ha-344156-m04_ha-344156.txt                                |           |         |         |                     |                     |
	| cp      | ha-344156 cp ha-344156-m04:/home/docker/cp-test.txt                             | ha-344156 | jenkins | v1.33.1 | 29 Jul 24 18:38 UTC | 29 Jul 24 18:38 UTC |
	|         | ha-344156-m02:/home/docker/cp-test_ha-344156-m04_ha-344156-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-344156 ssh -n                                                                | ha-344156 | jenkins | v1.33.1 | 29 Jul 24 18:38 UTC | 29 Jul 24 18:38 UTC |
	|         | ha-344156-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-344156 ssh -n ha-344156-m02 sudo cat                                         | ha-344156 | jenkins | v1.33.1 | 29 Jul 24 18:38 UTC | 29 Jul 24 18:38 UTC |
	|         | /home/docker/cp-test_ha-344156-m04_ha-344156-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-344156 cp ha-344156-m04:/home/docker/cp-test.txt                             | ha-344156 | jenkins | v1.33.1 | 29 Jul 24 18:38 UTC | 29 Jul 24 18:38 UTC |
	|         | ha-344156-m03:/home/docker/cp-test_ha-344156-m04_ha-344156-m03.txt              |           |         |         |                     |                     |
	| ssh     | ha-344156 ssh -n                                                                | ha-344156 | jenkins | v1.33.1 | 29 Jul 24 18:38 UTC | 29 Jul 24 18:38 UTC |
	|         | ha-344156-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-344156 ssh -n ha-344156-m03 sudo cat                                         | ha-344156 | jenkins | v1.33.1 | 29 Jul 24 18:38 UTC | 29 Jul 24 18:38 UTC |
	|         | /home/docker/cp-test_ha-344156-m04_ha-344156-m03.txt                            |           |         |         |                     |                     |
	| node    | ha-344156 node stop m02 -v=7                                                    | ha-344156 | jenkins | v1.33.1 | 29 Jul 24 18:38 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | ha-344156 node start m02 -v=7                                                   | ha-344156 | jenkins | v1.33.1 | 29 Jul 24 18:41 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | list -p ha-344156 -v=7                                                          | ha-344156 | jenkins | v1.33.1 | 29 Jul 24 18:42 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| stop    | -p ha-344156 -v=7                                                               | ha-344156 | jenkins | v1.33.1 | 29 Jul 24 18:42 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| start   | -p ha-344156 --wait=true -v=7                                                   | ha-344156 | jenkins | v1.33.1 | 29 Jul 24 18:44 UTC | 29 Jul 24 18:48 UTC |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | list -p ha-344156                                                               | ha-344156 | jenkins | v1.33.1 | 29 Jul 24 18:48 UTC |                     |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 18:44:02
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 18:44:02.730210 1079484 out.go:291] Setting OutFile to fd 1 ...
	I0729 18:44:02.730328 1079484 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 18:44:02.730335 1079484 out.go:304] Setting ErrFile to fd 2...
	I0729 18:44:02.730339 1079484 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 18:44:02.730502 1079484 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19312-1055011/.minikube/bin
	I0729 18:44:02.731198 1079484 out.go:298] Setting JSON to false
	I0729 18:44:02.732411 1079484 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":8795,"bootTime":1722269848,"procs":188,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 18:44:02.732489 1079484 start.go:139] virtualization: kvm guest
	I0729 18:44:02.738951 1079484 out.go:177] * [ha-344156] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 18:44:02.740617 1079484 out.go:177]   - MINIKUBE_LOCATION=19312
	I0729 18:44:02.740629 1079484 notify.go:220] Checking for updates...
	I0729 18:44:02.743462 1079484 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 18:44:02.744986 1079484 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19312-1055011/kubeconfig
	I0729 18:44:02.746656 1079484 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19312-1055011/.minikube
	I0729 18:44:02.747949 1079484 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 18:44:02.749256 1079484 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 18:44:02.751029 1079484 config.go:182] Loaded profile config "ha-344156": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 18:44:02.751142 1079484 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 18:44:02.751618 1079484 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:44:02.751697 1079484 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:44:02.767922 1079484 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41779
	I0729 18:44:02.768343 1079484 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:44:02.769013 1079484 main.go:141] libmachine: Using API Version  1
	I0729 18:44:02.769040 1079484 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:44:02.769487 1079484 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:44:02.769739 1079484 main.go:141] libmachine: (ha-344156) Calling .DriverName
	I0729 18:44:02.806043 1079484 out.go:177] * Using the kvm2 driver based on existing profile
	I0729 18:44:02.807263 1079484 start.go:297] selected driver: kvm2
	I0729 18:44:02.807287 1079484 start.go:901] validating driver "kvm2" against &{Name:ha-344156 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-344156 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.225 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.249 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.148 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.9 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 18:44:02.807481 1079484 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 18:44:02.807926 1079484 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 18:44:02.808031 1079484 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19312-1055011/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 18:44:02.822302 1079484 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0729 18:44:02.823019 1079484 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 18:44:02.823078 1079484 cni.go:84] Creating CNI manager for ""
	I0729 18:44:02.823090 1079484 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0729 18:44:02.823154 1079484 start.go:340] cluster config:
	{Name:ha-344156 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-344156 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.225 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.249 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.148 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.9 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 18:44:02.823275 1079484 iso.go:125] acquiring lock: {Name:mk0af61c0fec1fd47930e548d03010a532c687b7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 18:44:02.825024 1079484 out.go:177] * Starting "ha-344156" primary control-plane node in "ha-344156" cluster
	I0729 18:44:02.826235 1079484 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 18:44:02.826262 1079484 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0729 18:44:02.826273 1079484 cache.go:56] Caching tarball of preloaded images
	I0729 18:44:02.826359 1079484 preload.go:172] Found /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 18:44:02.826373 1079484 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0729 18:44:02.826506 1079484 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156/config.json ...
	I0729 18:44:02.826709 1079484 start.go:360] acquireMachinesLock for ha-344156: {Name:mk0d8d947666df844b5fc2c0e0eebbfed69b4140 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 18:44:02.826765 1079484 start.go:364] duration metric: took 36.325µs to acquireMachinesLock for "ha-344156"
	I0729 18:44:02.826785 1079484 start.go:96] Skipping create...Using existing machine configuration
	I0729 18:44:02.826794 1079484 fix.go:54] fixHost starting: 
	I0729 18:44:02.827101 1079484 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:44:02.827142 1079484 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:44:02.841119 1079484 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41027
	I0729 18:44:02.841530 1079484 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:44:02.842007 1079484 main.go:141] libmachine: Using API Version  1
	I0729 18:44:02.842032 1079484 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:44:02.842362 1079484 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:44:02.842597 1079484 main.go:141] libmachine: (ha-344156) Calling .DriverName
	I0729 18:44:02.842743 1079484 main.go:141] libmachine: (ha-344156) Calling .GetState
	I0729 18:44:02.844155 1079484 fix.go:112] recreateIfNeeded on ha-344156: state=Running err=<nil>
	W0729 18:44:02.844177 1079484 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 18:44:02.846110 1079484 out.go:177] * Updating the running kvm2 "ha-344156" VM ...
	I0729 18:44:02.847243 1079484 machine.go:94] provisionDockerMachine start ...
	I0729 18:44:02.847267 1079484 main.go:141] libmachine: (ha-344156) Calling .DriverName
	I0729 18:44:02.847467 1079484 main.go:141] libmachine: (ha-344156) Calling .GetSSHHostname
	I0729 18:44:02.849542 1079484 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
	I0729 18:44:02.849904 1079484 main.go:141] libmachine: (ha-344156) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:fc:98", ip: ""} in network mk-ha-344156: {Iface:virbr1 ExpiryTime:2024-07-29 19:33:59 +0000 UTC Type:0 Mac:52:54:00:a1:fc:98 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-344156 Clientid:01:52:54:00:a1:fc:98}
	I0729 18:44:02.849939 1079484 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined IP address 192.168.39.225 and MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
	I0729 18:44:02.850037 1079484 main.go:141] libmachine: (ha-344156) Calling .GetSSHPort
	I0729 18:44:02.850206 1079484 main.go:141] libmachine: (ha-344156) Calling .GetSSHKeyPath
	I0729 18:44:02.850356 1079484 main.go:141] libmachine: (ha-344156) Calling .GetSSHKeyPath
	I0729 18:44:02.850484 1079484 main.go:141] libmachine: (ha-344156) Calling .GetSSHUsername
	I0729 18:44:02.850645 1079484 main.go:141] libmachine: Using SSH client type: native
	I0729 18:44:02.850832 1079484 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.225 22 <nil> <nil>}
	I0729 18:44:02.850857 1079484 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 18:44:02.968565 1079484 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-344156
	
	I0729 18:44:02.968610 1079484 main.go:141] libmachine: (ha-344156) Calling .GetMachineName
	I0729 18:44:02.968876 1079484 buildroot.go:166] provisioning hostname "ha-344156"
	I0729 18:44:02.968907 1079484 main.go:141] libmachine: (ha-344156) Calling .GetMachineName
	I0729 18:44:02.969130 1079484 main.go:141] libmachine: (ha-344156) Calling .GetSSHHostname
	I0729 18:44:02.971906 1079484 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
	I0729 18:44:02.972296 1079484 main.go:141] libmachine: (ha-344156) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:fc:98", ip: ""} in network mk-ha-344156: {Iface:virbr1 ExpiryTime:2024-07-29 19:33:59 +0000 UTC Type:0 Mac:52:54:00:a1:fc:98 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-344156 Clientid:01:52:54:00:a1:fc:98}
	I0729 18:44:02.972339 1079484 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined IP address 192.168.39.225 and MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
	I0729 18:44:02.972493 1079484 main.go:141] libmachine: (ha-344156) Calling .GetSSHPort
	I0729 18:44:02.972700 1079484 main.go:141] libmachine: (ha-344156) Calling .GetSSHKeyPath
	I0729 18:44:02.972852 1079484 main.go:141] libmachine: (ha-344156) Calling .GetSSHKeyPath
	I0729 18:44:02.972987 1079484 main.go:141] libmachine: (ha-344156) Calling .GetSSHUsername
	I0729 18:44:02.973151 1079484 main.go:141] libmachine: Using SSH client type: native
	I0729 18:44:02.973315 1079484 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.225 22 <nil> <nil>}
	I0729 18:44:02.973327 1079484 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-344156 && echo "ha-344156" | sudo tee /etc/hostname
	I0729 18:44:03.104854 1079484 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-344156
	
	I0729 18:44:03.104892 1079484 main.go:141] libmachine: (ha-344156) Calling .GetSSHHostname
	I0729 18:44:03.107749 1079484 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
	I0729 18:44:03.108111 1079484 main.go:141] libmachine: (ha-344156) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:fc:98", ip: ""} in network mk-ha-344156: {Iface:virbr1 ExpiryTime:2024-07-29 19:33:59 +0000 UTC Type:0 Mac:52:54:00:a1:fc:98 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-344156 Clientid:01:52:54:00:a1:fc:98}
	I0729 18:44:03.108135 1079484 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined IP address 192.168.39.225 and MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
	I0729 18:44:03.108295 1079484 main.go:141] libmachine: (ha-344156) Calling .GetSSHPort
	I0729 18:44:03.108487 1079484 main.go:141] libmachine: (ha-344156) Calling .GetSSHKeyPath
	I0729 18:44:03.108654 1079484 main.go:141] libmachine: (ha-344156) Calling .GetSSHKeyPath
	I0729 18:44:03.108788 1079484 main.go:141] libmachine: (ha-344156) Calling .GetSSHUsername
	I0729 18:44:03.108948 1079484 main.go:141] libmachine: Using SSH client type: native
	I0729 18:44:03.109138 1079484 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.225 22 <nil> <nil>}
	I0729 18:44:03.109156 1079484 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-344156' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-344156/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-344156' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 18:44:03.227589 1079484 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 18:44:03.227626 1079484 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19312-1055011/.minikube CaCertPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19312-1055011/.minikube}
	I0729 18:44:03.227675 1079484 buildroot.go:174] setting up certificates
	I0729 18:44:03.227690 1079484 provision.go:84] configureAuth start
	I0729 18:44:03.227705 1079484 main.go:141] libmachine: (ha-344156) Calling .GetMachineName
	I0729 18:44:03.228005 1079484 main.go:141] libmachine: (ha-344156) Calling .GetIP
	I0729 18:44:03.230584 1079484 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
	I0729 18:44:03.231128 1079484 main.go:141] libmachine: (ha-344156) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:fc:98", ip: ""} in network mk-ha-344156: {Iface:virbr1 ExpiryTime:2024-07-29 19:33:59 +0000 UTC Type:0 Mac:52:54:00:a1:fc:98 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-344156 Clientid:01:52:54:00:a1:fc:98}
	I0729 18:44:03.231154 1079484 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined IP address 192.168.39.225 and MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
	I0729 18:44:03.231327 1079484 main.go:141] libmachine: (ha-344156) Calling .GetSSHHostname
	I0729 18:44:03.233560 1079484 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
	I0729 18:44:03.233940 1079484 main.go:141] libmachine: (ha-344156) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:fc:98", ip: ""} in network mk-ha-344156: {Iface:virbr1 ExpiryTime:2024-07-29 19:33:59 +0000 UTC Type:0 Mac:52:54:00:a1:fc:98 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-344156 Clientid:01:52:54:00:a1:fc:98}
	I0729 18:44:03.233964 1079484 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined IP address 192.168.39.225 and MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
	I0729 18:44:03.234141 1079484 provision.go:143] copyHostCerts
	I0729 18:44:03.234185 1079484 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.pem
	I0729 18:44:03.234223 1079484 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.pem, removing ...
	I0729 18:44:03.234237 1079484 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.pem
	I0729 18:44:03.234302 1079484 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.pem (1082 bytes)
	I0729 18:44:03.234391 1079484 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19312-1055011/.minikube/cert.pem
	I0729 18:44:03.234409 1079484 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-1055011/.minikube/cert.pem, removing ...
	I0729 18:44:03.234413 1079484 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-1055011/.minikube/cert.pem
	I0729 18:44:03.234437 1079484 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19312-1055011/.minikube/cert.pem (1123 bytes)
	I0729 18:44:03.234494 1079484 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19312-1055011/.minikube/key.pem
	I0729 18:44:03.234509 1079484 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-1055011/.minikube/key.pem, removing ...
	I0729 18:44:03.234513 1079484 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-1055011/.minikube/key.pem
	I0729 18:44:03.234533 1079484 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19312-1055011/.minikube/key.pem (1679 bytes)
	I0729 18:44:03.234594 1079484 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca-key.pem org=jenkins.ha-344156 san=[127.0.0.1 192.168.39.225 ha-344156 localhost minikube]
	I0729 18:44:03.426259 1079484 provision.go:177] copyRemoteCerts
	I0729 18:44:03.426392 1079484 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 18:44:03.426466 1079484 main.go:141] libmachine: (ha-344156) Calling .GetSSHHostname
	I0729 18:44:03.429164 1079484 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
	I0729 18:44:03.429601 1079484 main.go:141] libmachine: (ha-344156) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:fc:98", ip: ""} in network mk-ha-344156: {Iface:virbr1 ExpiryTime:2024-07-29 19:33:59 +0000 UTC Type:0 Mac:52:54:00:a1:fc:98 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-344156 Clientid:01:52:54:00:a1:fc:98}
	I0729 18:44:03.429633 1079484 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined IP address 192.168.39.225 and MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
	I0729 18:44:03.429797 1079484 main.go:141] libmachine: (ha-344156) Calling .GetSSHPort
	I0729 18:44:03.429986 1079484 main.go:141] libmachine: (ha-344156) Calling .GetSSHKeyPath
	I0729 18:44:03.430171 1079484 main.go:141] libmachine: (ha-344156) Calling .GetSSHUsername
	I0729 18:44:03.430318 1079484 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/ha-344156/id_rsa Username:docker}
	I0729 18:44:03.517181 1079484 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0729 18:44:03.517254 1079484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0729 18:44:03.544507 1079484 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0729 18:44:03.544603 1079484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0729 18:44:03.569730 1079484 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0729 18:44:03.569807 1079484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0729 18:44:03.594135 1079484 provision.go:87] duration metric: took 366.429217ms to configureAuth
	I0729 18:44:03.594162 1079484 buildroot.go:189] setting minikube options for container-runtime
	I0729 18:44:03.594389 1079484 config.go:182] Loaded profile config "ha-344156": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 18:44:03.594471 1079484 main.go:141] libmachine: (ha-344156) Calling .GetSSHHostname
	I0729 18:44:03.597059 1079484 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
	I0729 18:44:03.597396 1079484 main.go:141] libmachine: (ha-344156) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:fc:98", ip: ""} in network mk-ha-344156: {Iface:virbr1 ExpiryTime:2024-07-29 19:33:59 +0000 UTC Type:0 Mac:52:54:00:a1:fc:98 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-344156 Clientid:01:52:54:00:a1:fc:98}
	I0729 18:44:03.597420 1079484 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined IP address 192.168.39.225 and MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
	I0729 18:44:03.597611 1079484 main.go:141] libmachine: (ha-344156) Calling .GetSSHPort
	I0729 18:44:03.597810 1079484 main.go:141] libmachine: (ha-344156) Calling .GetSSHKeyPath
	I0729 18:44:03.597997 1079484 main.go:141] libmachine: (ha-344156) Calling .GetSSHKeyPath
	I0729 18:44:03.598122 1079484 main.go:141] libmachine: (ha-344156) Calling .GetSSHUsername
	I0729 18:44:03.598271 1079484 main.go:141] libmachine: Using SSH client type: native
	I0729 18:44:03.598437 1079484 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.225 22 <nil> <nil>}
	I0729 18:44:03.598452 1079484 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 18:45:34.484924 1079484 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 18:45:34.484957 1079484 machine.go:97] duration metric: took 1m31.637697454s to provisionDockerMachine
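
The step above writes /etc/sysconfig/crio.minikube with the insecure-registry flag and then restarts CRI-O over SSH; the command is issued at 18:44:03 and only returns at 18:45:34, so this restart accounts for most of the 1m31s reported for provisionDockerMachine. A rough equivalent that drives the same remote command through a local ssh invocation; using the ssh CLI and the key path shown are assumptions for illustration, not minikube's implementation:

package main

import (
	"log"
	"os/exec"
)

func main() {
	// Mirrors the remote command shown in the log above.
	remoteCmd := `sudo mkdir -p /etc/sysconfig && printf %s "
CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio`
	cmd := exec.Command("ssh", "-i", "/path/to/id_rsa", "docker@192.168.39.225", remoteCmd)
	if out, err := cmd.CombinedOutput(); err != nil {
		log.Fatalf("remote crio configuration failed: %v\n%s", err, out)
	}
}
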
	I0729 18:45:34.484978 1079484 start.go:293] postStartSetup for "ha-344156" (driver="kvm2")
	I0729 18:45:34.484997 1079484 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 18:45:34.485022 1079484 main.go:141] libmachine: (ha-344156) Calling .DriverName
	I0729 18:45:34.485421 1079484 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 18:45:34.485451 1079484 main.go:141] libmachine: (ha-344156) Calling .GetSSHHostname
	I0729 18:45:34.489040 1079484 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
	I0729 18:45:34.489511 1079484 main.go:141] libmachine: (ha-344156) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:fc:98", ip: ""} in network mk-ha-344156: {Iface:virbr1 ExpiryTime:2024-07-29 19:33:59 +0000 UTC Type:0 Mac:52:54:00:a1:fc:98 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-344156 Clientid:01:52:54:00:a1:fc:98}
	I0729 18:45:34.489533 1079484 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined IP address 192.168.39.225 and MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
	I0729 18:45:34.489724 1079484 main.go:141] libmachine: (ha-344156) Calling .GetSSHPort
	I0729 18:45:34.489951 1079484 main.go:141] libmachine: (ha-344156) Calling .GetSSHKeyPath
	I0729 18:45:34.490133 1079484 main.go:141] libmachine: (ha-344156) Calling .GetSSHUsername
	I0729 18:45:34.490297 1079484 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/ha-344156/id_rsa Username:docker}
	I0729 18:45:34.577117 1079484 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 18:45:34.581256 1079484 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 18:45:34.581284 1079484 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-1055011/.minikube/addons for local assets ...
	I0729 18:45:34.581357 1079484 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-1055011/.minikube/files for local assets ...
	I0729 18:45:34.581454 1079484 filesync.go:149] local asset: /home/jenkins/minikube-integration/19312-1055011/.minikube/files/etc/ssl/certs/10622722.pem -> 10622722.pem in /etc/ssl/certs
	I0729 18:45:34.581465 1079484 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1055011/.minikube/files/etc/ssl/certs/10622722.pem -> /etc/ssl/certs/10622722.pem
	I0729 18:45:34.581576 1079484 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 18:45:34.590639 1079484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/files/etc/ssl/certs/10622722.pem --> /etc/ssl/certs/10622722.pem (1708 bytes)
	I0729 18:45:34.614284 1079484 start.go:296] duration metric: took 129.292444ms for postStartSetup
	I0729 18:45:34.614330 1079484 main.go:141] libmachine: (ha-344156) Calling .DriverName
	I0729 18:45:34.614641 1079484 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0729 18:45:34.614672 1079484 main.go:141] libmachine: (ha-344156) Calling .GetSSHHostname
	I0729 18:45:34.617442 1079484 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
	I0729 18:45:34.617867 1079484 main.go:141] libmachine: (ha-344156) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:fc:98", ip: ""} in network mk-ha-344156: {Iface:virbr1 ExpiryTime:2024-07-29 19:33:59 +0000 UTC Type:0 Mac:52:54:00:a1:fc:98 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-344156 Clientid:01:52:54:00:a1:fc:98}
	I0729 18:45:34.617895 1079484 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined IP address 192.168.39.225 and MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
	I0729 18:45:34.618003 1079484 main.go:141] libmachine: (ha-344156) Calling .GetSSHPort
	I0729 18:45:34.618178 1079484 main.go:141] libmachine: (ha-344156) Calling .GetSSHKeyPath
	I0729 18:45:34.618363 1079484 main.go:141] libmachine: (ha-344156) Calling .GetSSHUsername
	I0729 18:45:34.618532 1079484 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/ha-344156/id_rsa Username:docker}
	W0729 18:45:34.704442 1079484 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0729 18:45:34.704473 1079484 fix.go:56] duration metric: took 1m31.877678231s for fixHost
	I0729 18:45:34.704498 1079484 main.go:141] libmachine: (ha-344156) Calling .GetSSHHostname
	I0729 18:45:34.707218 1079484 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
	I0729 18:45:34.707659 1079484 main.go:141] libmachine: (ha-344156) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:fc:98", ip: ""} in network mk-ha-344156: {Iface:virbr1 ExpiryTime:2024-07-29 19:33:59 +0000 UTC Type:0 Mac:52:54:00:a1:fc:98 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-344156 Clientid:01:52:54:00:a1:fc:98}
	I0729 18:45:34.707694 1079484 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined IP address 192.168.39.225 and MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
	I0729 18:45:34.707845 1079484 main.go:141] libmachine: (ha-344156) Calling .GetSSHPort
	I0729 18:45:34.708054 1079484 main.go:141] libmachine: (ha-344156) Calling .GetSSHKeyPath
	I0729 18:45:34.708224 1079484 main.go:141] libmachine: (ha-344156) Calling .GetSSHKeyPath
	I0729 18:45:34.708331 1079484 main.go:141] libmachine: (ha-344156) Calling .GetSSHUsername
	I0729 18:45:34.708539 1079484 main.go:141] libmachine: Using SSH client type: native
	I0729 18:45:34.708733 1079484 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.225 22 <nil> <nil>}
	I0729 18:45:34.708743 1079484 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0729 18:45:34.819719 1079484 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722278734.779887170
	
	I0729 18:45:34.819745 1079484 fix.go:216] guest clock: 1722278734.779887170
	I0729 18:45:34.819755 1079484 fix.go:229] Guest: 2024-07-29 18:45:34.77988717 +0000 UTC Remote: 2024-07-29 18:45:34.704481201 +0000 UTC m=+92.011386189 (delta=75.405969ms)
	I0729 18:45:34.819781 1079484 fix.go:200] guest clock delta is within tolerance: 75.405969ms
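
The fix step above reads the guest clock with `date +%s.%N` and accepts it because the 75ms delta against the host is within tolerance. A small sketch of that comparison; the 2-second tolerance is an assumed value for illustration, the log only states that 75ms passed:

package main

import (
	"fmt"
	"time"
)

// withinTolerance reports whether the guest clock is close enough to the host clock.
func withinTolerance(guest, host time.Time, tol time.Duration) bool {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta <= tol
}

func main() {
	host := time.Now()
	guest := host.Add(75 * time.Millisecond)
	fmt.Println(withinTolerance(guest, host, 2*time.Second)) // true
}
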
	I0729 18:45:34.819787 1079484 start.go:83] releasing machines lock for "ha-344156", held for 1m31.993010327s
	I0729 18:45:34.819822 1079484 main.go:141] libmachine: (ha-344156) Calling .DriverName
	I0729 18:45:34.820128 1079484 main.go:141] libmachine: (ha-344156) Calling .GetIP
	I0729 18:45:34.822964 1079484 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
	I0729 18:45:34.823358 1079484 main.go:141] libmachine: (ha-344156) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:fc:98", ip: ""} in network mk-ha-344156: {Iface:virbr1 ExpiryTime:2024-07-29 19:33:59 +0000 UTC Type:0 Mac:52:54:00:a1:fc:98 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-344156 Clientid:01:52:54:00:a1:fc:98}
	I0729 18:45:34.823386 1079484 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined IP address 192.168.39.225 and MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
	I0729 18:45:34.823560 1079484 main.go:141] libmachine: (ha-344156) Calling .DriverName
	I0729 18:45:34.824198 1079484 main.go:141] libmachine: (ha-344156) Calling .DriverName
	I0729 18:45:34.824406 1079484 main.go:141] libmachine: (ha-344156) Calling .DriverName
	I0729 18:45:34.824503 1079484 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 18:45:34.824557 1079484 main.go:141] libmachine: (ha-344156) Calling .GetSSHHostname
	I0729 18:45:34.824682 1079484 ssh_runner.go:195] Run: cat /version.json
	I0729 18:45:34.824705 1079484 main.go:141] libmachine: (ha-344156) Calling .GetSSHHostname
	I0729 18:45:34.827009 1079484 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
	I0729 18:45:34.827151 1079484 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
	I0729 18:45:34.827419 1079484 main.go:141] libmachine: (ha-344156) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:fc:98", ip: ""} in network mk-ha-344156: {Iface:virbr1 ExpiryTime:2024-07-29 19:33:59 +0000 UTC Type:0 Mac:52:54:00:a1:fc:98 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-344156 Clientid:01:52:54:00:a1:fc:98}
	I0729 18:45:34.827448 1079484 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined IP address 192.168.39.225 and MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
	I0729 18:45:34.827555 1079484 main.go:141] libmachine: (ha-344156) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:fc:98", ip: ""} in network mk-ha-344156: {Iface:virbr1 ExpiryTime:2024-07-29 19:33:59 +0000 UTC Type:0 Mac:52:54:00:a1:fc:98 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-344156 Clientid:01:52:54:00:a1:fc:98}
	I0729 18:45:34.827586 1079484 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined IP address 192.168.39.225 and MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
	I0729 18:45:34.827620 1079484 main.go:141] libmachine: (ha-344156) Calling .GetSSHPort
	I0729 18:45:34.827770 1079484 main.go:141] libmachine: (ha-344156) Calling .GetSSHPort
	I0729 18:45:34.827832 1079484 main.go:141] libmachine: (ha-344156) Calling .GetSSHKeyPath
	I0729 18:45:34.827905 1079484 main.go:141] libmachine: (ha-344156) Calling .GetSSHKeyPath
	I0729 18:45:34.827982 1079484 main.go:141] libmachine: (ha-344156) Calling .GetSSHUsername
	I0729 18:45:34.828068 1079484 main.go:141] libmachine: (ha-344156) Calling .GetSSHUsername
	I0729 18:45:34.828149 1079484 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/ha-344156/id_rsa Username:docker}
	I0729 18:45:34.828222 1079484 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/ha-344156/id_rsa Username:docker}
	I0729 18:45:34.919667 1079484 ssh_runner.go:195] Run: systemctl --version
	I0729 18:45:34.941286 1079484 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 18:45:35.096516 1079484 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 18:45:35.106106 1079484 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 18:45:35.106176 1079484 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 18:45:35.115314 1079484 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
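
The find command above would rename any bridge or podman CNI configs to *.mk_disabled so they cannot conflict with the CNI minikube manages; in this run nothing matched. A local sketch of the same idea, illustrative only, since minikube performs it remotely with find -exec mv:

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// disableBridgeCNI renames bridge/podman CNI configs so CRI-O ignores them.
func disableBridgeCNI(dir string) error {
	entries, err := filepath.Glob(filepath.Join(dir, "*"))
	if err != nil {
		return err
	}
	for _, p := range entries {
		base := filepath.Base(p)
		if (strings.Contains(base, "bridge") || strings.Contains(base, "podman")) &&
			!strings.HasSuffix(base, ".mk_disabled") {
			if err := os.Rename(p, p+".mk_disabled"); err != nil {
				return err
			}
			fmt.Println("disabled", p)
		}
	}
	return nil
}

func main() {
	if err := disableBridgeCNI("/etc/cni/net.d"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
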
	I0729 18:45:35.115334 1079484 start.go:495] detecting cgroup driver to use...
	I0729 18:45:35.115406 1079484 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 18:45:35.130639 1079484 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 18:45:35.143931 1079484 docker.go:217] disabling cri-docker service (if available) ...
	I0729 18:45:35.143980 1079484 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 18:45:35.156758 1079484 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 18:45:35.169370 1079484 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 18:45:35.315720 1079484 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 18:45:35.459648 1079484 docker.go:233] disabling docker service ...
	I0729 18:45:35.459741 1079484 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 18:45:35.479249 1079484 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 18:45:35.494274 1079484 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 18:45:35.665432 1079484 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 18:45:35.808594 1079484 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 18:45:35.821936 1079484 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 18:45:35.840553 1079484 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0729 18:45:35.840612 1079484 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:45:35.850571 1079484 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 18:45:35.850632 1079484 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:45:35.860351 1079484 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:45:35.869812 1079484 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:45:35.879445 1079484 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 18:45:35.889712 1079484 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:45:35.899605 1079484 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:45:35.910758 1079484 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:45:35.920842 1079484 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 18:45:35.930022 1079484 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 18:45:35.938635 1079484 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 18:45:36.076650 1079484 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 18:45:44.138821 1079484 ssh_runner.go:235] Completed: sudo systemctl restart crio: (8.062129091s)
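
The sed edits above point CRI-O at the pause:3.9 image, switch it to the cgroupfs cgroup manager, set conmon_cgroup to "pod", and add an unprivileged-port sysctl, after which the daemon is restarted (about 8 seconds here). Reconstructed from those commands, the touched portion of /etc/crio/crio.conf.d/02-crio.conf should look roughly like the fragment in this sketch; the section headers are assumed from CRI-O's standard layout, not shown in the log:

package main

import "os"

const crioDropIn = `[crio.image]
pause_image = "registry.k8s.io/pause:3.9"

[crio.runtime]
cgroup_manager = "cgroupfs"
conmon_cgroup = "pod"
default_sysctls = [
  "net.ipv4.ip_unprivileged_port_start=0",
]
`

func main() {
	// Illustrative only: write the reconstructed fragment to a scratch path.
	_ = os.WriteFile("/tmp/02-crio.conf", []byte(crioDropIn), 0o644)
}
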
	I0729 18:45:44.138863 1079484 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 18:45:44.138918 1079484 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 18:45:44.144747 1079484 start.go:563] Will wait 60s for crictl version
	I0729 18:45:44.144824 1079484 ssh_runner.go:195] Run: which crictl
	I0729 18:45:44.148659 1079484 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 18:45:44.190278 1079484 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 18:45:44.190358 1079484 ssh_runner.go:195] Run: crio --version
	I0729 18:45:44.217574 1079484 ssh_runner.go:195] Run: crio --version
	I0729 18:45:44.245365 1079484 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0729 18:45:44.246524 1079484 main.go:141] libmachine: (ha-344156) Calling .GetIP
	I0729 18:45:44.249231 1079484 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
	I0729 18:45:44.249661 1079484 main.go:141] libmachine: (ha-344156) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:fc:98", ip: ""} in network mk-ha-344156: {Iface:virbr1 ExpiryTime:2024-07-29 19:33:59 +0000 UTC Type:0 Mac:52:54:00:a1:fc:98 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-344156 Clientid:01:52:54:00:a1:fc:98}
	I0729 18:45:44.249689 1079484 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined IP address 192.168.39.225 and MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
	I0729 18:45:44.249872 1079484 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0729 18:45:44.254299 1079484 kubeadm.go:883] updating cluster {Name:ha-344156 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Cl
usterName:ha-344156 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.225 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.249 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.148 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.9 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fres
hpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L Moun
tGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 18:45:44.254450 1079484 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 18:45:44.254512 1079484 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 18:45:44.295995 1079484 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 18:45:44.296020 1079484 crio.go:433] Images already preloaded, skipping extraction
	I0729 18:45:44.296074 1079484 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 18:45:44.328505 1079484 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 18:45:44.328532 1079484 cache_images.go:84] Images are preloaded, skipping loading
	I0729 18:45:44.328542 1079484 kubeadm.go:934] updating node { 192.168.39.225 8443 v1.30.3 crio true true} ...
	I0729 18:45:44.328668 1079484 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-344156 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.225
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-344156 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 18:45:44.328734 1079484 ssh_runner.go:195] Run: crio config
	I0729 18:45:44.379175 1079484 cni.go:84] Creating CNI manager for ""
	I0729 18:45:44.379198 1079484 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0729 18:45:44.379211 1079484 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 18:45:44.379242 1079484 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.225 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-344156 NodeName:ha-344156 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.225"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.225 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 18:45:44.379437 1079484 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.225
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-344156"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.225
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.225"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
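
The kubeadm config above is rendered from the cluster options and later copied to /var/tmp/minikube/kubeadm.yaml.new. A toy example of templating just its nodeRegistration block; the template text is illustrative, not minikube's actual template:

package main

import (
	"os"
	"text/template"
)

const nodeReg = `nodeRegistration:
  criSocket: unix:///var/run/crio/crio.sock
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.NodeIP}}
  taints: []
`

func main() {
	t := template.Must(template.New("nodeReg").Parse(nodeReg))
	_ = t.Execute(os.Stdout, struct{ NodeName, NodeIP string }{"ha-344156", "192.168.39.225"})
}
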
	
	I0729 18:45:44.379468 1079484 kube-vip.go:115] generating kube-vip config ...
	I0729 18:45:44.379519 1079484 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0729 18:45:44.391092 1079484 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0729 18:45:44.391209 1079484 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
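
The kube-vip static pod above advertises the control-plane VIP 192.168.39.254 on port 8443 with leader election and load-balancing enabled. A quick reachability probe of that endpoint, purely as an illustration and not part of minikube's flow:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.39.254:8443/healthz")
	if err != nil {
		fmt.Println("VIP not reachable:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("VIP responded with", resp.Status)
}
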
	I0729 18:45:44.391274 1079484 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0729 18:45:44.400836 1079484 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 18:45:44.400889 1079484 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0729 18:45:44.410102 1079484 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0729 18:45:44.426310 1079484 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 18:45:44.443334 1079484 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0729 18:45:44.459219 1079484 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0729 18:45:44.476061 1079484 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0729 18:45:44.479998 1079484 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 18:45:44.619940 1079484 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 18:45:44.636423 1079484 certs.go:68] Setting up /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156 for IP: 192.168.39.225
	I0729 18:45:44.636452 1079484 certs.go:194] generating shared ca certs ...
	I0729 18:45:44.636475 1079484 certs.go:226] acquiring lock for ca certs: {Name:mkd1f0b3d7e82ac23e713dd6b75409e103935b02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 18:45:44.636682 1079484 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.key
	I0729 18:45:44.636735 1079484 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/proxy-client-ca.key
	I0729 18:45:44.636745 1079484 certs.go:256] generating profile certs ...
	I0729 18:45:44.636830 1079484 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156/client.key
	I0729 18:45:44.636857 1079484 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156/apiserver.key.35154a63
	I0729 18:45:44.636870 1079484 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156/apiserver.crt.35154a63 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.225 192.168.39.249 192.168.39.148 192.168.39.254]
	I0729 18:45:44.780083 1079484 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156/apiserver.crt.35154a63 ...
	I0729 18:45:44.780116 1079484 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156/apiserver.crt.35154a63: {Name:mk667ece8e3d7b1d838f39c6e3f4cf7c263fa8a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 18:45:44.780287 1079484 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156/apiserver.key.35154a63 ...
	I0729 18:45:44.780300 1079484 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156/apiserver.key.35154a63: {Name:mka73a374c6e3b586fcc88c17fa9989a2541ed90 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 18:45:44.780367 1079484 certs.go:381] copying /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156/apiserver.crt.35154a63 -> /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156/apiserver.crt
	I0729 18:45:44.780523 1079484 certs.go:385] copying /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156/apiserver.key.35154a63 -> /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156/apiserver.key
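
The apiserver certificate regenerated above carries IP SANs for the service VIP, loopback, the three control-plane node IPs, and the kube-vip address. A compact sketch of minting a certificate with the same SAN list; it is self-signed here for brevity, whereas minikube signs the real cert with the cluster CA:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	// Same IP SANs the log lists for the regenerated apiserver certificate.
	ips := []net.IP{
		net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
		net.ParseIP("192.168.39.225"), net.ParseIP("192.168.39.249"),
		net.ParseIP("192.168.39.148"), net.ParseIP("192.168.39.254"),
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  ips,
	}
	// Self-signed for brevity.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
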
	I0729 18:45:44.780665 1079484 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156/proxy-client.key
	I0729 18:45:44.780682 1079484 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0729 18:45:44.780696 1079484 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0729 18:45:44.780709 1079484 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1055011/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0729 18:45:44.780724 1079484 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1055011/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0729 18:45:44.780741 1079484 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0729 18:45:44.780757 1079484 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0729 18:45:44.780770 1079484 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0729 18:45:44.780782 1079484 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0729 18:45:44.780833 1079484 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/1062272.pem (1338 bytes)
	W0729 18:45:44.780860 1079484 certs.go:480] ignoring /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/1062272_empty.pem, impossibly tiny 0 bytes
	I0729 18:45:44.780870 1079484 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 18:45:44.780892 1079484 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem (1082 bytes)
	I0729 18:45:44.780913 1079484 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/cert.pem (1123 bytes)
	I0729 18:45:44.780937 1079484 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/key.pem (1679 bytes)
	I0729 18:45:44.780973 1079484 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/files/etc/ssl/certs/10622722.pem (1708 bytes)
	I0729 18:45:44.781000 1079484 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1055011/.minikube/files/etc/ssl/certs/10622722.pem -> /usr/share/ca-certificates/10622722.pem
	I0729 18:45:44.781013 1079484 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0729 18:45:44.781026 1079484 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/1062272.pem -> /usr/share/ca-certificates/1062272.pem
	I0729 18:45:44.781719 1079484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 18:45:44.808742 1079484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0729 18:45:44.832565 1079484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 18:45:44.856036 1079484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0729 18:45:44.878998 1079484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0729 18:45:44.901105 1079484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0729 18:45:44.923358 1079484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 18:45:44.946114 1079484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0729 18:45:44.968890 1079484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/files/etc/ssl/certs/10622722.pem --> /usr/share/ca-certificates/10622722.pem (1708 bytes)
	I0729 18:45:44.990970 1079484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 18:45:45.013280 1079484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/1062272.pem --> /usr/share/ca-certificates/1062272.pem (1338 bytes)
	I0729 18:45:45.035914 1079484 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 18:45:45.052097 1079484 ssh_runner.go:195] Run: openssl version
	I0729 18:45:45.057840 1079484 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 18:45:45.068700 1079484 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 18:45:45.073030 1079484 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 18:17 /usr/share/ca-certificates/minikubeCA.pem
	I0729 18:45:45.073072 1079484 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 18:45:45.078746 1079484 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 18:45:45.088661 1079484 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1062272.pem && ln -fs /usr/share/ca-certificates/1062272.pem /etc/ssl/certs/1062272.pem"
	I0729 18:45:45.099564 1079484 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1062272.pem
	I0729 18:45:45.103791 1079484 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 18:30 /usr/share/ca-certificates/1062272.pem
	I0729 18:45:45.103830 1079484 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1062272.pem
	I0729 18:45:45.109555 1079484 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1062272.pem /etc/ssl/certs/51391683.0"
	I0729 18:45:45.119249 1079484 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10622722.pem && ln -fs /usr/share/ca-certificates/10622722.pem /etc/ssl/certs/10622722.pem"
	I0729 18:45:45.131709 1079484 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10622722.pem
	I0729 18:45:45.136322 1079484 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 18:30 /usr/share/ca-certificates/10622722.pem
	I0729 18:45:45.136374 1079484 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10622722.pem
	I0729 18:45:45.142581 1079484 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10622722.pem /etc/ssl/certs/3ec20f2e.0"
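
Each CA PEM copied to /usr/share/ca-certificates also gets a /etc/ssl/certs/<subject-hash>.0 symlink, which is how OpenSSL-based clients look up trusted CAs; that is what the `openssl x509 -hash` plus `ln -fs` pairs above do. A sketch of computing the link name and creating it, illustrative only, since minikube runs the equivalent remotely:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// hashLink creates the <subject-hash>.0 symlink OpenSSL expects for a CA PEM.
func hashLink(pemPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	link := fmt.Sprintf("%s/%s.0", certsDir, strings.TrimSpace(string(out)))
	_ = os.Remove(link) // ln -fs semantics: replace an existing link
	return os.Symlink(pemPath, link)
}

func main() {
	if err := hashLink("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
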
	I0729 18:45:45.152839 1079484 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 18:45:45.157360 1079484 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 18:45:45.163025 1079484 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 18:45:45.168573 1079484 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 18:45:45.174108 1079484 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 18:45:45.180107 1079484 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 18:45:45.185753 1079484 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
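
The `-checkend 86400` runs above verify that none of the control-plane certificates expires within the next 24 hours. The same check expressed in Go, with the certificate path as a placeholder:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}
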
	I0729 18:45:45.191586 1079484 kubeadm.go:392] StartCluster: {Name:ha-344156 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Clust
erName:ha-344156 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.225 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.249 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.148 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.9 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpo
d:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGI
D:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 18:45:45.191733 1079484 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 18:45:45.191780 1079484 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 18:45:45.231326 1079484 cri.go:89] found id: "101cd31cb21fc963b197637a168589c1b941eb41979113dd3fb0f23cbfcb7d4f"
	I0729 18:45:45.231354 1079484 cri.go:89] found id: "ed53860c346f8c8f181a4e566342b169097f3d645e4e5dbc9162454b50b78e1b"
	I0729 18:45:45.231359 1079484 cri.go:89] found id: "fb1d68de4a07e66be33374c8c90edb7a386f4fb65e96c9bdb56e9fd90a9b4adc"
	I0729 18:45:45.231363 1079484 cri.go:89] found id: "ce856c69ecf84e714da35cd579fd1fe8602ffe85be37c3fcb4703a31b2cb6d6d"
	I0729 18:45:45.231365 1079484 cri.go:89] found id: "1a4d13ace439ff6db0bd224c5959b2f1de0aca9190251438b96b230bd76dad67"
	I0729 18:45:45.231368 1079484 cri.go:89] found id: "7d0acef755a4a9cf64d3fa80a06a2fb7cd2c2f24d851c814a12dbfd69b8c8ae6"
	I0729 18:45:45.231370 1079484 cri.go:89] found id: "88c61cb99966582064c98436dabbb6247148296145067505f732961e9dafcf62"
	I0729 18:45:45.231373 1079484 cri.go:89] found id: "ea6501e2c6d48c68182f6d966404f0d58013e7ee6b2d05e6e8a8de079a01e50b"
	I0729 18:45:45.231375 1079484 cri.go:89] found id: "df682abbd97678618dabe8275a57ffd1f327de1e734e117a59fd4f520eaf1b79"
	I0729 18:45:45.231384 1079484 cri.go:89] found id: "cea7dd8ee7d180192a5a6562a72a56f86a9a432553225602839d9657f42f95a4"
	I0729 18:45:45.231386 1079484 cri.go:89] found id: "15f9d79f9c9682c7273de711cee53f9f833182ceb7abdd39bb612f44066ac6f4"
	I0729 18:45:45.231389 1079484 cri.go:89] found id: "fc27c145e7b72db405baaf295995d274d557ba7dbce383424c6297461d859b29"
	I0729 18:45:45.231391 1079484 cri.go:89] found id: "24d097bf3e16a2c4b74c82ba78ce7e6eb19b3461d66b573a3d5ba23c5df6a472"
	I0729 18:45:45.231394 1079484 cri.go:89] found id: ""
	I0729 18:45:45.231444 1079484 ssh_runner.go:195] Run: sudo runc list -f json
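
The container IDs listed above come from asking CRI-O for every container labeled with the kube-system namespace before the cluster restart proceeds. That listing step boils down to a crictl invocation like this sketch:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// IDs only, all states, scoped to kube-system pods.
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	fmt.Printf("%s", out)
}
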
	
	
	==> CRI-O <==
	Jul 29 18:48:17 ha-344156 crio[3799]: time="2024-07-29 18:48:17.524136717Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722278897524109906,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c8a7d501-a014-4816-96ff-26b9a67a57ce name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 18:48:17 ha-344156 crio[3799]: time="2024-07-29 18:48:17.524978551Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7cf01e8a-9121-42ca-88b9-2816d9f3e9c5 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:48:17 ha-344156 crio[3799]: time="2024-07-29 18:48:17.525047917Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7cf01e8a-9121-42ca-88b9-2816d9f3e9c5 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:48:17 ha-344156 crio[3799]: time="2024-07-29 18:48:17.525565769Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b174523a06ec7adddf15369e8baac68d361738d24e60facffb158390ec46bb62,PodSandboxId:aa7e4dbfa154ae6d0f220755ba9d1789fa37b73e1e5658abe8f771e05d7855ee,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722278814117910084,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ea00f25-122f-4a18-9d69-3606cfddf4d9,},Annotations:map[string]string{io.kubernetes.container.hash: 70731b68,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5c331c2db87d36569c1e2c3745280ae59411f376d0c6496945bdb87ec2513de,PodSandboxId:f37cc1a23ea4cbdc9e1e5a727bd4054ae477b5ef49a7189d7a0b6f23467727f2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722278797123070814,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-344156,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 30243da5f1a98e23c72326dd278a562e,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:249837d4bf0487b8ddac24a5d86c9a901eb6e862bf649d5aded365f82343bb0b,PodSandboxId:502681a3f7a5d6cf061874a7bc45a4f1fddedbe2905aa509986e6f64bde09e9f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722278789119967937,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-344156,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d61da37ea38b5727b5710cdad0fc95fd,},Annotations:map[string]string{io.kubernetes.container.hash: c06782b3,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5fd2655106bc50f366df7fe1a0d26b8e18abf5336ff2b35fb0db7c271a905e6,PodSandboxId:e975cb200a028c2f553577bc4ca3dc54153a60bbc2a757ac823269d664485b82,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722278784422534758,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-9sbfq,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f11563c5-3507-44f0-a103-1e8462494e13,},Annotations:map[string]string{io.kubernetes.container.hash: fb54a535,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c025231c68b98f9cec2be5488e7a2415cb848aff6b3457f54b1bcf4bdcf02d2a,PodSandboxId:333832b2e557c626d095d70ce80398da1efe4cf65d01af5554293376334042b9,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722278767938486162,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-344156,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f125f3c55fdf22425c7e10df6c846062,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePoli
cy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81184078df7bea819e58580fd80c6cffb76960208cfdcf77e820b9597e999ba0,PodSandboxId:12494ac147e1d192c12098dcb21d6c6df9c76e580409d272aef0ef71d9a4906a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722278751552759210,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gp282,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abf94303-b608-45b5-ae8b-9288be614a8f,},Annotations:map[string]string{io.kubernetes.container.hash: 6e0cc5f5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termin
ationGracePeriod: 30,},},&Container{Id:4260fb67ddc41983a522e2691ad7642fca868ad3425cfe9b4ae67e7a346c8e91,PodSandboxId:aa7e4dbfa154ae6d0f220755ba9d1789fa37b73e1e5658abe8f771e05d7855ee,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722278751244807236,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ea00f25-122f-4a18-9d69-3606cfddf4d9,},Annotations:map[string]string{io.kubernetes.container.hash: 70731b68,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:7a8271452e01844131f009ae7b4d6a0628e58b94b2a87a9aeb2990efcb11191e,PodSandboxId:5459cf266e338752287e9df29b9fa0a3a25bf21a51d55f8eacbc87c0b472d01c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722278751315791833,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-5slmg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2aca93c-209e-48b6-a9a5-692bdf185129,},Annotations:map[string]string{io.kubernetes.container.hash: 48049156,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kub
ernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab40f6e9b301a395b5cd5e94d8503edf8e224c2587be4fd2daf98a89374a7e9e,PodSandboxId:86a1e64fb3784cf94a78db0167520ad1df05b70bc881f599245c47f08728be6b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722278751229841705,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-84nqp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4e18e53-1c72-440f-82b2-bd1b4306af12,},Annotations:map[string]string{io.kubernetes.container.hash: 16293ddd,io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:caa8059749c41b7868cf8f0b61f0356539508f2667cbe7bbfae679c18cd89268,PodSandboxId:f37cc1a23ea4cbdc9e1e5a727bd4054ae477b5ef49a7189d7a0b6f23467727f2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722278751112866025,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-344156,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 30243da5f1a98e23c72326dd278a562e,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCoun
t: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a37174e321aa6d722fe66991eec4aa407c80ca27b8befba847d42c6c4bccd4a4,PodSandboxId:453b2b6892cf0b6ee26e984852e3523d3d145e86281a8e34066fd7906dfd7b39,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722278751150875901,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-h5h7v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2b09553-dd59-44ab-a738-41e872defd34,},Annotations:map[string]string{io.kubernetes.container.hash: 59c68fb6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\
":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:182486140ce8714c921be4d3bda2253429cb415d758f79aeb6b0ab42f631d68b,PodSandboxId:118dbdb3a468781a14c11f74f95a432103d5f52631d9fe537e936d9f30d1a68f,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722278750906805185,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-344156,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 67610b75999e06603675bc1a64d5ef7d,},Annotations:map[string]string{io.kubernetes.container.hash: 4f9376d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c373f53a9dbd411fe323c9a8fb32f348b83f82f89bc8fb682d325a34826437b5,PodSandboxId:502681a3f7a5d6cf061874a7bc45a4f1fddedbe2905aa509986e6f64bde09e9f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722278750955360200,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-344156,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d61da37ea38b5727b
5710cdad0fc95fd,},Annotations:map[string]string{io.kubernetes.container.hash: c06782b3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2297dbc5667b853e3a48d404f4b17f021af9cf0011a39175e36cf998b6fb2dcf,PodSandboxId:d1126d0597b32811fe4cd57edea908284ab89359f93d1b3bd14a957a786fdd3e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722278750940053649,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-344156,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d17047d55559cfd90852a780672fb93,},Ann
otations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d152449ddedd3a52cbbb9d3acfb3bf85c0e5fa9f81a0c0359f4148d4c603d783,PodSandboxId:98fcabecdf16c058b2c9b2d5b67a175d4427e2426d8c8ecad90fe5e7e61c7166,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722278222485055131,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-9sbfq,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f11563c5-3507-44f0-a103-1e8462494e13,},Annot
ations:map[string]string{io.kubernetes.container.hash: fb54a535,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a4d13ace439ff6db0bd224c5959b2f1de0aca9190251438b96b230bd76dad67,PodSandboxId:331a36b1d7af6a03c1de960f2f92f9e567bb8d9a89fef7342712caae96969f2c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722278090682990467,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-h5h7v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2b09553-dd59-44ab-a738-41e872defd34,},Annotations:map[string]string{io.kube
rnetes.container.hash: 59c68fb6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d0acef755a4a9cf64d3fa80a06a2fb7cd2c2f24d851c814a12dbfd69b8c8ae6,PodSandboxId:3bc8a1c2175a3fcdce5b369132d086e20e9843f84b0af2dec1acd2dc3f598cb2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722278090616145812,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7db6d8ff4d-5slmg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2aca93c-209e-48b6-a9a5-692bdf185129,},Annotations:map[string]string{io.kubernetes.container.hash: 48049156,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88c61cb99966582064c98436dabbb6247148296145067505f732961e9dafcf62,PodSandboxId:5312fee5fcd07548b5a87233879d29cd884fb0a7e49ffeffe66817b71a7b2ac9,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]
string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722278078648181661,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-84nqp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4e18e53-1c72-440f-82b2-bd1b4306af12,},Annotations:map[string]string{io.kubernetes.container.hash: 16293ddd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea6501e2c6d48c68182f6d966404f0d58013e7ee6b2d05e6e8a8de079a01e50b,PodSandboxId:f041673054c6d8c2cbbc857f62b73eafbb56f1089f1a1937ee91d2e3cdb89df9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,Runtime
Handler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722278076564436457,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gp282,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abf94303-b608-45b5-ae8b-9288be614a8f,},Annotations:map[string]string{io.kubernetes.container.hash: 6e0cc5f5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cea7dd8ee7d180192a5a6562a72a56f86a9a432553225602839d9657f42f95a4,PodSandboxId:ec39a320a672eea9866c1f830b546dc2e1fc8f0a3093acc13b1acd6b5d008317,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b7
6722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722278056834871013,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-344156,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d17047d55559cfd90852a780672fb93,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc27c145e7b72db405baaf295995d274d557ba7dbce383424c6297461d859b29,PodSandboxId:5e0320966c0af472e5e166dc8244abd4707674553da0aef0c877b9db5c6b053c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c
0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722278056771768809,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-344156,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67610b75999e06603675bc1a64d5ef7d,},Annotations:map[string]string{io.kubernetes.container.hash: 4f9376d5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7cf01e8a-9121-42ca-88b9-2816d9f3e9c5 name=/runtime.v1.RuntimeService/ListContainers
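	The debug entries around this point are cri-o's trace of what looks like the kubelet's periodic CRI polling: each cycle issues RuntimeService/Version, ImageService/ImageFsInfo and RuntimeService/ListContainers, and the ListContainers response enumerates every container on ha-344156 (running and exited) with its pod, restart count and creation timestamp. The same state can be checked by hand from the node; a minimal sketch, assuming crictl is available inside the minikube VM and that the profile is named ha-344156 like the node:

		# open a shell on the node, then query cri-o over the CRI socket
		out/minikube-linux-amd64 -p ha-344156 ssh
		sudo crictl version                  # RuntimeService/Version
		sudo crictl imagefsinfo              # ImageService/ImageFsInfo
		sudo crictl ps -a                    # RuntimeService/ListContainers (running and exited)
		sudo crictl inspect <container-id>   # per-container detail, as serialized in the dump above
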
	Jul 29 18:48:17 ha-344156 crio[3799]: time="2024-07-29 18:48:17.569192959Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ee93b200-64e7-4976-b747-1cf945422a05 name=/runtime.v1.RuntimeService/Version
	Jul 29 18:48:17 ha-344156 crio[3799]: time="2024-07-29 18:48:17.569318333Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ee93b200-64e7-4976-b747-1cf945422a05 name=/runtime.v1.RuntimeService/Version
	Jul 29 18:48:17 ha-344156 crio[3799]: time="2024-07-29 18:48:17.571040951Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4d0911ab-e2e4-4904-8628-9a49a096d47a name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 18:48:17 ha-344156 crio[3799]: time="2024-07-29 18:48:17.571602032Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722278897571576003,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4d0911ab-e2e4-4904-8628-9a49a096d47a name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 18:48:17 ha-344156 crio[3799]: time="2024-07-29 18:48:17.572224708Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7ebeaebf-1bf8-411e-be3c-d7ac08535e89 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:48:17 ha-344156 crio[3799]: time="2024-07-29 18:48:17.572333800Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7ebeaebf-1bf8-411e-be3c-d7ac08535e89 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:48:17 ha-344156 crio[3799]: time="2024-07-29 18:48:17.572742590Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b174523a06ec7adddf15369e8baac68d361738d24e60facffb158390ec46bb62,PodSandboxId:aa7e4dbfa154ae6d0f220755ba9d1789fa37b73e1e5658abe8f771e05d7855ee,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722278814117910084,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ea00f25-122f-4a18-9d69-3606cfddf4d9,},Annotations:map[string]string{io.kubernetes.container.hash: 70731b68,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5c331c2db87d36569c1e2c3745280ae59411f376d0c6496945bdb87ec2513de,PodSandboxId:f37cc1a23ea4cbdc9e1e5a727bd4054ae477b5ef49a7189d7a0b6f23467727f2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722278797123070814,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-344156,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 30243da5f1a98e23c72326dd278a562e,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:249837d4bf0487b8ddac24a5d86c9a901eb6e862bf649d5aded365f82343bb0b,PodSandboxId:502681a3f7a5d6cf061874a7bc45a4f1fddedbe2905aa509986e6f64bde09e9f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722278789119967937,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-344156,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d61da37ea38b5727b5710cdad0fc95fd,},Annotations:map[string]string{io.kubernetes.container.hash: c06782b3,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5fd2655106bc50f366df7fe1a0d26b8e18abf5336ff2b35fb0db7c271a905e6,PodSandboxId:e975cb200a028c2f553577bc4ca3dc54153a60bbc2a757ac823269d664485b82,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722278784422534758,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-9sbfq,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f11563c5-3507-44f0-a103-1e8462494e13,},Annotations:map[string]string{io.kubernetes.container.hash: fb54a535,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c025231c68b98f9cec2be5488e7a2415cb848aff6b3457f54b1bcf4bdcf02d2a,PodSandboxId:333832b2e557c626d095d70ce80398da1efe4cf65d01af5554293376334042b9,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722278767938486162,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-344156,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f125f3c55fdf22425c7e10df6c846062,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePoli
cy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81184078df7bea819e58580fd80c6cffb76960208cfdcf77e820b9597e999ba0,PodSandboxId:12494ac147e1d192c12098dcb21d6c6df9c76e580409d272aef0ef71d9a4906a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722278751552759210,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gp282,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abf94303-b608-45b5-ae8b-9288be614a8f,},Annotations:map[string]string{io.kubernetes.container.hash: 6e0cc5f5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termin
ationGracePeriod: 30,},},&Container{Id:4260fb67ddc41983a522e2691ad7642fca868ad3425cfe9b4ae67e7a346c8e91,PodSandboxId:aa7e4dbfa154ae6d0f220755ba9d1789fa37b73e1e5658abe8f771e05d7855ee,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722278751244807236,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ea00f25-122f-4a18-9d69-3606cfddf4d9,},Annotations:map[string]string{io.kubernetes.container.hash: 70731b68,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:7a8271452e01844131f009ae7b4d6a0628e58b94b2a87a9aeb2990efcb11191e,PodSandboxId:5459cf266e338752287e9df29b9fa0a3a25bf21a51d55f8eacbc87c0b472d01c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722278751315791833,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-5slmg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2aca93c-209e-48b6-a9a5-692bdf185129,},Annotations:map[string]string{io.kubernetes.container.hash: 48049156,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kub
ernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab40f6e9b301a395b5cd5e94d8503edf8e224c2587be4fd2daf98a89374a7e9e,PodSandboxId:86a1e64fb3784cf94a78db0167520ad1df05b70bc881f599245c47f08728be6b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722278751229841705,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-84nqp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4e18e53-1c72-440f-82b2-bd1b4306af12,},Annotations:map[string]string{io.kubernetes.container.hash: 16293ddd,io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:caa8059749c41b7868cf8f0b61f0356539508f2667cbe7bbfae679c18cd89268,PodSandboxId:f37cc1a23ea4cbdc9e1e5a727bd4054ae477b5ef49a7189d7a0b6f23467727f2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722278751112866025,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-344156,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 30243da5f1a98e23c72326dd278a562e,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCoun
t: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a37174e321aa6d722fe66991eec4aa407c80ca27b8befba847d42c6c4bccd4a4,PodSandboxId:453b2b6892cf0b6ee26e984852e3523d3d145e86281a8e34066fd7906dfd7b39,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722278751150875901,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-h5h7v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2b09553-dd59-44ab-a738-41e872defd34,},Annotations:map[string]string{io.kubernetes.container.hash: 59c68fb6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\
":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:182486140ce8714c921be4d3bda2253429cb415d758f79aeb6b0ab42f631d68b,PodSandboxId:118dbdb3a468781a14c11f74f95a432103d5f52631d9fe537e936d9f30d1a68f,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722278750906805185,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-344156,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 67610b75999e06603675bc1a64d5ef7d,},Annotations:map[string]string{io.kubernetes.container.hash: 4f9376d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c373f53a9dbd411fe323c9a8fb32f348b83f82f89bc8fb682d325a34826437b5,PodSandboxId:502681a3f7a5d6cf061874a7bc45a4f1fddedbe2905aa509986e6f64bde09e9f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722278750955360200,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-344156,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d61da37ea38b5727b
5710cdad0fc95fd,},Annotations:map[string]string{io.kubernetes.container.hash: c06782b3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2297dbc5667b853e3a48d404f4b17f021af9cf0011a39175e36cf998b6fb2dcf,PodSandboxId:d1126d0597b32811fe4cd57edea908284ab89359f93d1b3bd14a957a786fdd3e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722278750940053649,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-344156,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d17047d55559cfd90852a780672fb93,},Ann
otations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d152449ddedd3a52cbbb9d3acfb3bf85c0e5fa9f81a0c0359f4148d4c603d783,PodSandboxId:98fcabecdf16c058b2c9b2d5b67a175d4427e2426d8c8ecad90fe5e7e61c7166,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722278222485055131,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-9sbfq,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f11563c5-3507-44f0-a103-1e8462494e13,},Annot
ations:map[string]string{io.kubernetes.container.hash: fb54a535,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a4d13ace439ff6db0bd224c5959b2f1de0aca9190251438b96b230bd76dad67,PodSandboxId:331a36b1d7af6a03c1de960f2f92f9e567bb8d9a89fef7342712caae96969f2c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722278090682990467,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-h5h7v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2b09553-dd59-44ab-a738-41e872defd34,},Annotations:map[string]string{io.kube
rnetes.container.hash: 59c68fb6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d0acef755a4a9cf64d3fa80a06a2fb7cd2c2f24d851c814a12dbfd69b8c8ae6,PodSandboxId:3bc8a1c2175a3fcdce5b369132d086e20e9843f84b0af2dec1acd2dc3f598cb2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722278090616145812,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7db6d8ff4d-5slmg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2aca93c-209e-48b6-a9a5-692bdf185129,},Annotations:map[string]string{io.kubernetes.container.hash: 48049156,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88c61cb99966582064c98436dabbb6247148296145067505f732961e9dafcf62,PodSandboxId:5312fee5fcd07548b5a87233879d29cd884fb0a7e49ffeffe66817b71a7b2ac9,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]
string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722278078648181661,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-84nqp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4e18e53-1c72-440f-82b2-bd1b4306af12,},Annotations:map[string]string{io.kubernetes.container.hash: 16293ddd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea6501e2c6d48c68182f6d966404f0d58013e7ee6b2d05e6e8a8de079a01e50b,PodSandboxId:f041673054c6d8c2cbbc857f62b73eafbb56f1089f1a1937ee91d2e3cdb89df9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,Runtime
Handler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722278076564436457,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gp282,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abf94303-b608-45b5-ae8b-9288be614a8f,},Annotations:map[string]string{io.kubernetes.container.hash: 6e0cc5f5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cea7dd8ee7d180192a5a6562a72a56f86a9a432553225602839d9657f42f95a4,PodSandboxId:ec39a320a672eea9866c1f830b546dc2e1fc8f0a3093acc13b1acd6b5d008317,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b7
6722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722278056834871013,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-344156,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d17047d55559cfd90852a780672fb93,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc27c145e7b72db405baaf295995d274d557ba7dbce383424c6297461d859b29,PodSandboxId:5e0320966c0af472e5e166dc8244abd4707674553da0aef0c877b9db5c6b053c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c
0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722278056771768809,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-344156,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67610b75999e06603675bc1a64d5ef7d,},Annotations:map[string]string{io.kubernetes.container.hash: 4f9376d5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7ebeaebf-1bf8-411e-be3c-d7ac08535e89 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:48:17 ha-344156 crio[3799]: time="2024-07-29 18:48:17.631383736Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=dd636585-7c63-4b3a-ba76-a6f97bef97ae name=/runtime.v1.RuntimeService/Version
	Jul 29 18:48:17 ha-344156 crio[3799]: time="2024-07-29 18:48:17.631475510Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=dd636585-7c63-4b3a-ba76-a6f97bef97ae name=/runtime.v1.RuntimeService/Version
	Jul 29 18:48:17 ha-344156 crio[3799]: time="2024-07-29 18:48:17.632789206Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=747138bf-ea3d-4c5b-aa29-ff2d719debcd name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 18:48:17 ha-344156 crio[3799]: time="2024-07-29 18:48:17.633265535Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722278897633242498,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=747138bf-ea3d-4c5b-aa29-ff2d719debcd name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 18:48:17 ha-344156 crio[3799]: time="2024-07-29 18:48:17.634093053Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=62b1ab99-a52e-4fd0-a339-01f1108a2fb8 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:48:17 ha-344156 crio[3799]: time="2024-07-29 18:48:17.634150417Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=62b1ab99-a52e-4fd0-a339-01f1108a2fb8 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:48:17 ha-344156 crio[3799]: time="2024-07-29 18:48:17.634607564Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b174523a06ec7adddf15369e8baac68d361738d24e60facffb158390ec46bb62,PodSandboxId:aa7e4dbfa154ae6d0f220755ba9d1789fa37b73e1e5658abe8f771e05d7855ee,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722278814117910084,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ea00f25-122f-4a18-9d69-3606cfddf4d9,},Annotations:map[string]string{io.kubernetes.container.hash: 70731b68,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5c331c2db87d36569c1e2c3745280ae59411f376d0c6496945bdb87ec2513de,PodSandboxId:f37cc1a23ea4cbdc9e1e5a727bd4054ae477b5ef49a7189d7a0b6f23467727f2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722278797123070814,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-344156,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 30243da5f1a98e23c72326dd278a562e,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:249837d4bf0487b8ddac24a5d86c9a901eb6e862bf649d5aded365f82343bb0b,PodSandboxId:502681a3f7a5d6cf061874a7bc45a4f1fddedbe2905aa509986e6f64bde09e9f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722278789119967937,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-344156,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d61da37ea38b5727b5710cdad0fc95fd,},Annotations:map[string]string{io.kubernetes.container.hash: c06782b3,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5fd2655106bc50f366df7fe1a0d26b8e18abf5336ff2b35fb0db7c271a905e6,PodSandboxId:e975cb200a028c2f553577bc4ca3dc54153a60bbc2a757ac823269d664485b82,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722278784422534758,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-9sbfq,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f11563c5-3507-44f0-a103-1e8462494e13,},Annotations:map[string]string{io.kubernetes.container.hash: fb54a535,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c025231c68b98f9cec2be5488e7a2415cb848aff6b3457f54b1bcf4bdcf02d2a,PodSandboxId:333832b2e557c626d095d70ce80398da1efe4cf65d01af5554293376334042b9,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722278767938486162,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-344156,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f125f3c55fdf22425c7e10df6c846062,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePoli
cy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81184078df7bea819e58580fd80c6cffb76960208cfdcf77e820b9597e999ba0,PodSandboxId:12494ac147e1d192c12098dcb21d6c6df9c76e580409d272aef0ef71d9a4906a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722278751552759210,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gp282,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abf94303-b608-45b5-ae8b-9288be614a8f,},Annotations:map[string]string{io.kubernetes.container.hash: 6e0cc5f5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termin
ationGracePeriod: 30,},},&Container{Id:4260fb67ddc41983a522e2691ad7642fca868ad3425cfe9b4ae67e7a346c8e91,PodSandboxId:aa7e4dbfa154ae6d0f220755ba9d1789fa37b73e1e5658abe8f771e05d7855ee,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722278751244807236,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ea00f25-122f-4a18-9d69-3606cfddf4d9,},Annotations:map[string]string{io.kubernetes.container.hash: 70731b68,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:7a8271452e01844131f009ae7b4d6a0628e58b94b2a87a9aeb2990efcb11191e,PodSandboxId:5459cf266e338752287e9df29b9fa0a3a25bf21a51d55f8eacbc87c0b472d01c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722278751315791833,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-5slmg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2aca93c-209e-48b6-a9a5-692bdf185129,},Annotations:map[string]string{io.kubernetes.container.hash: 48049156,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kub
ernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab40f6e9b301a395b5cd5e94d8503edf8e224c2587be4fd2daf98a89374a7e9e,PodSandboxId:86a1e64fb3784cf94a78db0167520ad1df05b70bc881f599245c47f08728be6b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722278751229841705,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-84nqp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4e18e53-1c72-440f-82b2-bd1b4306af12,},Annotations:map[string]string{io.kubernetes.container.hash: 16293ddd,io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:caa8059749c41b7868cf8f0b61f0356539508f2667cbe7bbfae679c18cd89268,PodSandboxId:f37cc1a23ea4cbdc9e1e5a727bd4054ae477b5ef49a7189d7a0b6f23467727f2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722278751112866025,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-344156,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 30243da5f1a98e23c72326dd278a562e,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCoun
t: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a37174e321aa6d722fe66991eec4aa407c80ca27b8befba847d42c6c4bccd4a4,PodSandboxId:453b2b6892cf0b6ee26e984852e3523d3d145e86281a8e34066fd7906dfd7b39,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722278751150875901,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-h5h7v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2b09553-dd59-44ab-a738-41e872defd34,},Annotations:map[string]string{io.kubernetes.container.hash: 59c68fb6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\
":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:182486140ce8714c921be4d3bda2253429cb415d758f79aeb6b0ab42f631d68b,PodSandboxId:118dbdb3a468781a14c11f74f95a432103d5f52631d9fe537e936d9f30d1a68f,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722278750906805185,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-344156,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 67610b75999e06603675bc1a64d5ef7d,},Annotations:map[string]string{io.kubernetes.container.hash: 4f9376d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c373f53a9dbd411fe323c9a8fb32f348b83f82f89bc8fb682d325a34826437b5,PodSandboxId:502681a3f7a5d6cf061874a7bc45a4f1fddedbe2905aa509986e6f64bde09e9f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722278750955360200,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-344156,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d61da37ea38b5727b
5710cdad0fc95fd,},Annotations:map[string]string{io.kubernetes.container.hash: c06782b3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2297dbc5667b853e3a48d404f4b17f021af9cf0011a39175e36cf998b6fb2dcf,PodSandboxId:d1126d0597b32811fe4cd57edea908284ab89359f93d1b3bd14a957a786fdd3e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722278750940053649,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-344156,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d17047d55559cfd90852a780672fb93,},Ann
otations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d152449ddedd3a52cbbb9d3acfb3bf85c0e5fa9f81a0c0359f4148d4c603d783,PodSandboxId:98fcabecdf16c058b2c9b2d5b67a175d4427e2426d8c8ecad90fe5e7e61c7166,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722278222485055131,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-9sbfq,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f11563c5-3507-44f0-a103-1e8462494e13,},Annot
ations:map[string]string{io.kubernetes.container.hash: fb54a535,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a4d13ace439ff6db0bd224c5959b2f1de0aca9190251438b96b230bd76dad67,PodSandboxId:331a36b1d7af6a03c1de960f2f92f9e567bb8d9a89fef7342712caae96969f2c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722278090682990467,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-h5h7v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2b09553-dd59-44ab-a738-41e872defd34,},Annotations:map[string]string{io.kube
rnetes.container.hash: 59c68fb6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d0acef755a4a9cf64d3fa80a06a2fb7cd2c2f24d851c814a12dbfd69b8c8ae6,PodSandboxId:3bc8a1c2175a3fcdce5b369132d086e20e9843f84b0af2dec1acd2dc3f598cb2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722278090616145812,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7db6d8ff4d-5slmg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2aca93c-209e-48b6-a9a5-692bdf185129,},Annotations:map[string]string{io.kubernetes.container.hash: 48049156,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88c61cb99966582064c98436dabbb6247148296145067505f732961e9dafcf62,PodSandboxId:5312fee5fcd07548b5a87233879d29cd884fb0a7e49ffeffe66817b71a7b2ac9,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]
string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722278078648181661,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-84nqp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4e18e53-1c72-440f-82b2-bd1b4306af12,},Annotations:map[string]string{io.kubernetes.container.hash: 16293ddd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea6501e2c6d48c68182f6d966404f0d58013e7ee6b2d05e6e8a8de079a01e50b,PodSandboxId:f041673054c6d8c2cbbc857f62b73eafbb56f1089f1a1937ee91d2e3cdb89df9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,Runtime
Handler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722278076564436457,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gp282,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abf94303-b608-45b5-ae8b-9288be614a8f,},Annotations:map[string]string{io.kubernetes.container.hash: 6e0cc5f5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cea7dd8ee7d180192a5a6562a72a56f86a9a432553225602839d9657f42f95a4,PodSandboxId:ec39a320a672eea9866c1f830b546dc2e1fc8f0a3093acc13b1acd6b5d008317,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b7
6722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722278056834871013,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-344156,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d17047d55559cfd90852a780672fb93,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc27c145e7b72db405baaf295995d274d557ba7dbce383424c6297461d859b29,PodSandboxId:5e0320966c0af472e5e166dc8244abd4707674553da0aef0c877b9db5c6b053c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c
0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722278056771768809,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-344156,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67610b75999e06603675bc1a64d5ef7d,},Annotations:map[string]string{io.kubernetes.container.hash: 4f9376d5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=62b1ab99-a52e-4fd0-a339-01f1108a2fb8 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:48:17 ha-344156 crio[3799]: time="2024-07-29 18:48:17.679789126Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5b209e91-7b0a-44d4-99cb-e188795051b1 name=/runtime.v1.RuntimeService/Version
	Jul 29 18:48:17 ha-344156 crio[3799]: time="2024-07-29 18:48:17.679888990Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5b209e91-7b0a-44d4-99cb-e188795051b1 name=/runtime.v1.RuntimeService/Version
	Jul 29 18:48:17 ha-344156 crio[3799]: time="2024-07-29 18:48:17.680715952Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b8b307a2-2c31-4ab0-8808-5d8b2b5f6b5f name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 18:48:17 ha-344156 crio[3799]: time="2024-07-29 18:48:17.681210977Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722278897681187006,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b8b307a2-2c31-4ab0-8808-5d8b2b5f6b5f name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 18:48:17 ha-344156 crio[3799]: time="2024-07-29 18:48:17.681698240Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7d718f0a-258a-4c1f-8f9d-0ff8a89400f7 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:48:17 ha-344156 crio[3799]: time="2024-07-29 18:48:17.681791731Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7d718f0a-258a-4c1f-8f9d-0ff8a89400f7 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:48:17 ha-344156 crio[3799]: time="2024-07-29 18:48:17.682262378Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b174523a06ec7adddf15369e8baac68d361738d24e60facffb158390ec46bb62,PodSandboxId:aa7e4dbfa154ae6d0f220755ba9d1789fa37b73e1e5658abe8f771e05d7855ee,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722278814117910084,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ea00f25-122f-4a18-9d69-3606cfddf4d9,},Annotations:map[string]string{io.kubernetes.container.hash: 70731b68,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5c331c2db87d36569c1e2c3745280ae59411f376d0c6496945bdb87ec2513de,PodSandboxId:f37cc1a23ea4cbdc9e1e5a727bd4054ae477b5ef49a7189d7a0b6f23467727f2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722278797123070814,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-344156,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 30243da5f1a98e23c72326dd278a562e,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:249837d4bf0487b8ddac24a5d86c9a901eb6e862bf649d5aded365f82343bb0b,PodSandboxId:502681a3f7a5d6cf061874a7bc45a4f1fddedbe2905aa509986e6f64bde09e9f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722278789119967937,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-344156,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d61da37ea38b5727b5710cdad0fc95fd,},Annotations:map[string]string{io.kubernetes.container.hash: c06782b3,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5fd2655106bc50f366df7fe1a0d26b8e18abf5336ff2b35fb0db7c271a905e6,PodSandboxId:e975cb200a028c2f553577bc4ca3dc54153a60bbc2a757ac823269d664485b82,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722278784422534758,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-9sbfq,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f11563c5-3507-44f0-a103-1e8462494e13,},Annotations:map[string]string{io.kubernetes.container.hash: fb54a535,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c025231c68b98f9cec2be5488e7a2415cb848aff6b3457f54b1bcf4bdcf02d2a,PodSandboxId:333832b2e557c626d095d70ce80398da1efe4cf65d01af5554293376334042b9,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722278767938486162,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-344156,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f125f3c55fdf22425c7e10df6c846062,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePoli
cy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81184078df7bea819e58580fd80c6cffb76960208cfdcf77e820b9597e999ba0,PodSandboxId:12494ac147e1d192c12098dcb21d6c6df9c76e580409d272aef0ef71d9a4906a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722278751552759210,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gp282,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abf94303-b608-45b5-ae8b-9288be614a8f,},Annotations:map[string]string{io.kubernetes.container.hash: 6e0cc5f5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termin
ationGracePeriod: 30,},},&Container{Id:4260fb67ddc41983a522e2691ad7642fca868ad3425cfe9b4ae67e7a346c8e91,PodSandboxId:aa7e4dbfa154ae6d0f220755ba9d1789fa37b73e1e5658abe8f771e05d7855ee,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722278751244807236,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ea00f25-122f-4a18-9d69-3606cfddf4d9,},Annotations:map[string]string{io.kubernetes.container.hash: 70731b68,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:7a8271452e01844131f009ae7b4d6a0628e58b94b2a87a9aeb2990efcb11191e,PodSandboxId:5459cf266e338752287e9df29b9fa0a3a25bf21a51d55f8eacbc87c0b472d01c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722278751315791833,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-5slmg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2aca93c-209e-48b6-a9a5-692bdf185129,},Annotations:map[string]string{io.kubernetes.container.hash: 48049156,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kub
ernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab40f6e9b301a395b5cd5e94d8503edf8e224c2587be4fd2daf98a89374a7e9e,PodSandboxId:86a1e64fb3784cf94a78db0167520ad1df05b70bc881f599245c47f08728be6b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722278751229841705,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-84nqp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4e18e53-1c72-440f-82b2-bd1b4306af12,},Annotations:map[string]string{io.kubernetes.container.hash: 16293ddd,io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:caa8059749c41b7868cf8f0b61f0356539508f2667cbe7bbfae679c18cd89268,PodSandboxId:f37cc1a23ea4cbdc9e1e5a727bd4054ae477b5ef49a7189d7a0b6f23467727f2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722278751112866025,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-344156,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 30243da5f1a98e23c72326dd278a562e,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCoun
t: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a37174e321aa6d722fe66991eec4aa407c80ca27b8befba847d42c6c4bccd4a4,PodSandboxId:453b2b6892cf0b6ee26e984852e3523d3d145e86281a8e34066fd7906dfd7b39,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722278751150875901,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-h5h7v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2b09553-dd59-44ab-a738-41e872defd34,},Annotations:map[string]string{io.kubernetes.container.hash: 59c68fb6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\
":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:182486140ce8714c921be4d3bda2253429cb415d758f79aeb6b0ab42f631d68b,PodSandboxId:118dbdb3a468781a14c11f74f95a432103d5f52631d9fe537e936d9f30d1a68f,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722278750906805185,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-344156,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 67610b75999e06603675bc1a64d5ef7d,},Annotations:map[string]string{io.kubernetes.container.hash: 4f9376d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c373f53a9dbd411fe323c9a8fb32f348b83f82f89bc8fb682d325a34826437b5,PodSandboxId:502681a3f7a5d6cf061874a7bc45a4f1fddedbe2905aa509986e6f64bde09e9f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722278750955360200,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-344156,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d61da37ea38b5727b
5710cdad0fc95fd,},Annotations:map[string]string{io.kubernetes.container.hash: c06782b3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2297dbc5667b853e3a48d404f4b17f021af9cf0011a39175e36cf998b6fb2dcf,PodSandboxId:d1126d0597b32811fe4cd57edea908284ab89359f93d1b3bd14a957a786fdd3e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722278750940053649,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-344156,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d17047d55559cfd90852a780672fb93,},Ann
otations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d152449ddedd3a52cbbb9d3acfb3bf85c0e5fa9f81a0c0359f4148d4c603d783,PodSandboxId:98fcabecdf16c058b2c9b2d5b67a175d4427e2426d8c8ecad90fe5e7e61c7166,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722278222485055131,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-9sbfq,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f11563c5-3507-44f0-a103-1e8462494e13,},Annot
ations:map[string]string{io.kubernetes.container.hash: fb54a535,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a4d13ace439ff6db0bd224c5959b2f1de0aca9190251438b96b230bd76dad67,PodSandboxId:331a36b1d7af6a03c1de960f2f92f9e567bb8d9a89fef7342712caae96969f2c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722278090682990467,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-h5h7v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2b09553-dd59-44ab-a738-41e872defd34,},Annotations:map[string]string{io.kube
rnetes.container.hash: 59c68fb6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d0acef755a4a9cf64d3fa80a06a2fb7cd2c2f24d851c814a12dbfd69b8c8ae6,PodSandboxId:3bc8a1c2175a3fcdce5b369132d086e20e9843f84b0af2dec1acd2dc3f598cb2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722278090616145812,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7db6d8ff4d-5slmg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2aca93c-209e-48b6-a9a5-692bdf185129,},Annotations:map[string]string{io.kubernetes.container.hash: 48049156,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88c61cb99966582064c98436dabbb6247148296145067505f732961e9dafcf62,PodSandboxId:5312fee5fcd07548b5a87233879d29cd884fb0a7e49ffeffe66817b71a7b2ac9,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]
string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722278078648181661,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-84nqp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4e18e53-1c72-440f-82b2-bd1b4306af12,},Annotations:map[string]string{io.kubernetes.container.hash: 16293ddd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea6501e2c6d48c68182f6d966404f0d58013e7ee6b2d05e6e8a8de079a01e50b,PodSandboxId:f041673054c6d8c2cbbc857f62b73eafbb56f1089f1a1937ee91d2e3cdb89df9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,Runtime
Handler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722278076564436457,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gp282,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abf94303-b608-45b5-ae8b-9288be614a8f,},Annotations:map[string]string{io.kubernetes.container.hash: 6e0cc5f5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cea7dd8ee7d180192a5a6562a72a56f86a9a432553225602839d9657f42f95a4,PodSandboxId:ec39a320a672eea9866c1f830b546dc2e1fc8f0a3093acc13b1acd6b5d008317,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b7
6722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722278056834871013,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-344156,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d17047d55559cfd90852a780672fb93,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc27c145e7b72db405baaf295995d274d557ba7dbce383424c6297461d859b29,PodSandboxId:5e0320966c0af472e5e166dc8244abd4707674553da0aef0c877b9db5c6b053c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c
0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722278056771768809,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-344156,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67610b75999e06603675bc1a64d5ef7d,},Annotations:map[string]string{io.kubernetes.container.hash: 4f9376d5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7d718f0a-258a-4c1f-8f9d-0ff8a89400f7 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	b174523a06ec7       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       4                   aa7e4dbfa154a       storage-provisioner
	f5c331c2db87d       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      About a minute ago   Running             kube-controller-manager   2                   f37cc1a23ea4c       kube-controller-manager-ha-344156
	249837d4bf048       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      About a minute ago   Running             kube-apiserver            3                   502681a3f7a5d       kube-apiserver-ha-344156
	b5fd2655106bc       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      About a minute ago   Running             busybox                   1                   e975cb200a028       busybox-fc5497c4f-9sbfq
	c025231c68b98       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12                                      2 minutes ago        Running             kube-vip                  0                   333832b2e557c       kube-vip-ha-344156
	81184078df7be       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      2 minutes ago        Running             kube-proxy                1                   12494ac147e1d       kube-proxy-gp282
	7a8271452e018       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      2 minutes ago        Running             coredns                   1                   5459cf266e338       coredns-7db6d8ff4d-5slmg
	4260fb67ddc41       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      2 minutes ago        Exited              storage-provisioner       3                   aa7e4dbfa154a       storage-provisioner
	ab40f6e9b301a       6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46                                      2 minutes ago        Running             kindnet-cni               1                   86a1e64fb3784       kindnet-84nqp
	a37174e321aa6       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      2 minutes ago        Running             coredns                   1                   453b2b6892cf0       coredns-7db6d8ff4d-h5h7v
	caa8059749c41       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      2 minutes ago        Exited              kube-controller-manager   1                   f37cc1a23ea4c       kube-controller-manager-ha-344156
	c373f53a9dbd4       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      2 minutes ago        Exited              kube-apiserver            2                   502681a3f7a5d       kube-apiserver-ha-344156
	2297dbc5667b8       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      2 minutes ago        Running             kube-scheduler            1                   d1126d0597b32       kube-scheduler-ha-344156
	182486140ce87       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      2 minutes ago        Running             etcd                      1                   118dbdb3a4687       etcd-ha-344156
	d152449ddedd3       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   11 minutes ago       Exited              busybox                   0                   98fcabecdf16c       busybox-fc5497c4f-9sbfq
	1a4d13ace439f       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      13 minutes ago       Exited              coredns                   0                   331a36b1d7af6       coredns-7db6d8ff4d-h5h7v
	7d0acef755a4a       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      13 minutes ago       Exited              coredns                   0                   3bc8a1c2175a3       coredns-7db6d8ff4d-5slmg
	88c61cb999665       docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9    13 minutes ago       Exited              kindnet-cni               0                   5312fee5fcd07       kindnet-84nqp
	ea6501e2c6d48       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      13 minutes ago       Exited              kube-proxy                0                   f041673054c6d       kube-proxy-gp282
	cea7dd8ee7d18       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      14 minutes ago       Exited              kube-scheduler            0                   ec39a320a672e       kube-scheduler-ha-344156
	fc27c145e7b72       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      14 minutes ago       Exited              etcd                      0                   5e0320966c0af       etcd-ha-344156
	
	
	==> coredns [1a4d13ace439ff6db0bd224c5959b2f1de0aca9190251438b96b230bd76dad67] <==
	[INFO] 10.244.1.2:41663 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000101756s
	[INFO] 10.244.2.2:42699 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000103084s
	[INFO] 10.244.2.2:43982 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000096471s
	[INFO] 10.244.2.2:48234 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000064109s
	[INFO] 10.244.2.2:58544 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000127531s
	[INFO] 10.244.2.2:43646 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000097904s
	[INFO] 10.244.0.4:41454 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00007042s
	[INFO] 10.244.1.2:56019 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000130286s
	[INFO] 10.244.1.2:49552 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000419229s
	[INFO] 10.244.1.2:42570 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00019871s
	[INFO] 10.244.1.2:35841 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000085394s
	[INFO] 10.244.2.2:38179 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000154252s
	[INFO] 10.244.2.2:54595 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000095931s
	[INFO] 10.244.0.4:52521 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000102943s
	[INFO] 10.244.0.4:41421 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000122912s
	[INFO] 10.244.1.2:51311 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000262883s
	[INFO] 10.244.1.2:51083 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000108384s
	[INFO] 10.244.2.2:49034 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000138814s
	[INFO] 10.244.2.2:33015 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000141033s
	[INFO] 10.244.2.2:33854 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000124542s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [7a8271452e01844131f009ae7b4d6a0628e58b94b2a87a9aeb2990efcb11191e] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:39140->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[243480641]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (29-Jul-2024 18:46:03.270) (total time: 12564ms):
	Trace[243480641]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:39140->10.96.0.1:443: read: connection reset by peer 12564ms (18:46:15.834)
	Trace[243480641]: [12.56422755s] [12.56422755s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:39140->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [7d0acef755a4a9cf64d3fa80a06a2fb7cd2c2f24d851c814a12dbfd69b8c8ae6] <==
	[INFO] 10.244.1.2:47729 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001573362s
	[INFO] 10.244.2.2:32959 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001674804s
	[INFO] 10.244.0.4:44607 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000137454s
	[INFO] 10.244.0.4:45474 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003415625s
	[INFO] 10.244.0.4:42044 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000293336s
	[INFO] 10.244.0.4:42246 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000257435s
	[INFO] 10.244.1.2:53039 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001621784s
	[INFO] 10.244.1.2:47789 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000179788s
	[INFO] 10.244.1.2:51271 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000115306s
	[INFO] 10.244.1.2:60584 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000160548s
	[INFO] 10.244.2.2:39080 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000143675s
	[INFO] 10.244.2.2:57667 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001587169s
	[INFO] 10.244.2.2:36002 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.000958528s
	[INFO] 10.244.0.4:46689 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001122s
	[INFO] 10.244.0.4:53528 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000068803s
	[INFO] 10.244.0.4:58879 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00007922s
	[INFO] 10.244.2.2:40671 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000165257s
	[INFO] 10.244.2.2:52385 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000072909s
	[INFO] 10.244.0.4:40200 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000101268s
	[INFO] 10.244.0.4:60214 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000092204s
	[INFO] 10.244.1.2:45394 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000209017s
	[INFO] 10.244.1.2:53252 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000072648s
	[INFO] 10.244.2.2:37567 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000168035s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [a37174e321aa6d722fe66991eec4aa407c80ca27b8befba847d42c6c4bccd4a4] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:39414->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[2030692405]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (29-Jul-2024 18:46:02.817) (total time: 10575ms):
	Trace[2030692405]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:39414->10.96.0.1:443: read: connection reset by peer 10575ms (18:46:13.392)
	Trace[2030692405]: [10.575095883s] [10.575095883s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:39414->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:39438->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:39438->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               ha-344156
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-344156
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ff26f50543a99741e8a988a34136ffea2b41dbf0
	                    minikube.k8s.io/name=ha-344156
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_29T18_34_23_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 18:34:22 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-344156
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 18:48:08 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 18:46:32 +0000   Mon, 29 Jul 2024 18:34:22 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 18:46:32 +0000   Mon, 29 Jul 2024 18:34:22 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 18:46:32 +0000   Mon, 29 Jul 2024 18:34:22 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 18:46:32 +0000   Mon, 29 Jul 2024 18:34:50 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.225
	  Hostname:    ha-344156
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 be7f4c1228de4ae58c65b2a0531270c4
	  System UUID:                be7f4c12-28de-4ae5-8c65-b2a0531270c4
	  Boot ID:                    14c798b1-a7f8-4045-a5cc-f99e886c885f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-9sbfq              0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 coredns-7db6d8ff4d-5slmg             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 coredns-7db6d8ff4d-h5h7v             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 etcd-ha-344156                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         13m
	  kube-system                 kindnet-84nqp                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-apiserver-ha-344156             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-ha-344156    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-gp282                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-344156             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-vip-ha-344156                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         43s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 13m                    kube-proxy       
	  Normal   Starting                 103s                   kube-proxy       
	  Normal   NodeHasNoDiskPressure    13m                    kubelet          Node ha-344156 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 13m                    kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  13m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  13m                    kubelet          Node ha-344156 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     13m                    kubelet          Node ha-344156 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           13m                    node-controller  Node ha-344156 event: Registered Node ha-344156 in Controller
	  Normal   NodeReady                13m                    kubelet          Node ha-344156 status is now: NodeReady
	  Normal   RegisteredNode           12m                    node-controller  Node ha-344156 event: Registered Node ha-344156 in Controller
	  Normal   RegisteredNode           11m                    node-controller  Node ha-344156 event: Registered Node ha-344156 in Controller
	  Warning  ContainerGCFailed        2m55s (x2 over 3m55s)  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           99s                    node-controller  Node ha-344156 event: Registered Node ha-344156 in Controller
	  Normal   RegisteredNode           89s                    node-controller  Node ha-344156 event: Registered Node ha-344156 in Controller
	  Normal   RegisteredNode           27s                    node-controller  Node ha-344156 event: Registered Node ha-344156 in Controller
	
	
	Name:               ha-344156-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-344156-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ff26f50543a99741e8a988a34136ffea2b41dbf0
	                    minikube.k8s.io/name=ha-344156
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_29T18_35_26_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 18:35:23 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-344156-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 18:48:08 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 18:47:17 +0000   Mon, 29 Jul 2024 18:46:36 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 18:47:17 +0000   Mon, 29 Jul 2024 18:46:36 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 18:47:17 +0000   Mon, 29 Jul 2024 18:46:36 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 18:47:17 +0000   Mon, 29 Jul 2024 18:46:36 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.249
	  Hostname:    ha-344156-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 ae271825042248168626e86031e0e80b
	  System UUID:                ae271825-0422-4816-8626-e86031e0e80b
	  Boot ID:                    175ac7df-0ca0-443e-95dd-097c6a227ea2
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-np547                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 etcd-ha-344156-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         12m
	  kube-system                 kindnet-b85cc                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-apiserver-ha-344156-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-ha-344156-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-4p5r9                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-ha-344156-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-vip-ha-344156-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 97s                    kube-proxy       
	  Normal  Starting                 12m                    kube-proxy       
	  Normal  NodeAllocatableEnforced  12m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  12m (x8 over 12m)      kubelet          Node ha-344156-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m (x8 over 12m)      kubelet          Node ha-344156-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m (x7 over 12m)      kubelet          Node ha-344156-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           12m                    node-controller  Node ha-344156-m02 event: Registered Node ha-344156-m02 in Controller
	  Normal  RegisteredNode           12m                    node-controller  Node ha-344156-m02 event: Registered Node ha-344156-m02 in Controller
	  Normal  RegisteredNode           11m                    node-controller  Node ha-344156-m02 event: Registered Node ha-344156-m02 in Controller
	  Normal  NodeNotReady             8m58s                  node-controller  Node ha-344156-m02 status is now: NodeNotReady
	  Normal  Starting                 2m10s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m10s (x8 over 2m10s)  kubelet          Node ha-344156-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m10s (x8 over 2m10s)  kubelet          Node ha-344156-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m10s (x7 over 2m10s)  kubelet          Node ha-344156-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m10s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           99s                    node-controller  Node ha-344156-m02 event: Registered Node ha-344156-m02 in Controller
	  Normal  RegisteredNode           89s                    node-controller  Node ha-344156-m02 event: Registered Node ha-344156-m02 in Controller
	  Normal  RegisteredNode           27s                    node-controller  Node ha-344156-m02 event: Registered Node ha-344156-m02 in Controller
	
	
	Name:               ha-344156-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-344156-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ff26f50543a99741e8a988a34136ffea2b41dbf0
	                    minikube.k8s.io/name=ha-344156
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_29T18_36_40_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 18:36:34 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-344156-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 18:48:14 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 18:47:54 +0000   Mon, 29 Jul 2024 18:47:23 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 18:47:54 +0000   Mon, 29 Jul 2024 18:47:23 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 18:47:54 +0000   Mon, 29 Jul 2024 18:47:23 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 18:47:54 +0000   Mon, 29 Jul 2024 18:47:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.148
	  Hostname:    ha-344156-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 009a6c7b1b2049db970288d43db02f16
	  System UUID:                009a6c7b-1b20-49db-9702-88d43db02f16
	  Boot ID:                    4d2183b8-b8b5-438a-ace1-5d47e3a9a9e6
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-q7sxh                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 etcd-ha-344156-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         11m
	  kube-system                 kindnet-ks57n                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      11m
	  kube-system                 kube-apiserver-ha-344156-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-ha-344156-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-w68jl                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-ha-344156-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-vip-ha-344156-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 11m                kube-proxy       
	  Normal   Starting                 37s                kube-proxy       
	  Normal   NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node ha-344156-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    11m (x8 over 11m)  kubelet          Node ha-344156-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m (x7 over 11m)  kubelet          Node ha-344156-m03 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  11m                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           11m                node-controller  Node ha-344156-m03 event: Registered Node ha-344156-m03 in Controller
	  Normal   RegisteredNode           11m                node-controller  Node ha-344156-m03 event: Registered Node ha-344156-m03 in Controller
	  Normal   RegisteredNode           11m                node-controller  Node ha-344156-m03 event: Registered Node ha-344156-m03 in Controller
	  Normal   RegisteredNode           99s                node-controller  Node ha-344156-m03 event: Registered Node ha-344156-m03 in Controller
	  Normal   RegisteredNode           89s                node-controller  Node ha-344156-m03 event: Registered Node ha-344156-m03 in Controller
	  Normal   NodeNotReady             59s                node-controller  Node ha-344156-m03 status is now: NodeNotReady
	  Normal   Starting                 55s                kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  55s                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  55s (x2 over 55s)  kubelet          Node ha-344156-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    55s (x2 over 55s)  kubelet          Node ha-344156-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     55s (x2 over 55s)  kubelet          Node ha-344156-m03 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 55s                kubelet          Node ha-344156-m03 has been rebooted, boot id: 4d2183b8-b8b5-438a-ace1-5d47e3a9a9e6
	  Normal   NodeReady                55s                kubelet          Node ha-344156-m03 status is now: NodeReady
	  Normal   RegisteredNode           27s                node-controller  Node ha-344156-m03 event: Registered Node ha-344156-m03 in Controller
	
	
	Name:               ha-344156-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-344156-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ff26f50543a99741e8a988a34136ffea2b41dbf0
	                    minikube.k8s.io/name=ha-344156
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_29T18_37_36_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 18:37:35 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-344156-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 18:48:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 18:48:10 +0000   Mon, 29 Jul 2024 18:48:10 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 18:48:10 +0000   Mon, 29 Jul 2024 18:48:10 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 18:48:10 +0000   Mon, 29 Jul 2024 18:48:10 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 18:48:10 +0000   Mon, 29 Jul 2024 18:48:10 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.9
	  Hostname:    ha-344156-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 cd3c9a6740fc4ec3a7f2c8b9b2357693
	  System UUID:                cd3c9a67-40fc-4ec3-a7f2-c8b9b2357693
	  Boot ID:                    165c6764-1793-422a-825b-5056b2e78975
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-c84jp       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-proxy-qjzd6    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 4s                 kube-proxy       
	  Normal   Starting                 10m                kube-proxy       
	  Normal   NodeHasSufficientMemory  10m (x2 over 10m)  kubelet          Node ha-344156-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m (x2 over 10m)  kubelet          Node ha-344156-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x2 over 10m)  kubelet          Node ha-344156-m04 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           10m                node-controller  Node ha-344156-m04 event: Registered Node ha-344156-m04 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-344156-m04 event: Registered Node ha-344156-m04 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-344156-m04 event: Registered Node ha-344156-m04 in Controller
	  Normal   NodeReady                9m55s              kubelet          Node ha-344156-m04 status is now: NodeReady
	  Normal   RegisteredNode           99s                node-controller  Node ha-344156-m04 event: Registered Node ha-344156-m04 in Controller
	  Normal   RegisteredNode           89s                node-controller  Node ha-344156-m04 event: Registered Node ha-344156-m04 in Controller
	  Normal   NodeNotReady             59s                node-controller  Node ha-344156-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           27s                node-controller  Node ha-344156-m04 event: Registered Node ha-344156-m04 in Controller
	  Normal   Starting                 8s                 kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  8s                 kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  8s (x2 over 8s)    kubelet          Node ha-344156-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    8s (x2 over 8s)    kubelet          Node ha-344156-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     8s (x2 over 8s)    kubelet          Node ha-344156-m04 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 8s                 kubelet          Node ha-344156-m04 has been rebooted, boot id: 165c6764-1793-422a-825b-5056b2e78975
	  Normal   NodeReady                8s                 kubelet          Node ha-344156-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.055622] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.058895] systemd-fstab-generator[608]: Ignoring "noauto" option for root device
	[  +0.187111] systemd-fstab-generator[622]: Ignoring "noauto" option for root device
	[  +0.118732] systemd-fstab-generator[634]: Ignoring "noauto" option for root device
	[  +0.257910] systemd-fstab-generator[664]: Ignoring "noauto" option for root device
	[  +4.135704] systemd-fstab-generator[764]: Ignoring "noauto" option for root device
	[  +4.319915] systemd-fstab-generator[945]: Ignoring "noauto" option for root device
	[  +0.063539] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.051986] systemd-fstab-generator[1361]: Ignoring "noauto" option for root device
	[  +0.074788] kauditd_printk_skb: 79 callbacks suppressed
	[  +6.534370] kauditd_printk_skb: 18 callbacks suppressed
	[ +21.052219] kauditd_printk_skb: 38 callbacks suppressed
	[Jul29 18:35] kauditd_printk_skb: 24 callbacks suppressed
	[Jul29 18:42] kauditd_printk_skb: 1 callbacks suppressed
	[Jul29 18:45] systemd-fstab-generator[3716]: Ignoring "noauto" option for root device
	[  +0.144364] systemd-fstab-generator[3728]: Ignoring "noauto" option for root device
	[  +0.200136] systemd-fstab-generator[3743]: Ignoring "noauto" option for root device
	[  +0.148656] systemd-fstab-generator[3755]: Ignoring "noauto" option for root device
	[  +0.272619] systemd-fstab-generator[3783]: Ignoring "noauto" option for root device
	[  +8.529726] systemd-fstab-generator[3887]: Ignoring "noauto" option for root device
	[  +0.092707] kauditd_printk_skb: 100 callbacks suppressed
	[  +5.969540] kauditd_printk_skb: 12 callbacks suppressed
	[Jul29 18:46] kauditd_printk_skb: 85 callbacks suppressed
	[  +9.062161] kauditd_printk_skb: 1 callbacks suppressed
	[ +16.299490] kauditd_printk_skb: 3 callbacks suppressed
	
	
	==> etcd [182486140ce8714c921be4d3bda2253429cb415d758f79aeb6b0ab42f631d68b] <==
	{"level":"warn","ts":"2024-07-29T18:47:25.278409Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"3a4672411638cebf","error":"Get \"https://192.168.39.148:2380/version\": dial tcp 192.168.39.148:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-29T18:47:27.021004Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"3a4672411638cebf","rtt":"0s","error":"dial tcp 192.168.39.148:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-29T18:47:27.024719Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"3a4672411638cebf","rtt":"0s","error":"dial tcp 192.168.39.148:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-29T18:47:29.280396Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.148:2380/version","remote-member-id":"3a4672411638cebf","error":"Get \"https://192.168.39.148:2380/version\": dial tcp 192.168.39.148:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-29T18:47:29.280484Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"3a4672411638cebf","error":"Get \"https://192.168.39.148:2380/version\": dial tcp 192.168.39.148:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-29T18:47:32.022041Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"3a4672411638cebf","rtt":"0s","error":"dial tcp 192.168.39.148:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-29T18:47:32.025343Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"3a4672411638cebf","rtt":"0s","error":"dial tcp 192.168.39.148:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-29T18:47:33.282956Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.148:2380/version","remote-member-id":"3a4672411638cebf","error":"Get \"https://192.168.39.148:2380/version\": dial tcp 192.168.39.148:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-29T18:47:33.283043Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"3a4672411638cebf","error":"Get \"https://192.168.39.148:2380/version\": dial tcp 192.168.39.148:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-29T18:47:35.62424Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"198.900628ms","expected-duration":"100ms","prefix":"","request":"header:<ID:10100888967645672602 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/kube-system/plndr-cp-lock\" mod_revision:2572 > success:<request_put:<key:\"/registry/leases/kube-system/plndr-cp-lock\" value_size:368 >> failure:<request_range:<key:\"/registry/leases/kube-system/plndr-cp-lock\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-07-29T18:47:35.624575Z","caller":"traceutil/trace.go:171","msg":"trace[1799990058] transaction","detail":"{read_only:false; response_revision:2576; number_of_response:1; }","duration":"398.653804ms","start":"2024-07-29T18:47:35.225884Z","end":"2024-07-29T18:47:35.624538Z","steps":["trace[1799990058] 'process raft request'  (duration: 198.70187ms)","trace[1799990058] 'compare'  (duration: 198.671624ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-29T18:47:35.624743Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-29T18:47:35.225865Z","time spent":"398.773468ms","remote":"127.0.0.1:60966","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":418,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/plndr-cp-lock\" mod_revision:2572 > success:<request_put:<key:\"/registry/leases/kube-system/plndr-cp-lock\" value_size:368 >> failure:<request_range:<key:\"/registry/leases/kube-system/plndr-cp-lock\" > >"}
	{"level":"info","ts":"2024-07-29T18:47:36.099911Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"3a4672411638cebf"}
	{"level":"info","ts":"2024-07-29T18:47:36.119518Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"fb0a52f06b768c2d","to":"3a4672411638cebf","stream-type":"stream Message"}
	{"level":"info","ts":"2024-07-29T18:47:36.119636Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"fb0a52f06b768c2d","remote-peer-id":"3a4672411638cebf"}
	{"level":"info","ts":"2024-07-29T18:47:36.119993Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"fb0a52f06b768c2d","remote-peer-id":"3a4672411638cebf"}
	{"level":"info","ts":"2024-07-29T18:47:36.14435Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"fb0a52f06b768c2d","remote-peer-id":"3a4672411638cebf"}
	{"level":"info","ts":"2024-07-29T18:47:36.152592Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"fb0a52f06b768c2d","to":"3a4672411638cebf","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-07-29T18:47:36.152674Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"fb0a52f06b768c2d","remote-peer-id":"3a4672411638cebf"}
	{"level":"warn","ts":"2024-07-29T18:47:36.165412Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"192.168.39.148:59306","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2024-07-29T18:47:36.170634Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"192.168.39.148:59320","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2024-07-29T18:47:36.180979Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"192.168.39.148:59336","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2024-07-29T18:47:37.022649Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"3a4672411638cebf","rtt":"0s","error":"dial tcp 192.168.39.148:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-29T18:47:37.025854Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"3a4672411638cebf","rtt":"0s","error":"dial tcp 192.168.39.148:2380: connect: connection refused"}
	{"level":"info","ts":"2024-07-29T18:47:42.438931Z","caller":"traceutil/trace.go:171","msg":"trace[864364412] transaction","detail":"{read_only:false; response_revision:2613; number_of_response:1; }","duration":"103.23738ms","start":"2024-07-29T18:47:42.335668Z","end":"2024-07-29T18:47:42.438905Z","steps":["trace[864364412] 'process raft request'  (duration: 103.102916ms)"],"step_count":1}
	
	
	==> etcd [fc27c145e7b72db405baaf295995d274d557ba7dbce383424c6297461d859b29] <==
	2024/07/29 18:44:03 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-07-29T18:44:03.745328Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-29T18:44:02.719675Z","time spent":"1.025591002s","remote":"127.0.0.1:33116","response type":"/etcdserverpb.KV/Range","request count":0,"request size":51,"response count":0,"response size":0,"request content":"key:\"/registry/limitranges/\" range_end:\"/registry/limitranges0\" limit:10000 "}
	2024/07/29 18:44:03 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-07-29T18:44:03.745654Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-29T18:44:02.721757Z","time spent":"1.023884919s","remote":"127.0.0.1:33186","response type":"/etcdserverpb.KV/Range","request count":0,"request size":51,"response count":0,"response size":0,"request content":"key:\"/registry/controllers/\" range_end:\"/registry/controllers0\" limit:10000 "}
	2024/07/29 18:44:03 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-07-29T18:44:03.787733Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.225:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-29T18:44:03.787782Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.225:2379: use of closed network connection"}
	{"level":"info","ts":"2024-07-29T18:44:03.787844Z","caller":"etcdserver/server.go:1462","msg":"skipped leadership transfer; local server is not leader","local-member-id":"fb0a52f06b768c2d","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-07-29T18:44:03.788034Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"9d70d498f3feaf66"}
	{"level":"info","ts":"2024-07-29T18:44:03.788069Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"9d70d498f3feaf66"}
	{"level":"info","ts":"2024-07-29T18:44:03.788096Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"9d70d498f3feaf66"}
	{"level":"info","ts":"2024-07-29T18:44:03.788226Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"fb0a52f06b768c2d","remote-peer-id":"9d70d498f3feaf66"}
	{"level":"info","ts":"2024-07-29T18:44:03.788328Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"fb0a52f06b768c2d","remote-peer-id":"9d70d498f3feaf66"}
	{"level":"info","ts":"2024-07-29T18:44:03.78841Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"fb0a52f06b768c2d","remote-peer-id":"9d70d498f3feaf66"}
	{"level":"info","ts":"2024-07-29T18:44:03.788443Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"9d70d498f3feaf66"}
	{"level":"info","ts":"2024-07-29T18:44:03.788451Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"3a4672411638cebf"}
	{"level":"info","ts":"2024-07-29T18:44:03.788463Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"3a4672411638cebf"}
	{"level":"info","ts":"2024-07-29T18:44:03.788481Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"3a4672411638cebf"}
	{"level":"info","ts":"2024-07-29T18:44:03.788551Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"fb0a52f06b768c2d","remote-peer-id":"3a4672411638cebf"}
	{"level":"info","ts":"2024-07-29T18:44:03.788599Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"fb0a52f06b768c2d","remote-peer-id":"3a4672411638cebf"}
	{"level":"info","ts":"2024-07-29T18:44:03.788691Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"fb0a52f06b768c2d","remote-peer-id":"3a4672411638cebf"}
	{"level":"info","ts":"2024-07-29T18:44:03.788728Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"3a4672411638cebf"}
	{"level":"info","ts":"2024-07-29T18:44:03.792232Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.225:2380"}
	{"level":"info","ts":"2024-07-29T18:44:03.792423Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.225:2380"}
	{"level":"info","ts":"2024-07-29T18:44:03.792435Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"ha-344156","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.225:2380"],"advertise-client-urls":["https://192.168.39.225:2379"]}
	
	
	==> kernel <==
	 18:48:18 up 14 min,  0 users,  load average: 0.28, 0.36, 0.27
	Linux ha-344156 5.10.207 #1 SMP Tue Jul 23 04:25:44 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [88c61cb99966582064c98436dabbb6247148296145067505f732961e9dafcf62] <==
	I0729 18:43:39.798953       1 main.go:295] Handling node with IPs: map[192.168.39.225:{}]
	I0729 18:43:39.799133       1 main.go:299] handling current node
	I0729 18:43:39.799235       1 main.go:295] Handling node with IPs: map[192.168.39.249:{}]
	I0729 18:43:39.799268       1 main.go:322] Node ha-344156-m02 has CIDR [10.244.1.0/24] 
	I0729 18:43:39.799558       1 main.go:295] Handling node with IPs: map[192.168.39.148:{}]
	I0729 18:43:39.799634       1 main.go:322] Node ha-344156-m03 has CIDR [10.244.2.0/24] 
	I0729 18:43:39.799739       1 main.go:295] Handling node with IPs: map[192.168.39.9:{}]
	I0729 18:43:39.799761       1 main.go:322] Node ha-344156-m04 has CIDR [10.244.3.0/24] 
	I0729 18:43:49.798944       1 main.go:295] Handling node with IPs: map[192.168.39.148:{}]
	I0729 18:43:49.799004       1 main.go:322] Node ha-344156-m03 has CIDR [10.244.2.0/24] 
	I0729 18:43:49.799200       1 main.go:295] Handling node with IPs: map[192.168.39.9:{}]
	I0729 18:43:49.799227       1 main.go:322] Node ha-344156-m04 has CIDR [10.244.3.0/24] 
	I0729 18:43:49.799345       1 main.go:295] Handling node with IPs: map[192.168.39.225:{}]
	I0729 18:43:49.799353       1 main.go:299] handling current node
	I0729 18:43:49.799367       1 main.go:295] Handling node with IPs: map[192.168.39.249:{}]
	I0729 18:43:49.799371       1 main.go:322] Node ha-344156-m02 has CIDR [10.244.1.0/24] 
	I0729 18:43:59.798908       1 main.go:295] Handling node with IPs: map[192.168.39.249:{}]
	I0729 18:43:59.799081       1 main.go:322] Node ha-344156-m02 has CIDR [10.244.1.0/24] 
	I0729 18:43:59.799248       1 main.go:295] Handling node with IPs: map[192.168.39.148:{}]
	I0729 18:43:59.799271       1 main.go:322] Node ha-344156-m03 has CIDR [10.244.2.0/24] 
	I0729 18:43:59.799452       1 main.go:295] Handling node with IPs: map[192.168.39.9:{}]
	I0729 18:43:59.799474       1 main.go:322] Node ha-344156-m04 has CIDR [10.244.3.0/24] 
	I0729 18:43:59.799535       1 main.go:295] Handling node with IPs: map[192.168.39.225:{}]
	I0729 18:43:59.799554       1 main.go:299] handling current node
	E0729 18:44:01.728415       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Node: the server has asked for the client to provide credentials (get nodes) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=7, ErrCode=NO_ERROR, debug=""
	
	
	==> kindnet [ab40f6e9b301a395b5cd5e94d8503edf8e224c2587be4fd2daf98a89374a7e9e] <==
	I0729 18:47:42.589253       1 main.go:322] Node ha-344156-m04 has CIDR [10.244.3.0/24] 
	I0729 18:47:52.579861       1 main.go:295] Handling node with IPs: map[192.168.39.225:{}]
	I0729 18:47:52.579962       1 main.go:299] handling current node
	I0729 18:47:52.579989       1 main.go:295] Handling node with IPs: map[192.168.39.249:{}]
	I0729 18:47:52.579995       1 main.go:322] Node ha-344156-m02 has CIDR [10.244.1.0/24] 
	I0729 18:47:52.580157       1 main.go:295] Handling node with IPs: map[192.168.39.148:{}]
	I0729 18:47:52.580183       1 main.go:322] Node ha-344156-m03 has CIDR [10.244.2.0/24] 
	I0729 18:47:52.580349       1 main.go:295] Handling node with IPs: map[192.168.39.9:{}]
	I0729 18:47:52.580390       1 main.go:322] Node ha-344156-m04 has CIDR [10.244.3.0/24] 
	I0729 18:48:02.581113       1 main.go:295] Handling node with IPs: map[192.168.39.225:{}]
	I0729 18:48:02.581440       1 main.go:299] handling current node
	I0729 18:48:02.581592       1 main.go:295] Handling node with IPs: map[192.168.39.249:{}]
	I0729 18:48:02.581717       1 main.go:322] Node ha-344156-m02 has CIDR [10.244.1.0/24] 
	I0729 18:48:02.582015       1 main.go:295] Handling node with IPs: map[192.168.39.148:{}]
	I0729 18:48:02.582095       1 main.go:322] Node ha-344156-m03 has CIDR [10.244.2.0/24] 
	I0729 18:48:02.582400       1 main.go:295] Handling node with IPs: map[192.168.39.9:{}]
	I0729 18:48:02.582524       1 main.go:322] Node ha-344156-m04 has CIDR [10.244.3.0/24] 
	I0729 18:48:12.580484       1 main.go:295] Handling node with IPs: map[192.168.39.148:{}]
	I0729 18:48:12.580682       1 main.go:322] Node ha-344156-m03 has CIDR [10.244.2.0/24] 
	I0729 18:48:12.580914       1 main.go:295] Handling node with IPs: map[192.168.39.9:{}]
	I0729 18:48:12.580952       1 main.go:322] Node ha-344156-m04 has CIDR [10.244.3.0/24] 
	I0729 18:48:12.581057       1 main.go:295] Handling node with IPs: map[192.168.39.225:{}]
	I0729 18:48:12.581089       1 main.go:299] handling current node
	I0729 18:48:12.581128       1 main.go:295] Handling node with IPs: map[192.168.39.249:{}]
	I0729 18:48:12.581155       1 main.go:322] Node ha-344156-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [249837d4bf0487b8ddac24a5d86c9a901eb6e862bf649d5aded365f82343bb0b] <==
	I0729 18:46:30.910893       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0729 18:46:30.910902       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0729 18:46:30.911738       1 controller.go:80] Starting OpenAPI V3 AggregationController
	I0729 18:46:31.008751       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0729 18:46:31.008792       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0729 18:46:31.008797       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0729 18:46:31.010430       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0729 18:46:31.016677       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	W0729 18:46:31.020502       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.148 192.168.39.249]
	I0729 18:46:31.025738       1 shared_informer.go:320] Caches are synced for configmaps
	I0729 18:46:31.033679       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0729 18:46:31.033721       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0729 18:46:31.033736       1 aggregator.go:165] initial CRD sync complete...
	I0729 18:46:31.033752       1 autoregister_controller.go:141] Starting autoregister controller
	I0729 18:46:31.033775       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0729 18:46:31.033780       1 cache.go:39] Caches are synced for autoregister controller
	I0729 18:46:31.039499       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0729 18:46:31.042789       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0729 18:46:31.042824       1 policy_source.go:224] refreshing policies
	I0729 18:46:31.081128       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0729 18:46:31.121850       1 controller.go:615] quota admission added evaluator for: endpoints
	I0729 18:46:31.129054       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0729 18:46:31.131990       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0729 18:46:31.913823       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0729 18:46:32.248614       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.148 192.168.39.225 192.168.39.249]
	
	
	==> kube-apiserver [c373f53a9dbd411fe323c9a8fb32f348b83f82f89bc8fb682d325a34826437b5] <==
	I0729 18:45:51.731876       1 options.go:221] external host was not specified, using 192.168.39.225
	I0729 18:45:51.735046       1 server.go:148] Version: v1.30.3
	I0729 18:45:51.735738       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 18:45:52.362436       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0729 18:45:52.363034       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0729 18:45:52.380160       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0729 18:45:52.383497       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0729 18:45:52.383851       1 instance.go:299] Using reconciler: lease
	W0729 18:46:12.357533       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	W0729 18:46:12.357268       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	F0729 18:46:12.385469       1 instance.go:292] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-controller-manager [caa8059749c41b7868cf8f0b61f0356539508f2667cbe7bbfae679c18cd89268] <==
	I0729 18:45:52.681242       1 serving.go:380] Generated self-signed cert in-memory
	I0729 18:45:52.956462       1 controllermanager.go:189] "Starting" version="v1.30.3"
	I0729 18:45:52.956499       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 18:45:52.960805       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0729 18:45:52.961852       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0729 18:45:52.961897       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0729 18:45:52.961924       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	E0729 18:46:13.391682       1 controllermanager.go:234] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.225:8443/healthz\": dial tcp 192.168.39.225:8443: connect: connection refused"
	
	
	==> kube-controller-manager [f5c331c2db87d36569c1e2c3745280ae59411f376d0c6496945bdb87ec2513de] <==
	I0729 18:46:49.101839       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0729 18:46:49.154546       1 shared_informer.go:320] Caches are synced for crt configmap
	I0729 18:46:49.207568       1 shared_informer.go:320] Caches are synced for persistent volume
	I0729 18:46:49.214729       1 shared_informer.go:320] Caches are synced for stateful set
	I0729 18:46:49.232143       1 shared_informer.go:320] Caches are synced for ephemeral
	I0729 18:46:49.234534       1 shared_informer.go:320] Caches are synced for PV protection
	I0729 18:46:49.238375       1 shared_informer.go:320] Caches are synced for resource quota
	I0729 18:46:49.279220       1 shared_informer.go:320] Caches are synced for expand
	I0729 18:46:49.279400       1 shared_informer.go:320] Caches are synced for resource quota
	I0729 18:46:49.280455       1 shared_informer.go:320] Caches are synced for attach detach
	I0729 18:46:49.282540       1 shared_informer.go:320] Caches are synced for PVC protection
	I0729 18:46:49.665615       1 shared_informer.go:320] Caches are synced for garbage collector
	I0729 18:46:49.690045       1 shared_informer.go:320] Caches are synced for garbage collector
	I0729 18:46:49.690127       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0729 18:47:00.447196       1 endpointslice_controller.go:311] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-8t88s EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-8t88s\": the object has been modified; please apply your changes to the latest version and try again"
	I0729 18:47:00.447395       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"6c4c2425-5c30-431c-986f-d6adfb49fa73", APIVersion:"v1", ResourceVersion:"240", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-8t88s EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-8t88s": the object has been modified; please apply your changes to the latest version and try again
	I0729 18:47:00.465483       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="60.502201ms"
	I0729 18:47:00.465731       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="58.007µs"
	I0729 18:47:19.924940       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-344156-m04"
	I0729 18:47:20.137822       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="24.686836ms"
	I0729 18:47:20.138533       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="130.075µs"
	I0729 18:47:24.654596       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="93.128µs"
	I0729 18:47:42.477640       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="35.912929ms"
	I0729 18:47:42.477931       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="60.849µs"
	I0729 18:48:10.274203       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-344156-m04"
	
	
	==> kube-proxy [81184078df7bea819e58580fd80c6cffb76960208cfdcf77e820b9597e999ba0] <==
	I0729 18:45:53.018265       1 server_linux.go:69] "Using iptables proxy"
	E0729 18:45:55.545867       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-344156\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0729 18:45:58.618619       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-344156\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0729 18:46:01.689888       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-344156\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0729 18:46:07.834731       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-344156\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0729 18:46:17.050197       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-344156\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0729 18:46:34.273438       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.225"]
	I0729 18:46:34.310321       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0729 18:46:34.310417       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0729 18:46:34.310455       1 server_linux.go:165] "Using iptables Proxier"
	I0729 18:46:34.313224       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0729 18:46:34.313688       1 server.go:872] "Version info" version="v1.30.3"
	I0729 18:46:34.313770       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 18:46:34.315360       1 config.go:192] "Starting service config controller"
	I0729 18:46:34.315422       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0729 18:46:34.315463       1 config.go:101] "Starting endpoint slice config controller"
	I0729 18:46:34.315479       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0729 18:46:34.316227       1 config.go:319] "Starting node config controller"
	I0729 18:46:34.317231       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0729 18:46:34.416233       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0729 18:46:34.416341       1 shared_informer.go:320] Caches are synced for service config
	I0729 18:46:34.417654       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [ea6501e2c6d48c68182f6d966404f0d58013e7ee6b2d05e6e8a8de079a01e50b] <==
	E0729 18:42:38.233826       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-344156&resourceVersion=2016": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 18:42:38.233735       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2047": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 18:42:38.233929       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2047": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 18:42:44.761725       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2047": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 18:42:44.761802       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2047": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 18:42:44.761866       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2023": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 18:42:44.761900       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2023": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 18:42:44.761711       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-344156&resourceVersion=2016": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 18:42:44.762010       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-344156&resourceVersion=2016": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 18:42:54.297726       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2023": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 18:42:54.297861       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2023": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 18:42:54.297964       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2047": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 18:42:54.298072       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2047": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 18:42:57.370400       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-344156&resourceVersion=2016": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 18:42:57.370505       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-344156&resourceVersion=2016": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 18:43:18.875032       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2023": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 18:43:18.875207       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2023": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 18:43:18.875032       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2047": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 18:43:18.875369       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2047": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 18:43:21.946583       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-344156&resourceVersion=2016": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 18:43:21.947480       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-344156&resourceVersion=2016": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 18:43:52.667403       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2047": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 18:43:52.667666       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2047": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 18:43:58.811328       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2023": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 18:43:58.811601       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2023": dial tcp 192.168.39.254:8443: connect: no route to host
	
	
	==> kube-scheduler [2297dbc5667b853e3a48d404f4b17f021af9cf0011a39175e36cf998b6fb2dcf] <==
	W0729 18:46:22.091028       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.225:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.225:8443: connect: connection refused
	E0729 18:46:22.091199       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.225:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.225:8443: connect: connection refused
	W0729 18:46:22.237837       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.168.39.225:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.225:8443: connect: connection refused
	E0729 18:46:22.237970       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.39.225:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.225:8443: connect: connection refused
	W0729 18:46:22.332729       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.168.39.225:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.225:8443: connect: connection refused
	E0729 18:46:22.332842       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://192.168.39.225:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.225:8443: connect: connection refused
	W0729 18:46:22.888508       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.168.39.225:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.225:8443: connect: connection refused
	E0729 18:46:22.888648       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192.168.39.225:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.225:8443: connect: connection refused
	W0729 18:46:23.070231       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.225:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.225:8443: connect: connection refused
	E0729 18:46:23.070348       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.225:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.225:8443: connect: connection refused
	W0729 18:46:23.248131       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.168.39.225:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.225:8443: connect: connection refused
	E0729 18:46:23.248219       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://192.168.39.225:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.225:8443: connect: connection refused
	W0729 18:46:23.293085       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.225:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.225:8443: connect: connection refused
	E0729 18:46:23.293146       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.225:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.225:8443: connect: connection refused
	W0729 18:46:27.924491       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: Get "https://192.168.39.225:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.225:8443: connect: connection refused
	E0729 18:46:27.924635       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.39.225:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.225:8443: connect: connection refused
	W0729 18:46:28.477860       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: Get "https://192.168.39.225:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.225:8443: connect: connection refused
	E0729 18:46:28.477955       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.39.225:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.225:8443: connect: connection refused
	W0729 18:46:29.270641       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.168.39.225:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.225:8443: connect: connection refused
	E0729 18:46:29.270695       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://192.168.39.225:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.225:8443: connect: connection refused
	W0729 18:46:30.956603       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0729 18:46:30.956669       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0729 18:46:30.956758       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0729 18:46:30.956793       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0729 18:46:33.999626       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [cea7dd8ee7d180192a5a6562a72a56f86a9a432553225602839d9657f42f95a4] <==
	W0729 18:43:56.982199       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0729 18:43:56.982354       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0729 18:43:57.221472       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0729 18:43:57.221531       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0729 18:43:57.908653       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0729 18:43:57.908738       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0729 18:43:57.917575       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0729 18:43:57.917660       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0729 18:43:58.012160       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0729 18:43:58.012354       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0729 18:43:58.128650       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0729 18:43:58.128736       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0729 18:43:58.261255       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0729 18:43:58.261360       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0729 18:43:58.678215       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0729 18:43:58.678382       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0729 18:43:58.683554       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0729 18:43:58.683666       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0729 18:43:58.738508       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0729 18:43:58.738639       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0729 18:43:59.206732       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0729 18:43:59.206870       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0729 18:44:03.653475       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0729 18:44:03.653525       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0729 18:44:03.705192       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Jul 29 18:46:26 ha-344156 kubelet[1368]: E0729 18:46:26.265673    1368 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"ha-344156\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-344156?resourceVersion=0&timeout=10s\": dial tcp 192.168.39.254:8443: connect: no route to host"
	Jul 29 18:46:26 ha-344156 kubelet[1368]: I0729 18:46:26.266397    1368 status_manager.go:853] "Failed to get status for pod" podUID="67610b75999e06603675bc1a64d5ef7d" pod="kube-system/etcd-ha-344156" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/etcd-ha-344156\": dial tcp 192.168.39.254:8443: connect: no route to host"
	Jul 29 18:46:29 ha-344156 kubelet[1368]: I0729 18:46:29.101811    1368 scope.go:117] "RemoveContainer" containerID="c373f53a9dbd411fe323c9a8fb32f348b83f82f89bc8fb682d325a34826437b5"
	Jul 29 18:46:29 ha-344156 kubelet[1368]: I0729 18:46:29.104731    1368 scope.go:117] "RemoveContainer" containerID="4260fb67ddc41983a522e2691ad7642fca868ad3425cfe9b4ae67e7a346c8e91"
	Jul 29 18:46:29 ha-344156 kubelet[1368]: E0729 18:46:29.105910    1368 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(3ea00f25-122f-4a18-9d69-3606cfddf4d9)\"" pod="kube-system/storage-provisioner" podUID="3ea00f25-122f-4a18-9d69-3606cfddf4d9"
	Jul 29 18:46:29 ha-344156 kubelet[1368]: E0729 18:46:29.337609    1368 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-344156?timeout=10s\": dial tcp 192.168.39.254:8443: connect: no route to host" interval="7s"
	Jul 29 18:46:29 ha-344156 kubelet[1368]: W0729 18:46:29.337625    1368 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?resourceVersion=2000": dial tcp 192.168.39.254:8443: connect: no route to host
	Jul 29 18:46:29 ha-344156 kubelet[1368]: E0729 18:46:29.337709    1368 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?resourceVersion=2000": dial tcp 192.168.39.254:8443: connect: no route to host
	Jul 29 18:46:29 ha-344156 kubelet[1368]: I0729 18:46:29.337709    1368 status_manager.go:853] "Failed to get status for pod" podUID="abf94303-b608-45b5-ae8b-9288be614a8f" pod="kube-system/kube-proxy-gp282" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gp282\": dial tcp 192.168.39.254:8443: connect: no route to host"
	Jul 29 18:46:29 ha-344156 kubelet[1368]: E0729 18:46:29.337781    1368 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"ha-344156\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-344156?timeout=10s\": dial tcp 192.168.39.254:8443: connect: no route to host"
	Jul 29 18:46:32 ha-344156 kubelet[1368]: I0729 18:46:32.409673    1368 status_manager.go:853] "Failed to get status for pod" podUID="3ea00f25-122f-4a18-9d69-3606cfddf4d9" pod="kube-system/storage-provisioner" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/storage-provisioner\": dial tcp 192.168.39.254:8443: connect: no route to host"
	Jul 29 18:46:32 ha-344156 kubelet[1368]: E0729 18:46:32.410741    1368 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"ha-344156\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-344156?timeout=10s\": dial tcp 192.168.39.254:8443: connect: no route to host"
	Jul 29 18:46:32 ha-344156 kubelet[1368]: E0729 18:46:32.411378    1368 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/events\": dial tcp 192.168.39.254:8443: connect: no route to host" event="&Event{ObjectMeta:{kube-apiserver-ha-344156.17e6c332797a105a  kube-system    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ha-344156,UID:d61da37ea38b5727b5710cdad0fc95fd,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Liveness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ha-344156,},FirstTimestamp:2024-07-29 18:42:09.069617242 +0000 UTC m=+466.097588198,LastTimestamp:2024-07-29 18:42:09.069617242 +0000 UTC m=+466.097588198,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-344156,}"
	Jul 29 18:46:37 ha-344156 kubelet[1368]: I0729 18:46:37.101698    1368 scope.go:117] "RemoveContainer" containerID="caa8059749c41b7868cf8f0b61f0356539508f2667cbe7bbfae679c18cd89268"
	Jul 29 18:46:42 ha-344156 kubelet[1368]: I0729 18:46:42.101219    1368 scope.go:117] "RemoveContainer" containerID="4260fb67ddc41983a522e2691ad7642fca868ad3425cfe9b4ae67e7a346c8e91"
	Jul 29 18:46:42 ha-344156 kubelet[1368]: E0729 18:46:42.101862    1368 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(3ea00f25-122f-4a18-9d69-3606cfddf4d9)\"" pod="kube-system/storage-provisioner" podUID="3ea00f25-122f-4a18-9d69-3606cfddf4d9"
	Jul 29 18:46:54 ha-344156 kubelet[1368]: I0729 18:46:54.101119    1368 scope.go:117] "RemoveContainer" containerID="4260fb67ddc41983a522e2691ad7642fca868ad3425cfe9b4ae67e7a346c8e91"
	Jul 29 18:47:23 ha-344156 kubelet[1368]: E0729 18:47:23.117784    1368 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 18:47:23 ha-344156 kubelet[1368]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 18:47:23 ha-344156 kubelet[1368]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 18:47:23 ha-344156 kubelet[1368]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 18:47:23 ha-344156 kubelet[1368]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 18:47:35 ha-344156 kubelet[1368]: I0729 18:47:35.102036    1368 kubelet.go:1917] "Trying to delete pod" pod="kube-system/kube-vip-ha-344156" podUID="586052c5-c670-4957-b052-e2a7bf8bafb2"
	Jul 29 18:47:35 ha-344156 kubelet[1368]: I0729 18:47:35.134926    1368 kubelet.go:1922] "Deleted mirror pod because it is outdated" pod="kube-system/kube-vip-ha-344156"
	Jul 29 18:47:36 ha-344156 kubelet[1368]: I0729 18:47:36.030503    1368 kubelet.go:1917] "Trying to delete pod" pod="kube-system/kube-vip-ha-344156" podUID="586052c5-c670-4957-b052-e2a7bf8bafb2"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0729 18:48:17.243932 1080853 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19312-1055011/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
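Note on the stderr above: the "bufio.Scanner: token too long" error is Go's bufio.Scanner hitting its default per-token limit (bufio.MaxScanTokenSize, 64 KiB) while reading lastStart.txt line by line, so one very long line in that file aborts the read. Below is a minimal, generic sketch of how a reader can raise that limit with Scanner.Buffer; readLongLines is a hypothetical helper shown for illustration only, not minikube's actual logs.go code.

	// readLongLines scans a file line by line, raising the Scanner buffer
	// limit so lines longer than bufio.MaxScanTokenSize (64 KiB) do not
	// fail with "bufio.Scanner: token too long".
	// Hypothetical helper, for illustration only.
	package main

	import (
		"bufio"
		"fmt"
		"os"
	)

	func readLongLines(path string) ([]string, error) {
		f, err := os.Open(path)
		if err != nil {
			return nil, err
		}
		defer f.Close()

		sc := bufio.NewScanner(f)
		// Start with a 1 MiB buffer and allow tokens up to 16 MiB.
		sc.Buffer(make([]byte, 1024*1024), 16*1024*1024)

		var lines []string
		for sc.Scan() {
			lines = append(lines, sc.Text())
		}
		return lines, sc.Err()
	}

	func main() {
		lines, err := readLongLines("lastStart.txt")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Printf("read %d lines\n", len(lines))
	}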
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-344156 -n ha-344156
helpers_test.go:261: (dbg) Run:  kubectl --context ha-344156 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (378.52s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (141.67s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-amd64 -p ha-344156 stop -v=7 --alsologtostderr
E0729 18:50:34.135162 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/addons-685520/client.crt: no such file or directory
ha_test.go:531: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-344156 stop -v=7 --alsologtostderr: exit status 82 (2m0.468812926s)

                                                
                                                
-- stdout --
	* Stopping node "ha-344156-m04"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 18:48:36.765833 1081248 out.go:291] Setting OutFile to fd 1 ...
	I0729 18:48:36.766109 1081248 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 18:48:36.766122 1081248 out.go:304] Setting ErrFile to fd 2...
	I0729 18:48:36.766129 1081248 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 18:48:36.766397 1081248 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19312-1055011/.minikube/bin
	I0729 18:48:36.766662 1081248 out.go:298] Setting JSON to false
	I0729 18:48:36.766751 1081248 mustload.go:65] Loading cluster: ha-344156
	I0729 18:48:36.767242 1081248 config.go:182] Loaded profile config "ha-344156": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 18:48:36.767368 1081248 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156/config.json ...
	I0729 18:48:36.767619 1081248 mustload.go:65] Loading cluster: ha-344156
	I0729 18:48:36.767799 1081248 config.go:182] Loaded profile config "ha-344156": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 18:48:36.767835 1081248 stop.go:39] StopHost: ha-344156-m04
	I0729 18:48:36.768417 1081248 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:48:36.768471 1081248 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:48:36.784878 1081248 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37253
	I0729 18:48:36.785402 1081248 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:48:36.785959 1081248 main.go:141] libmachine: Using API Version  1
	I0729 18:48:36.785983 1081248 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:48:36.786300 1081248 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:48:36.789084 1081248 out.go:177] * Stopping node "ha-344156-m04"  ...
	I0729 18:48:36.790397 1081248 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0729 18:48:36.790424 1081248 main.go:141] libmachine: (ha-344156-m04) Calling .DriverName
	I0729 18:48:36.790670 1081248 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0729 18:48:36.790693 1081248 main.go:141] libmachine: (ha-344156-m04) Calling .GetSSHHostname
	I0729 18:48:36.793638 1081248 main.go:141] libmachine: (ha-344156-m04) DBG | domain ha-344156-m04 has defined MAC address 52:54:00:8a:8a:b9 in network mk-ha-344156
	I0729 18:48:36.794015 1081248 main.go:141] libmachine: (ha-344156-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:8a:b9", ip: ""} in network mk-ha-344156: {Iface:virbr1 ExpiryTime:2024-07-29 19:48:05 +0000 UTC Type:0 Mac:52:54:00:8a:8a:b9 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:ha-344156-m04 Clientid:01:52:54:00:8a:8a:b9}
	I0729 18:48:36.794051 1081248 main.go:141] libmachine: (ha-344156-m04) DBG | domain ha-344156-m04 has defined IP address 192.168.39.9 and MAC address 52:54:00:8a:8a:b9 in network mk-ha-344156
	I0729 18:48:36.794168 1081248 main.go:141] libmachine: (ha-344156-m04) Calling .GetSSHPort
	I0729 18:48:36.794355 1081248 main.go:141] libmachine: (ha-344156-m04) Calling .GetSSHKeyPath
	I0729 18:48:36.794535 1081248 main.go:141] libmachine: (ha-344156-m04) Calling .GetSSHUsername
	I0729 18:48:36.794663 1081248 sshutil.go:53] new ssh client: &{IP:192.168.39.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/ha-344156-m04/id_rsa Username:docker}
	I0729 18:48:36.877714 1081248 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0729 18:48:36.931046 1081248 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0729 18:48:36.983698 1081248 main.go:141] libmachine: Stopping "ha-344156-m04"...
	I0729 18:48:36.983731 1081248 main.go:141] libmachine: (ha-344156-m04) Calling .GetState
	I0729 18:48:36.985270 1081248 main.go:141] libmachine: (ha-344156-m04) Calling .Stop
	I0729 18:48:36.988949 1081248 main.go:141] libmachine: (ha-344156-m04) Waiting for machine to stop 0/120
	I0729 18:48:37.990494 1081248 main.go:141] libmachine: (ha-344156-m04) Waiting for machine to stop 1/120
	I0729 18:48:38.991820 1081248 main.go:141] libmachine: (ha-344156-m04) Waiting for machine to stop 2/120
	I0729 18:48:39.993281 1081248 main.go:141] libmachine: (ha-344156-m04) Waiting for machine to stop 3/120
	I0729 18:48:40.994585 1081248 main.go:141] libmachine: (ha-344156-m04) Waiting for machine to stop 4/120
	I0729 18:48:41.996523 1081248 main.go:141] libmachine: (ha-344156-m04) Waiting for machine to stop 5/120
	I0729 18:48:42.997772 1081248 main.go:141] libmachine: (ha-344156-m04) Waiting for machine to stop 6/120
	I0729 18:48:43.999214 1081248 main.go:141] libmachine: (ha-344156-m04) Waiting for machine to stop 7/120
	I0729 18:48:45.000407 1081248 main.go:141] libmachine: (ha-344156-m04) Waiting for machine to stop 8/120
	I0729 18:48:46.001747 1081248 main.go:141] libmachine: (ha-344156-m04) Waiting for machine to stop 9/120
	I0729 18:48:47.004006 1081248 main.go:141] libmachine: (ha-344156-m04) Waiting for machine to stop 10/120
	I0729 18:48:48.005558 1081248 main.go:141] libmachine: (ha-344156-m04) Waiting for machine to stop 11/120
	I0729 18:48:49.006944 1081248 main.go:141] libmachine: (ha-344156-m04) Waiting for machine to stop 12/120
	I0729 18:48:50.008367 1081248 main.go:141] libmachine: (ha-344156-m04) Waiting for machine to stop 13/120
	I0729 18:48:51.009792 1081248 main.go:141] libmachine: (ha-344156-m04) Waiting for machine to stop 14/120
	I0729 18:48:52.011277 1081248 main.go:141] libmachine: (ha-344156-m04) Waiting for machine to stop 15/120
	I0729 18:48:53.012696 1081248 main.go:141] libmachine: (ha-344156-m04) Waiting for machine to stop 16/120
	I0729 18:48:54.014028 1081248 main.go:141] libmachine: (ha-344156-m04) Waiting for machine to stop 17/120
	I0729 18:48:55.015443 1081248 main.go:141] libmachine: (ha-344156-m04) Waiting for machine to stop 18/120
	I0729 18:48:56.017225 1081248 main.go:141] libmachine: (ha-344156-m04) Waiting for machine to stop 19/120
	I0729 18:48:57.019056 1081248 main.go:141] libmachine: (ha-344156-m04) Waiting for machine to stop 20/120
	I0729 18:48:58.021258 1081248 main.go:141] libmachine: (ha-344156-m04) Waiting for machine to stop 21/120
	I0729 18:48:59.022826 1081248 main.go:141] libmachine: (ha-344156-m04) Waiting for machine to stop 22/120
	I0729 18:49:00.024491 1081248 main.go:141] libmachine: (ha-344156-m04) Waiting for machine to stop 23/120
	I0729 18:49:01.025753 1081248 main.go:141] libmachine: (ha-344156-m04) Waiting for machine to stop 24/120
	I0729 18:49:02.027158 1081248 main.go:141] libmachine: (ha-344156-m04) Waiting for machine to stop 25/120
	I0729 18:49:03.028486 1081248 main.go:141] libmachine: (ha-344156-m04) Waiting for machine to stop 26/120
	I0729 18:49:04.029810 1081248 main.go:141] libmachine: (ha-344156-m04) Waiting for machine to stop 27/120
	I0729 18:49:05.032303 1081248 main.go:141] libmachine: (ha-344156-m04) Waiting for machine to stop 28/120
	I0729 18:49:06.033891 1081248 main.go:141] libmachine: (ha-344156-m04) Waiting for machine to stop 29/120
	I0729 18:49:07.035989 1081248 main.go:141] libmachine: (ha-344156-m04) Waiting for machine to stop 30/120
	I0729 18:49:08.037166 1081248 main.go:141] libmachine: (ha-344156-m04) Waiting for machine to stop 31/120
	I0729 18:49:09.038945 1081248 main.go:141] libmachine: (ha-344156-m04) Waiting for machine to stop 32/120
	I0729 18:49:10.040318 1081248 main.go:141] libmachine: (ha-344156-m04) Waiting for machine to stop 33/120
	I0729 18:49:11.042120 1081248 main.go:141] libmachine: (ha-344156-m04) Waiting for machine to stop 34/120
	I0729 18:49:12.044014 1081248 main.go:141] libmachine: (ha-344156-m04) Waiting for machine to stop 35/120
	I0729 18:49:13.046347 1081248 main.go:141] libmachine: (ha-344156-m04) Waiting for machine to stop 36/120
	I0729 18:49:14.047535 1081248 main.go:141] libmachine: (ha-344156-m04) Waiting for machine to stop 37/120
	I0729 18:49:15.049603 1081248 main.go:141] libmachine: (ha-344156-m04) Waiting for machine to stop 38/120
	I0729 18:49:16.051005 1081248 main.go:141] libmachine: (ha-344156-m04) Waiting for machine to stop 39/120
	I0729 18:49:17.052920 1081248 main.go:141] libmachine: (ha-344156-m04) Waiting for machine to stop 40/120
	I0729 18:49:18.055010 1081248 main.go:141] libmachine: (ha-344156-m04) Waiting for machine to stop 41/120
	I0729 18:49:19.056331 1081248 main.go:141] libmachine: (ha-344156-m04) Waiting for machine to stop 42/120
	I0729 18:49:20.058084 1081248 main.go:141] libmachine: (ha-344156-m04) Waiting for machine to stop 43/120
	I0729 18:49:21.059432 1081248 main.go:141] libmachine: (ha-344156-m04) Waiting for machine to stop 44/120
	I0729 18:49:22.061399 1081248 main.go:141] libmachine: (ha-344156-m04) Waiting for machine to stop 45/120
	I0729 18:49:23.062621 1081248 main.go:141] libmachine: (ha-344156-m04) Waiting for machine to stop 46/120
	I0729 18:49:24.064260 1081248 main.go:141] libmachine: (ha-344156-m04) Waiting for machine to stop 47/120
	I0729 18:49:25.065636 1081248 main.go:141] libmachine: (ha-344156-m04) Waiting for machine to stop 48/120
	I0729 18:49:26.067085 1081248 main.go:141] libmachine: (ha-344156-m04) Waiting for machine to stop 49/120
	I0729 18:49:27.068907 1081248 main.go:141] libmachine: (ha-344156-m04) Waiting for machine to stop 50/120
	I0729 18:49:28.070087 1081248 main.go:141] libmachine: (ha-344156-m04) Waiting for machine to stop 51/120
	I0729 18:49:29.071574 1081248 main.go:141] libmachine: (ha-344156-m04) Waiting for machine to stop 52/120
	I0729 18:49:30.072839 1081248 main.go:141] libmachine: (ha-344156-m04) Waiting for machine to stop 53/120
	I0729 18:49:31.074094 1081248 main.go:141] libmachine: (ha-344156-m04) Waiting for machine to stop 54/120
	I0729 18:49:32.075547 1081248 main.go:141] libmachine: (ha-344156-m04) Waiting for machine to stop 55/120
	I0729 18:49:33.077727 1081248 main.go:141] libmachine: (ha-344156-m04) Waiting for machine to stop 56/120
	I0729 18:49:34.079092 1081248 main.go:141] libmachine: (ha-344156-m04) Waiting for machine to stop 57/120
	I0729 18:49:35.081526 1081248 main.go:141] libmachine: (ha-344156-m04) Waiting for machine to stop 58/120
	I0729 18:49:36.082803 1081248 main.go:141] libmachine: (ha-344156-m04) Waiting for machine to stop 59/120
	I0729 18:49:37.084967 1081248 main.go:141] libmachine: (ha-344156-m04) Waiting for machine to stop 60/120
	I0729 18:49:38.086297 1081248 main.go:141] libmachine: (ha-344156-m04) Waiting for machine to stop 61/120
	I0729 18:49:39.087599 1081248 main.go:141] libmachine: (ha-344156-m04) Waiting for machine to stop 62/120
	I0729 18:49:40.089152 1081248 main.go:141] libmachine: (ha-344156-m04) Waiting for machine to stop 63/120
	I0729 18:49:41.090459 1081248 main.go:141] libmachine: (ha-344156-m04) Waiting for machine to stop 64/120
	I0729 18:49:42.092177 1081248 main.go:141] libmachine: (ha-344156-m04) Waiting for machine to stop 65/120
	I0729 18:49:43.093745 1081248 main.go:141] libmachine: (ha-344156-m04) Waiting for machine to stop 66/120
	I0729 18:49:44.095487 1081248 main.go:141] libmachine: (ha-344156-m04) Waiting for machine to stop 67/120
	I0729 18:49:45.097396 1081248 main.go:141] libmachine: (ha-344156-m04) Waiting for machine to stop 68/120
	I0729 18:49:46.098938 1081248 main.go:141] libmachine: (ha-344156-m04) Waiting for machine to stop 69/120
	I0729 18:49:47.101146 1081248 main.go:141] libmachine: (ha-344156-m04) Waiting for machine to stop 70/120
	I0729 18:49:48.102696 1081248 main.go:141] libmachine: (ha-344156-m04) Waiting for machine to stop 71/120
	I0729 18:49:49.104117 1081248 main.go:141] libmachine: (ha-344156-m04) Waiting for machine to stop 72/120
	I0729 18:49:50.105365 1081248 main.go:141] libmachine: (ha-344156-m04) Waiting for machine to stop 73/120
	I0729 18:49:51.107629 1081248 main.go:141] libmachine: (ha-344156-m04) Waiting for machine to stop 74/120
	I0729 18:49:52.109385 1081248 main.go:141] libmachine: (ha-344156-m04) Waiting for machine to stop 75/120
	I0729 18:49:53.111272 1081248 main.go:141] libmachine: (ha-344156-m04) Waiting for machine to stop 76/120
	I0729 18:49:54.112549 1081248 main.go:141] libmachine: (ha-344156-m04) Waiting for machine to stop 77/120
	I0729 18:49:55.113691 1081248 main.go:141] libmachine: (ha-344156-m04) Waiting for machine to stop 78/120
	I0729 18:49:56.114989 1081248 main.go:141] libmachine: (ha-344156-m04) Waiting for machine to stop 79/120
	I0729 18:49:57.117086 1081248 main.go:141] libmachine: (ha-344156-m04) Waiting for machine to stop 80/120
	I0729 18:49:58.118334 1081248 main.go:141] libmachine: (ha-344156-m04) Waiting for machine to stop 81/120
	I0729 18:49:59.119789 1081248 main.go:141] libmachine: (ha-344156-m04) Waiting for machine to stop 82/120
	I0729 18:50:00.121856 1081248 main.go:141] libmachine: (ha-344156-m04) Waiting for machine to stop 83/120
	I0729 18:50:01.123305 1081248 main.go:141] libmachine: (ha-344156-m04) Waiting for machine to stop 84/120
	I0729 18:50:02.125163 1081248 main.go:141] libmachine: (ha-344156-m04) Waiting for machine to stop 85/120
	I0729 18:50:03.126620 1081248 main.go:141] libmachine: (ha-344156-m04) Waiting for machine to stop 86/120
	I0729 18:50:04.128167 1081248 main.go:141] libmachine: (ha-344156-m04) Waiting for machine to stop 87/120
	I0729 18:50:05.130153 1081248 main.go:141] libmachine: (ha-344156-m04) Waiting for machine to stop 88/120
	I0729 18:50:06.131656 1081248 main.go:141] libmachine: (ha-344156-m04) Waiting for machine to stop 89/120
	I0729 18:50:07.133938 1081248 main.go:141] libmachine: (ha-344156-m04) Waiting for machine to stop 90/120
	I0729 18:50:08.136051 1081248 main.go:141] libmachine: (ha-344156-m04) Waiting for machine to stop 91/120
	I0729 18:50:09.137345 1081248 main.go:141] libmachine: (ha-344156-m04) Waiting for machine to stop 92/120
	I0729 18:50:10.138792 1081248 main.go:141] libmachine: (ha-344156-m04) Waiting for machine to stop 93/120
	I0729 18:50:11.141201 1081248 main.go:141] libmachine: (ha-344156-m04) Waiting for machine to stop 94/120
	I0729 18:50:12.142724 1081248 main.go:141] libmachine: (ha-344156-m04) Waiting for machine to stop 95/120
	I0729 18:50:13.144150 1081248 main.go:141] libmachine: (ha-344156-m04) Waiting for machine to stop 96/120
	I0729 18:50:14.145495 1081248 main.go:141] libmachine: (ha-344156-m04) Waiting for machine to stop 97/120
	I0729 18:50:15.147083 1081248 main.go:141] libmachine: (ha-344156-m04) Waiting for machine to stop 98/120
	I0729 18:50:16.148441 1081248 main.go:141] libmachine: (ha-344156-m04) Waiting for machine to stop 99/120
	I0729 18:50:17.150727 1081248 main.go:141] libmachine: (ha-344156-m04) Waiting for machine to stop 100/120
	I0729 18:50:18.152137 1081248 main.go:141] libmachine: (ha-344156-m04) Waiting for machine to stop 101/120
	I0729 18:50:19.154155 1081248 main.go:141] libmachine: (ha-344156-m04) Waiting for machine to stop 102/120
	I0729 18:50:20.155518 1081248 main.go:141] libmachine: (ha-344156-m04) Waiting for machine to stop 103/120
	I0729 18:50:21.157263 1081248 main.go:141] libmachine: (ha-344156-m04) Waiting for machine to stop 104/120
	I0729 18:50:22.159128 1081248 main.go:141] libmachine: (ha-344156-m04) Waiting for machine to stop 105/120
	I0729 18:50:23.161440 1081248 main.go:141] libmachine: (ha-344156-m04) Waiting for machine to stop 106/120
	I0729 18:50:24.162726 1081248 main.go:141] libmachine: (ha-344156-m04) Waiting for machine to stop 107/120
	I0729 18:50:25.164022 1081248 main.go:141] libmachine: (ha-344156-m04) Waiting for machine to stop 108/120
	I0729 18:50:26.165529 1081248 main.go:141] libmachine: (ha-344156-m04) Waiting for machine to stop 109/120
	I0729 18:50:27.167083 1081248 main.go:141] libmachine: (ha-344156-m04) Waiting for machine to stop 110/120
	I0729 18:50:28.169461 1081248 main.go:141] libmachine: (ha-344156-m04) Waiting for machine to stop 111/120
	I0729 18:50:29.171004 1081248 main.go:141] libmachine: (ha-344156-m04) Waiting for machine to stop 112/120
	I0729 18:50:30.172652 1081248 main.go:141] libmachine: (ha-344156-m04) Waiting for machine to stop 113/120
	I0729 18:50:31.174631 1081248 main.go:141] libmachine: (ha-344156-m04) Waiting for machine to stop 114/120
	I0729 18:50:32.176438 1081248 main.go:141] libmachine: (ha-344156-m04) Waiting for machine to stop 115/120
	I0729 18:50:33.177869 1081248 main.go:141] libmachine: (ha-344156-m04) Waiting for machine to stop 116/120
	I0729 18:50:34.179172 1081248 main.go:141] libmachine: (ha-344156-m04) Waiting for machine to stop 117/120
	I0729 18:50:35.180707 1081248 main.go:141] libmachine: (ha-344156-m04) Waiting for machine to stop 118/120
	I0729 18:50:36.182075 1081248 main.go:141] libmachine: (ha-344156-m04) Waiting for machine to stop 119/120
	I0729 18:50:37.183163 1081248 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0729 18:50:37.183247 1081248 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0729 18:50:37.184963 1081248 out.go:177] 
	W0729 18:50:37.186090 1081248 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0729 18:50:37.186108 1081248 out.go:239] * 
	* 
	W0729 18:50:37.189766 1081248 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 18:50:37.190836 1081248 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:533: failed to stop cluster. args "out/minikube-linux-amd64 -p ha-344156 stop -v=7 --alsologtostderr": exit status 82
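For context on this failure: the "Waiting for machine to stop N/120" lines in the stderr above show a once-per-second poll repeated up to 120 times before the stop gives up with GUEST_STOP_TIMEOUT. The sketch below illustrates that bounded-retry pattern in isolation; stopVM and vmState are hypothetical stand-ins for the libmachine driver calls, not minikube's real implementation.

	// waitForStop issues a stop request and then polls the VM state once per
	// second, up to maxRetries attempts, returning an error if the VM is
	// still "Running" when the attempts are exhausted.
	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	func waitForStop(name string, stopVM func() error, vmState func() (string, error), maxRetries int) error {
		fmt.Printf("Stopping %q ...\n", name)
		if err := stopVM(); err != nil {
			return err
		}
		for i := 0; i < maxRetries; i++ {
			fmt.Printf("Waiting for machine to stop %d/%d\n", i, maxRetries)
			state, err := vmState()
			if err != nil {
				return err
			}
			if state != "Running" {
				return nil // machine reached a stopped state
			}
			time.Sleep(time.Second)
		}
		return errors.New(`unable to stop vm, current state "Running"`)
	}

	func main() {
		// Toy driver that never stops, mirroring the run above; 5 retries
		// here instead of minikube's 120 so the demo finishes quickly.
		err := waitForStop("ha-344156-m04",
			func() error { return nil },
			func() (string, error) { return "Running", nil },
			5)
		fmt.Println("stop err:", err)
	}

In the run above the guest never left the "Running" state, so all 120 attempts were exhausted and the stop command exited with status 82.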
ha_test.go:537: (dbg) Run:  out/minikube-linux-amd64 -p ha-344156 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-344156 status -v=7 --alsologtostderr: exit status 3 (18.926780896s)

                                                
                                                
-- stdout --
	ha-344156
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-344156-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-344156-m04
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 18:50:37.236795 1081729 out.go:291] Setting OutFile to fd 1 ...
	I0729 18:50:37.237040 1081729 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 18:50:37.237048 1081729 out.go:304] Setting ErrFile to fd 2...
	I0729 18:50:37.237052 1081729 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 18:50:37.237700 1081729 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19312-1055011/.minikube/bin
	I0729 18:50:37.238028 1081729 out.go:298] Setting JSON to false
	I0729 18:50:37.238090 1081729 mustload.go:65] Loading cluster: ha-344156
	I0729 18:50:37.238131 1081729 notify.go:220] Checking for updates...
	I0729 18:50:37.238680 1081729 config.go:182] Loaded profile config "ha-344156": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 18:50:37.238700 1081729 status.go:255] checking status of ha-344156 ...
	I0729 18:50:37.239155 1081729 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:50:37.239226 1081729 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:50:37.254698 1081729 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43709
	I0729 18:50:37.255116 1081729 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:50:37.255695 1081729 main.go:141] libmachine: Using API Version  1
	I0729 18:50:37.255715 1081729 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:50:37.256063 1081729 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:50:37.256256 1081729 main.go:141] libmachine: (ha-344156) Calling .GetState
	I0729 18:50:37.257788 1081729 status.go:330] ha-344156 host status = "Running" (err=<nil>)
	I0729 18:50:37.257805 1081729 host.go:66] Checking if "ha-344156" exists ...
	I0729 18:50:37.258124 1081729 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:50:37.258162 1081729 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:50:37.272510 1081729 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40093
	I0729 18:50:37.272958 1081729 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:50:37.273431 1081729 main.go:141] libmachine: Using API Version  1
	I0729 18:50:37.273455 1081729 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:50:37.273767 1081729 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:50:37.273933 1081729 main.go:141] libmachine: (ha-344156) Calling .GetIP
	I0729 18:50:37.276440 1081729 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
	I0729 18:50:37.276867 1081729 main.go:141] libmachine: (ha-344156) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:fc:98", ip: ""} in network mk-ha-344156: {Iface:virbr1 ExpiryTime:2024-07-29 19:33:59 +0000 UTC Type:0 Mac:52:54:00:a1:fc:98 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-344156 Clientid:01:52:54:00:a1:fc:98}
	I0729 18:50:37.276894 1081729 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined IP address 192.168.39.225 and MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
	I0729 18:50:37.277032 1081729 host.go:66] Checking if "ha-344156" exists ...
	I0729 18:50:37.277298 1081729 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:50:37.277333 1081729 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:50:37.292203 1081729 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40015
	I0729 18:50:37.292570 1081729 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:50:37.292982 1081729 main.go:141] libmachine: Using API Version  1
	I0729 18:50:37.292997 1081729 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:50:37.293238 1081729 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:50:37.293419 1081729 main.go:141] libmachine: (ha-344156) Calling .DriverName
	I0729 18:50:37.293614 1081729 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 18:50:37.293655 1081729 main.go:141] libmachine: (ha-344156) Calling .GetSSHHostname
	I0729 18:50:37.296138 1081729 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
	I0729 18:50:37.296529 1081729 main.go:141] libmachine: (ha-344156) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:fc:98", ip: ""} in network mk-ha-344156: {Iface:virbr1 ExpiryTime:2024-07-29 19:33:59 +0000 UTC Type:0 Mac:52:54:00:a1:fc:98 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-344156 Clientid:01:52:54:00:a1:fc:98}
	I0729 18:50:37.296552 1081729 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined IP address 192.168.39.225 and MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
	I0729 18:50:37.296669 1081729 main.go:141] libmachine: (ha-344156) Calling .GetSSHPort
	I0729 18:50:37.296815 1081729 main.go:141] libmachine: (ha-344156) Calling .GetSSHKeyPath
	I0729 18:50:37.296952 1081729 main.go:141] libmachine: (ha-344156) Calling .GetSSHUsername
	I0729 18:50:37.297057 1081729 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/ha-344156/id_rsa Username:docker}
	I0729 18:50:37.383545 1081729 ssh_runner.go:195] Run: systemctl --version
	I0729 18:50:37.390433 1081729 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 18:50:37.407032 1081729 kubeconfig.go:125] found "ha-344156" server: "https://192.168.39.254:8443"
	I0729 18:50:37.407061 1081729 api_server.go:166] Checking apiserver status ...
	I0729 18:50:37.407095 1081729 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:50:37.422632 1081729 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/5034/cgroup
	W0729 18:50:37.431919 1081729 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/5034/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0729 18:50:37.431963 1081729 ssh_runner.go:195] Run: ls
	I0729 18:50:37.435941 1081729 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0729 18:50:37.441969 1081729 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0729 18:50:37.441995 1081729 status.go:422] ha-344156 apiserver status = Running (err=<nil>)
	I0729 18:50:37.442016 1081729 status.go:257] ha-344156 status: &{Name:ha-344156 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 18:50:37.442035 1081729 status.go:255] checking status of ha-344156-m02 ...
	I0729 18:50:37.442346 1081729 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:50:37.442397 1081729 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:50:37.457213 1081729 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46739
	I0729 18:50:37.457690 1081729 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:50:37.458240 1081729 main.go:141] libmachine: Using API Version  1
	I0729 18:50:37.458264 1081729 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:50:37.458563 1081729 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:50:37.458752 1081729 main.go:141] libmachine: (ha-344156-m02) Calling .GetState
	I0729 18:50:37.460479 1081729 status.go:330] ha-344156-m02 host status = "Running" (err=<nil>)
	I0729 18:50:37.460506 1081729 host.go:66] Checking if "ha-344156-m02" exists ...
	I0729 18:50:37.460876 1081729 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:50:37.460923 1081729 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:50:37.475336 1081729 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35011
	I0729 18:50:37.475699 1081729 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:50:37.476160 1081729 main.go:141] libmachine: Using API Version  1
	I0729 18:50:37.476187 1081729 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:50:37.476525 1081729 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:50:37.476768 1081729 main.go:141] libmachine: (ha-344156-m02) Calling .GetIP
	I0729 18:50:37.479454 1081729 main.go:141] libmachine: (ha-344156-m02) DBG | domain ha-344156-m02 has defined MAC address 52:54:00:99:a3:97 in network mk-ha-344156
	I0729 18:50:37.479911 1081729 main.go:141] libmachine: (ha-344156-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:a3:97", ip: ""} in network mk-ha-344156: {Iface:virbr1 ExpiryTime:2024-07-29 19:45:56 +0000 UTC Type:0 Mac:52:54:00:99:a3:97 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-344156-m02 Clientid:01:52:54:00:99:a3:97}
	I0729 18:50:37.479936 1081729 main.go:141] libmachine: (ha-344156-m02) DBG | domain ha-344156-m02 has defined IP address 192.168.39.249 and MAC address 52:54:00:99:a3:97 in network mk-ha-344156
	I0729 18:50:37.480100 1081729 host.go:66] Checking if "ha-344156-m02" exists ...
	I0729 18:50:37.480522 1081729 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:50:37.480568 1081729 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:50:37.494926 1081729 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45525
	I0729 18:50:37.495258 1081729 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:50:37.495692 1081729 main.go:141] libmachine: Using API Version  1
	I0729 18:50:37.495717 1081729 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:50:37.495986 1081729 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:50:37.496150 1081729 main.go:141] libmachine: (ha-344156-m02) Calling .DriverName
	I0729 18:50:37.496306 1081729 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 18:50:37.496323 1081729 main.go:141] libmachine: (ha-344156-m02) Calling .GetSSHHostname
	I0729 18:50:37.498920 1081729 main.go:141] libmachine: (ha-344156-m02) DBG | domain ha-344156-m02 has defined MAC address 52:54:00:99:a3:97 in network mk-ha-344156
	I0729 18:50:37.499298 1081729 main.go:141] libmachine: (ha-344156-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:a3:97", ip: ""} in network mk-ha-344156: {Iface:virbr1 ExpiryTime:2024-07-29 19:45:56 +0000 UTC Type:0 Mac:52:54:00:99:a3:97 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:ha-344156-m02 Clientid:01:52:54:00:99:a3:97}
	I0729 18:50:37.499330 1081729 main.go:141] libmachine: (ha-344156-m02) DBG | domain ha-344156-m02 has defined IP address 192.168.39.249 and MAC address 52:54:00:99:a3:97 in network mk-ha-344156
	I0729 18:50:37.499443 1081729 main.go:141] libmachine: (ha-344156-m02) Calling .GetSSHPort
	I0729 18:50:37.499619 1081729 main.go:141] libmachine: (ha-344156-m02) Calling .GetSSHKeyPath
	I0729 18:50:37.499756 1081729 main.go:141] libmachine: (ha-344156-m02) Calling .GetSSHUsername
	I0729 18:50:37.499875 1081729 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/ha-344156-m02/id_rsa Username:docker}
	I0729 18:50:37.589189 1081729 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 18:50:37.611059 1081729 kubeconfig.go:125] found "ha-344156" server: "https://192.168.39.254:8443"
	I0729 18:50:37.611090 1081729 api_server.go:166] Checking apiserver status ...
	I0729 18:50:37.611127 1081729 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 18:50:37.627651 1081729 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1541/cgroup
	W0729 18:50:37.638837 1081729 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1541/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0729 18:50:37.638903 1081729 ssh_runner.go:195] Run: ls
	I0729 18:50:37.643393 1081729 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0729 18:50:37.647663 1081729 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0729 18:50:37.647686 1081729 status.go:422] ha-344156-m02 apiserver status = Running (err=<nil>)
	I0729 18:50:37.647694 1081729 status.go:257] ha-344156-m02 status: &{Name:ha-344156-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 18:50:37.647712 1081729 status.go:255] checking status of ha-344156-m04 ...
	I0729 18:50:37.648066 1081729 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:50:37.648125 1081729 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:50:37.663525 1081729 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42757
	I0729 18:50:37.663964 1081729 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:50:37.664531 1081729 main.go:141] libmachine: Using API Version  1
	I0729 18:50:37.664559 1081729 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:50:37.664955 1081729 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:50:37.665185 1081729 main.go:141] libmachine: (ha-344156-m04) Calling .GetState
	I0729 18:50:37.666757 1081729 status.go:330] ha-344156-m04 host status = "Running" (err=<nil>)
	I0729 18:50:37.666778 1081729 host.go:66] Checking if "ha-344156-m04" exists ...
	I0729 18:50:37.667228 1081729 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:50:37.667274 1081729 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:50:37.684915 1081729 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37807
	I0729 18:50:37.685392 1081729 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:50:37.685934 1081729 main.go:141] libmachine: Using API Version  1
	I0729 18:50:37.685953 1081729 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:50:37.686317 1081729 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:50:37.686518 1081729 main.go:141] libmachine: (ha-344156-m04) Calling .GetIP
	I0729 18:50:37.689617 1081729 main.go:141] libmachine: (ha-344156-m04) DBG | domain ha-344156-m04 has defined MAC address 52:54:00:8a:8a:b9 in network mk-ha-344156
	I0729 18:50:37.690103 1081729 main.go:141] libmachine: (ha-344156-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:8a:b9", ip: ""} in network mk-ha-344156: {Iface:virbr1 ExpiryTime:2024-07-29 19:48:05 +0000 UTC Type:0 Mac:52:54:00:8a:8a:b9 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:ha-344156-m04 Clientid:01:52:54:00:8a:8a:b9}
	I0729 18:50:37.690136 1081729 main.go:141] libmachine: (ha-344156-m04) DBG | domain ha-344156-m04 has defined IP address 192.168.39.9 and MAC address 52:54:00:8a:8a:b9 in network mk-ha-344156
	I0729 18:50:37.690242 1081729 host.go:66] Checking if "ha-344156-m04" exists ...
	I0729 18:50:37.690547 1081729 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:50:37.690599 1081729 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:50:37.705527 1081729 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42413
	I0729 18:50:37.705927 1081729 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:50:37.706357 1081729 main.go:141] libmachine: Using API Version  1
	I0729 18:50:37.706376 1081729 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:50:37.706739 1081729 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:50:37.706937 1081729 main.go:141] libmachine: (ha-344156-m04) Calling .DriverName
	I0729 18:50:37.707149 1081729 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 18:50:37.707171 1081729 main.go:141] libmachine: (ha-344156-m04) Calling .GetSSHHostname
	I0729 18:50:37.709838 1081729 main.go:141] libmachine: (ha-344156-m04) DBG | domain ha-344156-m04 has defined MAC address 52:54:00:8a:8a:b9 in network mk-ha-344156
	I0729 18:50:37.710222 1081729 main.go:141] libmachine: (ha-344156-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:8a:b9", ip: ""} in network mk-ha-344156: {Iface:virbr1 ExpiryTime:2024-07-29 19:48:05 +0000 UTC Type:0 Mac:52:54:00:8a:8a:b9 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:ha-344156-m04 Clientid:01:52:54:00:8a:8a:b9}
	I0729 18:50:37.710239 1081729 main.go:141] libmachine: (ha-344156-m04) DBG | domain ha-344156-m04 has defined IP address 192.168.39.9 and MAC address 52:54:00:8a:8a:b9 in network mk-ha-344156
	I0729 18:50:37.710360 1081729 main.go:141] libmachine: (ha-344156-m04) Calling .GetSSHPort
	I0729 18:50:37.710513 1081729 main.go:141] libmachine: (ha-344156-m04) Calling .GetSSHKeyPath
	I0729 18:50:37.710702 1081729 main.go:141] libmachine: (ha-344156-m04) Calling .GetSSHUsername
	I0729 18:50:37.710861 1081729 sshutil.go:53] new ssh client: &{IP:192.168.39.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/ha-344156-m04/id_rsa Username:docker}
	W0729 18:50:56.119119 1081729 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.9:22: connect: no route to host
	W0729 18:50:56.119224 1081729 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.9:22: connect: no route to host
	E0729 18:50:56.119241 1081729 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.9:22: connect: no route to host
	I0729 18:50:56.119249 1081729 status.go:257] ha-344156-m04 status: &{Name:ha-344156-m04 Host:Error Kubelet:Nonexistent APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0729 18:50:56.119265 1081729 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.9:22: connect: no route to host

                                                
                                                
** /stderr **
ha_test.go:540: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-344156 status -v=7 --alsologtostderr" : exit status 3
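The repeated "no route to host" errors above suggest the ha-344156-m04 worker never came back on the network after the failed stop. A minimal manual check of that node, reusing the IP and SSH key path already printed in the stderr log (values are specific to this run and will differ elsewhere), might look like:
	# Is the VM still running according to libvirt on the build host?
	virsh domstate ha-344156-m04
	# Can the SSH port that minikube dials for "df -h /var" be reached at all?
	nc -vz -w 5 192.168.39.9 22
	# If the port answers, try the same login the status command uses.
	ssh -i /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/ha-344156-m04/id_rsa docker@192.168.39.9 'df -h /var'
This is only a diagnostic sketch; the harness itself performs the equivalent checks through the kvm2 driver plugin and ssh_runner calls shown in the log, not via virsh or the system ssh client.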
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-344156 -n ha-344156
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-344156 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-344156 logs -n 25: (1.651365397s)
helpers_test.go:252: TestMultiControlPlane/serial/StopCluster logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                      Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-344156 ssh -n ha-344156-m02 sudo cat                                         | ha-344156 | jenkins | v1.33.1 | 29 Jul 24 18:38 UTC | 29 Jul 24 18:38 UTC |
	|         | /home/docker/cp-test_ha-344156-m03_ha-344156-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-344156 cp ha-344156-m03:/home/docker/cp-test.txt                             | ha-344156 | jenkins | v1.33.1 | 29 Jul 24 18:38 UTC | 29 Jul 24 18:38 UTC |
	|         | ha-344156-m04:/home/docker/cp-test_ha-344156-m03_ha-344156-m04.txt              |           |         |         |                     |                     |
	| ssh     | ha-344156 ssh -n                                                                | ha-344156 | jenkins | v1.33.1 | 29 Jul 24 18:38 UTC | 29 Jul 24 18:38 UTC |
	|         | ha-344156-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-344156 ssh -n ha-344156-m04 sudo cat                                         | ha-344156 | jenkins | v1.33.1 | 29 Jul 24 18:38 UTC | 29 Jul 24 18:38 UTC |
	|         | /home/docker/cp-test_ha-344156-m03_ha-344156-m04.txt                            |           |         |         |                     |                     |
	| cp      | ha-344156 cp testdata/cp-test.txt                                               | ha-344156 | jenkins | v1.33.1 | 29 Jul 24 18:38 UTC | 29 Jul 24 18:38 UTC |
	|         | ha-344156-m04:/home/docker/cp-test.txt                                          |           |         |         |                     |                     |
	| ssh     | ha-344156 ssh -n                                                                | ha-344156 | jenkins | v1.33.1 | 29 Jul 24 18:38 UTC | 29 Jul 24 18:38 UTC |
	|         | ha-344156-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-344156 cp ha-344156-m04:/home/docker/cp-test.txt                             | ha-344156 | jenkins | v1.33.1 | 29 Jul 24 18:38 UTC | 29 Jul 24 18:38 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile289939917/001/cp-test_ha-344156-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-344156 ssh -n                                                                | ha-344156 | jenkins | v1.33.1 | 29 Jul 24 18:38 UTC | 29 Jul 24 18:38 UTC |
	|         | ha-344156-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-344156 cp ha-344156-m04:/home/docker/cp-test.txt                             | ha-344156 | jenkins | v1.33.1 | 29 Jul 24 18:38 UTC | 29 Jul 24 18:38 UTC |
	|         | ha-344156:/home/docker/cp-test_ha-344156-m04_ha-344156.txt                      |           |         |         |                     |                     |
	| ssh     | ha-344156 ssh -n                                                                | ha-344156 | jenkins | v1.33.1 | 29 Jul 24 18:38 UTC | 29 Jul 24 18:38 UTC |
	|         | ha-344156-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-344156 ssh -n ha-344156 sudo cat                                             | ha-344156 | jenkins | v1.33.1 | 29 Jul 24 18:38 UTC | 29 Jul 24 18:38 UTC |
	|         | /home/docker/cp-test_ha-344156-m04_ha-344156.txt                                |           |         |         |                     |                     |
	| cp      | ha-344156 cp ha-344156-m04:/home/docker/cp-test.txt                             | ha-344156 | jenkins | v1.33.1 | 29 Jul 24 18:38 UTC | 29 Jul 24 18:38 UTC |
	|         | ha-344156-m02:/home/docker/cp-test_ha-344156-m04_ha-344156-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-344156 ssh -n                                                                | ha-344156 | jenkins | v1.33.1 | 29 Jul 24 18:38 UTC | 29 Jul 24 18:38 UTC |
	|         | ha-344156-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-344156 ssh -n ha-344156-m02 sudo cat                                         | ha-344156 | jenkins | v1.33.1 | 29 Jul 24 18:38 UTC | 29 Jul 24 18:38 UTC |
	|         | /home/docker/cp-test_ha-344156-m04_ha-344156-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-344156 cp ha-344156-m04:/home/docker/cp-test.txt                             | ha-344156 | jenkins | v1.33.1 | 29 Jul 24 18:38 UTC | 29 Jul 24 18:38 UTC |
	|         | ha-344156-m03:/home/docker/cp-test_ha-344156-m04_ha-344156-m03.txt              |           |         |         |                     |                     |
	| ssh     | ha-344156 ssh -n                                                                | ha-344156 | jenkins | v1.33.1 | 29 Jul 24 18:38 UTC | 29 Jul 24 18:38 UTC |
	|         | ha-344156-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-344156 ssh -n ha-344156-m03 sudo cat                                         | ha-344156 | jenkins | v1.33.1 | 29 Jul 24 18:38 UTC | 29 Jul 24 18:38 UTC |
	|         | /home/docker/cp-test_ha-344156-m04_ha-344156-m03.txt                            |           |         |         |                     |                     |
	| node    | ha-344156 node stop m02 -v=7                                                    | ha-344156 | jenkins | v1.33.1 | 29 Jul 24 18:38 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | ha-344156 node start m02 -v=7                                                   | ha-344156 | jenkins | v1.33.1 | 29 Jul 24 18:41 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | list -p ha-344156 -v=7                                                          | ha-344156 | jenkins | v1.33.1 | 29 Jul 24 18:42 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| stop    | -p ha-344156 -v=7                                                               | ha-344156 | jenkins | v1.33.1 | 29 Jul 24 18:42 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| start   | -p ha-344156 --wait=true -v=7                                                   | ha-344156 | jenkins | v1.33.1 | 29 Jul 24 18:44 UTC | 29 Jul 24 18:48 UTC |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | list -p ha-344156                                                               | ha-344156 | jenkins | v1.33.1 | 29 Jul 24 18:48 UTC |                     |
	| node    | ha-344156 node delete m03 -v=7                                                  | ha-344156 | jenkins | v1.33.1 | 29 Jul 24 18:48 UTC | 29 Jul 24 18:48 UTC |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| stop    | ha-344156 stop -v=7                                                             | ha-344156 | jenkins | v1.33.1 | 29 Jul 24 18:48 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 18:44:02
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 18:44:02.730210 1079484 out.go:291] Setting OutFile to fd 1 ...
	I0729 18:44:02.730328 1079484 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 18:44:02.730335 1079484 out.go:304] Setting ErrFile to fd 2...
	I0729 18:44:02.730339 1079484 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 18:44:02.730502 1079484 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19312-1055011/.minikube/bin
	I0729 18:44:02.731198 1079484 out.go:298] Setting JSON to false
	I0729 18:44:02.732411 1079484 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":8795,"bootTime":1722269848,"procs":188,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 18:44:02.732489 1079484 start.go:139] virtualization: kvm guest
	I0729 18:44:02.738951 1079484 out.go:177] * [ha-344156] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 18:44:02.740617 1079484 out.go:177]   - MINIKUBE_LOCATION=19312
	I0729 18:44:02.740629 1079484 notify.go:220] Checking for updates...
	I0729 18:44:02.743462 1079484 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 18:44:02.744986 1079484 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19312-1055011/kubeconfig
	I0729 18:44:02.746656 1079484 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19312-1055011/.minikube
	I0729 18:44:02.747949 1079484 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 18:44:02.749256 1079484 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 18:44:02.751029 1079484 config.go:182] Loaded profile config "ha-344156": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 18:44:02.751142 1079484 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 18:44:02.751618 1079484 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:44:02.751697 1079484 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:44:02.767922 1079484 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41779
	I0729 18:44:02.768343 1079484 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:44:02.769013 1079484 main.go:141] libmachine: Using API Version  1
	I0729 18:44:02.769040 1079484 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:44:02.769487 1079484 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:44:02.769739 1079484 main.go:141] libmachine: (ha-344156) Calling .DriverName
	I0729 18:44:02.806043 1079484 out.go:177] * Using the kvm2 driver based on existing profile
	I0729 18:44:02.807263 1079484 start.go:297] selected driver: kvm2
	I0729 18:44:02.807287 1079484 start.go:901] validating driver "kvm2" against &{Name:ha-344156 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.30.3 ClusterName:ha-344156 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.225 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.249 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.148 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.9 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk
:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p
2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 18:44:02.807481 1079484 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 18:44:02.807926 1079484 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 18:44:02.808031 1079484 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19312-1055011/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 18:44:02.822302 1079484 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0729 18:44:02.823019 1079484 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 18:44:02.823078 1079484 cni.go:84] Creating CNI manager for ""
	I0729 18:44:02.823090 1079484 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0729 18:44:02.823154 1079484 start.go:340] cluster config:
	{Name:ha-344156 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-344156 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.225 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.249 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.148 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.9 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-till
er:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPor
t:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 18:44:02.823275 1079484 iso.go:125] acquiring lock: {Name:mk0af61c0fec1fd47930e548d03010a532c687b7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 18:44:02.825024 1079484 out.go:177] * Starting "ha-344156" primary control-plane node in "ha-344156" cluster
	I0729 18:44:02.826235 1079484 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 18:44:02.826262 1079484 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0729 18:44:02.826273 1079484 cache.go:56] Caching tarball of preloaded images
	I0729 18:44:02.826359 1079484 preload.go:172] Found /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 18:44:02.826373 1079484 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0729 18:44:02.826506 1079484 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156/config.json ...
	I0729 18:44:02.826709 1079484 start.go:360] acquireMachinesLock for ha-344156: {Name:mk0d8d947666df844b5fc2c0e0eebbfed69b4140 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 18:44:02.826765 1079484 start.go:364] duration metric: took 36.325µs to acquireMachinesLock for "ha-344156"
	I0729 18:44:02.826785 1079484 start.go:96] Skipping create...Using existing machine configuration
	I0729 18:44:02.826794 1079484 fix.go:54] fixHost starting: 
	I0729 18:44:02.827101 1079484 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:44:02.827142 1079484 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:44:02.841119 1079484 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41027
	I0729 18:44:02.841530 1079484 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:44:02.842007 1079484 main.go:141] libmachine: Using API Version  1
	I0729 18:44:02.842032 1079484 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:44:02.842362 1079484 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:44:02.842597 1079484 main.go:141] libmachine: (ha-344156) Calling .DriverName
	I0729 18:44:02.842743 1079484 main.go:141] libmachine: (ha-344156) Calling .GetState
	I0729 18:44:02.844155 1079484 fix.go:112] recreateIfNeeded on ha-344156: state=Running err=<nil>
	W0729 18:44:02.844177 1079484 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 18:44:02.846110 1079484 out.go:177] * Updating the running kvm2 "ha-344156" VM ...
	I0729 18:44:02.847243 1079484 machine.go:94] provisionDockerMachine start ...
	I0729 18:44:02.847267 1079484 main.go:141] libmachine: (ha-344156) Calling .DriverName
	I0729 18:44:02.847467 1079484 main.go:141] libmachine: (ha-344156) Calling .GetSSHHostname
	I0729 18:44:02.849542 1079484 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
	I0729 18:44:02.849904 1079484 main.go:141] libmachine: (ha-344156) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:fc:98", ip: ""} in network mk-ha-344156: {Iface:virbr1 ExpiryTime:2024-07-29 19:33:59 +0000 UTC Type:0 Mac:52:54:00:a1:fc:98 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-344156 Clientid:01:52:54:00:a1:fc:98}
	I0729 18:44:02.849939 1079484 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined IP address 192.168.39.225 and MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
	I0729 18:44:02.850037 1079484 main.go:141] libmachine: (ha-344156) Calling .GetSSHPort
	I0729 18:44:02.850206 1079484 main.go:141] libmachine: (ha-344156) Calling .GetSSHKeyPath
	I0729 18:44:02.850356 1079484 main.go:141] libmachine: (ha-344156) Calling .GetSSHKeyPath
	I0729 18:44:02.850484 1079484 main.go:141] libmachine: (ha-344156) Calling .GetSSHUsername
	I0729 18:44:02.850645 1079484 main.go:141] libmachine: Using SSH client type: native
	I0729 18:44:02.850832 1079484 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.225 22 <nil> <nil>}
	I0729 18:44:02.850857 1079484 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 18:44:02.968565 1079484 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-344156
	
	I0729 18:44:02.968610 1079484 main.go:141] libmachine: (ha-344156) Calling .GetMachineName
	I0729 18:44:02.968876 1079484 buildroot.go:166] provisioning hostname "ha-344156"
	I0729 18:44:02.968907 1079484 main.go:141] libmachine: (ha-344156) Calling .GetMachineName
	I0729 18:44:02.969130 1079484 main.go:141] libmachine: (ha-344156) Calling .GetSSHHostname
	I0729 18:44:02.971906 1079484 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
	I0729 18:44:02.972296 1079484 main.go:141] libmachine: (ha-344156) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:fc:98", ip: ""} in network mk-ha-344156: {Iface:virbr1 ExpiryTime:2024-07-29 19:33:59 +0000 UTC Type:0 Mac:52:54:00:a1:fc:98 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-344156 Clientid:01:52:54:00:a1:fc:98}
	I0729 18:44:02.972339 1079484 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined IP address 192.168.39.225 and MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
	I0729 18:44:02.972493 1079484 main.go:141] libmachine: (ha-344156) Calling .GetSSHPort
	I0729 18:44:02.972700 1079484 main.go:141] libmachine: (ha-344156) Calling .GetSSHKeyPath
	I0729 18:44:02.972852 1079484 main.go:141] libmachine: (ha-344156) Calling .GetSSHKeyPath
	I0729 18:44:02.972987 1079484 main.go:141] libmachine: (ha-344156) Calling .GetSSHUsername
	I0729 18:44:02.973151 1079484 main.go:141] libmachine: Using SSH client type: native
	I0729 18:44:02.973315 1079484 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.225 22 <nil> <nil>}
	I0729 18:44:02.973327 1079484 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-344156 && echo "ha-344156" | sudo tee /etc/hostname
	I0729 18:44:03.104854 1079484 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-344156
	
	I0729 18:44:03.104892 1079484 main.go:141] libmachine: (ha-344156) Calling .GetSSHHostname
	I0729 18:44:03.107749 1079484 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
	I0729 18:44:03.108111 1079484 main.go:141] libmachine: (ha-344156) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:fc:98", ip: ""} in network mk-ha-344156: {Iface:virbr1 ExpiryTime:2024-07-29 19:33:59 +0000 UTC Type:0 Mac:52:54:00:a1:fc:98 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-344156 Clientid:01:52:54:00:a1:fc:98}
	I0729 18:44:03.108135 1079484 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined IP address 192.168.39.225 and MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
	I0729 18:44:03.108295 1079484 main.go:141] libmachine: (ha-344156) Calling .GetSSHPort
	I0729 18:44:03.108487 1079484 main.go:141] libmachine: (ha-344156) Calling .GetSSHKeyPath
	I0729 18:44:03.108654 1079484 main.go:141] libmachine: (ha-344156) Calling .GetSSHKeyPath
	I0729 18:44:03.108788 1079484 main.go:141] libmachine: (ha-344156) Calling .GetSSHUsername
	I0729 18:44:03.108948 1079484 main.go:141] libmachine: Using SSH client type: native
	I0729 18:44:03.109138 1079484 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.225 22 <nil> <nil>}
	I0729 18:44:03.109156 1079484 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-344156' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-344156/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-344156' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 18:44:03.227589 1079484 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 18:44:03.227626 1079484 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19312-1055011/.minikube CaCertPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19312-1055011/.minikube}
	I0729 18:44:03.227675 1079484 buildroot.go:174] setting up certificates
	I0729 18:44:03.227690 1079484 provision.go:84] configureAuth start
	I0729 18:44:03.227705 1079484 main.go:141] libmachine: (ha-344156) Calling .GetMachineName
	I0729 18:44:03.228005 1079484 main.go:141] libmachine: (ha-344156) Calling .GetIP
	I0729 18:44:03.230584 1079484 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
	I0729 18:44:03.231128 1079484 main.go:141] libmachine: (ha-344156) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:fc:98", ip: ""} in network mk-ha-344156: {Iface:virbr1 ExpiryTime:2024-07-29 19:33:59 +0000 UTC Type:0 Mac:52:54:00:a1:fc:98 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-344156 Clientid:01:52:54:00:a1:fc:98}
	I0729 18:44:03.231154 1079484 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined IP address 192.168.39.225 and MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
	I0729 18:44:03.231327 1079484 main.go:141] libmachine: (ha-344156) Calling .GetSSHHostname
	I0729 18:44:03.233560 1079484 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
	I0729 18:44:03.233940 1079484 main.go:141] libmachine: (ha-344156) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:fc:98", ip: ""} in network mk-ha-344156: {Iface:virbr1 ExpiryTime:2024-07-29 19:33:59 +0000 UTC Type:0 Mac:52:54:00:a1:fc:98 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-344156 Clientid:01:52:54:00:a1:fc:98}
	I0729 18:44:03.233964 1079484 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined IP address 192.168.39.225 and MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
	I0729 18:44:03.234141 1079484 provision.go:143] copyHostCerts
	I0729 18:44:03.234185 1079484 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.pem
	I0729 18:44:03.234223 1079484 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.pem, removing ...
	I0729 18:44:03.234237 1079484 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.pem
	I0729 18:44:03.234302 1079484 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.pem (1082 bytes)
	I0729 18:44:03.234391 1079484 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19312-1055011/.minikube/cert.pem
	I0729 18:44:03.234409 1079484 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-1055011/.minikube/cert.pem, removing ...
	I0729 18:44:03.234413 1079484 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-1055011/.minikube/cert.pem
	I0729 18:44:03.234437 1079484 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19312-1055011/.minikube/cert.pem (1123 bytes)
	I0729 18:44:03.234494 1079484 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19312-1055011/.minikube/key.pem
	I0729 18:44:03.234509 1079484 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-1055011/.minikube/key.pem, removing ...
	I0729 18:44:03.234513 1079484 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-1055011/.minikube/key.pem
	I0729 18:44:03.234533 1079484 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19312-1055011/.minikube/key.pem (1679 bytes)
	I0729 18:44:03.234594 1079484 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca-key.pem org=jenkins.ha-344156 san=[127.0.0.1 192.168.39.225 ha-344156 localhost minikube]
	I0729 18:44:03.426259 1079484 provision.go:177] copyRemoteCerts
	I0729 18:44:03.426392 1079484 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 18:44:03.426466 1079484 main.go:141] libmachine: (ha-344156) Calling .GetSSHHostname
	I0729 18:44:03.429164 1079484 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
	I0729 18:44:03.429601 1079484 main.go:141] libmachine: (ha-344156) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:fc:98", ip: ""} in network mk-ha-344156: {Iface:virbr1 ExpiryTime:2024-07-29 19:33:59 +0000 UTC Type:0 Mac:52:54:00:a1:fc:98 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-344156 Clientid:01:52:54:00:a1:fc:98}
	I0729 18:44:03.429633 1079484 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined IP address 192.168.39.225 and MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
	I0729 18:44:03.429797 1079484 main.go:141] libmachine: (ha-344156) Calling .GetSSHPort
	I0729 18:44:03.429986 1079484 main.go:141] libmachine: (ha-344156) Calling .GetSSHKeyPath
	I0729 18:44:03.430171 1079484 main.go:141] libmachine: (ha-344156) Calling .GetSSHUsername
	I0729 18:44:03.430318 1079484 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/ha-344156/id_rsa Username:docker}
	I0729 18:44:03.517181 1079484 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0729 18:44:03.517254 1079484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0729 18:44:03.544507 1079484 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0729 18:44:03.544603 1079484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0729 18:44:03.569730 1079484 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0729 18:44:03.569807 1079484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0729 18:44:03.594135 1079484 provision.go:87] duration metric: took 366.429217ms to configureAuth
	I0729 18:44:03.594162 1079484 buildroot.go:189] setting minikube options for container-runtime
	I0729 18:44:03.594389 1079484 config.go:182] Loaded profile config "ha-344156": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 18:44:03.594471 1079484 main.go:141] libmachine: (ha-344156) Calling .GetSSHHostname
	I0729 18:44:03.597059 1079484 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
	I0729 18:44:03.597396 1079484 main.go:141] libmachine: (ha-344156) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:fc:98", ip: ""} in network mk-ha-344156: {Iface:virbr1 ExpiryTime:2024-07-29 19:33:59 +0000 UTC Type:0 Mac:52:54:00:a1:fc:98 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-344156 Clientid:01:52:54:00:a1:fc:98}
	I0729 18:44:03.597420 1079484 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined IP address 192.168.39.225 and MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
	I0729 18:44:03.597611 1079484 main.go:141] libmachine: (ha-344156) Calling .GetSSHPort
	I0729 18:44:03.597810 1079484 main.go:141] libmachine: (ha-344156) Calling .GetSSHKeyPath
	I0729 18:44:03.597997 1079484 main.go:141] libmachine: (ha-344156) Calling .GetSSHKeyPath
	I0729 18:44:03.598122 1079484 main.go:141] libmachine: (ha-344156) Calling .GetSSHUsername
	I0729 18:44:03.598271 1079484 main.go:141] libmachine: Using SSH client type: native
	I0729 18:44:03.598437 1079484 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.225 22 <nil> <nil>}
	I0729 18:44:03.598452 1079484 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 18:45:34.484924 1079484 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 18:45:34.484957 1079484 machine.go:97] duration metric: took 1m31.637697454s to provisionDockerMachine
	I0729 18:45:34.484978 1079484 start.go:293] postStartSetup for "ha-344156" (driver="kvm2")
	I0729 18:45:34.484997 1079484 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 18:45:34.485022 1079484 main.go:141] libmachine: (ha-344156) Calling .DriverName
	I0729 18:45:34.485421 1079484 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 18:45:34.485451 1079484 main.go:141] libmachine: (ha-344156) Calling .GetSSHHostname
	I0729 18:45:34.489040 1079484 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
	I0729 18:45:34.489511 1079484 main.go:141] libmachine: (ha-344156) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:fc:98", ip: ""} in network mk-ha-344156: {Iface:virbr1 ExpiryTime:2024-07-29 19:33:59 +0000 UTC Type:0 Mac:52:54:00:a1:fc:98 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-344156 Clientid:01:52:54:00:a1:fc:98}
	I0729 18:45:34.489533 1079484 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined IP address 192.168.39.225 and MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
	I0729 18:45:34.489724 1079484 main.go:141] libmachine: (ha-344156) Calling .GetSSHPort
	I0729 18:45:34.489951 1079484 main.go:141] libmachine: (ha-344156) Calling .GetSSHKeyPath
	I0729 18:45:34.490133 1079484 main.go:141] libmachine: (ha-344156) Calling .GetSSHUsername
	I0729 18:45:34.490297 1079484 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/ha-344156/id_rsa Username:docker}
	I0729 18:45:34.577117 1079484 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 18:45:34.581256 1079484 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 18:45:34.581284 1079484 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-1055011/.minikube/addons for local assets ...
	I0729 18:45:34.581357 1079484 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-1055011/.minikube/files for local assets ...
	I0729 18:45:34.581454 1079484 filesync.go:149] local asset: /home/jenkins/minikube-integration/19312-1055011/.minikube/files/etc/ssl/certs/10622722.pem -> 10622722.pem in /etc/ssl/certs
	I0729 18:45:34.581465 1079484 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1055011/.minikube/files/etc/ssl/certs/10622722.pem -> /etc/ssl/certs/10622722.pem
	I0729 18:45:34.581576 1079484 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 18:45:34.590639 1079484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/files/etc/ssl/certs/10622722.pem --> /etc/ssl/certs/10622722.pem (1708 bytes)
	I0729 18:45:34.614284 1079484 start.go:296] duration metric: took 129.292444ms for postStartSetup
	I0729 18:45:34.614330 1079484 main.go:141] libmachine: (ha-344156) Calling .DriverName
	I0729 18:45:34.614641 1079484 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0729 18:45:34.614672 1079484 main.go:141] libmachine: (ha-344156) Calling .GetSSHHostname
	I0729 18:45:34.617442 1079484 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
	I0729 18:45:34.617867 1079484 main.go:141] libmachine: (ha-344156) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:fc:98", ip: ""} in network mk-ha-344156: {Iface:virbr1 ExpiryTime:2024-07-29 19:33:59 +0000 UTC Type:0 Mac:52:54:00:a1:fc:98 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-344156 Clientid:01:52:54:00:a1:fc:98}
	I0729 18:45:34.617895 1079484 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined IP address 192.168.39.225 and MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
	I0729 18:45:34.618003 1079484 main.go:141] libmachine: (ha-344156) Calling .GetSSHPort
	I0729 18:45:34.618178 1079484 main.go:141] libmachine: (ha-344156) Calling .GetSSHKeyPath
	I0729 18:45:34.618363 1079484 main.go:141] libmachine: (ha-344156) Calling .GetSSHUsername
	I0729 18:45:34.618532 1079484 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/ha-344156/id_rsa Username:docker}
	W0729 18:45:34.704442 1079484 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0729 18:45:34.704473 1079484 fix.go:56] duration metric: took 1m31.877678231s for fixHost
	I0729 18:45:34.704498 1079484 main.go:141] libmachine: (ha-344156) Calling .GetSSHHostname
	I0729 18:45:34.707218 1079484 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
	I0729 18:45:34.707659 1079484 main.go:141] libmachine: (ha-344156) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:fc:98", ip: ""} in network mk-ha-344156: {Iface:virbr1 ExpiryTime:2024-07-29 19:33:59 +0000 UTC Type:0 Mac:52:54:00:a1:fc:98 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-344156 Clientid:01:52:54:00:a1:fc:98}
	I0729 18:45:34.707694 1079484 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined IP address 192.168.39.225 and MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
	I0729 18:45:34.707845 1079484 main.go:141] libmachine: (ha-344156) Calling .GetSSHPort
	I0729 18:45:34.708054 1079484 main.go:141] libmachine: (ha-344156) Calling .GetSSHKeyPath
	I0729 18:45:34.708224 1079484 main.go:141] libmachine: (ha-344156) Calling .GetSSHKeyPath
	I0729 18:45:34.708331 1079484 main.go:141] libmachine: (ha-344156) Calling .GetSSHUsername
	I0729 18:45:34.708539 1079484 main.go:141] libmachine: Using SSH client type: native
	I0729 18:45:34.708733 1079484 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.225 22 <nil> <nil>}
	I0729 18:45:34.708743 1079484 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0729 18:45:34.819719 1079484 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722278734.779887170
	
	I0729 18:45:34.819745 1079484 fix.go:216] guest clock: 1722278734.779887170
	I0729 18:45:34.819755 1079484 fix.go:229] Guest: 2024-07-29 18:45:34.77988717 +0000 UTC Remote: 2024-07-29 18:45:34.704481201 +0000 UTC m=+92.011386189 (delta=75.405969ms)
	I0729 18:45:34.819781 1079484 fix.go:200] guest clock delta is within tolerance: 75.405969ms
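	As context for the delta check logged just above, here is a minimal Go sketch (not minikube's actual fix.go code) of the idea: parse the guest's "date +%s.%N" output, subtract the host timestamp, and only treat the clock as drifted if the difference exceeds a tolerance. The 1-second tolerance below is an assumption for illustration only.

	package main

	import (
		"fmt"
		"strconv"
		"time"
	)

	// clockDelta parses the guest's seconds.nanoseconds timestamp and returns
	// how far the guest clock is ahead of (positive) or behind (negative) the host.
	func clockDelta(guestOutput string, hostNow time.Time) (time.Duration, error) {
		secs, err := strconv.ParseFloat(guestOutput, 64)
		if err != nil {
			return 0, fmt.Errorf("parsing guest clock %q: %w", guestOutput, err)
		}
		guest := time.Unix(0, int64(secs*float64(time.Second)))
		return guest.Sub(hostNow), nil
	}

	func main() {
		// Values taken from the log lines above.
		delta, err := clockDelta("1722278734.779887170", time.Unix(0, 1722278734704481201))
		if err != nil {
			panic(err)
		}
		const tolerance = 1 * time.Second // assumed tolerance, for illustration
		if delta < tolerance && delta > -tolerance {
			fmt.Printf("guest clock delta %v is within tolerance\n", delta)
		} else {
			fmt.Printf("guest clock delta %v exceeds tolerance; would resync\n", delta)
		}
	}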
	I0729 18:45:34.819787 1079484 start.go:83] releasing machines lock for "ha-344156", held for 1m31.993010327s
	I0729 18:45:34.819822 1079484 main.go:141] libmachine: (ha-344156) Calling .DriverName
	I0729 18:45:34.820128 1079484 main.go:141] libmachine: (ha-344156) Calling .GetIP
	I0729 18:45:34.822964 1079484 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
	I0729 18:45:34.823358 1079484 main.go:141] libmachine: (ha-344156) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:fc:98", ip: ""} in network mk-ha-344156: {Iface:virbr1 ExpiryTime:2024-07-29 19:33:59 +0000 UTC Type:0 Mac:52:54:00:a1:fc:98 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-344156 Clientid:01:52:54:00:a1:fc:98}
	I0729 18:45:34.823386 1079484 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined IP address 192.168.39.225 and MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
	I0729 18:45:34.823560 1079484 main.go:141] libmachine: (ha-344156) Calling .DriverName
	I0729 18:45:34.824198 1079484 main.go:141] libmachine: (ha-344156) Calling .DriverName
	I0729 18:45:34.824406 1079484 main.go:141] libmachine: (ha-344156) Calling .DriverName
	I0729 18:45:34.824503 1079484 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 18:45:34.824557 1079484 main.go:141] libmachine: (ha-344156) Calling .GetSSHHostname
	I0729 18:45:34.824682 1079484 ssh_runner.go:195] Run: cat /version.json
	I0729 18:45:34.824705 1079484 main.go:141] libmachine: (ha-344156) Calling .GetSSHHostname
	I0729 18:45:34.827009 1079484 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
	I0729 18:45:34.827151 1079484 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
	I0729 18:45:34.827419 1079484 main.go:141] libmachine: (ha-344156) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:fc:98", ip: ""} in network mk-ha-344156: {Iface:virbr1 ExpiryTime:2024-07-29 19:33:59 +0000 UTC Type:0 Mac:52:54:00:a1:fc:98 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-344156 Clientid:01:52:54:00:a1:fc:98}
	I0729 18:45:34.827448 1079484 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined IP address 192.168.39.225 and MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
	I0729 18:45:34.827555 1079484 main.go:141] libmachine: (ha-344156) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:fc:98", ip: ""} in network mk-ha-344156: {Iface:virbr1 ExpiryTime:2024-07-29 19:33:59 +0000 UTC Type:0 Mac:52:54:00:a1:fc:98 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-344156 Clientid:01:52:54:00:a1:fc:98}
	I0729 18:45:34.827586 1079484 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined IP address 192.168.39.225 and MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
	I0729 18:45:34.827620 1079484 main.go:141] libmachine: (ha-344156) Calling .GetSSHPort
	I0729 18:45:34.827770 1079484 main.go:141] libmachine: (ha-344156) Calling .GetSSHPort
	I0729 18:45:34.827832 1079484 main.go:141] libmachine: (ha-344156) Calling .GetSSHKeyPath
	I0729 18:45:34.827905 1079484 main.go:141] libmachine: (ha-344156) Calling .GetSSHKeyPath
	I0729 18:45:34.827982 1079484 main.go:141] libmachine: (ha-344156) Calling .GetSSHUsername
	I0729 18:45:34.828068 1079484 main.go:141] libmachine: (ha-344156) Calling .GetSSHUsername
	I0729 18:45:34.828149 1079484 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/ha-344156/id_rsa Username:docker}
	I0729 18:45:34.828222 1079484 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/ha-344156/id_rsa Username:docker}
	I0729 18:45:34.919667 1079484 ssh_runner.go:195] Run: systemctl --version
	I0729 18:45:34.941286 1079484 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 18:45:35.096516 1079484 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 18:45:35.106106 1079484 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 18:45:35.106176 1079484 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 18:45:35.115314 1079484 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0729 18:45:35.115334 1079484 start.go:495] detecting cgroup driver to use...
	I0729 18:45:35.115406 1079484 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 18:45:35.130639 1079484 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 18:45:35.143931 1079484 docker.go:217] disabling cri-docker service (if available) ...
	I0729 18:45:35.143980 1079484 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 18:45:35.156758 1079484 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 18:45:35.169370 1079484 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 18:45:35.315720 1079484 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 18:45:35.459648 1079484 docker.go:233] disabling docker service ...
	I0729 18:45:35.459741 1079484 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 18:45:35.479249 1079484 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 18:45:35.494274 1079484 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 18:45:35.665432 1079484 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 18:45:35.808594 1079484 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 18:45:35.821936 1079484 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 18:45:35.840553 1079484 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0729 18:45:35.840612 1079484 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:45:35.850571 1079484 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 18:45:35.850632 1079484 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:45:35.860351 1079484 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:45:35.869812 1079484 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:45:35.879445 1079484 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 18:45:35.889712 1079484 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:45:35.899605 1079484 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 18:45:35.910758 1079484 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
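	The sed invocations above edit /etc/crio/crio.conf.d/02-crio.conf in place over SSH. A rough Go sketch of the two central substitutions, pinning the pause image and switching the cgroup manager to cgroupfs, is shown below; the paths and option names mirror the log, everything else is illustrative rather than minikube's actual crio.go code.

	package main

	import (
		"fmt"
		"regexp"
	)

	// rewriteCrioConf applies the same two line-level replacements the log performs
	// with sed: pin pause_image and force cgroup_manager to cgroupfs.
	func rewriteCrioConf(conf string) string {
		pause := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
		cgroup := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
		conf = pause.ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
		conf = cgroup.ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
		return conf
	}

	func main() {
		in := "[crio.image]\npause_image = \"registry.k8s.io/pause:3.8\"\n[crio.runtime]\ncgroup_manager = \"systemd\"\n"
		fmt.Print(rewriteCrioConf(in))
	}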
	I0729 18:45:35.920842 1079484 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 18:45:35.930022 1079484 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 18:45:35.938635 1079484 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 18:45:36.076650 1079484 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 18:45:44.138821 1079484 ssh_runner.go:235] Completed: sudo systemctl restart crio: (8.062129091s)
	I0729 18:45:44.138863 1079484 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 18:45:44.138918 1079484 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 18:45:44.144747 1079484 start.go:563] Will wait 60s for crictl version
	I0729 18:45:44.144824 1079484 ssh_runner.go:195] Run: which crictl
	I0729 18:45:44.148659 1079484 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 18:45:44.190278 1079484 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 18:45:44.190358 1079484 ssh_runner.go:195] Run: crio --version
	I0729 18:45:44.217574 1079484 ssh_runner.go:195] Run: crio --version
	I0729 18:45:44.245365 1079484 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0729 18:45:44.246524 1079484 main.go:141] libmachine: (ha-344156) Calling .GetIP
	I0729 18:45:44.249231 1079484 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
	I0729 18:45:44.249661 1079484 main.go:141] libmachine: (ha-344156) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:fc:98", ip: ""} in network mk-ha-344156: {Iface:virbr1 ExpiryTime:2024-07-29 19:33:59 +0000 UTC Type:0 Mac:52:54:00:a1:fc:98 Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:ha-344156 Clientid:01:52:54:00:a1:fc:98}
	I0729 18:45:44.249689 1079484 main.go:141] libmachine: (ha-344156) DBG | domain ha-344156 has defined IP address 192.168.39.225 and MAC address 52:54:00:a1:fc:98 in network mk-ha-344156
	I0729 18:45:44.249872 1079484 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0729 18:45:44.254299 1079484 kubeadm.go:883] updating cluster {Name:ha-344156 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Cl
usterName:ha-344156 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.225 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.249 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.148 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.9 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fres
hpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L Moun
tGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 18:45:44.254450 1079484 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 18:45:44.254512 1079484 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 18:45:44.295995 1079484 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 18:45:44.296020 1079484 crio.go:433] Images already preloaded, skipping extraction
	I0729 18:45:44.296074 1079484 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 18:45:44.328505 1079484 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 18:45:44.328532 1079484 cache_images.go:84] Images are preloaded, skipping loading
	I0729 18:45:44.328542 1079484 kubeadm.go:934] updating node { 192.168.39.225 8443 v1.30.3 crio true true} ...
	I0729 18:45:44.328668 1079484 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-344156 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.225
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-344156 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 18:45:44.328734 1079484 ssh_runner.go:195] Run: crio config
	I0729 18:45:44.379175 1079484 cni.go:84] Creating CNI manager for ""
	I0729 18:45:44.379198 1079484 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0729 18:45:44.379211 1079484 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 18:45:44.379242 1079484 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.225 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-344156 NodeName:ha-344156 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.225"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.225 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 18:45:44.379437 1079484 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.225
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-344156"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.225
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.225"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0729 18:45:44.379468 1079484 kube-vip.go:115] generating kube-vip config ...
	I0729 18:45:44.379519 1079484 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0729 18:45:44.391092 1079484 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0729 18:45:44.391209 1079484 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0729 18:45:44.391274 1079484 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0729 18:45:44.400836 1079484 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 18:45:44.400889 1079484 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0729 18:45:44.410102 1079484 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0729 18:45:44.426310 1079484 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 18:45:44.443334 1079484 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0729 18:45:44.459219 1079484 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0729 18:45:44.476061 1079484 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0729 18:45:44.479998 1079484 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 18:45:44.619940 1079484 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 18:45:44.636423 1079484 certs.go:68] Setting up /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156 for IP: 192.168.39.225
	I0729 18:45:44.636452 1079484 certs.go:194] generating shared ca certs ...
	I0729 18:45:44.636475 1079484 certs.go:226] acquiring lock for ca certs: {Name:mkd1f0b3d7e82ac23e713dd6b75409e103935b02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 18:45:44.636682 1079484 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.key
	I0729 18:45:44.636735 1079484 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/proxy-client-ca.key
	I0729 18:45:44.636745 1079484 certs.go:256] generating profile certs ...
	I0729 18:45:44.636830 1079484 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156/client.key
	I0729 18:45:44.636857 1079484 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156/apiserver.key.35154a63
	I0729 18:45:44.636870 1079484 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156/apiserver.crt.35154a63 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.225 192.168.39.249 192.168.39.148 192.168.39.254]
	I0729 18:45:44.780083 1079484 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156/apiserver.crt.35154a63 ...
	I0729 18:45:44.780116 1079484 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156/apiserver.crt.35154a63: {Name:mk667ece8e3d7b1d838f39c6e3f4cf7c263fa8a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 18:45:44.780287 1079484 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156/apiserver.key.35154a63 ...
	I0729 18:45:44.780300 1079484 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156/apiserver.key.35154a63: {Name:mka73a374c6e3b586fcc88c17fa9989a2541ed90 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 18:45:44.780367 1079484 certs.go:381] copying /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156/apiserver.crt.35154a63 -> /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156/apiserver.crt
	I0729 18:45:44.780523 1079484 certs.go:385] copying /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156/apiserver.key.35154a63 -> /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156/apiserver.key
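	The apiserver certificate generated above carries IP subject alternative names for the service IP, localhost, the three control-plane node IPs, and the HA VIP. A compact, self-signed Go sketch of that step follows; it is illustrative only (the real flow in certs.go signs with the minikubeCA key, and names such as the common name here are assumptions).

	package main

	import (
		"crypto/ecdsa"
		"crypto/elliptic"
		"crypto/rand"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"fmt"
		"math/big"
		"net"
		"time"
	)

	func main() {
		key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{CommonName: "minikube"},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// IP SANs as listed in the log line for apiserver.crt.35154a63.
			IPAddresses: []net.IP{
				net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
				net.ParseIP("192.168.39.225"), net.ParseIP("192.168.39.249"),
				net.ParseIP("192.168.39.148"), net.ParseIP("192.168.39.254"),
			},
		}
		// Self-signed here for brevity; the real flow signs with the CA's key and template.
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})))
	}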
	I0729 18:45:44.780665 1079484 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156/proxy-client.key
	I0729 18:45:44.780682 1079484 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0729 18:45:44.780696 1079484 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0729 18:45:44.780709 1079484 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1055011/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0729 18:45:44.780724 1079484 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1055011/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0729 18:45:44.780741 1079484 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0729 18:45:44.780757 1079484 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0729 18:45:44.780770 1079484 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0729 18:45:44.780782 1079484 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0729 18:45:44.780833 1079484 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/1062272.pem (1338 bytes)
	W0729 18:45:44.780860 1079484 certs.go:480] ignoring /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/1062272_empty.pem, impossibly tiny 0 bytes
	I0729 18:45:44.780870 1079484 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 18:45:44.780892 1079484 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem (1082 bytes)
	I0729 18:45:44.780913 1079484 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/cert.pem (1123 bytes)
	I0729 18:45:44.780937 1079484 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/key.pem (1679 bytes)
	I0729 18:45:44.780973 1079484 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/files/etc/ssl/certs/10622722.pem (1708 bytes)
	I0729 18:45:44.781000 1079484 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1055011/.minikube/files/etc/ssl/certs/10622722.pem -> /usr/share/ca-certificates/10622722.pem
	I0729 18:45:44.781013 1079484 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0729 18:45:44.781026 1079484 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/1062272.pem -> /usr/share/ca-certificates/1062272.pem
	I0729 18:45:44.781719 1079484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 18:45:44.808742 1079484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0729 18:45:44.832565 1079484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 18:45:44.856036 1079484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0729 18:45:44.878998 1079484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0729 18:45:44.901105 1079484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0729 18:45:44.923358 1079484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 18:45:44.946114 1079484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/ha-344156/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0729 18:45:44.968890 1079484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/files/etc/ssl/certs/10622722.pem --> /usr/share/ca-certificates/10622722.pem (1708 bytes)
	I0729 18:45:44.990970 1079484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 18:45:45.013280 1079484 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/1062272.pem --> /usr/share/ca-certificates/1062272.pem (1338 bytes)
	I0729 18:45:45.035914 1079484 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 18:45:45.052097 1079484 ssh_runner.go:195] Run: openssl version
	I0729 18:45:45.057840 1079484 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 18:45:45.068700 1079484 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 18:45:45.073030 1079484 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 18:17 /usr/share/ca-certificates/minikubeCA.pem
	I0729 18:45:45.073072 1079484 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 18:45:45.078746 1079484 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 18:45:45.088661 1079484 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1062272.pem && ln -fs /usr/share/ca-certificates/1062272.pem /etc/ssl/certs/1062272.pem"
	I0729 18:45:45.099564 1079484 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1062272.pem
	I0729 18:45:45.103791 1079484 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 18:30 /usr/share/ca-certificates/1062272.pem
	I0729 18:45:45.103830 1079484 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1062272.pem
	I0729 18:45:45.109555 1079484 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1062272.pem /etc/ssl/certs/51391683.0"
	I0729 18:45:45.119249 1079484 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10622722.pem && ln -fs /usr/share/ca-certificates/10622722.pem /etc/ssl/certs/10622722.pem"
	I0729 18:45:45.131709 1079484 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10622722.pem
	I0729 18:45:45.136322 1079484 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 18:30 /usr/share/ca-certificates/10622722.pem
	I0729 18:45:45.136374 1079484 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10622722.pem
	I0729 18:45:45.142581 1079484 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10622722.pem /etc/ssl/certs/3ec20f2e.0"
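	The openssl/ln pairs above follow the standard OpenSSL trust-store convention: each CA PEM in /etc/ssl/certs is reachable through a symlink named "<subject-hash>.0", where the hash comes from "openssl x509 -hash -noout". The helper below is a minimal Go sketch of that convention, assuming the openssl binary is on PATH; it is not minikube's certs.go.

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// linkBySubjectHash creates /etc/ssl/certs/<hash>.0 -> pemPath so OpenSSL
	// can find the CA by subject hash.
	func linkBySubjectHash(pemPath, certDir string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
		if err != nil {
			return fmt.Errorf("hashing %s: %w", pemPath, err)
		}
		link := filepath.Join(certDir, strings.TrimSpace(string(out))+".0")
		_ = os.Remove(link) // replace a stale link if one exists
		return os.Symlink(pemPath, link)
	}

	func main() {
		if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}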
	I0729 18:45:45.152839 1079484 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 18:45:45.157360 1079484 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 18:45:45.163025 1079484 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 18:45:45.168573 1079484 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 18:45:45.174108 1079484 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 18:45:45.180107 1079484 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 18:45:45.185753 1079484 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
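	Each "openssl x509 -checkend 86400" call above asks whether the named control-plane certificate will still be valid 24 hours from now, so an expiring cert can be regenerated before kubeadm starts. A minimal Go equivalent using crypto/x509 (illustrative only; the path in main is just one of the files checked above):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the certificate at path expires within the given window.
	func expiresWithin(path string, window time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM data in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(window).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		fmt.Println("expires within 24h:", soon)
	}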
	I0729 18:45:45.191586 1079484 kubeadm.go:392] StartCluster: {Name:ha-344156 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Clust
erName:ha-344156 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.225 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.249 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.148 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.9 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpo
d:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGI
D:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 18:45:45.191733 1079484 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 18:45:45.191780 1079484 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 18:45:45.231326 1079484 cri.go:89] found id: "101cd31cb21fc963b197637a168589c1b941eb41979113dd3fb0f23cbfcb7d4f"
	I0729 18:45:45.231354 1079484 cri.go:89] found id: "ed53860c346f8c8f181a4e566342b169097f3d645e4e5dbc9162454b50b78e1b"
	I0729 18:45:45.231359 1079484 cri.go:89] found id: "fb1d68de4a07e66be33374c8c90edb7a386f4fb65e96c9bdb56e9fd90a9b4adc"
	I0729 18:45:45.231363 1079484 cri.go:89] found id: "ce856c69ecf84e714da35cd579fd1fe8602ffe85be37c3fcb4703a31b2cb6d6d"
	I0729 18:45:45.231365 1079484 cri.go:89] found id: "1a4d13ace439ff6db0bd224c5959b2f1de0aca9190251438b96b230bd76dad67"
	I0729 18:45:45.231368 1079484 cri.go:89] found id: "7d0acef755a4a9cf64d3fa80a06a2fb7cd2c2f24d851c814a12dbfd69b8c8ae6"
	I0729 18:45:45.231370 1079484 cri.go:89] found id: "88c61cb99966582064c98436dabbb6247148296145067505f732961e9dafcf62"
	I0729 18:45:45.231373 1079484 cri.go:89] found id: "ea6501e2c6d48c68182f6d966404f0d58013e7ee6b2d05e6e8a8de079a01e50b"
	I0729 18:45:45.231375 1079484 cri.go:89] found id: "df682abbd97678618dabe8275a57ffd1f327de1e734e117a59fd4f520eaf1b79"
	I0729 18:45:45.231384 1079484 cri.go:89] found id: "cea7dd8ee7d180192a5a6562a72a56f86a9a432553225602839d9657f42f95a4"
	I0729 18:45:45.231386 1079484 cri.go:89] found id: "15f9d79f9c9682c7273de711cee53f9f833182ceb7abdd39bb612f44066ac6f4"
	I0729 18:45:45.231389 1079484 cri.go:89] found id: "fc27c145e7b72db405baaf295995d274d557ba7dbce383424c6297461d859b29"
	I0729 18:45:45.231391 1079484 cri.go:89] found id: "24d097bf3e16a2c4b74c82ba78ce7e6eb19b3461d66b573a3d5ba23c5df6a472"
	I0729 18:45:45.231394 1079484 cri.go:89] found id: ""
	I0729 18:45:45.231444 1079484 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Jul 29 18:50:56 ha-344156 crio[3799]: time="2024-07-29 18:50:56.732652814Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722279056732631350,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=898bbdb6-32d8-4ee6-b781-0b59ad7a35c1 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 18:50:56 ha-344156 crio[3799]: time="2024-07-29 18:50:56.733091436Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2d204840-36bf-4eec-973a-d2b6ef7d710a name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:50:56 ha-344156 crio[3799]: time="2024-07-29 18:50:56.733172895Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2d204840-36bf-4eec-973a-d2b6ef7d710a name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:50:56 ha-344156 crio[3799]: time="2024-07-29 18:50:56.733721708Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b174523a06ec7adddf15369e8baac68d361738d24e60facffb158390ec46bb62,PodSandboxId:aa7e4dbfa154ae6d0f220755ba9d1789fa37b73e1e5658abe8f771e05d7855ee,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722278814117910084,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ea00f25-122f-4a18-9d69-3606cfddf4d9,},Annotations:map[string]string{io.kubernetes.container.hash: 70731b68,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5c331c2db87d36569c1e2c3745280ae59411f376d0c6496945bdb87ec2513de,PodSandboxId:f37cc1a23ea4cbdc9e1e5a727bd4054ae477b5ef49a7189d7a0b6f23467727f2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722278797123070814,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-344156,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 30243da5f1a98e23c72326dd278a562e,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:249837d4bf0487b8ddac24a5d86c9a901eb6e862bf649d5aded365f82343bb0b,PodSandboxId:502681a3f7a5d6cf061874a7bc45a4f1fddedbe2905aa509986e6f64bde09e9f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722278789119967937,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-344156,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d61da37ea38b5727b5710cdad0fc95fd,},Annotations:map[string]string{io.kubernetes.container.hash: c06782b3,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5fd2655106bc50f366df7fe1a0d26b8e18abf5336ff2b35fb0db7c271a905e6,PodSandboxId:e975cb200a028c2f553577bc4ca3dc54153a60bbc2a757ac823269d664485b82,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722278784422534758,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-9sbfq,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f11563c5-3507-44f0-a103-1e8462494e13,},Annotations:map[string]string{io.kubernetes.container.hash: fb54a535,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c025231c68b98f9cec2be5488e7a2415cb848aff6b3457f54b1bcf4bdcf02d2a,PodSandboxId:333832b2e557c626d095d70ce80398da1efe4cf65d01af5554293376334042b9,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722278767938486162,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-344156,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f125f3c55fdf22425c7e10df6c846062,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePoli
cy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81184078df7bea819e58580fd80c6cffb76960208cfdcf77e820b9597e999ba0,PodSandboxId:12494ac147e1d192c12098dcb21d6c6df9c76e580409d272aef0ef71d9a4906a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722278751552759210,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gp282,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abf94303-b608-45b5-ae8b-9288be614a8f,},Annotations:map[string]string{io.kubernetes.container.hash: 6e0cc5f5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termin
ationGracePeriod: 30,},},&Container{Id:4260fb67ddc41983a522e2691ad7642fca868ad3425cfe9b4ae67e7a346c8e91,PodSandboxId:aa7e4dbfa154ae6d0f220755ba9d1789fa37b73e1e5658abe8f771e05d7855ee,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722278751244807236,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ea00f25-122f-4a18-9d69-3606cfddf4d9,},Annotations:map[string]string{io.kubernetes.container.hash: 70731b68,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:7a8271452e01844131f009ae7b4d6a0628e58b94b2a87a9aeb2990efcb11191e,PodSandboxId:5459cf266e338752287e9df29b9fa0a3a25bf21a51d55f8eacbc87c0b472d01c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722278751315791833,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-5slmg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2aca93c-209e-48b6-a9a5-692bdf185129,},Annotations:map[string]string{io.kubernetes.container.hash: 48049156,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kub
ernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab40f6e9b301a395b5cd5e94d8503edf8e224c2587be4fd2daf98a89374a7e9e,PodSandboxId:86a1e64fb3784cf94a78db0167520ad1df05b70bc881f599245c47f08728be6b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722278751229841705,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-84nqp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4e18e53-1c72-440f-82b2-bd1b4306af12,},Annotations:map[string]string{io.kubernetes.container.hash: 16293ddd,io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:caa8059749c41b7868cf8f0b61f0356539508f2667cbe7bbfae679c18cd89268,PodSandboxId:f37cc1a23ea4cbdc9e1e5a727bd4054ae477b5ef49a7189d7a0b6f23467727f2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722278751112866025,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-344156,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 30243da5f1a98e23c72326dd278a562e,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCoun
t: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a37174e321aa6d722fe66991eec4aa407c80ca27b8befba847d42c6c4bccd4a4,PodSandboxId:453b2b6892cf0b6ee26e984852e3523d3d145e86281a8e34066fd7906dfd7b39,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722278751150875901,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-h5h7v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2b09553-dd59-44ab-a738-41e872defd34,},Annotations:map[string]string{io.kubernetes.container.hash: 59c68fb6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\
":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:182486140ce8714c921be4d3bda2253429cb415d758f79aeb6b0ab42f631d68b,PodSandboxId:118dbdb3a468781a14c11f74f95a432103d5f52631d9fe537e936d9f30d1a68f,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722278750906805185,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-344156,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 67610b75999e06603675bc1a64d5ef7d,},Annotations:map[string]string{io.kubernetes.container.hash: 4f9376d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c373f53a9dbd411fe323c9a8fb32f348b83f82f89bc8fb682d325a34826437b5,PodSandboxId:502681a3f7a5d6cf061874a7bc45a4f1fddedbe2905aa509986e6f64bde09e9f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722278750955360200,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-344156,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d61da37ea38b5727b
5710cdad0fc95fd,},Annotations:map[string]string{io.kubernetes.container.hash: c06782b3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2297dbc5667b853e3a48d404f4b17f021af9cf0011a39175e36cf998b6fb2dcf,PodSandboxId:d1126d0597b32811fe4cd57edea908284ab89359f93d1b3bd14a957a786fdd3e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722278750940053649,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-344156,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d17047d55559cfd90852a780672fb93,},Ann
otations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d152449ddedd3a52cbbb9d3acfb3bf85c0e5fa9f81a0c0359f4148d4c603d783,PodSandboxId:98fcabecdf16c058b2c9b2d5b67a175d4427e2426d8c8ecad90fe5e7e61c7166,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722278222485055131,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-9sbfq,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f11563c5-3507-44f0-a103-1e8462494e13,},Annot
ations:map[string]string{io.kubernetes.container.hash: fb54a535,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a4d13ace439ff6db0bd224c5959b2f1de0aca9190251438b96b230bd76dad67,PodSandboxId:331a36b1d7af6a03c1de960f2f92f9e567bb8d9a89fef7342712caae96969f2c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722278090682990467,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-h5h7v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2b09553-dd59-44ab-a738-41e872defd34,},Annotations:map[string]string{io.kube
rnetes.container.hash: 59c68fb6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d0acef755a4a9cf64d3fa80a06a2fb7cd2c2f24d851c814a12dbfd69b8c8ae6,PodSandboxId:3bc8a1c2175a3fcdce5b369132d086e20e9843f84b0af2dec1acd2dc3f598cb2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722278090616145812,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7db6d8ff4d-5slmg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2aca93c-209e-48b6-a9a5-692bdf185129,},Annotations:map[string]string{io.kubernetes.container.hash: 48049156,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88c61cb99966582064c98436dabbb6247148296145067505f732961e9dafcf62,PodSandboxId:5312fee5fcd07548b5a87233879d29cd884fb0a7e49ffeffe66817b71a7b2ac9,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]
string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722278078648181661,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-84nqp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4e18e53-1c72-440f-82b2-bd1b4306af12,},Annotations:map[string]string{io.kubernetes.container.hash: 16293ddd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea6501e2c6d48c68182f6d966404f0d58013e7ee6b2d05e6e8a8de079a01e50b,PodSandboxId:f041673054c6d8c2cbbc857f62b73eafbb56f1089f1a1937ee91d2e3cdb89df9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,Runtime
Handler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722278076564436457,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gp282,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abf94303-b608-45b5-ae8b-9288be614a8f,},Annotations:map[string]string{io.kubernetes.container.hash: 6e0cc5f5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cea7dd8ee7d180192a5a6562a72a56f86a9a432553225602839d9657f42f95a4,PodSandboxId:ec39a320a672eea9866c1f830b546dc2e1fc8f0a3093acc13b1acd6b5d008317,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b7
6722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722278056834871013,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-344156,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d17047d55559cfd90852a780672fb93,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc27c145e7b72db405baaf295995d274d557ba7dbce383424c6297461d859b29,PodSandboxId:5e0320966c0af472e5e166dc8244abd4707674553da0aef0c877b9db5c6b053c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c
0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722278056771768809,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-344156,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67610b75999e06603675bc1a64d5ef7d,},Annotations:map[string]string{io.kubernetes.container.hash: 4f9376d5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2d204840-36bf-4eec-973a-d2b6ef7d710a name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:50:56 ha-344156 crio[3799]: time="2024-07-29 18:50:56.780807374Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6eb93ed8-7d12-4af5-8283-b7744d30c9b2 name=/runtime.v1.RuntimeService/Version
	Jul 29 18:50:56 ha-344156 crio[3799]: time="2024-07-29 18:50:56.780878846Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6eb93ed8-7d12-4af5-8283-b7744d30c9b2 name=/runtime.v1.RuntimeService/Version
	Jul 29 18:50:56 ha-344156 crio[3799]: time="2024-07-29 18:50:56.781995486Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=181ee8ed-f689-4610-a49c-9e1cf3300226 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 18:50:56 ha-344156 crio[3799]: time="2024-07-29 18:50:56.782702091Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722279056782675606,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=181ee8ed-f689-4610-a49c-9e1cf3300226 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 18:50:56 ha-344156 crio[3799]: time="2024-07-29 18:50:56.783225200Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2db33ab4-4090-46fd-8ccd-76bdbdd037dc name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:50:56 ha-344156 crio[3799]: time="2024-07-29 18:50:56.783323072Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2db33ab4-4090-46fd-8ccd-76bdbdd037dc name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:50:56 ha-344156 crio[3799]: time="2024-07-29 18:50:56.783842829Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b174523a06ec7adddf15369e8baac68d361738d24e60facffb158390ec46bb62,PodSandboxId:aa7e4dbfa154ae6d0f220755ba9d1789fa37b73e1e5658abe8f771e05d7855ee,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722278814117910084,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ea00f25-122f-4a18-9d69-3606cfddf4d9,},Annotations:map[string]string{io.kubernetes.container.hash: 70731b68,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5c331c2db87d36569c1e2c3745280ae59411f376d0c6496945bdb87ec2513de,PodSandboxId:f37cc1a23ea4cbdc9e1e5a727bd4054ae477b5ef49a7189d7a0b6f23467727f2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722278797123070814,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-344156,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 30243da5f1a98e23c72326dd278a562e,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:249837d4bf0487b8ddac24a5d86c9a901eb6e862bf649d5aded365f82343bb0b,PodSandboxId:502681a3f7a5d6cf061874a7bc45a4f1fddedbe2905aa509986e6f64bde09e9f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722278789119967937,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-344156,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d61da37ea38b5727b5710cdad0fc95fd,},Annotations:map[string]string{io.kubernetes.container.hash: c06782b3,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5fd2655106bc50f366df7fe1a0d26b8e18abf5336ff2b35fb0db7c271a905e6,PodSandboxId:e975cb200a028c2f553577bc4ca3dc54153a60bbc2a757ac823269d664485b82,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722278784422534758,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-9sbfq,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f11563c5-3507-44f0-a103-1e8462494e13,},Annotations:map[string]string{io.kubernetes.container.hash: fb54a535,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c025231c68b98f9cec2be5488e7a2415cb848aff6b3457f54b1bcf4bdcf02d2a,PodSandboxId:333832b2e557c626d095d70ce80398da1efe4cf65d01af5554293376334042b9,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722278767938486162,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-344156,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f125f3c55fdf22425c7e10df6c846062,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePoli
cy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81184078df7bea819e58580fd80c6cffb76960208cfdcf77e820b9597e999ba0,PodSandboxId:12494ac147e1d192c12098dcb21d6c6df9c76e580409d272aef0ef71d9a4906a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722278751552759210,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gp282,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abf94303-b608-45b5-ae8b-9288be614a8f,},Annotations:map[string]string{io.kubernetes.container.hash: 6e0cc5f5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termin
ationGracePeriod: 30,},},&Container{Id:4260fb67ddc41983a522e2691ad7642fca868ad3425cfe9b4ae67e7a346c8e91,PodSandboxId:aa7e4dbfa154ae6d0f220755ba9d1789fa37b73e1e5658abe8f771e05d7855ee,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722278751244807236,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ea00f25-122f-4a18-9d69-3606cfddf4d9,},Annotations:map[string]string{io.kubernetes.container.hash: 70731b68,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:7a8271452e01844131f009ae7b4d6a0628e58b94b2a87a9aeb2990efcb11191e,PodSandboxId:5459cf266e338752287e9df29b9fa0a3a25bf21a51d55f8eacbc87c0b472d01c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722278751315791833,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-5slmg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2aca93c-209e-48b6-a9a5-692bdf185129,},Annotations:map[string]string{io.kubernetes.container.hash: 48049156,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kub
ernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab40f6e9b301a395b5cd5e94d8503edf8e224c2587be4fd2daf98a89374a7e9e,PodSandboxId:86a1e64fb3784cf94a78db0167520ad1df05b70bc881f599245c47f08728be6b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722278751229841705,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-84nqp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4e18e53-1c72-440f-82b2-bd1b4306af12,},Annotations:map[string]string{io.kubernetes.container.hash: 16293ddd,io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:caa8059749c41b7868cf8f0b61f0356539508f2667cbe7bbfae679c18cd89268,PodSandboxId:f37cc1a23ea4cbdc9e1e5a727bd4054ae477b5ef49a7189d7a0b6f23467727f2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722278751112866025,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-344156,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 30243da5f1a98e23c72326dd278a562e,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCoun
t: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a37174e321aa6d722fe66991eec4aa407c80ca27b8befba847d42c6c4bccd4a4,PodSandboxId:453b2b6892cf0b6ee26e984852e3523d3d145e86281a8e34066fd7906dfd7b39,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722278751150875901,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-h5h7v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2b09553-dd59-44ab-a738-41e872defd34,},Annotations:map[string]string{io.kubernetes.container.hash: 59c68fb6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\
":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:182486140ce8714c921be4d3bda2253429cb415d758f79aeb6b0ab42f631d68b,PodSandboxId:118dbdb3a468781a14c11f74f95a432103d5f52631d9fe537e936d9f30d1a68f,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722278750906805185,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-344156,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 67610b75999e06603675bc1a64d5ef7d,},Annotations:map[string]string{io.kubernetes.container.hash: 4f9376d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c373f53a9dbd411fe323c9a8fb32f348b83f82f89bc8fb682d325a34826437b5,PodSandboxId:502681a3f7a5d6cf061874a7bc45a4f1fddedbe2905aa509986e6f64bde09e9f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722278750955360200,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-344156,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d61da37ea38b5727b
5710cdad0fc95fd,},Annotations:map[string]string{io.kubernetes.container.hash: c06782b3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2297dbc5667b853e3a48d404f4b17f021af9cf0011a39175e36cf998b6fb2dcf,PodSandboxId:d1126d0597b32811fe4cd57edea908284ab89359f93d1b3bd14a957a786fdd3e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722278750940053649,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-344156,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d17047d55559cfd90852a780672fb93,},Ann
otations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d152449ddedd3a52cbbb9d3acfb3bf85c0e5fa9f81a0c0359f4148d4c603d783,PodSandboxId:98fcabecdf16c058b2c9b2d5b67a175d4427e2426d8c8ecad90fe5e7e61c7166,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722278222485055131,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-9sbfq,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f11563c5-3507-44f0-a103-1e8462494e13,},Annot
ations:map[string]string{io.kubernetes.container.hash: fb54a535,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a4d13ace439ff6db0bd224c5959b2f1de0aca9190251438b96b230bd76dad67,PodSandboxId:331a36b1d7af6a03c1de960f2f92f9e567bb8d9a89fef7342712caae96969f2c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722278090682990467,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-h5h7v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2b09553-dd59-44ab-a738-41e872defd34,},Annotations:map[string]string{io.kube
rnetes.container.hash: 59c68fb6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d0acef755a4a9cf64d3fa80a06a2fb7cd2c2f24d851c814a12dbfd69b8c8ae6,PodSandboxId:3bc8a1c2175a3fcdce5b369132d086e20e9843f84b0af2dec1acd2dc3f598cb2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722278090616145812,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7db6d8ff4d-5slmg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2aca93c-209e-48b6-a9a5-692bdf185129,},Annotations:map[string]string{io.kubernetes.container.hash: 48049156,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88c61cb99966582064c98436dabbb6247148296145067505f732961e9dafcf62,PodSandboxId:5312fee5fcd07548b5a87233879d29cd884fb0a7e49ffeffe66817b71a7b2ac9,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]
string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722278078648181661,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-84nqp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4e18e53-1c72-440f-82b2-bd1b4306af12,},Annotations:map[string]string{io.kubernetes.container.hash: 16293ddd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea6501e2c6d48c68182f6d966404f0d58013e7ee6b2d05e6e8a8de079a01e50b,PodSandboxId:f041673054c6d8c2cbbc857f62b73eafbb56f1089f1a1937ee91d2e3cdb89df9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,Runtime
Handler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722278076564436457,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gp282,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abf94303-b608-45b5-ae8b-9288be614a8f,},Annotations:map[string]string{io.kubernetes.container.hash: 6e0cc5f5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cea7dd8ee7d180192a5a6562a72a56f86a9a432553225602839d9657f42f95a4,PodSandboxId:ec39a320a672eea9866c1f830b546dc2e1fc8f0a3093acc13b1acd6b5d008317,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b7
6722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722278056834871013,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-344156,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d17047d55559cfd90852a780672fb93,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc27c145e7b72db405baaf295995d274d557ba7dbce383424c6297461d859b29,PodSandboxId:5e0320966c0af472e5e166dc8244abd4707674553da0aef0c877b9db5c6b053c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c
0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722278056771768809,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-344156,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67610b75999e06603675bc1a64d5ef7d,},Annotations:map[string]string{io.kubernetes.container.hash: 4f9376d5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2db33ab4-4090-46fd-8ccd-76bdbdd037dc name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:50:56 ha-344156 crio[3799]: time="2024-07-29 18:50:56.829556504Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e7b5e2a4-3b7f-499c-b3be-8c2cd56f0933 name=/runtime.v1.RuntimeService/Version
	Jul 29 18:50:56 ha-344156 crio[3799]: time="2024-07-29 18:50:56.829627279Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e7b5e2a4-3b7f-499c-b3be-8c2cd56f0933 name=/runtime.v1.RuntimeService/Version
	Jul 29 18:50:56 ha-344156 crio[3799]: time="2024-07-29 18:50:56.830979827Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e6454199-6b4b-444f-929b-145625a66518 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 18:50:56 ha-344156 crio[3799]: time="2024-07-29 18:50:56.831642146Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722279056831620674,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e6454199-6b4b-444f-929b-145625a66518 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 18:50:56 ha-344156 crio[3799]: time="2024-07-29 18:50:56.832095995Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d70a88fd-5f31-4f7c-9f87-4faea689a1fb name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:50:56 ha-344156 crio[3799]: time="2024-07-29 18:50:56.832147184Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d70a88fd-5f31-4f7c-9f87-4faea689a1fb name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:50:56 ha-344156 crio[3799]: time="2024-07-29 18:50:56.832830599Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b174523a06ec7adddf15369e8baac68d361738d24e60facffb158390ec46bb62,PodSandboxId:aa7e4dbfa154ae6d0f220755ba9d1789fa37b73e1e5658abe8f771e05d7855ee,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722278814117910084,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ea00f25-122f-4a18-9d69-3606cfddf4d9,},Annotations:map[string]string{io.kubernetes.container.hash: 70731b68,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5c331c2db87d36569c1e2c3745280ae59411f376d0c6496945bdb87ec2513de,PodSandboxId:f37cc1a23ea4cbdc9e1e5a727bd4054ae477b5ef49a7189d7a0b6f23467727f2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722278797123070814,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-344156,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 30243da5f1a98e23c72326dd278a562e,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:249837d4bf0487b8ddac24a5d86c9a901eb6e862bf649d5aded365f82343bb0b,PodSandboxId:502681a3f7a5d6cf061874a7bc45a4f1fddedbe2905aa509986e6f64bde09e9f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722278789119967937,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-344156,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d61da37ea38b5727b5710cdad0fc95fd,},Annotations:map[string]string{io.kubernetes.container.hash: c06782b3,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5fd2655106bc50f366df7fe1a0d26b8e18abf5336ff2b35fb0db7c271a905e6,PodSandboxId:e975cb200a028c2f553577bc4ca3dc54153a60bbc2a757ac823269d664485b82,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722278784422534758,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-9sbfq,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f11563c5-3507-44f0-a103-1e8462494e13,},Annotations:map[string]string{io.kubernetes.container.hash: fb54a535,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c025231c68b98f9cec2be5488e7a2415cb848aff6b3457f54b1bcf4bdcf02d2a,PodSandboxId:333832b2e557c626d095d70ce80398da1efe4cf65d01af5554293376334042b9,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722278767938486162,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-344156,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f125f3c55fdf22425c7e10df6c846062,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePoli
cy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81184078df7bea819e58580fd80c6cffb76960208cfdcf77e820b9597e999ba0,PodSandboxId:12494ac147e1d192c12098dcb21d6c6df9c76e580409d272aef0ef71d9a4906a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722278751552759210,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gp282,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abf94303-b608-45b5-ae8b-9288be614a8f,},Annotations:map[string]string{io.kubernetes.container.hash: 6e0cc5f5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termin
ationGracePeriod: 30,},},&Container{Id:4260fb67ddc41983a522e2691ad7642fca868ad3425cfe9b4ae67e7a346c8e91,PodSandboxId:aa7e4dbfa154ae6d0f220755ba9d1789fa37b73e1e5658abe8f771e05d7855ee,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722278751244807236,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ea00f25-122f-4a18-9d69-3606cfddf4d9,},Annotations:map[string]string{io.kubernetes.container.hash: 70731b68,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:7a8271452e01844131f009ae7b4d6a0628e58b94b2a87a9aeb2990efcb11191e,PodSandboxId:5459cf266e338752287e9df29b9fa0a3a25bf21a51d55f8eacbc87c0b472d01c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722278751315791833,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-5slmg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2aca93c-209e-48b6-a9a5-692bdf185129,},Annotations:map[string]string{io.kubernetes.container.hash: 48049156,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kub
ernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab40f6e9b301a395b5cd5e94d8503edf8e224c2587be4fd2daf98a89374a7e9e,PodSandboxId:86a1e64fb3784cf94a78db0167520ad1df05b70bc881f599245c47f08728be6b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722278751229841705,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-84nqp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4e18e53-1c72-440f-82b2-bd1b4306af12,},Annotations:map[string]string{io.kubernetes.container.hash: 16293ddd,io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:caa8059749c41b7868cf8f0b61f0356539508f2667cbe7bbfae679c18cd89268,PodSandboxId:f37cc1a23ea4cbdc9e1e5a727bd4054ae477b5ef49a7189d7a0b6f23467727f2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722278751112866025,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-344156,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 30243da5f1a98e23c72326dd278a562e,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCoun
t: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a37174e321aa6d722fe66991eec4aa407c80ca27b8befba847d42c6c4bccd4a4,PodSandboxId:453b2b6892cf0b6ee26e984852e3523d3d145e86281a8e34066fd7906dfd7b39,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722278751150875901,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-h5h7v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2b09553-dd59-44ab-a738-41e872defd34,},Annotations:map[string]string{io.kubernetes.container.hash: 59c68fb6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\
":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:182486140ce8714c921be4d3bda2253429cb415d758f79aeb6b0ab42f631d68b,PodSandboxId:118dbdb3a468781a14c11f74f95a432103d5f52631d9fe537e936d9f30d1a68f,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722278750906805185,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-344156,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 67610b75999e06603675bc1a64d5ef7d,},Annotations:map[string]string{io.kubernetes.container.hash: 4f9376d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c373f53a9dbd411fe323c9a8fb32f348b83f82f89bc8fb682d325a34826437b5,PodSandboxId:502681a3f7a5d6cf061874a7bc45a4f1fddedbe2905aa509986e6f64bde09e9f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722278750955360200,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-344156,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d61da37ea38b5727b
5710cdad0fc95fd,},Annotations:map[string]string{io.kubernetes.container.hash: c06782b3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2297dbc5667b853e3a48d404f4b17f021af9cf0011a39175e36cf998b6fb2dcf,PodSandboxId:d1126d0597b32811fe4cd57edea908284ab89359f93d1b3bd14a957a786fdd3e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722278750940053649,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-344156,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d17047d55559cfd90852a780672fb93,},Ann
otations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d152449ddedd3a52cbbb9d3acfb3bf85c0e5fa9f81a0c0359f4148d4c603d783,PodSandboxId:98fcabecdf16c058b2c9b2d5b67a175d4427e2426d8c8ecad90fe5e7e61c7166,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722278222485055131,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-9sbfq,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f11563c5-3507-44f0-a103-1e8462494e13,},Annot
ations:map[string]string{io.kubernetes.container.hash: fb54a535,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a4d13ace439ff6db0bd224c5959b2f1de0aca9190251438b96b230bd76dad67,PodSandboxId:331a36b1d7af6a03c1de960f2f92f9e567bb8d9a89fef7342712caae96969f2c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722278090682990467,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-h5h7v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2b09553-dd59-44ab-a738-41e872defd34,},Annotations:map[string]string{io.kube
rnetes.container.hash: 59c68fb6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d0acef755a4a9cf64d3fa80a06a2fb7cd2c2f24d851c814a12dbfd69b8c8ae6,PodSandboxId:3bc8a1c2175a3fcdce5b369132d086e20e9843f84b0af2dec1acd2dc3f598cb2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722278090616145812,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7db6d8ff4d-5slmg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2aca93c-209e-48b6-a9a5-692bdf185129,},Annotations:map[string]string{io.kubernetes.container.hash: 48049156,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88c61cb99966582064c98436dabbb6247148296145067505f732961e9dafcf62,PodSandboxId:5312fee5fcd07548b5a87233879d29cd884fb0a7e49ffeffe66817b71a7b2ac9,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]
string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722278078648181661,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-84nqp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4e18e53-1c72-440f-82b2-bd1b4306af12,},Annotations:map[string]string{io.kubernetes.container.hash: 16293ddd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea6501e2c6d48c68182f6d966404f0d58013e7ee6b2d05e6e8a8de079a01e50b,PodSandboxId:f041673054c6d8c2cbbc857f62b73eafbb56f1089f1a1937ee91d2e3cdb89df9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,Runtime
Handler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722278076564436457,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gp282,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abf94303-b608-45b5-ae8b-9288be614a8f,},Annotations:map[string]string{io.kubernetes.container.hash: 6e0cc5f5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cea7dd8ee7d180192a5a6562a72a56f86a9a432553225602839d9657f42f95a4,PodSandboxId:ec39a320a672eea9866c1f830b546dc2e1fc8f0a3093acc13b1acd6b5d008317,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b7
6722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722278056834871013,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-344156,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d17047d55559cfd90852a780672fb93,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc27c145e7b72db405baaf295995d274d557ba7dbce383424c6297461d859b29,PodSandboxId:5e0320966c0af472e5e166dc8244abd4707674553da0aef0c877b9db5c6b053c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c
0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722278056771768809,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-344156,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67610b75999e06603675bc1a64d5ef7d,},Annotations:map[string]string{io.kubernetes.container.hash: 4f9376d5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d70a88fd-5f31-4f7c-9f87-4faea689a1fb name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:50:56 ha-344156 crio[3799]: time="2024-07-29 18:50:56.888590992Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4e90fc64-8b2d-4913-ade0-5dc87fa8df5d name=/runtime.v1.RuntimeService/Version
	Jul 29 18:50:56 ha-344156 crio[3799]: time="2024-07-29 18:50:56.888661391Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4e90fc64-8b2d-4913-ade0-5dc87fa8df5d name=/runtime.v1.RuntimeService/Version
	Jul 29 18:50:56 ha-344156 crio[3799]: time="2024-07-29 18:50:56.889787573Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=cedf9ddc-ab9c-4d1c-a2a7-ee9b2ee92b22 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 18:50:56 ha-344156 crio[3799]: time="2024-07-29 18:50:56.890211459Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722279056890188788,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=cedf9ddc-ab9c-4d1c-a2a7-ee9b2ee92b22 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 18:50:56 ha-344156 crio[3799]: time="2024-07-29 18:50:56.890758642Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d1238d74-b4d4-4a9e-bc1d-1635e14dac05 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:50:56 ha-344156 crio[3799]: time="2024-07-29 18:50:56.890818792Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d1238d74-b4d4-4a9e-bc1d-1635e14dac05 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 18:50:56 ha-344156 crio[3799]: time="2024-07-29 18:50:56.891206803Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b174523a06ec7adddf15369e8baac68d361738d24e60facffb158390ec46bb62,PodSandboxId:aa7e4dbfa154ae6d0f220755ba9d1789fa37b73e1e5658abe8f771e05d7855ee,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722278814117910084,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ea00f25-122f-4a18-9d69-3606cfddf4d9,},Annotations:map[string]string{io.kubernetes.container.hash: 70731b68,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5c331c2db87d36569c1e2c3745280ae59411f376d0c6496945bdb87ec2513de,PodSandboxId:f37cc1a23ea4cbdc9e1e5a727bd4054ae477b5ef49a7189d7a0b6f23467727f2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722278797123070814,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-344156,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 30243da5f1a98e23c72326dd278a562e,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:249837d4bf0487b8ddac24a5d86c9a901eb6e862bf649d5aded365f82343bb0b,PodSandboxId:502681a3f7a5d6cf061874a7bc45a4f1fddedbe2905aa509986e6f64bde09e9f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722278789119967937,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-344156,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d61da37ea38b5727b5710cdad0fc95fd,},Annotations:map[string]string{io.kubernetes.container.hash: c06782b3,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5fd2655106bc50f366df7fe1a0d26b8e18abf5336ff2b35fb0db7c271a905e6,PodSandboxId:e975cb200a028c2f553577bc4ca3dc54153a60bbc2a757ac823269d664485b82,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722278784422534758,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-9sbfq,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f11563c5-3507-44f0-a103-1e8462494e13,},Annotations:map[string]string{io.kubernetes.container.hash: fb54a535,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c025231c68b98f9cec2be5488e7a2415cb848aff6b3457f54b1bcf4bdcf02d2a,PodSandboxId:333832b2e557c626d095d70ce80398da1efe4cf65d01af5554293376334042b9,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722278767938486162,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-344156,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f125f3c55fdf22425c7e10df6c846062,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePoli
cy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81184078df7bea819e58580fd80c6cffb76960208cfdcf77e820b9597e999ba0,PodSandboxId:12494ac147e1d192c12098dcb21d6c6df9c76e580409d272aef0ef71d9a4906a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722278751552759210,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gp282,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abf94303-b608-45b5-ae8b-9288be614a8f,},Annotations:map[string]string{io.kubernetes.container.hash: 6e0cc5f5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termin
ationGracePeriod: 30,},},&Container{Id:4260fb67ddc41983a522e2691ad7642fca868ad3425cfe9b4ae67e7a346c8e91,PodSandboxId:aa7e4dbfa154ae6d0f220755ba9d1789fa37b73e1e5658abe8f771e05d7855ee,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722278751244807236,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ea00f25-122f-4a18-9d69-3606cfddf4d9,},Annotations:map[string]string{io.kubernetes.container.hash: 70731b68,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:7a8271452e01844131f009ae7b4d6a0628e58b94b2a87a9aeb2990efcb11191e,PodSandboxId:5459cf266e338752287e9df29b9fa0a3a25bf21a51d55f8eacbc87c0b472d01c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722278751315791833,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-5slmg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2aca93c-209e-48b6-a9a5-692bdf185129,},Annotations:map[string]string{io.kubernetes.container.hash: 48049156,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kub
ernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab40f6e9b301a395b5cd5e94d8503edf8e224c2587be4fd2daf98a89374a7e9e,PodSandboxId:86a1e64fb3784cf94a78db0167520ad1df05b70bc881f599245c47f08728be6b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722278751229841705,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-84nqp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4e18e53-1c72-440f-82b2-bd1b4306af12,},Annotations:map[string]string{io.kubernetes.container.hash: 16293ddd,io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:caa8059749c41b7868cf8f0b61f0356539508f2667cbe7bbfae679c18cd89268,PodSandboxId:f37cc1a23ea4cbdc9e1e5a727bd4054ae477b5ef49a7189d7a0b6f23467727f2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722278751112866025,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-344156,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 30243da5f1a98e23c72326dd278a562e,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCoun
t: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a37174e321aa6d722fe66991eec4aa407c80ca27b8befba847d42c6c4bccd4a4,PodSandboxId:453b2b6892cf0b6ee26e984852e3523d3d145e86281a8e34066fd7906dfd7b39,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722278751150875901,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-h5h7v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2b09553-dd59-44ab-a738-41e872defd34,},Annotations:map[string]string{io.kubernetes.container.hash: 59c68fb6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\
":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:182486140ce8714c921be4d3bda2253429cb415d758f79aeb6b0ab42f631d68b,PodSandboxId:118dbdb3a468781a14c11f74f95a432103d5f52631d9fe537e936d9f30d1a68f,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722278750906805185,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-344156,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 67610b75999e06603675bc1a64d5ef7d,},Annotations:map[string]string{io.kubernetes.container.hash: 4f9376d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c373f53a9dbd411fe323c9a8fb32f348b83f82f89bc8fb682d325a34826437b5,PodSandboxId:502681a3f7a5d6cf061874a7bc45a4f1fddedbe2905aa509986e6f64bde09e9f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722278750955360200,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-344156,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d61da37ea38b5727b
5710cdad0fc95fd,},Annotations:map[string]string{io.kubernetes.container.hash: c06782b3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2297dbc5667b853e3a48d404f4b17f021af9cf0011a39175e36cf998b6fb2dcf,PodSandboxId:d1126d0597b32811fe4cd57edea908284ab89359f93d1b3bd14a957a786fdd3e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722278750940053649,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-344156,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d17047d55559cfd90852a780672fb93,},Ann
otations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d152449ddedd3a52cbbb9d3acfb3bf85c0e5fa9f81a0c0359f4148d4c603d783,PodSandboxId:98fcabecdf16c058b2c9b2d5b67a175d4427e2426d8c8ecad90fe5e7e61c7166,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722278222485055131,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-9sbfq,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f11563c5-3507-44f0-a103-1e8462494e13,},Annot
ations:map[string]string{io.kubernetes.container.hash: fb54a535,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a4d13ace439ff6db0bd224c5959b2f1de0aca9190251438b96b230bd76dad67,PodSandboxId:331a36b1d7af6a03c1de960f2f92f9e567bb8d9a89fef7342712caae96969f2c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722278090682990467,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-h5h7v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2b09553-dd59-44ab-a738-41e872defd34,},Annotations:map[string]string{io.kube
rnetes.container.hash: 59c68fb6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d0acef755a4a9cf64d3fa80a06a2fb7cd2c2f24d851c814a12dbfd69b8c8ae6,PodSandboxId:3bc8a1c2175a3fcdce5b369132d086e20e9843f84b0af2dec1acd2dc3f598cb2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722278090616145812,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7db6d8ff4d-5slmg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2aca93c-209e-48b6-a9a5-692bdf185129,},Annotations:map[string]string{io.kubernetes.container.hash: 48049156,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88c61cb99966582064c98436dabbb6247148296145067505f732961e9dafcf62,PodSandboxId:5312fee5fcd07548b5a87233879d29cd884fb0a7e49ffeffe66817b71a7b2ac9,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]
string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722278078648181661,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-84nqp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4e18e53-1c72-440f-82b2-bd1b4306af12,},Annotations:map[string]string{io.kubernetes.container.hash: 16293ddd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea6501e2c6d48c68182f6d966404f0d58013e7ee6b2d05e6e8a8de079a01e50b,PodSandboxId:f041673054c6d8c2cbbc857f62b73eafbb56f1089f1a1937ee91d2e3cdb89df9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,Runtime
Handler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722278076564436457,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gp282,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abf94303-b608-45b5-ae8b-9288be614a8f,},Annotations:map[string]string{io.kubernetes.container.hash: 6e0cc5f5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cea7dd8ee7d180192a5a6562a72a56f86a9a432553225602839d9657f42f95a4,PodSandboxId:ec39a320a672eea9866c1f830b546dc2e1fc8f0a3093acc13b1acd6b5d008317,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b7
6722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722278056834871013,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-344156,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d17047d55559cfd90852a780672fb93,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc27c145e7b72db405baaf295995d274d557ba7dbce383424c6297461d859b29,PodSandboxId:5e0320966c0af472e5e166dc8244abd4707674553da0aef0c877b9db5c6b053c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c
0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722278056771768809,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-344156,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67610b75999e06603675bc1a64d5ef7d,},Annotations:map[string]string{io.kubernetes.container.hash: 4f9376d5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d1238d74-b4d4-4a9e-bc1d-1635e14dac05 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	b174523a06ec7       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      4 minutes ago       Running             storage-provisioner       4                   aa7e4dbfa154a       storage-provisioner
	f5c331c2db87d       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      4 minutes ago       Running             kube-controller-manager   2                   f37cc1a23ea4c       kube-controller-manager-ha-344156
	249837d4bf048       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      4 minutes ago       Running             kube-apiserver            3                   502681a3f7a5d       kube-apiserver-ha-344156
	b5fd2655106bc       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      4 minutes ago       Running             busybox                   1                   e975cb200a028       busybox-fc5497c4f-9sbfq
	c025231c68b98       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12                                      4 minutes ago       Running             kube-vip                  0                   333832b2e557c       kube-vip-ha-344156
	81184078df7be       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      5 minutes ago       Running             kube-proxy                1                   12494ac147e1d       kube-proxy-gp282
	7a8271452e018       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   1                   5459cf266e338       coredns-7db6d8ff4d-5slmg
	4260fb67ddc41       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      5 minutes ago       Exited              storage-provisioner       3                   aa7e4dbfa154a       storage-provisioner
	ab40f6e9b301a       6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46                                      5 minutes ago       Running             kindnet-cni               1                   86a1e64fb3784       kindnet-84nqp
	a37174e321aa6       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   1                   453b2b6892cf0       coredns-7db6d8ff4d-h5h7v
	caa8059749c41       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      5 minutes ago       Exited              kube-controller-manager   1                   f37cc1a23ea4c       kube-controller-manager-ha-344156
	c373f53a9dbd4       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      5 minutes ago       Exited              kube-apiserver            2                   502681a3f7a5d       kube-apiserver-ha-344156
	2297dbc5667b8       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      5 minutes ago       Running             kube-scheduler            1                   d1126d0597b32       kube-scheduler-ha-344156
	182486140ce87       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      5 minutes ago       Running             etcd                      1                   118dbdb3a4687       etcd-ha-344156
	d152449ddedd3       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   13 minutes ago      Exited              busybox                   0                   98fcabecdf16c       busybox-fc5497c4f-9sbfq
	1a4d13ace439f       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      16 minutes ago      Exited              coredns                   0                   331a36b1d7af6       coredns-7db6d8ff4d-h5h7v
	7d0acef755a4a       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      16 minutes ago      Exited              coredns                   0                   3bc8a1c2175a3       coredns-7db6d8ff4d-5slmg
	88c61cb999665       docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9    16 minutes ago      Exited              kindnet-cni               0                   5312fee5fcd07       kindnet-84nqp
	ea6501e2c6d48       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      16 minutes ago      Exited              kube-proxy                0                   f041673054c6d       kube-proxy-gp282
	cea7dd8ee7d18       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      16 minutes ago      Exited              kube-scheduler            0                   ec39a320a672e       kube-scheduler-ha-344156
	fc27c145e7b72       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      16 minutes ago      Exited              etcd                      0                   5e0320966c0af       etcd-ha-344156
	
	
	==> coredns [1a4d13ace439ff6db0bd224c5959b2f1de0aca9190251438b96b230bd76dad67] <==
	[INFO] 10.244.1.2:41663 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000101756s
	[INFO] 10.244.2.2:42699 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000103084s
	[INFO] 10.244.2.2:43982 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000096471s
	[INFO] 10.244.2.2:48234 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000064109s
	[INFO] 10.244.2.2:58544 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000127531s
	[INFO] 10.244.2.2:43646 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000097904s
	[INFO] 10.244.0.4:41454 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00007042s
	[INFO] 10.244.1.2:56019 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000130286s
	[INFO] 10.244.1.2:49552 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000419229s
	[INFO] 10.244.1.2:42570 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00019871s
	[INFO] 10.244.1.2:35841 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000085394s
	[INFO] 10.244.2.2:38179 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000154252s
	[INFO] 10.244.2.2:54595 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000095931s
	[INFO] 10.244.0.4:52521 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000102943s
	[INFO] 10.244.0.4:41421 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000122912s
	[INFO] 10.244.1.2:51311 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000262883s
	[INFO] 10.244.1.2:51083 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000108384s
	[INFO] 10.244.2.2:49034 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000138814s
	[INFO] 10.244.2.2:33015 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000141033s
	[INFO] 10.244.2.2:33854 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000124542s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [7a8271452e01844131f009ae7b4d6a0628e58b94b2a87a9aeb2990efcb11191e] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:39140->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[243480641]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (29-Jul-2024 18:46:03.270) (total time: 12564ms):
	Trace[243480641]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:39140->10.96.0.1:443: read: connection reset by peer 12564ms (18:46:15.834)
	Trace[243480641]: [12.56422755s] [12.56422755s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:39140->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [7d0acef755a4a9cf64d3fa80a06a2fb7cd2c2f24d851c814a12dbfd69b8c8ae6] <==
	[INFO] 10.244.1.2:47729 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001573362s
	[INFO] 10.244.2.2:32959 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001674804s
	[INFO] 10.244.0.4:44607 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000137454s
	[INFO] 10.244.0.4:45474 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003415625s
	[INFO] 10.244.0.4:42044 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000293336s
	[INFO] 10.244.0.4:42246 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000257435s
	[INFO] 10.244.1.2:53039 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001621784s
	[INFO] 10.244.1.2:47789 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000179788s
	[INFO] 10.244.1.2:51271 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000115306s
	[INFO] 10.244.1.2:60584 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000160548s
	[INFO] 10.244.2.2:39080 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000143675s
	[INFO] 10.244.2.2:57667 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001587169s
	[INFO] 10.244.2.2:36002 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.000958528s
	[INFO] 10.244.0.4:46689 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001122s
	[INFO] 10.244.0.4:53528 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000068803s
	[INFO] 10.244.0.4:58879 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00007922s
	[INFO] 10.244.2.2:40671 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000165257s
	[INFO] 10.244.2.2:52385 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000072909s
	[INFO] 10.244.0.4:40200 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000101268s
	[INFO] 10.244.0.4:60214 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000092204s
	[INFO] 10.244.1.2:45394 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000209017s
	[INFO] 10.244.1.2:53252 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000072648s
	[INFO] 10.244.2.2:37567 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000168035s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [a37174e321aa6d722fe66991eec4aa407c80ca27b8befba847d42c6c4bccd4a4] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:39414->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[2030692405]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (29-Jul-2024 18:46:02.817) (total time: 10575ms):
	Trace[2030692405]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:39414->10.96.0.1:443: read: connection reset by peer 10575ms (18:46:13.392)
	Trace[2030692405]: [10.575095883s] [10.575095883s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:39414->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:39438->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:39438->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               ha-344156
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-344156
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ff26f50543a99741e8a988a34136ffea2b41dbf0
	                    minikube.k8s.io/name=ha-344156
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_29T18_34_23_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 18:34:22 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-344156
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 18:50:51 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 18:46:32 +0000   Mon, 29 Jul 2024 18:34:22 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 18:46:32 +0000   Mon, 29 Jul 2024 18:34:22 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 18:46:32 +0000   Mon, 29 Jul 2024 18:34:22 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 18:46:32 +0000   Mon, 29 Jul 2024 18:34:50 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.225
	  Hostname:    ha-344156
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 be7f4c1228de4ae58c65b2a0531270c4
	  System UUID:                be7f4c12-28de-4ae5-8c65-b2a0531270c4
	  Boot ID:                    14c798b1-a7f8-4045-a5cc-f99e886c885f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-9sbfq              0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 coredns-7db6d8ff4d-5slmg             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 coredns-7db6d8ff4d-h5h7v             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 etcd-ha-344156                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         16m
	  kube-system                 kindnet-84nqp                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      16m
	  kube-system                 kube-apiserver-ha-344156             250m (12%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-controller-manager-ha-344156    200m (10%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-proxy-gp282                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-scheduler-ha-344156             100m (5%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-vip-ha-344156                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m22s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 16m                    kube-proxy       
	  Normal   Starting                 4m23s                  kube-proxy       
	  Normal   NodeHasNoDiskPressure    16m                    kubelet          Node ha-344156 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 16m                    kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  16m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  16m                    kubelet          Node ha-344156 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     16m                    kubelet          Node ha-344156 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           16m                    node-controller  Node ha-344156 event: Registered Node ha-344156 in Controller
	  Normal   NodeReady                16m                    kubelet          Node ha-344156 status is now: NodeReady
	  Normal   RegisteredNode           15m                    node-controller  Node ha-344156 event: Registered Node ha-344156 in Controller
	  Normal   RegisteredNode           14m                    node-controller  Node ha-344156 event: Registered Node ha-344156 in Controller
	  Warning  ContainerGCFailed        5m34s (x2 over 6m34s)  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           4m18s                  node-controller  Node ha-344156 event: Registered Node ha-344156 in Controller
	  Normal   RegisteredNode           4m8s                   node-controller  Node ha-344156 event: Registered Node ha-344156 in Controller
	  Normal   RegisteredNode           3m6s                   node-controller  Node ha-344156 event: Registered Node ha-344156 in Controller
	
	
	Name:               ha-344156-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-344156-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ff26f50543a99741e8a988a34136ffea2b41dbf0
	                    minikube.k8s.io/name=ha-344156
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_29T18_35_26_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 18:35:23 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-344156-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 18:50:52 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 18:47:17 +0000   Mon, 29 Jul 2024 18:46:36 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 18:47:17 +0000   Mon, 29 Jul 2024 18:46:36 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 18:47:17 +0000   Mon, 29 Jul 2024 18:46:36 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 18:47:17 +0000   Mon, 29 Jul 2024 18:46:36 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.249
	  Hostname:    ha-344156-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 ae271825042248168626e86031e0e80b
	  System UUID:                ae271825-0422-4816-8626-e86031e0e80b
	  Boot ID:                    175ac7df-0ca0-443e-95dd-097c6a227ea2
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-np547                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 etcd-ha-344156-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         15m
	  kube-system                 kindnet-b85cc                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      15m
	  kube-system                 kube-apiserver-ha-344156-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-ha-344156-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-4p5r9                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-ha-344156-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-vip-ha-344156-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m16s                  kube-proxy       
	  Normal  Starting                 15m                    kube-proxy       
	  Normal  NodeAllocatableEnforced  15m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  15m (x8 over 15m)      kubelet          Node ha-344156-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m (x8 over 15m)      kubelet          Node ha-344156-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m (x7 over 15m)      kubelet          Node ha-344156-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           15m                    node-controller  Node ha-344156-m02 event: Registered Node ha-344156-m02 in Controller
	  Normal  RegisteredNode           15m                    node-controller  Node ha-344156-m02 event: Registered Node ha-344156-m02 in Controller
	  Normal  RegisteredNode           14m                    node-controller  Node ha-344156-m02 event: Registered Node ha-344156-m02 in Controller
	  Normal  NodeNotReady             11m                    node-controller  Node ha-344156-m02 status is now: NodeNotReady
	  Normal  Starting                 4m49s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m49s (x8 over 4m49s)  kubelet          Node ha-344156-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m49s (x8 over 4m49s)  kubelet          Node ha-344156-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m49s (x7 over 4m49s)  kubelet          Node ha-344156-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m49s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m18s                  node-controller  Node ha-344156-m02 event: Registered Node ha-344156-m02 in Controller
	  Normal  RegisteredNode           4m8s                   node-controller  Node ha-344156-m02 event: Registered Node ha-344156-m02 in Controller
	  Normal  RegisteredNode           3m6s                   node-controller  Node ha-344156-m02 event: Registered Node ha-344156-m02 in Controller
	
	
	Name:               ha-344156-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-344156-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ff26f50543a99741e8a988a34136ffea2b41dbf0
	                    minikube.k8s.io/name=ha-344156
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_29T18_37_36_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 18:37:35 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-344156-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 18:48:30 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 29 Jul 2024 18:48:10 +0000   Mon, 29 Jul 2024 18:49:14 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 29 Jul 2024 18:48:10 +0000   Mon, 29 Jul 2024 18:49:14 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 29 Jul 2024 18:48:10 +0000   Mon, 29 Jul 2024 18:49:14 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 29 Jul 2024 18:48:10 +0000   Mon, 29 Jul 2024 18:49:14 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.9
	  Hostname:    ha-344156-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 cd3c9a6740fc4ec3a7f2c8b9b2357693
	  System UUID:                cd3c9a67-40fc-4ec3-a7f2-c8b9b2357693
	  Boot ID:                    165c6764-1793-422a-825b-5056b2e78975
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-hwshw    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m38s
	  kube-system                 kindnet-c84jp              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-proxy-qjzd6           0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 13m                    kube-proxy       
	  Normal   Starting                 2m43s                  kube-proxy       
	  Normal   NodeHasSufficientMemory  13m (x2 over 13m)      kubelet          Node ha-344156-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    13m (x2 over 13m)      kubelet          Node ha-344156-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     13m (x2 over 13m)      kubelet          Node ha-344156-m04 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  13m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           13m                    node-controller  Node ha-344156-m04 event: Registered Node ha-344156-m04 in Controller
	  Normal   RegisteredNode           13m                    node-controller  Node ha-344156-m04 event: Registered Node ha-344156-m04 in Controller
	  Normal   RegisteredNode           13m                    node-controller  Node ha-344156-m04 event: Registered Node ha-344156-m04 in Controller
	  Normal   NodeReady                12m                    kubelet          Node ha-344156-m04 status is now: NodeReady
	  Normal   RegisteredNode           4m18s                  node-controller  Node ha-344156-m04 event: Registered Node ha-344156-m04 in Controller
	  Normal   RegisteredNode           4m8s                   node-controller  Node ha-344156-m04 event: Registered Node ha-344156-m04 in Controller
	  Normal   NodeNotReady             3m38s                  node-controller  Node ha-344156-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           3m6s                   node-controller  Node ha-344156-m04 event: Registered Node ha-344156-m04 in Controller
	  Normal   Starting                 2m47s                  kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  2m47s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  2m47s (x2 over 2m47s)  kubelet          Node ha-344156-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m47s (x2 over 2m47s)  kubelet          Node ha-344156-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m47s (x2 over 2m47s)  kubelet          Node ha-344156-m04 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 2m47s                  kubelet          Node ha-344156-m04 has been rebooted, boot id: 165c6764-1793-422a-825b-5056b2e78975
	  Normal   NodeReady                2m47s                  kubelet          Node ha-344156-m04 status is now: NodeReady
	  Normal   NodeNotReady             103s                   node-controller  Node ha-344156-m04 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +0.055622] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.058895] systemd-fstab-generator[608]: Ignoring "noauto" option for root device
	[  +0.187111] systemd-fstab-generator[622]: Ignoring "noauto" option for root device
	[  +0.118732] systemd-fstab-generator[634]: Ignoring "noauto" option for root device
	[  +0.257910] systemd-fstab-generator[664]: Ignoring "noauto" option for root device
	[  +4.135704] systemd-fstab-generator[764]: Ignoring "noauto" option for root device
	[  +4.319915] systemd-fstab-generator[945]: Ignoring "noauto" option for root device
	[  +0.063539] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.051986] systemd-fstab-generator[1361]: Ignoring "noauto" option for root device
	[  +0.074788] kauditd_printk_skb: 79 callbacks suppressed
	[  +6.534370] kauditd_printk_skb: 18 callbacks suppressed
	[ +21.052219] kauditd_printk_skb: 38 callbacks suppressed
	[Jul29 18:35] kauditd_printk_skb: 24 callbacks suppressed
	[Jul29 18:42] kauditd_printk_skb: 1 callbacks suppressed
	[Jul29 18:45] systemd-fstab-generator[3716]: Ignoring "noauto" option for root device
	[  +0.144364] systemd-fstab-generator[3728]: Ignoring "noauto" option for root device
	[  +0.200136] systemd-fstab-generator[3743]: Ignoring "noauto" option for root device
	[  +0.148656] systemd-fstab-generator[3755]: Ignoring "noauto" option for root device
	[  +0.272619] systemd-fstab-generator[3783]: Ignoring "noauto" option for root device
	[  +8.529726] systemd-fstab-generator[3887]: Ignoring "noauto" option for root device
	[  +0.092707] kauditd_printk_skb: 100 callbacks suppressed
	[  +5.969540] kauditd_printk_skb: 12 callbacks suppressed
	[Jul29 18:46] kauditd_printk_skb: 85 callbacks suppressed
	[  +9.062161] kauditd_printk_skb: 1 callbacks suppressed
	[ +16.299490] kauditd_printk_skb: 3 callbacks suppressed
	
	
	==> etcd [182486140ce8714c921be4d3bda2253429cb415d758f79aeb6b0ab42f631d68b] <==
	{"level":"info","ts":"2024-07-29T18:47:36.152592Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"fb0a52f06b768c2d","to":"3a4672411638cebf","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-07-29T18:47:36.152674Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"fb0a52f06b768c2d","remote-peer-id":"3a4672411638cebf"}
	{"level":"warn","ts":"2024-07-29T18:47:36.165412Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"192.168.39.148:59306","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2024-07-29T18:47:36.170634Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"192.168.39.148:59320","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2024-07-29T18:47:36.180979Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"192.168.39.148:59336","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2024-07-29T18:47:37.022649Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"3a4672411638cebf","rtt":"0s","error":"dial tcp 192.168.39.148:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-29T18:47:37.025854Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"3a4672411638cebf","rtt":"0s","error":"dial tcp 192.168.39.148:2380: connect: connection refused"}
	{"level":"info","ts":"2024-07-29T18:47:42.438931Z","caller":"traceutil/trace.go:171","msg":"trace[864364412] transaction","detail":"{read_only:false; response_revision:2613; number_of_response:1; }","duration":"103.23738ms","start":"2024-07-29T18:47:42.335668Z","end":"2024-07-29T18:47:42.438905Z","steps":["trace[864364412] 'process raft request'  (duration: 103.102916ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-29T18:48:23.205516Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fb0a52f06b768c2d switched to configuration voters=(11344801214739951462 18089362045835578413)"}
	{"level":"info","ts":"2024-07-29T18:48:23.207636Z","caller":"membership/cluster.go:472","msg":"removed member","cluster-id":"5e6fe32ded71a517","local-member-id":"fb0a52f06b768c2d","removed-remote-peer-id":"3a4672411638cebf","removed-remote-peer-urls":["https://192.168.39.148:2380"]}
	{"level":"info","ts":"2024-07-29T18:48:23.207784Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"3a4672411638cebf"}
	{"level":"warn","ts":"2024-07-29T18:48:23.208174Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"3a4672411638cebf"}
	{"level":"info","ts":"2024-07-29T18:48:23.208233Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"3a4672411638cebf"}
	{"level":"warn","ts":"2024-07-29T18:48:23.208701Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"3a4672411638cebf"}
	{"level":"info","ts":"2024-07-29T18:48:23.208753Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"3a4672411638cebf"}
	{"level":"info","ts":"2024-07-29T18:48:23.208825Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"fb0a52f06b768c2d","remote-peer-id":"3a4672411638cebf"}
	{"level":"warn","ts":"2024-07-29T18:48:23.208976Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"fb0a52f06b768c2d","remote-peer-id":"3a4672411638cebf","error":"context canceled"}
	{"level":"warn","ts":"2024-07-29T18:48:23.209034Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"3a4672411638cebf","error":"failed to read 3a4672411638cebf on stream MsgApp v2 (context canceled)"}
	{"level":"info","ts":"2024-07-29T18:48:23.209061Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"fb0a52f06b768c2d","remote-peer-id":"3a4672411638cebf"}
	{"level":"warn","ts":"2024-07-29T18:48:23.209151Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"fb0a52f06b768c2d","remote-peer-id":"3a4672411638cebf","error":"context canceled"}
	{"level":"info","ts":"2024-07-29T18:48:23.209194Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"fb0a52f06b768c2d","remote-peer-id":"3a4672411638cebf"}
	{"level":"info","ts":"2024-07-29T18:48:23.20921Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"3a4672411638cebf"}
	{"level":"info","ts":"2024-07-29T18:48:23.209223Z","caller":"rafthttp/transport.go:355","msg":"removed remote peer","local-member-id":"fb0a52f06b768c2d","removed-remote-peer-id":"3a4672411638cebf"}
	{"level":"warn","ts":"2024-07-29T18:48:23.228658Z","caller":"rafthttp/http.go:394","msg":"rejected stream from remote peer because it was removed","local-member-id":"fb0a52f06b768c2d","remote-peer-id-stream-handler":"fb0a52f06b768c2d","remote-peer-id-from":"3a4672411638cebf"}
	{"level":"warn","ts":"2024-07-29T18:48:23.237906Z","caller":"rafthttp/http.go:394","msg":"rejected stream from remote peer because it was removed","local-member-id":"fb0a52f06b768c2d","remote-peer-id-stream-handler":"fb0a52f06b768c2d","remote-peer-id-from":"3a4672411638cebf"}
	
	
	==> etcd [fc27c145e7b72db405baaf295995d274d557ba7dbce383424c6297461d859b29] <==
	2024/07/29 18:44:03 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-07-29T18:44:03.745328Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-29T18:44:02.719675Z","time spent":"1.025591002s","remote":"127.0.0.1:33116","response type":"/etcdserverpb.KV/Range","request count":0,"request size":51,"response count":0,"response size":0,"request content":"key:\"/registry/limitranges/\" range_end:\"/registry/limitranges0\" limit:10000 "}
	2024/07/29 18:44:03 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-07-29T18:44:03.745654Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-29T18:44:02.721757Z","time spent":"1.023884919s","remote":"127.0.0.1:33186","response type":"/etcdserverpb.KV/Range","request count":0,"request size":51,"response count":0,"response size":0,"request content":"key:\"/registry/controllers/\" range_end:\"/registry/controllers0\" limit:10000 "}
	2024/07/29 18:44:03 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-07-29T18:44:03.787733Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.225:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-29T18:44:03.787782Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.225:2379: use of closed network connection"}
	{"level":"info","ts":"2024-07-29T18:44:03.787844Z","caller":"etcdserver/server.go:1462","msg":"skipped leadership transfer; local server is not leader","local-member-id":"fb0a52f06b768c2d","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-07-29T18:44:03.788034Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"9d70d498f3feaf66"}
	{"level":"info","ts":"2024-07-29T18:44:03.788069Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"9d70d498f3feaf66"}
	{"level":"info","ts":"2024-07-29T18:44:03.788096Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"9d70d498f3feaf66"}
	{"level":"info","ts":"2024-07-29T18:44:03.788226Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"fb0a52f06b768c2d","remote-peer-id":"9d70d498f3feaf66"}
	{"level":"info","ts":"2024-07-29T18:44:03.788328Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"fb0a52f06b768c2d","remote-peer-id":"9d70d498f3feaf66"}
	{"level":"info","ts":"2024-07-29T18:44:03.78841Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"fb0a52f06b768c2d","remote-peer-id":"9d70d498f3feaf66"}
	{"level":"info","ts":"2024-07-29T18:44:03.788443Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"9d70d498f3feaf66"}
	{"level":"info","ts":"2024-07-29T18:44:03.788451Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"3a4672411638cebf"}
	{"level":"info","ts":"2024-07-29T18:44:03.788463Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"3a4672411638cebf"}
	{"level":"info","ts":"2024-07-29T18:44:03.788481Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"3a4672411638cebf"}
	{"level":"info","ts":"2024-07-29T18:44:03.788551Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"fb0a52f06b768c2d","remote-peer-id":"3a4672411638cebf"}
	{"level":"info","ts":"2024-07-29T18:44:03.788599Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"fb0a52f06b768c2d","remote-peer-id":"3a4672411638cebf"}
	{"level":"info","ts":"2024-07-29T18:44:03.788691Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"fb0a52f06b768c2d","remote-peer-id":"3a4672411638cebf"}
	{"level":"info","ts":"2024-07-29T18:44:03.788728Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"3a4672411638cebf"}
	{"level":"info","ts":"2024-07-29T18:44:03.792232Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.225:2380"}
	{"level":"info","ts":"2024-07-29T18:44:03.792423Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.225:2380"}
	{"level":"info","ts":"2024-07-29T18:44:03.792435Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"ha-344156","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.225:2380"],"advertise-client-urls":["https://192.168.39.225:2379"]}
	
	
	==> kernel <==
	 18:50:57 up 17 min,  0 users,  load average: 0.04, 0.22, 0.23
	Linux ha-344156 5.10.207 #1 SMP Tue Jul 23 04:25:44 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [88c61cb99966582064c98436dabbb6247148296145067505f732961e9dafcf62] <==
	I0729 18:43:39.798953       1 main.go:295] Handling node with IPs: map[192.168.39.225:{}]
	I0729 18:43:39.799133       1 main.go:299] handling current node
	I0729 18:43:39.799235       1 main.go:295] Handling node with IPs: map[192.168.39.249:{}]
	I0729 18:43:39.799268       1 main.go:322] Node ha-344156-m02 has CIDR [10.244.1.0/24] 
	I0729 18:43:39.799558       1 main.go:295] Handling node with IPs: map[192.168.39.148:{}]
	I0729 18:43:39.799634       1 main.go:322] Node ha-344156-m03 has CIDR [10.244.2.0/24] 
	I0729 18:43:39.799739       1 main.go:295] Handling node with IPs: map[192.168.39.9:{}]
	I0729 18:43:39.799761       1 main.go:322] Node ha-344156-m04 has CIDR [10.244.3.0/24] 
	I0729 18:43:49.798944       1 main.go:295] Handling node with IPs: map[192.168.39.148:{}]
	I0729 18:43:49.799004       1 main.go:322] Node ha-344156-m03 has CIDR [10.244.2.0/24] 
	I0729 18:43:49.799200       1 main.go:295] Handling node with IPs: map[192.168.39.9:{}]
	I0729 18:43:49.799227       1 main.go:322] Node ha-344156-m04 has CIDR [10.244.3.0/24] 
	I0729 18:43:49.799345       1 main.go:295] Handling node with IPs: map[192.168.39.225:{}]
	I0729 18:43:49.799353       1 main.go:299] handling current node
	I0729 18:43:49.799367       1 main.go:295] Handling node with IPs: map[192.168.39.249:{}]
	I0729 18:43:49.799371       1 main.go:322] Node ha-344156-m02 has CIDR [10.244.1.0/24] 
	I0729 18:43:59.798908       1 main.go:295] Handling node with IPs: map[192.168.39.249:{}]
	I0729 18:43:59.799081       1 main.go:322] Node ha-344156-m02 has CIDR [10.244.1.0/24] 
	I0729 18:43:59.799248       1 main.go:295] Handling node with IPs: map[192.168.39.148:{}]
	I0729 18:43:59.799271       1 main.go:322] Node ha-344156-m03 has CIDR [10.244.2.0/24] 
	I0729 18:43:59.799452       1 main.go:295] Handling node with IPs: map[192.168.39.9:{}]
	I0729 18:43:59.799474       1 main.go:322] Node ha-344156-m04 has CIDR [10.244.3.0/24] 
	I0729 18:43:59.799535       1 main.go:295] Handling node with IPs: map[192.168.39.225:{}]
	I0729 18:43:59.799554       1 main.go:299] handling current node
	E0729 18:44:01.728415       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Node: the server has asked for the client to provide credentials (get nodes) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=7, ErrCode=NO_ERROR, debug=""
	
	
	==> kindnet [ab40f6e9b301a395b5cd5e94d8503edf8e224c2587be4fd2daf98a89374a7e9e] <==
	I0729 18:50:12.588875       1 main.go:322] Node ha-344156-m04 has CIDR [10.244.3.0/24] 
	I0729 18:50:22.581835       1 main.go:295] Handling node with IPs: map[192.168.39.225:{}]
	I0729 18:50:22.581884       1 main.go:299] handling current node
	I0729 18:50:22.581900       1 main.go:295] Handling node with IPs: map[192.168.39.249:{}]
	I0729 18:50:22.581905       1 main.go:322] Node ha-344156-m02 has CIDR [10.244.1.0/24] 
	I0729 18:50:22.582043       1 main.go:295] Handling node with IPs: map[192.168.39.9:{}]
	I0729 18:50:22.582067       1 main.go:322] Node ha-344156-m04 has CIDR [10.244.3.0/24] 
	I0729 18:50:32.582900       1 main.go:295] Handling node with IPs: map[192.168.39.225:{}]
	I0729 18:50:32.582962       1 main.go:299] handling current node
	I0729 18:50:32.582981       1 main.go:295] Handling node with IPs: map[192.168.39.249:{}]
	I0729 18:50:32.582987       1 main.go:322] Node ha-344156-m02 has CIDR [10.244.1.0/24] 
	I0729 18:50:32.583162       1 main.go:295] Handling node with IPs: map[192.168.39.9:{}]
	I0729 18:50:32.583186       1 main.go:322] Node ha-344156-m04 has CIDR [10.244.3.0/24] 
	I0729 18:50:42.587408       1 main.go:295] Handling node with IPs: map[192.168.39.249:{}]
	I0729 18:50:42.587506       1 main.go:322] Node ha-344156-m02 has CIDR [10.244.1.0/24] 
	I0729 18:50:42.587657       1 main.go:295] Handling node with IPs: map[192.168.39.9:{}]
	I0729 18:50:42.587697       1 main.go:322] Node ha-344156-m04 has CIDR [10.244.3.0/24] 
	I0729 18:50:42.587831       1 main.go:295] Handling node with IPs: map[192.168.39.225:{}]
	I0729 18:50:42.587876       1 main.go:299] handling current node
	I0729 18:50:52.579476       1 main.go:295] Handling node with IPs: map[192.168.39.9:{}]
	I0729 18:50:52.579525       1 main.go:322] Node ha-344156-m04 has CIDR [10.244.3.0/24] 
	I0729 18:50:52.579685       1 main.go:295] Handling node with IPs: map[192.168.39.225:{}]
	I0729 18:50:52.579694       1 main.go:299] handling current node
	I0729 18:50:52.579704       1 main.go:295] Handling node with IPs: map[192.168.39.249:{}]
	I0729 18:50:52.579708       1 main.go:322] Node ha-344156-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [249837d4bf0487b8ddac24a5d86c9a901eb6e862bf649d5aded365f82343bb0b] <==
	I0729 18:46:30.910893       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0729 18:46:30.910902       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0729 18:46:30.911738       1 controller.go:80] Starting OpenAPI V3 AggregationController
	I0729 18:46:31.008751       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0729 18:46:31.008792       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0729 18:46:31.008797       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0729 18:46:31.010430       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0729 18:46:31.016677       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	W0729 18:46:31.020502       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.148 192.168.39.249]
	I0729 18:46:31.025738       1 shared_informer.go:320] Caches are synced for configmaps
	I0729 18:46:31.033679       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0729 18:46:31.033721       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0729 18:46:31.033736       1 aggregator.go:165] initial CRD sync complete...
	I0729 18:46:31.033752       1 autoregister_controller.go:141] Starting autoregister controller
	I0729 18:46:31.033775       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0729 18:46:31.033780       1 cache.go:39] Caches are synced for autoregister controller
	I0729 18:46:31.039499       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0729 18:46:31.042789       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0729 18:46:31.042824       1 policy_source.go:224] refreshing policies
	I0729 18:46:31.081128       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0729 18:46:31.121850       1 controller.go:615] quota admission added evaluator for: endpoints
	I0729 18:46:31.129054       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0729 18:46:31.131990       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0729 18:46:31.913823       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0729 18:46:32.248614       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.148 192.168.39.225 192.168.39.249]
	
	
	==> kube-apiserver [c373f53a9dbd411fe323c9a8fb32f348b83f82f89bc8fb682d325a34826437b5] <==
	I0729 18:45:51.731876       1 options.go:221] external host was not specified, using 192.168.39.225
	I0729 18:45:51.735046       1 server.go:148] Version: v1.30.3
	I0729 18:45:51.735738       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 18:45:52.362436       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0729 18:45:52.363034       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0729 18:45:52.380160       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0729 18:45:52.383497       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0729 18:45:52.383851       1 instance.go:299] Using reconciler: lease
	W0729 18:46:12.357533       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	W0729 18:46:12.357268       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	F0729 18:46:12.385469       1 instance.go:292] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-controller-manager [caa8059749c41b7868cf8f0b61f0356539508f2667cbe7bbfae679c18cd89268] <==
	I0729 18:45:52.681242       1 serving.go:380] Generated self-signed cert in-memory
	I0729 18:45:52.956462       1 controllermanager.go:189] "Starting" version="v1.30.3"
	I0729 18:45:52.956499       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 18:45:52.960805       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0729 18:45:52.961852       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0729 18:45:52.961897       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0729 18:45:52.961924       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	E0729 18:46:13.391682       1 controllermanager.go:234] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.225:8443/healthz\": dial tcp 192.168.39.225:8443: connect: connection refused"
	
	
	==> kube-controller-manager [f5c331c2db87d36569c1e2c3745280ae59411f376d0c6496945bdb87ec2513de] <==
	I0729 18:48:20.070143       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="51.171554ms"
	I0729 18:48:20.088016       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="17.742117ms"
	I0729 18:48:20.088246       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="88.797µs"
	I0729 18:48:20.183904       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="72.47951ms"
	I0729 18:48:20.184112       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="68.931µs"
	I0729 18:48:20.192545       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="41.583µs"
	I0729 18:48:21.729660       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="10.152676ms"
	I0729 18:48:21.729842       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="58.991µs"
	I0729 18:48:22.001127       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="69.59µs"
	I0729 18:48:22.497446       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="191.336µs"
	I0729 18:48:22.523042       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="57.728µs"
	I0729 18:48:22.530336       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="87.27µs"
	I0729 18:48:34.658593       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-344156-m04"
	E0729 18:48:49.035055       1 gc_controller.go:153] "Failed to get node" err="node \"ha-344156-m03\" not found" logger="pod-garbage-collector-controller" node="ha-344156-m03"
	E0729 18:48:49.035163       1 gc_controller.go:153] "Failed to get node" err="node \"ha-344156-m03\" not found" logger="pod-garbage-collector-controller" node="ha-344156-m03"
	E0729 18:48:49.035189       1 gc_controller.go:153] "Failed to get node" err="node \"ha-344156-m03\" not found" logger="pod-garbage-collector-controller" node="ha-344156-m03"
	E0729 18:48:49.035213       1 gc_controller.go:153] "Failed to get node" err="node \"ha-344156-m03\" not found" logger="pod-garbage-collector-controller" node="ha-344156-m03"
	E0729 18:48:49.035238       1 gc_controller.go:153] "Failed to get node" err="node \"ha-344156-m03\" not found" logger="pod-garbage-collector-controller" node="ha-344156-m03"
	E0729 18:49:09.035945       1 gc_controller.go:153] "Failed to get node" err="node \"ha-344156-m03\" not found" logger="pod-garbage-collector-controller" node="ha-344156-m03"
	E0729 18:49:09.035986       1 gc_controller.go:153] "Failed to get node" err="node \"ha-344156-m03\" not found" logger="pod-garbage-collector-controller" node="ha-344156-m03"
	E0729 18:49:09.035995       1 gc_controller.go:153] "Failed to get node" err="node \"ha-344156-m03\" not found" logger="pod-garbage-collector-controller" node="ha-344156-m03"
	E0729 18:49:09.036000       1 gc_controller.go:153] "Failed to get node" err="node \"ha-344156-m03\" not found" logger="pod-garbage-collector-controller" node="ha-344156-m03"
	E0729 18:49:09.036005       1 gc_controller.go:153] "Failed to get node" err="node \"ha-344156-m03\" not found" logger="pod-garbage-collector-controller" node="ha-344156-m03"
	I0729 18:49:14.108853       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="15.3795ms"
	I0729 18:49:14.109160       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="99.712µs"
	
	
	==> kube-proxy [81184078df7bea819e58580fd80c6cffb76960208cfdcf77e820b9597e999ba0] <==
	I0729 18:45:53.018265       1 server_linux.go:69] "Using iptables proxy"
	E0729 18:45:55.545867       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-344156\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0729 18:45:58.618619       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-344156\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0729 18:46:01.689888       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-344156\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0729 18:46:07.834731       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-344156\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0729 18:46:17.050197       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-344156\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0729 18:46:34.273438       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.225"]
	I0729 18:46:34.310321       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0729 18:46:34.310417       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0729 18:46:34.310455       1 server_linux.go:165] "Using iptables Proxier"
	I0729 18:46:34.313224       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0729 18:46:34.313688       1 server.go:872] "Version info" version="v1.30.3"
	I0729 18:46:34.313770       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 18:46:34.315360       1 config.go:192] "Starting service config controller"
	I0729 18:46:34.315422       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0729 18:46:34.315463       1 config.go:101] "Starting endpoint slice config controller"
	I0729 18:46:34.315479       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0729 18:46:34.316227       1 config.go:319] "Starting node config controller"
	I0729 18:46:34.317231       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0729 18:46:34.416233       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0729 18:46:34.416341       1 shared_informer.go:320] Caches are synced for service config
	I0729 18:46:34.417654       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [ea6501e2c6d48c68182f6d966404f0d58013e7ee6b2d05e6e8a8de079a01e50b] <==
	E0729 18:42:38.233826       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-344156&resourceVersion=2016": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 18:42:38.233735       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2047": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 18:42:38.233929       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2047": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 18:42:44.761725       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2047": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 18:42:44.761802       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2047": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 18:42:44.761866       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2023": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 18:42:44.761900       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2023": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 18:42:44.761711       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-344156&resourceVersion=2016": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 18:42:44.762010       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-344156&resourceVersion=2016": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 18:42:54.297726       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2023": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 18:42:54.297861       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2023": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 18:42:54.297964       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2047": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 18:42:54.298072       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2047": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 18:42:57.370400       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-344156&resourceVersion=2016": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 18:42:57.370505       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-344156&resourceVersion=2016": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 18:43:18.875032       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2023": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 18:43:18.875207       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2023": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 18:43:18.875032       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2047": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 18:43:18.875369       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2047": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 18:43:21.946583       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-344156&resourceVersion=2016": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 18:43:21.947480       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-344156&resourceVersion=2016": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 18:43:52.667403       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2047": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 18:43:52.667666       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2047": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 18:43:58.811328       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2023": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 18:43:58.811601       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=2023": dial tcp 192.168.39.254:8443: connect: no route to host
	
	
	==> kube-scheduler [2297dbc5667b853e3a48d404f4b17f021af9cf0011a39175e36cf998b6fb2dcf] <==
	W0729 18:46:22.332729       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.168.39.225:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.225:8443: connect: connection refused
	E0729 18:46:22.332842       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://192.168.39.225:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.225:8443: connect: connection refused
	W0729 18:46:22.888508       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.168.39.225:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.225:8443: connect: connection refused
	E0729 18:46:22.888648       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192.168.39.225:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.225:8443: connect: connection refused
	W0729 18:46:23.070231       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.225:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.225:8443: connect: connection refused
	E0729 18:46:23.070348       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.225:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.225:8443: connect: connection refused
	W0729 18:46:23.248131       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.168.39.225:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.225:8443: connect: connection refused
	E0729 18:46:23.248219       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://192.168.39.225:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.225:8443: connect: connection refused
	W0729 18:46:23.293085       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.225:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.225:8443: connect: connection refused
	E0729 18:46:23.293146       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.225:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.225:8443: connect: connection refused
	W0729 18:46:27.924491       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: Get "https://192.168.39.225:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.225:8443: connect: connection refused
	E0729 18:46:27.924635       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.39.225:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.225:8443: connect: connection refused
	W0729 18:46:28.477860       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: Get "https://192.168.39.225:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.225:8443: connect: connection refused
	E0729 18:46:28.477955       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.39.225:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.225:8443: connect: connection refused
	W0729 18:46:29.270641       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.168.39.225:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.225:8443: connect: connection refused
	E0729 18:46:29.270695       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://192.168.39.225:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.225:8443: connect: connection refused
	W0729 18:46:30.956603       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0729 18:46:30.956669       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0729 18:46:30.956758       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0729 18:46:30.956793       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0729 18:46:33.999626       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0729 18:48:19.908212       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-hwshw\": pod busybox-fc5497c4f-hwshw is already assigned to node \"ha-344156-m04\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-hwshw" node="ha-344156-m04"
	E0729 18:48:19.908654       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 5cab61d2-586b-4972-ba43-ac37fa5c2bca(default/busybox-fc5497c4f-hwshw) wasn't assumed so cannot be forgotten" pod="default/busybox-fc5497c4f-hwshw"
	E0729 18:48:19.908742       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-hwshw\": pod busybox-fc5497c4f-hwshw is already assigned to node \"ha-344156-m04\"" pod="default/busybox-fc5497c4f-hwshw"
	I0729 18:48:19.908790       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-hwshw" node="ha-344156-m04"
	
	
	==> kube-scheduler [cea7dd8ee7d180192a5a6562a72a56f86a9a432553225602839d9657f42f95a4] <==
	W0729 18:43:56.982199       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0729 18:43:56.982354       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0729 18:43:57.221472       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0729 18:43:57.221531       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0729 18:43:57.908653       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0729 18:43:57.908738       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0729 18:43:57.917575       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0729 18:43:57.917660       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0729 18:43:58.012160       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0729 18:43:58.012354       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0729 18:43:58.128650       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0729 18:43:58.128736       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0729 18:43:58.261255       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0729 18:43:58.261360       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0729 18:43:58.678215       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0729 18:43:58.678382       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0729 18:43:58.683554       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0729 18:43:58.683666       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0729 18:43:58.738508       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0729 18:43:58.738639       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0729 18:43:59.206732       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0729 18:43:59.206870       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0729 18:44:03.653475       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0729 18:44:03.653525       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0729 18:44:03.705192       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Jul 29 18:46:42 ha-344156 kubelet[1368]: E0729 18:46:42.101862    1368 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(3ea00f25-122f-4a18-9d69-3606cfddf4d9)\"" pod="kube-system/storage-provisioner" podUID="3ea00f25-122f-4a18-9d69-3606cfddf4d9"
	Jul 29 18:46:54 ha-344156 kubelet[1368]: I0729 18:46:54.101119    1368 scope.go:117] "RemoveContainer" containerID="4260fb67ddc41983a522e2691ad7642fca868ad3425cfe9b4ae67e7a346c8e91"
	Jul 29 18:47:23 ha-344156 kubelet[1368]: E0729 18:47:23.117784    1368 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 18:47:23 ha-344156 kubelet[1368]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 18:47:23 ha-344156 kubelet[1368]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 18:47:23 ha-344156 kubelet[1368]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 18:47:23 ha-344156 kubelet[1368]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 18:47:35 ha-344156 kubelet[1368]: I0729 18:47:35.102036    1368 kubelet.go:1917] "Trying to delete pod" pod="kube-system/kube-vip-ha-344156" podUID="586052c5-c670-4957-b052-e2a7bf8bafb2"
	Jul 29 18:47:35 ha-344156 kubelet[1368]: I0729 18:47:35.134926    1368 kubelet.go:1922] "Deleted mirror pod because it is outdated" pod="kube-system/kube-vip-ha-344156"
	Jul 29 18:47:36 ha-344156 kubelet[1368]: I0729 18:47:36.030503    1368 kubelet.go:1917] "Trying to delete pod" pod="kube-system/kube-vip-ha-344156" podUID="586052c5-c670-4957-b052-e2a7bf8bafb2"
	Jul 29 18:48:23 ha-344156 kubelet[1368]: E0729 18:48:23.120382    1368 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 18:48:23 ha-344156 kubelet[1368]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 18:48:23 ha-344156 kubelet[1368]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 18:48:23 ha-344156 kubelet[1368]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 18:48:23 ha-344156 kubelet[1368]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 18:49:23 ha-344156 kubelet[1368]: E0729 18:49:23.118448    1368 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 18:49:23 ha-344156 kubelet[1368]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 18:49:23 ha-344156 kubelet[1368]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 18:49:23 ha-344156 kubelet[1368]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 18:49:23 ha-344156 kubelet[1368]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 18:50:23 ha-344156 kubelet[1368]: E0729 18:50:23.118858    1368 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 18:50:23 ha-344156 kubelet[1368]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 18:50:23 ha-344156 kubelet[1368]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 18:50:23 ha-344156 kubelet[1368]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 18:50:23 ha-344156 kubelet[1368]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0729 18:50:56.438901 1081875 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19312-1055011/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-344156 -n ha-344156
helpers_test.go:261: (dbg) Run:  kubectl --context ha-344156 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopCluster FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopCluster (141.67s)
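
Note on the kubelet log above: the repeated "Could not set up iptables canary" errors occur because the ip6tables `nat' table cannot be initialized inside the ha-344156 guest. The following is a minimal diagnostic sketch (not part of the captured test output), assuming the ha-344156 profile from this run and that the guest kernel actually ships the ip6table_nat module; it reuses the same `minikube ... ssh` pattern seen elsewhere in this report:

    out/minikube-linux-amd64 -p ha-344156 ssh "sudo ip6tables -t nat -L -n"   # reproduces the "Table does not exist" error if the module is missing
    out/minikube-linux-amd64 -p ha-344156 ssh "sudo modprobe ip6table_nat"    # attempt to load the IPv6 nat table module, if shipped in the guest kernel
    out/minikube-linux-amd64 -p ha-344156 ssh "sudo ip6tables -t nat -L -n"   # should now list empty nat chains instead of erroring

The canary failures only indicate that IPv6 NAT rules cannot be programmed in this guest image; they are most likely unrelated to the StopCluster timeout itself.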

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (328.47s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-370772
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-370772
E0729 19:08:00.968547 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/functional-728029/client.crt: no such file or directory
multinode_test.go:321: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p multinode-370772: exit status 82 (2m1.872766548s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-370772-m03"  ...
	* Stopping node "multinode-370772-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:323: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p multinode-370772" : exit status 82
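
The stop itself timed out with GUEST_STOP_TIMEOUT while at least one VM was still reported as "Running". A minimal sketch (not part of the captured test output), assuming the multinode-370772 profile used in this test, of how one might rerun the stop with verbose logging and collect the log file the advice box above asks for:

    out/minikube-linux-amd64 stop -p multinode-370772 --alsologtostderr   # rerun the stop with full klog output on stderr
    out/minikube-linux-amd64 -p multinode-370772 logs --file=logs.txt     # write logs.txt to attach to a GitHub issue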
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-370772 --wait=true -v=8 --alsologtostderr
E0729 19:10:34.134975 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/addons-685520/client.crt: no such file or directory
E0729 19:11:04.012250 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/functional-728029/client.crt: no such file or directory
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-370772 --wait=true -v=8 --alsologtostderr: (3m24.422781031s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-370772
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-370772 -n multinode-370772
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-370772 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-370772 logs -n 25: (1.413848716s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-370772 ssh -n                                                                 | multinode-370772 | jenkins | v1.33.1 | 29 Jul 24 19:05 UTC | 29 Jul 24 19:05 UTC |
	|         | multinode-370772-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-370772 cp multinode-370772-m02:/home/docker/cp-test.txt                       | multinode-370772 | jenkins | v1.33.1 | 29 Jul 24 19:05 UTC | 29 Jul 24 19:05 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile623438728/001/cp-test_multinode-370772-m02.txt          |                  |         |         |                     |                     |
	| ssh     | multinode-370772 ssh -n                                                                 | multinode-370772 | jenkins | v1.33.1 | 29 Jul 24 19:05 UTC | 29 Jul 24 19:05 UTC |
	|         | multinode-370772-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-370772 cp multinode-370772-m02:/home/docker/cp-test.txt                       | multinode-370772 | jenkins | v1.33.1 | 29 Jul 24 19:05 UTC | 29 Jul 24 19:05 UTC |
	|         | multinode-370772:/home/docker/cp-test_multinode-370772-m02_multinode-370772.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-370772 ssh -n                                                                 | multinode-370772 | jenkins | v1.33.1 | 29 Jul 24 19:05 UTC | 29 Jul 24 19:05 UTC |
	|         | multinode-370772-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-370772 ssh -n multinode-370772 sudo cat                                       | multinode-370772 | jenkins | v1.33.1 | 29 Jul 24 19:05 UTC | 29 Jul 24 19:05 UTC |
	|         | /home/docker/cp-test_multinode-370772-m02_multinode-370772.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-370772 cp multinode-370772-m02:/home/docker/cp-test.txt                       | multinode-370772 | jenkins | v1.33.1 | 29 Jul 24 19:05 UTC | 29 Jul 24 19:05 UTC |
	|         | multinode-370772-m03:/home/docker/cp-test_multinode-370772-m02_multinode-370772-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-370772 ssh -n                                                                 | multinode-370772 | jenkins | v1.33.1 | 29 Jul 24 19:05 UTC | 29 Jul 24 19:05 UTC |
	|         | multinode-370772-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-370772 ssh -n multinode-370772-m03 sudo cat                                   | multinode-370772 | jenkins | v1.33.1 | 29 Jul 24 19:05 UTC | 29 Jul 24 19:05 UTC |
	|         | /home/docker/cp-test_multinode-370772-m02_multinode-370772-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-370772 cp testdata/cp-test.txt                                                | multinode-370772 | jenkins | v1.33.1 | 29 Jul 24 19:05 UTC | 29 Jul 24 19:05 UTC |
	|         | multinode-370772-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-370772 ssh -n                                                                 | multinode-370772 | jenkins | v1.33.1 | 29 Jul 24 19:05 UTC | 29 Jul 24 19:05 UTC |
	|         | multinode-370772-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-370772 cp multinode-370772-m03:/home/docker/cp-test.txt                       | multinode-370772 | jenkins | v1.33.1 | 29 Jul 24 19:05 UTC | 29 Jul 24 19:05 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile623438728/001/cp-test_multinode-370772-m03.txt          |                  |         |         |                     |                     |
	| ssh     | multinode-370772 ssh -n                                                                 | multinode-370772 | jenkins | v1.33.1 | 29 Jul 24 19:05 UTC | 29 Jul 24 19:05 UTC |
	|         | multinode-370772-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-370772 cp multinode-370772-m03:/home/docker/cp-test.txt                       | multinode-370772 | jenkins | v1.33.1 | 29 Jul 24 19:05 UTC | 29 Jul 24 19:05 UTC |
	|         | multinode-370772:/home/docker/cp-test_multinode-370772-m03_multinode-370772.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-370772 ssh -n                                                                 | multinode-370772 | jenkins | v1.33.1 | 29 Jul 24 19:05 UTC | 29 Jul 24 19:05 UTC |
	|         | multinode-370772-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-370772 ssh -n multinode-370772 sudo cat                                       | multinode-370772 | jenkins | v1.33.1 | 29 Jul 24 19:05 UTC | 29 Jul 24 19:05 UTC |
	|         | /home/docker/cp-test_multinode-370772-m03_multinode-370772.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-370772 cp multinode-370772-m03:/home/docker/cp-test.txt                       | multinode-370772 | jenkins | v1.33.1 | 29 Jul 24 19:05 UTC | 29 Jul 24 19:05 UTC |
	|         | multinode-370772-m02:/home/docker/cp-test_multinode-370772-m03_multinode-370772-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-370772 ssh -n                                                                 | multinode-370772 | jenkins | v1.33.1 | 29 Jul 24 19:05 UTC | 29 Jul 24 19:05 UTC |
	|         | multinode-370772-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-370772 ssh -n multinode-370772-m02 sudo cat                                   | multinode-370772 | jenkins | v1.33.1 | 29 Jul 24 19:05 UTC | 29 Jul 24 19:05 UTC |
	|         | /home/docker/cp-test_multinode-370772-m03_multinode-370772-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-370772 node stop m03                                                          | multinode-370772 | jenkins | v1.33.1 | 29 Jul 24 19:05 UTC | 29 Jul 24 19:05 UTC |
	| node    | multinode-370772 node start                                                             | multinode-370772 | jenkins | v1.33.1 | 29 Jul 24 19:05 UTC | 29 Jul 24 19:06 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-370772                                                                | multinode-370772 | jenkins | v1.33.1 | 29 Jul 24 19:06 UTC |                     |
	| stop    | -p multinode-370772                                                                     | multinode-370772 | jenkins | v1.33.1 | 29 Jul 24 19:06 UTC |                     |
	| start   | -p multinode-370772                                                                     | multinode-370772 | jenkins | v1.33.1 | 29 Jul 24 19:08 UTC | 29 Jul 24 19:11 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-370772                                                                | multinode-370772 | jenkins | v1.33.1 | 29 Jul 24 19:11 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 19:08:30
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 19:08:30.241287 1091282 out.go:291] Setting OutFile to fd 1 ...
	I0729 19:08:30.241382 1091282 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 19:08:30.241389 1091282 out.go:304] Setting ErrFile to fd 2...
	I0729 19:08:30.241393 1091282 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 19:08:30.241591 1091282 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19312-1055011/.minikube/bin
	I0729 19:08:30.242109 1091282 out.go:298] Setting JSON to false
	I0729 19:08:30.243038 1091282 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":10262,"bootTime":1722269848,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 19:08:30.243095 1091282 start.go:139] virtualization: kvm guest
	I0729 19:08:30.245216 1091282 out.go:177] * [multinode-370772] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 19:08:30.246585 1091282 out.go:177]   - MINIKUBE_LOCATION=19312
	I0729 19:08:30.246648 1091282 notify.go:220] Checking for updates...
	I0729 19:08:30.248713 1091282 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 19:08:30.249743 1091282 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19312-1055011/kubeconfig
	I0729 19:08:30.250709 1091282 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19312-1055011/.minikube
	I0729 19:08:30.251700 1091282 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 19:08:30.252668 1091282 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 19:08:30.254065 1091282 config.go:182] Loaded profile config "multinode-370772": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 19:08:30.254161 1091282 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 19:08:30.254539 1091282 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:08:30.254594 1091282 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:08:30.269788 1091282 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37185
	I0729 19:08:30.270299 1091282 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:08:30.270974 1091282 main.go:141] libmachine: Using API Version  1
	I0729 19:08:30.271001 1091282 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:08:30.271376 1091282 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:08:30.271583 1091282 main.go:141] libmachine: (multinode-370772) Calling .DriverName
	I0729 19:08:30.306421 1091282 out.go:177] * Using the kvm2 driver based on existing profile
	I0729 19:08:30.307527 1091282 start.go:297] selected driver: kvm2
	I0729 19:08:30.307539 1091282 start.go:901] validating driver "kvm2" against &{Name:multinode-370772 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubern
etesVersion:v1.30.3 ClusterName:multinode-370772 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.180 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.127 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.8 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingre
ss-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMir
ror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 19:08:30.307683 1091282 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 19:08:30.308027 1091282 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 19:08:30.308096 1091282 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19312-1055011/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 19:08:30.322662 1091282 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0729 19:08:30.323423 1091282 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 19:08:30.323457 1091282 cni.go:84] Creating CNI manager for ""
	I0729 19:08:30.323463 1091282 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0729 19:08:30.323517 1091282 start.go:340] cluster config:
	{Name:multinode-370772 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-370772 Namespace:default APIServerHA
VIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.180 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.127 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.8 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false ko
ng:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePat
h: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 19:08:30.323639 1091282 iso.go:125] acquiring lock: {Name:mk0af61c0fec1fd47930e548d03010a532c687b7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 19:08:30.325025 1091282 out.go:177] * Starting "multinode-370772" primary control-plane node in "multinode-370772" cluster
	I0729 19:08:30.325909 1091282 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 19:08:30.325940 1091282 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0729 19:08:30.325949 1091282 cache.go:56] Caching tarball of preloaded images
	I0729 19:08:30.326025 1091282 preload.go:172] Found /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 19:08:30.326044 1091282 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0729 19:08:30.326155 1091282 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/multinode-370772/config.json ...
	I0729 19:08:30.326327 1091282 start.go:360] acquireMachinesLock for multinode-370772: {Name:mk0d8d947666df844b5fc2c0e0eebbfed69b4140 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 19:08:30.326364 1091282 start.go:364] duration metric: took 22.127µs to acquireMachinesLock for "multinode-370772"
	I0729 19:08:30.326378 1091282 start.go:96] Skipping create...Using existing machine configuration
	I0729 19:08:30.326386 1091282 fix.go:54] fixHost starting: 
	I0729 19:08:30.326641 1091282 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:08:30.326671 1091282 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:08:30.340392 1091282 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39453
	I0729 19:08:30.340809 1091282 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:08:30.341252 1091282 main.go:141] libmachine: Using API Version  1
	I0729 19:08:30.341272 1091282 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:08:30.341546 1091282 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:08:30.341688 1091282 main.go:141] libmachine: (multinode-370772) Calling .DriverName
	I0729 19:08:30.341816 1091282 main.go:141] libmachine: (multinode-370772) Calling .GetState
	I0729 19:08:30.343221 1091282 fix.go:112] recreateIfNeeded on multinode-370772: state=Running err=<nil>
	W0729 19:08:30.343243 1091282 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 19:08:30.345667 1091282 out.go:177] * Updating the running kvm2 "multinode-370772" VM ...
	I0729 19:08:30.346894 1091282 machine.go:94] provisionDockerMachine start ...
	I0729 19:08:30.346914 1091282 main.go:141] libmachine: (multinode-370772) Calling .DriverName
	I0729 19:08:30.347133 1091282 main.go:141] libmachine: (multinode-370772) Calling .GetSSHHostname
	I0729 19:08:30.349540 1091282 main.go:141] libmachine: (multinode-370772) DBG | domain multinode-370772 has defined MAC address 52:54:00:0a:42:f8 in network mk-multinode-370772
	I0729 19:08:30.350038 1091282 main.go:141] libmachine: (multinode-370772) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:42:f8", ip: ""} in network mk-multinode-370772: {Iface:virbr1 ExpiryTime:2024-07-29 20:03:03 +0000 UTC Type:0 Mac:52:54:00:0a:42:f8 Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:multinode-370772 Clientid:01:52:54:00:0a:42:f8}
	I0729 19:08:30.350064 1091282 main.go:141] libmachine: (multinode-370772) DBG | domain multinode-370772 has defined IP address 192.168.39.180 and MAC address 52:54:00:0a:42:f8 in network mk-multinode-370772
	I0729 19:08:30.350185 1091282 main.go:141] libmachine: (multinode-370772) Calling .GetSSHPort
	I0729 19:08:30.350357 1091282 main.go:141] libmachine: (multinode-370772) Calling .GetSSHKeyPath
	I0729 19:08:30.350498 1091282 main.go:141] libmachine: (multinode-370772) Calling .GetSSHKeyPath
	I0729 19:08:30.350658 1091282 main.go:141] libmachine: (multinode-370772) Calling .GetSSHUsername
	I0729 19:08:30.350833 1091282 main.go:141] libmachine: Using SSH client type: native
	I0729 19:08:30.351094 1091282 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.180 22 <nil> <nil>}
	I0729 19:08:30.351111 1091282 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 19:08:30.463508 1091282 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-370772
	
	I0729 19:08:30.463533 1091282 main.go:141] libmachine: (multinode-370772) Calling .GetMachineName
	I0729 19:08:30.463777 1091282 buildroot.go:166] provisioning hostname "multinode-370772"
	I0729 19:08:30.463807 1091282 main.go:141] libmachine: (multinode-370772) Calling .GetMachineName
	I0729 19:08:30.463969 1091282 main.go:141] libmachine: (multinode-370772) Calling .GetSSHHostname
	I0729 19:08:30.466880 1091282 main.go:141] libmachine: (multinode-370772) DBG | domain multinode-370772 has defined MAC address 52:54:00:0a:42:f8 in network mk-multinode-370772
	I0729 19:08:30.467521 1091282 main.go:141] libmachine: (multinode-370772) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:42:f8", ip: ""} in network mk-multinode-370772: {Iface:virbr1 ExpiryTime:2024-07-29 20:03:03 +0000 UTC Type:0 Mac:52:54:00:0a:42:f8 Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:multinode-370772 Clientid:01:52:54:00:0a:42:f8}
	I0729 19:08:30.467545 1091282 main.go:141] libmachine: (multinode-370772) DBG | domain multinode-370772 has defined IP address 192.168.39.180 and MAC address 52:54:00:0a:42:f8 in network mk-multinode-370772
	I0729 19:08:30.467706 1091282 main.go:141] libmachine: (multinode-370772) Calling .GetSSHPort
	I0729 19:08:30.467895 1091282 main.go:141] libmachine: (multinode-370772) Calling .GetSSHKeyPath
	I0729 19:08:30.468050 1091282 main.go:141] libmachine: (multinode-370772) Calling .GetSSHKeyPath
	I0729 19:08:30.468190 1091282 main.go:141] libmachine: (multinode-370772) Calling .GetSSHUsername
	I0729 19:08:30.468313 1091282 main.go:141] libmachine: Using SSH client type: native
	I0729 19:08:30.468473 1091282 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.180 22 <nil> <nil>}
	I0729 19:08:30.468485 1091282 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-370772 && echo "multinode-370772" | sudo tee /etc/hostname
	I0729 19:08:30.593960 1091282 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-370772
	
	I0729 19:08:30.593985 1091282 main.go:141] libmachine: (multinode-370772) Calling .GetSSHHostname
	I0729 19:08:30.596639 1091282 main.go:141] libmachine: (multinode-370772) DBG | domain multinode-370772 has defined MAC address 52:54:00:0a:42:f8 in network mk-multinode-370772
	I0729 19:08:30.596956 1091282 main.go:141] libmachine: (multinode-370772) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:42:f8", ip: ""} in network mk-multinode-370772: {Iface:virbr1 ExpiryTime:2024-07-29 20:03:03 +0000 UTC Type:0 Mac:52:54:00:0a:42:f8 Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:multinode-370772 Clientid:01:52:54:00:0a:42:f8}
	I0729 19:08:30.597000 1091282 main.go:141] libmachine: (multinode-370772) DBG | domain multinode-370772 has defined IP address 192.168.39.180 and MAC address 52:54:00:0a:42:f8 in network mk-multinode-370772
	I0729 19:08:30.597182 1091282 main.go:141] libmachine: (multinode-370772) Calling .GetSSHPort
	I0729 19:08:30.597387 1091282 main.go:141] libmachine: (multinode-370772) Calling .GetSSHKeyPath
	I0729 19:08:30.597617 1091282 main.go:141] libmachine: (multinode-370772) Calling .GetSSHKeyPath
	I0729 19:08:30.597779 1091282 main.go:141] libmachine: (multinode-370772) Calling .GetSSHUsername
	I0729 19:08:30.597958 1091282 main.go:141] libmachine: Using SSH client type: native
	I0729 19:08:30.598156 1091282 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.180 22 <nil> <nil>}
	I0729 19:08:30.598179 1091282 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-370772' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-370772/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-370772' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 19:08:30.711614 1091282 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 19:08:30.711650 1091282 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19312-1055011/.minikube CaCertPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19312-1055011/.minikube}
	I0729 19:08:30.711709 1091282 buildroot.go:174] setting up certificates
	I0729 19:08:30.711739 1091282 provision.go:84] configureAuth start
	I0729 19:08:30.711759 1091282 main.go:141] libmachine: (multinode-370772) Calling .GetMachineName
	I0729 19:08:30.712039 1091282 main.go:141] libmachine: (multinode-370772) Calling .GetIP
	I0729 19:08:30.714542 1091282 main.go:141] libmachine: (multinode-370772) DBG | domain multinode-370772 has defined MAC address 52:54:00:0a:42:f8 in network mk-multinode-370772
	I0729 19:08:30.714972 1091282 main.go:141] libmachine: (multinode-370772) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:42:f8", ip: ""} in network mk-multinode-370772: {Iface:virbr1 ExpiryTime:2024-07-29 20:03:03 +0000 UTC Type:0 Mac:52:54:00:0a:42:f8 Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:multinode-370772 Clientid:01:52:54:00:0a:42:f8}
	I0729 19:08:30.714997 1091282 main.go:141] libmachine: (multinode-370772) DBG | domain multinode-370772 has defined IP address 192.168.39.180 and MAC address 52:54:00:0a:42:f8 in network mk-multinode-370772
	I0729 19:08:30.715161 1091282 main.go:141] libmachine: (multinode-370772) Calling .GetSSHHostname
	I0729 19:08:30.717196 1091282 main.go:141] libmachine: (multinode-370772) DBG | domain multinode-370772 has defined MAC address 52:54:00:0a:42:f8 in network mk-multinode-370772
	I0729 19:08:30.717509 1091282 main.go:141] libmachine: (multinode-370772) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:42:f8", ip: ""} in network mk-multinode-370772: {Iface:virbr1 ExpiryTime:2024-07-29 20:03:03 +0000 UTC Type:0 Mac:52:54:00:0a:42:f8 Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:multinode-370772 Clientid:01:52:54:00:0a:42:f8}
	I0729 19:08:30.717549 1091282 main.go:141] libmachine: (multinode-370772) DBG | domain multinode-370772 has defined IP address 192.168.39.180 and MAC address 52:54:00:0a:42:f8 in network mk-multinode-370772
	I0729 19:08:30.717655 1091282 provision.go:143] copyHostCerts
	I0729 19:08:30.717698 1091282 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19312-1055011/.minikube/cert.pem
	I0729 19:08:30.717745 1091282 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-1055011/.minikube/cert.pem, removing ...
	I0729 19:08:30.717762 1091282 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-1055011/.minikube/cert.pem
	I0729 19:08:30.717836 1091282 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19312-1055011/.minikube/cert.pem (1123 bytes)
	I0729 19:08:30.717947 1091282 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19312-1055011/.minikube/key.pem
	I0729 19:08:30.717972 1091282 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-1055011/.minikube/key.pem, removing ...
	I0729 19:08:30.717979 1091282 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-1055011/.minikube/key.pem
	I0729 19:08:30.718021 1091282 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19312-1055011/.minikube/key.pem (1679 bytes)
	I0729 19:08:30.718104 1091282 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.pem
	I0729 19:08:30.718127 1091282 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.pem, removing ...
	I0729 19:08:30.718134 1091282 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.pem
	I0729 19:08:30.718169 1091282 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.pem (1082 bytes)
	I0729 19:08:30.718249 1091282 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca-key.pem org=jenkins.multinode-370772 san=[127.0.0.1 192.168.39.180 localhost minikube multinode-370772]
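	(The server certificate generated here carries the SANs listed above: 127.0.0.1, 192.168.39.180, localhost, minikube, multinode-370772. A minimal way to confirm them after the fact, assuming openssl is available on the Jenkins host — an illustrative check, not part of this run:

	    openssl x509 -noout -text \
	      -in /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server.pem \
	      | grep -A1 'Subject Alternative Name'
	)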
	I0729 19:08:30.941171 1091282 provision.go:177] copyRemoteCerts
	I0729 19:08:30.941238 1091282 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 19:08:30.941269 1091282 main.go:141] libmachine: (multinode-370772) Calling .GetSSHHostname
	I0729 19:08:30.943844 1091282 main.go:141] libmachine: (multinode-370772) DBG | domain multinode-370772 has defined MAC address 52:54:00:0a:42:f8 in network mk-multinode-370772
	I0729 19:08:30.944166 1091282 main.go:141] libmachine: (multinode-370772) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:42:f8", ip: ""} in network mk-multinode-370772: {Iface:virbr1 ExpiryTime:2024-07-29 20:03:03 +0000 UTC Type:0 Mac:52:54:00:0a:42:f8 Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:multinode-370772 Clientid:01:52:54:00:0a:42:f8}
	I0729 19:08:30.944192 1091282 main.go:141] libmachine: (multinode-370772) DBG | domain multinode-370772 has defined IP address 192.168.39.180 and MAC address 52:54:00:0a:42:f8 in network mk-multinode-370772
	I0729 19:08:30.944388 1091282 main.go:141] libmachine: (multinode-370772) Calling .GetSSHPort
	I0729 19:08:30.944569 1091282 main.go:141] libmachine: (multinode-370772) Calling .GetSSHKeyPath
	I0729 19:08:30.944756 1091282 main.go:141] libmachine: (multinode-370772) Calling .GetSSHUsername
	I0729 19:08:30.944902 1091282 sshutil.go:53] new ssh client: &{IP:192.168.39.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/multinode-370772/id_rsa Username:docker}
	I0729 19:08:31.028288 1091282 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0729 19:08:31.028355 1091282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0729 19:08:31.053810 1091282 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0729 19:08:31.053864 1091282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0729 19:08:31.076555 1091282 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0729 19:08:31.076612 1091282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0729 19:08:31.099283 1091282 provision.go:87] duration metric: took 387.527287ms to configureAuth
	I0729 19:08:31.099315 1091282 buildroot.go:189] setting minikube options for container-runtime
	I0729 19:08:31.099541 1091282 config.go:182] Loaded profile config "multinode-370772": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 19:08:31.099614 1091282 main.go:141] libmachine: (multinode-370772) Calling .GetSSHHostname
	I0729 19:08:31.102119 1091282 main.go:141] libmachine: (multinode-370772) DBG | domain multinode-370772 has defined MAC address 52:54:00:0a:42:f8 in network mk-multinode-370772
	I0729 19:08:31.102490 1091282 main.go:141] libmachine: (multinode-370772) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:42:f8", ip: ""} in network mk-multinode-370772: {Iface:virbr1 ExpiryTime:2024-07-29 20:03:03 +0000 UTC Type:0 Mac:52:54:00:0a:42:f8 Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:multinode-370772 Clientid:01:52:54:00:0a:42:f8}
	I0729 19:08:31.102518 1091282 main.go:141] libmachine: (multinode-370772) DBG | domain multinode-370772 has defined IP address 192.168.39.180 and MAC address 52:54:00:0a:42:f8 in network mk-multinode-370772
	I0729 19:08:31.102667 1091282 main.go:141] libmachine: (multinode-370772) Calling .GetSSHPort
	I0729 19:08:31.102897 1091282 main.go:141] libmachine: (multinode-370772) Calling .GetSSHKeyPath
	I0729 19:08:31.103072 1091282 main.go:141] libmachine: (multinode-370772) Calling .GetSSHKeyPath
	I0729 19:08:31.103218 1091282 main.go:141] libmachine: (multinode-370772) Calling .GetSSHUsername
	I0729 19:08:31.103370 1091282 main.go:141] libmachine: Using SSH client type: native
	I0729 19:08:31.103531 1091282 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.180 22 <nil> <nil>}
	I0729 19:08:31.103544 1091282 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 19:10:01.843905 1091282 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 19:10:01.843952 1091282 machine.go:97] duration metric: took 1m31.497040064s to provisionDockerMachine
	I0729 19:10:01.843973 1091282 start.go:293] postStartSetup for "multinode-370772" (driver="kvm2")
	I0729 19:10:01.843988 1091282 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 19:10:01.844014 1091282 main.go:141] libmachine: (multinode-370772) Calling .DriverName
	I0729 19:10:01.844451 1091282 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 19:10:01.844490 1091282 main.go:141] libmachine: (multinode-370772) Calling .GetSSHHostname
	I0729 19:10:01.847610 1091282 main.go:141] libmachine: (multinode-370772) DBG | domain multinode-370772 has defined MAC address 52:54:00:0a:42:f8 in network mk-multinode-370772
	I0729 19:10:01.848026 1091282 main.go:141] libmachine: (multinode-370772) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:42:f8", ip: ""} in network mk-multinode-370772: {Iface:virbr1 ExpiryTime:2024-07-29 20:03:03 +0000 UTC Type:0 Mac:52:54:00:0a:42:f8 Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:multinode-370772 Clientid:01:52:54:00:0a:42:f8}
	I0729 19:10:01.848066 1091282 main.go:141] libmachine: (multinode-370772) DBG | domain multinode-370772 has defined IP address 192.168.39.180 and MAC address 52:54:00:0a:42:f8 in network mk-multinode-370772
	I0729 19:10:01.848289 1091282 main.go:141] libmachine: (multinode-370772) Calling .GetSSHPort
	I0729 19:10:01.848491 1091282 main.go:141] libmachine: (multinode-370772) Calling .GetSSHKeyPath
	I0729 19:10:01.848645 1091282 main.go:141] libmachine: (multinode-370772) Calling .GetSSHUsername
	I0729 19:10:01.848758 1091282 sshutil.go:53] new ssh client: &{IP:192.168.39.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/multinode-370772/id_rsa Username:docker}
	I0729 19:10:01.935310 1091282 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 19:10:01.939699 1091282 command_runner.go:130] > NAME=Buildroot
	I0729 19:10:01.939721 1091282 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0729 19:10:01.939725 1091282 command_runner.go:130] > ID=buildroot
	I0729 19:10:01.939729 1091282 command_runner.go:130] > VERSION_ID=2023.02.9
	I0729 19:10:01.939734 1091282 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0729 19:10:01.939864 1091282 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 19:10:01.939893 1091282 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-1055011/.minikube/addons for local assets ...
	I0729 19:10:01.939953 1091282 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-1055011/.minikube/files for local assets ...
	I0729 19:10:01.940029 1091282 filesync.go:149] local asset: /home/jenkins/minikube-integration/19312-1055011/.minikube/files/etc/ssl/certs/10622722.pem -> 10622722.pem in /etc/ssl/certs
	I0729 19:10:01.940040 1091282 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1055011/.minikube/files/etc/ssl/certs/10622722.pem -> /etc/ssl/certs/10622722.pem
	I0729 19:10:01.940163 1091282 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 19:10:01.949916 1091282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/files/etc/ssl/certs/10622722.pem --> /etc/ssl/certs/10622722.pem (1708 bytes)
	I0729 19:10:01.975645 1091282 start.go:296] duration metric: took 131.654111ms for postStartSetup
	I0729 19:10:01.975695 1091282 fix.go:56] duration metric: took 1m31.649308965s for fixHost
	I0729 19:10:01.975719 1091282 main.go:141] libmachine: (multinode-370772) Calling .GetSSHHostname
	I0729 19:10:01.978504 1091282 main.go:141] libmachine: (multinode-370772) DBG | domain multinode-370772 has defined MAC address 52:54:00:0a:42:f8 in network mk-multinode-370772
	I0729 19:10:01.978887 1091282 main.go:141] libmachine: (multinode-370772) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:42:f8", ip: ""} in network mk-multinode-370772: {Iface:virbr1 ExpiryTime:2024-07-29 20:03:03 +0000 UTC Type:0 Mac:52:54:00:0a:42:f8 Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:multinode-370772 Clientid:01:52:54:00:0a:42:f8}
	I0729 19:10:01.978925 1091282 main.go:141] libmachine: (multinode-370772) DBG | domain multinode-370772 has defined IP address 192.168.39.180 and MAC address 52:54:00:0a:42:f8 in network mk-multinode-370772
	I0729 19:10:01.979032 1091282 main.go:141] libmachine: (multinode-370772) Calling .GetSSHPort
	I0729 19:10:01.979249 1091282 main.go:141] libmachine: (multinode-370772) Calling .GetSSHKeyPath
	I0729 19:10:01.979435 1091282 main.go:141] libmachine: (multinode-370772) Calling .GetSSHKeyPath
	I0729 19:10:01.979611 1091282 main.go:141] libmachine: (multinode-370772) Calling .GetSSHUsername
	I0729 19:10:01.979765 1091282 main.go:141] libmachine: Using SSH client type: native
	I0729 19:10:01.979948 1091282 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.180 22 <nil> <nil>}
	I0729 19:10:01.979959 1091282 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0729 19:10:02.096857 1091282 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722280202.073473206
	
	I0729 19:10:02.096893 1091282 fix.go:216] guest clock: 1722280202.073473206
	I0729 19:10:02.096900 1091282 fix.go:229] Guest: 2024-07-29 19:10:02.073473206 +0000 UTC Remote: 2024-07-29 19:10:01.975700043 +0000 UTC m=+91.769380968 (delta=97.773163ms)
	I0729 19:10:02.096948 1091282 fix.go:200] guest clock delta is within tolerance: 97.773163ms
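	(The delta reported above is simply the guest clock minus the host-side reading: 19:10:02.073473206 − 19:10:01.975700043 ≈ 0.097773163 s = 97.773163 ms, which is inside the skew tolerance, so no clock adjustment is attempted here.)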
	I0729 19:10:02.096958 1091282 start.go:83] releasing machines lock for "multinode-370772", held for 1m31.770584081s
	I0729 19:10:02.096983 1091282 main.go:141] libmachine: (multinode-370772) Calling .DriverName
	I0729 19:10:02.097300 1091282 main.go:141] libmachine: (multinode-370772) Calling .GetIP
	I0729 19:10:02.099811 1091282 main.go:141] libmachine: (multinode-370772) DBG | domain multinode-370772 has defined MAC address 52:54:00:0a:42:f8 in network mk-multinode-370772
	I0729 19:10:02.100138 1091282 main.go:141] libmachine: (multinode-370772) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:42:f8", ip: ""} in network mk-multinode-370772: {Iface:virbr1 ExpiryTime:2024-07-29 20:03:03 +0000 UTC Type:0 Mac:52:54:00:0a:42:f8 Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:multinode-370772 Clientid:01:52:54:00:0a:42:f8}
	I0729 19:10:02.100172 1091282 main.go:141] libmachine: (multinode-370772) DBG | domain multinode-370772 has defined IP address 192.168.39.180 and MAC address 52:54:00:0a:42:f8 in network mk-multinode-370772
	I0729 19:10:02.100304 1091282 main.go:141] libmachine: (multinode-370772) Calling .DriverName
	I0729 19:10:02.100880 1091282 main.go:141] libmachine: (multinode-370772) Calling .DriverName
	I0729 19:10:02.101050 1091282 main.go:141] libmachine: (multinode-370772) Calling .DriverName
	I0729 19:10:02.101172 1091282 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 19:10:02.101231 1091282 main.go:141] libmachine: (multinode-370772) Calling .GetSSHHostname
	I0729 19:10:02.101252 1091282 ssh_runner.go:195] Run: cat /version.json
	I0729 19:10:02.101277 1091282 main.go:141] libmachine: (multinode-370772) Calling .GetSSHHostname
	I0729 19:10:02.103702 1091282 main.go:141] libmachine: (multinode-370772) DBG | domain multinode-370772 has defined MAC address 52:54:00:0a:42:f8 in network mk-multinode-370772
	I0729 19:10:02.104007 1091282 main.go:141] libmachine: (multinode-370772) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:42:f8", ip: ""} in network mk-multinode-370772: {Iface:virbr1 ExpiryTime:2024-07-29 20:03:03 +0000 UTC Type:0 Mac:52:54:00:0a:42:f8 Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:multinode-370772 Clientid:01:52:54:00:0a:42:f8}
	I0729 19:10:02.104065 1091282 main.go:141] libmachine: (multinode-370772) DBG | domain multinode-370772 has defined IP address 192.168.39.180 and MAC address 52:54:00:0a:42:f8 in network mk-multinode-370772
	I0729 19:10:02.104088 1091282 main.go:141] libmachine: (multinode-370772) DBG | domain multinode-370772 has defined MAC address 52:54:00:0a:42:f8 in network mk-multinode-370772
	I0729 19:10:02.104234 1091282 main.go:141] libmachine: (multinode-370772) Calling .GetSSHPort
	I0729 19:10:02.104427 1091282 main.go:141] libmachine: (multinode-370772) Calling .GetSSHKeyPath
	I0729 19:10:02.104507 1091282 main.go:141] libmachine: (multinode-370772) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:42:f8", ip: ""} in network mk-multinode-370772: {Iface:virbr1 ExpiryTime:2024-07-29 20:03:03 +0000 UTC Type:0 Mac:52:54:00:0a:42:f8 Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:multinode-370772 Clientid:01:52:54:00:0a:42:f8}
	I0729 19:10:02.104530 1091282 main.go:141] libmachine: (multinode-370772) DBG | domain multinode-370772 has defined IP address 192.168.39.180 and MAC address 52:54:00:0a:42:f8 in network mk-multinode-370772
	I0729 19:10:02.104587 1091282 main.go:141] libmachine: (multinode-370772) Calling .GetSSHUsername
	I0729 19:10:02.104700 1091282 main.go:141] libmachine: (multinode-370772) Calling .GetSSHPort
	I0729 19:10:02.104812 1091282 sshutil.go:53] new ssh client: &{IP:192.168.39.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/multinode-370772/id_rsa Username:docker}
	I0729 19:10:02.104910 1091282 main.go:141] libmachine: (multinode-370772) Calling .GetSSHKeyPath
	I0729 19:10:02.105043 1091282 main.go:141] libmachine: (multinode-370772) Calling .GetSSHUsername
	I0729 19:10:02.105199 1091282 sshutil.go:53] new ssh client: &{IP:192.168.39.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/multinode-370772/id_rsa Username:docker}
	I0729 19:10:02.204632 1091282 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0729 19:10:02.204685 1091282 command_runner.go:130] > {"iso_version": "v1.33.1-1721690939-19319", "kicbase_version": "v0.0.44-1721687125-19319", "minikube_version": "v1.33.1", "commit": "92810d69359a527ae6920427bb5751eaaa3842e4"}
	I0729 19:10:02.204811 1091282 ssh_runner.go:195] Run: systemctl --version
	I0729 19:10:02.210543 1091282 command_runner.go:130] > systemd 252 (252)
	I0729 19:10:02.210582 1091282 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0729 19:10:02.210823 1091282 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 19:10:02.368790 1091282 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0729 19:10:02.377127 1091282 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0729 19:10:02.377275 1091282 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 19:10:02.377340 1091282 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 19:10:02.387561 1091282 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0729 19:10:02.387591 1091282 start.go:495] detecting cgroup driver to use...
	I0729 19:10:02.387686 1091282 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 19:10:02.407264 1091282 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 19:10:02.421723 1091282 docker.go:217] disabling cri-docker service (if available) ...
	I0729 19:10:02.421784 1091282 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 19:10:02.436673 1091282 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 19:10:02.451028 1091282 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 19:10:02.608282 1091282 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
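	(At this point cri-docker should be fully out of the way; a quick check that the mask took effect — illustrative only, assuming systemd on the Buildroot guest:

	    sudo systemctl is-enabled cri-docker.service   # expected output: masked
	)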
	I0729 19:10:02.768607 1091282 docker.go:233] disabling docker service ...
	I0729 19:10:02.768686 1091282 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 19:10:02.789405 1091282 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 19:10:02.804259 1091282 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 19:10:02.957069 1091282 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 19:10:03.112477 1091282 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 19:10:03.131536 1091282 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 19:10:03.152169 1091282 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
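	(With /etc/crictl.yaml pointing at crio.sock, plain crictl invocations reach CRI-O without extra flags; the same endpoint can also be passed explicitly, which is useful when the config file is absent — illustrative, not from this run:

	    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version
	)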
	I0729 19:10:03.152499 1091282 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0729 19:10:03.152566 1091282 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:10:03.163932 1091282 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 19:10:03.164024 1091282 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:10:03.175642 1091282 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:10:03.186453 1091282 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:10:03.197315 1091282 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 19:10:03.211230 1091282 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:10:03.223794 1091282 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:10:03.235154 1091282 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
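	(Taken together, the sed edits above should leave /etc/crio/crio.conf.d/02-crio.conf with roughly this fragment — a sketch of the expected result with section headers added for context, not a dump captured from this run:

	    [crio.image]
	    pause_image = "registry.k8s.io/pause:3.9"

	    [crio.runtime]
	    cgroup_manager = "cgroupfs"
	    conmon_cgroup = "pod"
	    default_sysctls = [
	      "net.ipv4.ip_unprivileged_port_start=0",
	    ]
	)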
	I0729 19:10:03.247196 1091282 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 19:10:03.257120 1091282 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0729 19:10:03.257199 1091282 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 19:10:03.266840 1091282 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 19:10:03.407938 1091282 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 19:10:10.493783 1091282 ssh_runner.go:235] Completed: sudo systemctl restart crio: (7.085804296s)
	I0729 19:10:10.493820 1091282 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 19:10:10.493868 1091282 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 19:10:10.499199 1091282 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0729 19:10:10.499220 1091282 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0729 19:10:10.499245 1091282 command_runner.go:130] > Device: 0,22	Inode: 1328        Links: 1
	I0729 19:10:10.499257 1091282 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0729 19:10:10.499262 1091282 command_runner.go:130] > Access: 2024-07-29 19:10:10.370314773 +0000
	I0729 19:10:10.499268 1091282 command_runner.go:130] > Modify: 2024-07-29 19:10:10.370314773 +0000
	I0729 19:10:10.499273 1091282 command_runner.go:130] > Change: 2024-07-29 19:10:10.370314773 +0000
	I0729 19:10:10.499276 1091282 command_runner.go:130] >  Birth: -
	I0729 19:10:10.499296 1091282 start.go:563] Will wait 60s for crictl version
	I0729 19:10:10.499348 1091282 ssh_runner.go:195] Run: which crictl
	I0729 19:10:10.503038 1091282 command_runner.go:130] > /usr/bin/crictl
	I0729 19:10:10.503111 1091282 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 19:10:10.542894 1091282 command_runner.go:130] > Version:  0.1.0
	I0729 19:10:10.542915 1091282 command_runner.go:130] > RuntimeName:  cri-o
	I0729 19:10:10.542920 1091282 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0729 19:10:10.542926 1091282 command_runner.go:130] > RuntimeApiVersion:  v1
	I0729 19:10:10.543934 1091282 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 19:10:10.544014 1091282 ssh_runner.go:195] Run: crio --version
	I0729 19:10:10.574191 1091282 command_runner.go:130] > crio version 1.29.1
	I0729 19:10:10.574214 1091282 command_runner.go:130] > Version:        1.29.1
	I0729 19:10:10.574221 1091282 command_runner.go:130] > GitCommit:      unknown
	I0729 19:10:10.574225 1091282 command_runner.go:130] > GitCommitDate:  unknown
	I0729 19:10:10.574230 1091282 command_runner.go:130] > GitTreeState:   clean
	I0729 19:10:10.574235 1091282 command_runner.go:130] > BuildDate:      2024-07-23T05:10:02Z
	I0729 19:10:10.574240 1091282 command_runner.go:130] > GoVersion:      go1.21.6
	I0729 19:10:10.574244 1091282 command_runner.go:130] > Compiler:       gc
	I0729 19:10:10.574251 1091282 command_runner.go:130] > Platform:       linux/amd64
	I0729 19:10:10.574257 1091282 command_runner.go:130] > Linkmode:       dynamic
	I0729 19:10:10.574264 1091282 command_runner.go:130] > BuildTags:      
	I0729 19:10:10.574274 1091282 command_runner.go:130] >   containers_image_ostree_stub
	I0729 19:10:10.574280 1091282 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0729 19:10:10.574290 1091282 command_runner.go:130] >   btrfs_noversion
	I0729 19:10:10.574295 1091282 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0729 19:10:10.574299 1091282 command_runner.go:130] >   libdm_no_deferred_remove
	I0729 19:10:10.574308 1091282 command_runner.go:130] >   seccomp
	I0729 19:10:10.574313 1091282 command_runner.go:130] > LDFlags:          unknown
	I0729 19:10:10.574320 1091282 command_runner.go:130] > SeccompEnabled:   true
	I0729 19:10:10.574323 1091282 command_runner.go:130] > AppArmorEnabled:  false
	I0729 19:10:10.575520 1091282 ssh_runner.go:195] Run: crio --version
	I0729 19:10:10.601753 1091282 command_runner.go:130] > crio version 1.29.1
	I0729 19:10:10.601777 1091282 command_runner.go:130] > Version:        1.29.1
	I0729 19:10:10.601783 1091282 command_runner.go:130] > GitCommit:      unknown
	I0729 19:10:10.601795 1091282 command_runner.go:130] > GitCommitDate:  unknown
	I0729 19:10:10.601799 1091282 command_runner.go:130] > GitTreeState:   clean
	I0729 19:10:10.601805 1091282 command_runner.go:130] > BuildDate:      2024-07-23T05:10:02Z
	I0729 19:10:10.601811 1091282 command_runner.go:130] > GoVersion:      go1.21.6
	I0729 19:10:10.601816 1091282 command_runner.go:130] > Compiler:       gc
	I0729 19:10:10.601823 1091282 command_runner.go:130] > Platform:       linux/amd64
	I0729 19:10:10.601829 1091282 command_runner.go:130] > Linkmode:       dynamic
	I0729 19:10:10.601840 1091282 command_runner.go:130] > BuildTags:      
	I0729 19:10:10.601847 1091282 command_runner.go:130] >   containers_image_ostree_stub
	I0729 19:10:10.601853 1091282 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0729 19:10:10.601863 1091282 command_runner.go:130] >   btrfs_noversion
	I0729 19:10:10.601869 1091282 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0729 19:10:10.601875 1091282 command_runner.go:130] >   libdm_no_deferred_remove
	I0729 19:10:10.601879 1091282 command_runner.go:130] >   seccomp
	I0729 19:10:10.601883 1091282 command_runner.go:130] > LDFlags:          unknown
	I0729 19:10:10.601887 1091282 command_runner.go:130] > SeccompEnabled:   true
	I0729 19:10:10.601891 1091282 command_runner.go:130] > AppArmorEnabled:  false
	I0729 19:10:10.604715 1091282 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0729 19:10:10.605920 1091282 main.go:141] libmachine: (multinode-370772) Calling .GetIP
	I0729 19:10:10.608528 1091282 main.go:141] libmachine: (multinode-370772) DBG | domain multinode-370772 has defined MAC address 52:54:00:0a:42:f8 in network mk-multinode-370772
	I0729 19:10:10.608826 1091282 main.go:141] libmachine: (multinode-370772) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:42:f8", ip: ""} in network mk-multinode-370772: {Iface:virbr1 ExpiryTime:2024-07-29 20:03:03 +0000 UTC Type:0 Mac:52:54:00:0a:42:f8 Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:multinode-370772 Clientid:01:52:54:00:0a:42:f8}
	I0729 19:10:10.608847 1091282 main.go:141] libmachine: (multinode-370772) DBG | domain multinode-370772 has defined IP address 192.168.39.180 and MAC address 52:54:00:0a:42:f8 in network mk-multinode-370772
	I0729 19:10:10.609055 1091282 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0729 19:10:10.613115 1091282 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0729 19:10:10.613229 1091282 kubeadm.go:883] updating cluster {Name:multinode-370772 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-370772 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.180 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.127 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.8 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 19:10:10.613367 1091282 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 19:10:10.613410 1091282 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 19:10:10.659628 1091282 command_runner.go:130] > {
	I0729 19:10:10.659654 1091282 command_runner.go:130] >   "images": [
	I0729 19:10:10.659662 1091282 command_runner.go:130] >     {
	I0729 19:10:10.659685 1091282 command_runner.go:130] >       "id": "5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f",
	I0729 19:10:10.659692 1091282 command_runner.go:130] >       "repoTags": [
	I0729 19:10:10.659701 1091282 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240715-585640e9"
	I0729 19:10:10.659707 1091282 command_runner.go:130] >       ],
	I0729 19:10:10.659715 1091282 command_runner.go:130] >       "repoDigests": [
	I0729 19:10:10.659728 1091282 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115",
	I0729 19:10:10.659743 1091282 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493"
	I0729 19:10:10.659751 1091282 command_runner.go:130] >       ],
	I0729 19:10:10.659757 1091282 command_runner.go:130] >       "size": "87165492",
	I0729 19:10:10.659767 1091282 command_runner.go:130] >       "uid": null,
	I0729 19:10:10.659774 1091282 command_runner.go:130] >       "username": "",
	I0729 19:10:10.659790 1091282 command_runner.go:130] >       "spec": null,
	I0729 19:10:10.659798 1091282 command_runner.go:130] >       "pinned": false
	I0729 19:10:10.659807 1091282 command_runner.go:130] >     },
	I0729 19:10:10.659815 1091282 command_runner.go:130] >     {
	I0729 19:10:10.659828 1091282 command_runner.go:130] >       "id": "6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46",
	I0729 19:10:10.659837 1091282 command_runner.go:130] >       "repoTags": [
	I0729 19:10:10.659848 1091282 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240719-e7903573"
	I0729 19:10:10.659856 1091282 command_runner.go:130] >       ],
	I0729 19:10:10.659863 1091282 command_runner.go:130] >       "repoDigests": [
	I0729 19:10:10.659877 1091282 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9",
	I0729 19:10:10.659891 1091282 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:da8ad203ec15a72c313015e5609db44bfad7c95d8ce63e87ff97c66363b5680a"
	I0729 19:10:10.659899 1091282 command_runner.go:130] >       ],
	I0729 19:10:10.659908 1091282 command_runner.go:130] >       "size": "87174707",
	I0729 19:10:10.659917 1091282 command_runner.go:130] >       "uid": null,
	I0729 19:10:10.659930 1091282 command_runner.go:130] >       "username": "",
	I0729 19:10:10.659938 1091282 command_runner.go:130] >       "spec": null,
	I0729 19:10:10.659948 1091282 command_runner.go:130] >       "pinned": false
	I0729 19:10:10.659957 1091282 command_runner.go:130] >     },
	I0729 19:10:10.659965 1091282 command_runner.go:130] >     {
	I0729 19:10:10.659978 1091282 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0729 19:10:10.659986 1091282 command_runner.go:130] >       "repoTags": [
	I0729 19:10:10.659994 1091282 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0729 19:10:10.659997 1091282 command_runner.go:130] >       ],
	I0729 19:10:10.660002 1091282 command_runner.go:130] >       "repoDigests": [
	I0729 19:10:10.660009 1091282 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0729 19:10:10.660026 1091282 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0729 19:10:10.660032 1091282 command_runner.go:130] >       ],
	I0729 19:10:10.660036 1091282 command_runner.go:130] >       "size": "1363676",
	I0729 19:10:10.660040 1091282 command_runner.go:130] >       "uid": null,
	I0729 19:10:10.660046 1091282 command_runner.go:130] >       "username": "",
	I0729 19:10:10.660050 1091282 command_runner.go:130] >       "spec": null,
	I0729 19:10:10.660056 1091282 command_runner.go:130] >       "pinned": false
	I0729 19:10:10.660059 1091282 command_runner.go:130] >     },
	I0729 19:10:10.660064 1091282 command_runner.go:130] >     {
	I0729 19:10:10.660070 1091282 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0729 19:10:10.660076 1091282 command_runner.go:130] >       "repoTags": [
	I0729 19:10:10.660081 1091282 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0729 19:10:10.660086 1091282 command_runner.go:130] >       ],
	I0729 19:10:10.660090 1091282 command_runner.go:130] >       "repoDigests": [
	I0729 19:10:10.660100 1091282 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0729 19:10:10.660115 1091282 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0729 19:10:10.660121 1091282 command_runner.go:130] >       ],
	I0729 19:10:10.660125 1091282 command_runner.go:130] >       "size": "31470524",
	I0729 19:10:10.660129 1091282 command_runner.go:130] >       "uid": null,
	I0729 19:10:10.660135 1091282 command_runner.go:130] >       "username": "",
	I0729 19:10:10.660139 1091282 command_runner.go:130] >       "spec": null,
	I0729 19:10:10.660145 1091282 command_runner.go:130] >       "pinned": false
	I0729 19:10:10.660148 1091282 command_runner.go:130] >     },
	I0729 19:10:10.660154 1091282 command_runner.go:130] >     {
	I0729 19:10:10.660162 1091282 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0729 19:10:10.660168 1091282 command_runner.go:130] >       "repoTags": [
	I0729 19:10:10.660174 1091282 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0729 19:10:10.660179 1091282 command_runner.go:130] >       ],
	I0729 19:10:10.660183 1091282 command_runner.go:130] >       "repoDigests": [
	I0729 19:10:10.660192 1091282 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0729 19:10:10.660201 1091282 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0729 19:10:10.660206 1091282 command_runner.go:130] >       ],
	I0729 19:10:10.660210 1091282 command_runner.go:130] >       "size": "61245718",
	I0729 19:10:10.660213 1091282 command_runner.go:130] >       "uid": null,
	I0729 19:10:10.660219 1091282 command_runner.go:130] >       "username": "nonroot",
	I0729 19:10:10.660223 1091282 command_runner.go:130] >       "spec": null,
	I0729 19:10:10.660234 1091282 command_runner.go:130] >       "pinned": false
	I0729 19:10:10.660240 1091282 command_runner.go:130] >     },
	I0729 19:10:10.660243 1091282 command_runner.go:130] >     {
	I0729 19:10:10.660251 1091282 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0729 19:10:10.660255 1091282 command_runner.go:130] >       "repoTags": [
	I0729 19:10:10.660260 1091282 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0729 19:10:10.660265 1091282 command_runner.go:130] >       ],
	I0729 19:10:10.660273 1091282 command_runner.go:130] >       "repoDigests": [
	I0729 19:10:10.660282 1091282 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0729 19:10:10.660291 1091282 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0729 19:10:10.660297 1091282 command_runner.go:130] >       ],
	I0729 19:10:10.660301 1091282 command_runner.go:130] >       "size": "150779692",
	I0729 19:10:10.660307 1091282 command_runner.go:130] >       "uid": {
	I0729 19:10:10.660312 1091282 command_runner.go:130] >         "value": "0"
	I0729 19:10:10.660317 1091282 command_runner.go:130] >       },
	I0729 19:10:10.660321 1091282 command_runner.go:130] >       "username": "",
	I0729 19:10:10.660327 1091282 command_runner.go:130] >       "spec": null,
	I0729 19:10:10.660330 1091282 command_runner.go:130] >       "pinned": false
	I0729 19:10:10.660335 1091282 command_runner.go:130] >     },
	I0729 19:10:10.660339 1091282 command_runner.go:130] >     {
	I0729 19:10:10.660347 1091282 command_runner.go:130] >       "id": "1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d",
	I0729 19:10:10.660353 1091282 command_runner.go:130] >       "repoTags": [
	I0729 19:10:10.660357 1091282 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.3"
	I0729 19:10:10.660363 1091282 command_runner.go:130] >       ],
	I0729 19:10:10.660366 1091282 command_runner.go:130] >       "repoDigests": [
	I0729 19:10:10.660375 1091282 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c",
	I0729 19:10:10.660384 1091282 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a3a6c80030a6e720734ae3291448388f70b6f1d463f103e4f06f358f8a170315"
	I0729 19:10:10.660389 1091282 command_runner.go:130] >       ],
	I0729 19:10:10.660393 1091282 command_runner.go:130] >       "size": "117609954",
	I0729 19:10:10.660399 1091282 command_runner.go:130] >       "uid": {
	I0729 19:10:10.660402 1091282 command_runner.go:130] >         "value": "0"
	I0729 19:10:10.660408 1091282 command_runner.go:130] >       },
	I0729 19:10:10.660411 1091282 command_runner.go:130] >       "username": "",
	I0729 19:10:10.660416 1091282 command_runner.go:130] >       "spec": null,
	I0729 19:10:10.660420 1091282 command_runner.go:130] >       "pinned": false
	I0729 19:10:10.660425 1091282 command_runner.go:130] >     },
	I0729 19:10:10.660433 1091282 command_runner.go:130] >     {
	I0729 19:10:10.660441 1091282 command_runner.go:130] >       "id": "76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e",
	I0729 19:10:10.660446 1091282 command_runner.go:130] >       "repoTags": [
	I0729 19:10:10.660451 1091282 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.3"
	I0729 19:10:10.660456 1091282 command_runner.go:130] >       ],
	I0729 19:10:10.660460 1091282 command_runner.go:130] >       "repoDigests": [
	I0729 19:10:10.660482 1091282 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7",
	I0729 19:10:10.660492 1091282 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:fa179d147c6bacddd1586f6d12ff79a844e951c7b159fdcb92cdf56f3033d91e"
	I0729 19:10:10.660497 1091282 command_runner.go:130] >       ],
	I0729 19:10:10.660502 1091282 command_runner.go:130] >       "size": "112198984",
	I0729 19:10:10.660507 1091282 command_runner.go:130] >       "uid": {
	I0729 19:10:10.660511 1091282 command_runner.go:130] >         "value": "0"
	I0729 19:10:10.660514 1091282 command_runner.go:130] >       },
	I0729 19:10:10.660518 1091282 command_runner.go:130] >       "username": "",
	I0729 19:10:10.660523 1091282 command_runner.go:130] >       "spec": null,
	I0729 19:10:10.660526 1091282 command_runner.go:130] >       "pinned": false
	I0729 19:10:10.660529 1091282 command_runner.go:130] >     },
	I0729 19:10:10.660532 1091282 command_runner.go:130] >     {
	I0729 19:10:10.660537 1091282 command_runner.go:130] >       "id": "55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1",
	I0729 19:10:10.660541 1091282 command_runner.go:130] >       "repoTags": [
	I0729 19:10:10.660545 1091282 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.3"
	I0729 19:10:10.660548 1091282 command_runner.go:130] >       ],
	I0729 19:10:10.660552 1091282 command_runner.go:130] >       "repoDigests": [
	I0729 19:10:10.660561 1091282 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:8c178447597867a03bbcdf0d1ce43fc8f6807ead2321bd1ec0e845a2f12dad80",
	I0729 19:10:10.660567 1091282 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65"
	I0729 19:10:10.660571 1091282 command_runner.go:130] >       ],
	I0729 19:10:10.660577 1091282 command_runner.go:130] >       "size": "85953945",
	I0729 19:10:10.660582 1091282 command_runner.go:130] >       "uid": null,
	I0729 19:10:10.660588 1091282 command_runner.go:130] >       "username": "",
	I0729 19:10:10.660593 1091282 command_runner.go:130] >       "spec": null,
	I0729 19:10:10.660599 1091282 command_runner.go:130] >       "pinned": false
	I0729 19:10:10.660603 1091282 command_runner.go:130] >     },
	I0729 19:10:10.660607 1091282 command_runner.go:130] >     {
	I0729 19:10:10.660616 1091282 command_runner.go:130] >       "id": "3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2",
	I0729 19:10:10.660624 1091282 command_runner.go:130] >       "repoTags": [
	I0729 19:10:10.660635 1091282 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.3"
	I0729 19:10:10.660652 1091282 command_runner.go:130] >       ],
	I0729 19:10:10.660674 1091282 command_runner.go:130] >       "repoDigests": [
	I0729 19:10:10.660708 1091282 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:1738178fb116d10e7cde2cfc3671f5dfdad518d773677af740483f2dfe674266",
	I0729 19:10:10.660721 1091282 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4"
	I0729 19:10:10.660727 1091282 command_runner.go:130] >       ],
	I0729 19:10:10.660731 1091282 command_runner.go:130] >       "size": "63051080",
	I0729 19:10:10.660737 1091282 command_runner.go:130] >       "uid": {
	I0729 19:10:10.660741 1091282 command_runner.go:130] >         "value": "0"
	I0729 19:10:10.660747 1091282 command_runner.go:130] >       },
	I0729 19:10:10.660751 1091282 command_runner.go:130] >       "username": "",
	I0729 19:10:10.660757 1091282 command_runner.go:130] >       "spec": null,
	I0729 19:10:10.660761 1091282 command_runner.go:130] >       "pinned": false
	I0729 19:10:10.660766 1091282 command_runner.go:130] >     },
	I0729 19:10:10.660770 1091282 command_runner.go:130] >     {
	I0729 19:10:10.660776 1091282 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0729 19:10:10.660783 1091282 command_runner.go:130] >       "repoTags": [
	I0729 19:10:10.660787 1091282 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0729 19:10:10.660793 1091282 command_runner.go:130] >       ],
	I0729 19:10:10.660797 1091282 command_runner.go:130] >       "repoDigests": [
	I0729 19:10:10.660805 1091282 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0729 19:10:10.660814 1091282 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0729 19:10:10.660819 1091282 command_runner.go:130] >       ],
	I0729 19:10:10.660824 1091282 command_runner.go:130] >       "size": "750414",
	I0729 19:10:10.660829 1091282 command_runner.go:130] >       "uid": {
	I0729 19:10:10.660833 1091282 command_runner.go:130] >         "value": "65535"
	I0729 19:10:10.660838 1091282 command_runner.go:130] >       },
	I0729 19:10:10.660842 1091282 command_runner.go:130] >       "username": "",
	I0729 19:10:10.660848 1091282 command_runner.go:130] >       "spec": null,
	I0729 19:10:10.660852 1091282 command_runner.go:130] >       "pinned": true
	I0729 19:10:10.660855 1091282 command_runner.go:130] >     }
	I0729 19:10:10.660858 1091282 command_runner.go:130] >   ]
	I0729 19:10:10.660861 1091282 command_runner.go:130] > }
	I0729 19:10:10.661054 1091282 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 19:10:10.661067 1091282 crio.go:433] Images already preloaded, skipping extraction
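	(The preload check above is based on the same "sudo crictl images --output json" output; to eyeball just the image tags from that JSON, something like the following works — illustrative, assuming jq is installed on the guest:

	    sudo crictl images --output json | jq -r '.images[].repoTags[]'
	)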
	I0729 19:10:10.661118 1091282 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 19:10:10.695353 1091282 command_runner.go:130] > {
	I0729 19:10:10.695378 1091282 command_runner.go:130] >   "images": [
	I0729 19:10:10.695383 1091282 command_runner.go:130] >     {
	I0729 19:10:10.695396 1091282 command_runner.go:130] >       "id": "5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f",
	I0729 19:10:10.695403 1091282 command_runner.go:130] >       "repoTags": [
	I0729 19:10:10.695413 1091282 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240715-585640e9"
	I0729 19:10:10.695422 1091282 command_runner.go:130] >       ],
	I0729 19:10:10.695428 1091282 command_runner.go:130] >       "repoDigests": [
	I0729 19:10:10.695440 1091282 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115",
	I0729 19:10:10.695450 1091282 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493"
	I0729 19:10:10.695456 1091282 command_runner.go:130] >       ],
	I0729 19:10:10.695464 1091282 command_runner.go:130] >       "size": "87165492",
	I0729 19:10:10.695471 1091282 command_runner.go:130] >       "uid": null,
	I0729 19:10:10.695478 1091282 command_runner.go:130] >       "username": "",
	I0729 19:10:10.695489 1091282 command_runner.go:130] >       "spec": null,
	I0729 19:10:10.695496 1091282 command_runner.go:130] >       "pinned": false
	I0729 19:10:10.695502 1091282 command_runner.go:130] >     },
	I0729 19:10:10.695511 1091282 command_runner.go:130] >     {
	I0729 19:10:10.695517 1091282 command_runner.go:130] >       "id": "6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46",
	I0729 19:10:10.695520 1091282 command_runner.go:130] >       "repoTags": [
	I0729 19:10:10.695525 1091282 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240719-e7903573"
	I0729 19:10:10.695529 1091282 command_runner.go:130] >       ],
	I0729 19:10:10.695533 1091282 command_runner.go:130] >       "repoDigests": [
	I0729 19:10:10.695539 1091282 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9",
	I0729 19:10:10.695546 1091282 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:da8ad203ec15a72c313015e5609db44bfad7c95d8ce63e87ff97c66363b5680a"
	I0729 19:10:10.695549 1091282 command_runner.go:130] >       ],
	I0729 19:10:10.695553 1091282 command_runner.go:130] >       "size": "87174707",
	I0729 19:10:10.695556 1091282 command_runner.go:130] >       "uid": null,
	I0729 19:10:10.695568 1091282 command_runner.go:130] >       "username": "",
	I0729 19:10:10.695574 1091282 command_runner.go:130] >       "spec": null,
	I0729 19:10:10.695577 1091282 command_runner.go:130] >       "pinned": false
	I0729 19:10:10.695580 1091282 command_runner.go:130] >     },
	I0729 19:10:10.695584 1091282 command_runner.go:130] >     {
	I0729 19:10:10.695597 1091282 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0729 19:10:10.695603 1091282 command_runner.go:130] >       "repoTags": [
	I0729 19:10:10.695607 1091282 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0729 19:10:10.695618 1091282 command_runner.go:130] >       ],
	I0729 19:10:10.695624 1091282 command_runner.go:130] >       "repoDigests": [
	I0729 19:10:10.695631 1091282 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0729 19:10:10.695640 1091282 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0729 19:10:10.695643 1091282 command_runner.go:130] >       ],
	I0729 19:10:10.695647 1091282 command_runner.go:130] >       "size": "1363676",
	I0729 19:10:10.695651 1091282 command_runner.go:130] >       "uid": null,
	I0729 19:10:10.695654 1091282 command_runner.go:130] >       "username": "",
	I0729 19:10:10.695661 1091282 command_runner.go:130] >       "spec": null,
	I0729 19:10:10.695665 1091282 command_runner.go:130] >       "pinned": false
	I0729 19:10:10.695669 1091282 command_runner.go:130] >     },
	I0729 19:10:10.695673 1091282 command_runner.go:130] >     {
	I0729 19:10:10.695679 1091282 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0729 19:10:10.695688 1091282 command_runner.go:130] >       "repoTags": [
	I0729 19:10:10.695694 1091282 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0729 19:10:10.695698 1091282 command_runner.go:130] >       ],
	I0729 19:10:10.695703 1091282 command_runner.go:130] >       "repoDigests": [
	I0729 19:10:10.695710 1091282 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0729 19:10:10.695726 1091282 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0729 19:10:10.695732 1091282 command_runner.go:130] >       ],
	I0729 19:10:10.695736 1091282 command_runner.go:130] >       "size": "31470524",
	I0729 19:10:10.695740 1091282 command_runner.go:130] >       "uid": null,
	I0729 19:10:10.695744 1091282 command_runner.go:130] >       "username": "",
	I0729 19:10:10.695747 1091282 command_runner.go:130] >       "spec": null,
	I0729 19:10:10.695751 1091282 command_runner.go:130] >       "pinned": false
	I0729 19:10:10.695754 1091282 command_runner.go:130] >     },
	I0729 19:10:10.695757 1091282 command_runner.go:130] >     {
	I0729 19:10:10.695763 1091282 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0729 19:10:10.695768 1091282 command_runner.go:130] >       "repoTags": [
	I0729 19:10:10.695773 1091282 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0729 19:10:10.695777 1091282 command_runner.go:130] >       ],
	I0729 19:10:10.695781 1091282 command_runner.go:130] >       "repoDigests": [
	I0729 19:10:10.695790 1091282 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0729 19:10:10.695799 1091282 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0729 19:10:10.695804 1091282 command_runner.go:130] >       ],
	I0729 19:10:10.695809 1091282 command_runner.go:130] >       "size": "61245718",
	I0729 19:10:10.695819 1091282 command_runner.go:130] >       "uid": null,
	I0729 19:10:10.695826 1091282 command_runner.go:130] >       "username": "nonroot",
	I0729 19:10:10.695829 1091282 command_runner.go:130] >       "spec": null,
	I0729 19:10:10.695836 1091282 command_runner.go:130] >       "pinned": false
	I0729 19:10:10.695842 1091282 command_runner.go:130] >     },
	I0729 19:10:10.695845 1091282 command_runner.go:130] >     {
	I0729 19:10:10.695853 1091282 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0729 19:10:10.695859 1091282 command_runner.go:130] >       "repoTags": [
	I0729 19:10:10.695864 1091282 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0729 19:10:10.695869 1091282 command_runner.go:130] >       ],
	I0729 19:10:10.695873 1091282 command_runner.go:130] >       "repoDigests": [
	I0729 19:10:10.695882 1091282 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0729 19:10:10.695890 1091282 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0729 19:10:10.695896 1091282 command_runner.go:130] >       ],
	I0729 19:10:10.695900 1091282 command_runner.go:130] >       "size": "150779692",
	I0729 19:10:10.695907 1091282 command_runner.go:130] >       "uid": {
	I0729 19:10:10.695910 1091282 command_runner.go:130] >         "value": "0"
	I0729 19:10:10.695919 1091282 command_runner.go:130] >       },
	I0729 19:10:10.695922 1091282 command_runner.go:130] >       "username": "",
	I0729 19:10:10.695928 1091282 command_runner.go:130] >       "spec": null,
	I0729 19:10:10.695932 1091282 command_runner.go:130] >       "pinned": false
	I0729 19:10:10.695938 1091282 command_runner.go:130] >     },
	I0729 19:10:10.695941 1091282 command_runner.go:130] >     {
	I0729 19:10:10.695948 1091282 command_runner.go:130] >       "id": "1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d",
	I0729 19:10:10.695952 1091282 command_runner.go:130] >       "repoTags": [
	I0729 19:10:10.695959 1091282 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.3"
	I0729 19:10:10.695963 1091282 command_runner.go:130] >       ],
	I0729 19:10:10.695966 1091282 command_runner.go:130] >       "repoDigests": [
	I0729 19:10:10.695975 1091282 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c",
	I0729 19:10:10.695984 1091282 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a3a6c80030a6e720734ae3291448388f70b6f1d463f103e4f06f358f8a170315"
	I0729 19:10:10.695990 1091282 command_runner.go:130] >       ],
	I0729 19:10:10.695994 1091282 command_runner.go:130] >       "size": "117609954",
	I0729 19:10:10.696000 1091282 command_runner.go:130] >       "uid": {
	I0729 19:10:10.696004 1091282 command_runner.go:130] >         "value": "0"
	I0729 19:10:10.696010 1091282 command_runner.go:130] >       },
	I0729 19:10:10.696014 1091282 command_runner.go:130] >       "username": "",
	I0729 19:10:10.696024 1091282 command_runner.go:130] >       "spec": null,
	I0729 19:10:10.696030 1091282 command_runner.go:130] >       "pinned": false
	I0729 19:10:10.696033 1091282 command_runner.go:130] >     },
	I0729 19:10:10.696038 1091282 command_runner.go:130] >     {
	I0729 19:10:10.696044 1091282 command_runner.go:130] >       "id": "76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e",
	I0729 19:10:10.696050 1091282 command_runner.go:130] >       "repoTags": [
	I0729 19:10:10.696054 1091282 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.3"
	I0729 19:10:10.696060 1091282 command_runner.go:130] >       ],
	I0729 19:10:10.696064 1091282 command_runner.go:130] >       "repoDigests": [
	I0729 19:10:10.696088 1091282 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7",
	I0729 19:10:10.696098 1091282 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:fa179d147c6bacddd1586f6d12ff79a844e951c7b159fdcb92cdf56f3033d91e"
	I0729 19:10:10.696101 1091282 command_runner.go:130] >       ],
	I0729 19:10:10.696105 1091282 command_runner.go:130] >       "size": "112198984",
	I0729 19:10:10.696109 1091282 command_runner.go:130] >       "uid": {
	I0729 19:10:10.696112 1091282 command_runner.go:130] >         "value": "0"
	I0729 19:10:10.696116 1091282 command_runner.go:130] >       },
	I0729 19:10:10.696121 1091282 command_runner.go:130] >       "username": "",
	I0729 19:10:10.696127 1091282 command_runner.go:130] >       "spec": null,
	I0729 19:10:10.696131 1091282 command_runner.go:130] >       "pinned": false
	I0729 19:10:10.696136 1091282 command_runner.go:130] >     },
	I0729 19:10:10.696139 1091282 command_runner.go:130] >     {
	I0729 19:10:10.696148 1091282 command_runner.go:130] >       "id": "55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1",
	I0729 19:10:10.696152 1091282 command_runner.go:130] >       "repoTags": [
	I0729 19:10:10.696156 1091282 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.3"
	I0729 19:10:10.696161 1091282 command_runner.go:130] >       ],
	I0729 19:10:10.696165 1091282 command_runner.go:130] >       "repoDigests": [
	I0729 19:10:10.696174 1091282 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:8c178447597867a03bbcdf0d1ce43fc8f6807ead2321bd1ec0e845a2f12dad80",
	I0729 19:10:10.696185 1091282 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65"
	I0729 19:10:10.696190 1091282 command_runner.go:130] >       ],
	I0729 19:10:10.696194 1091282 command_runner.go:130] >       "size": "85953945",
	I0729 19:10:10.696198 1091282 command_runner.go:130] >       "uid": null,
	I0729 19:10:10.696203 1091282 command_runner.go:130] >       "username": "",
	I0729 19:10:10.696207 1091282 command_runner.go:130] >       "spec": null,
	I0729 19:10:10.696212 1091282 command_runner.go:130] >       "pinned": false
	I0729 19:10:10.696216 1091282 command_runner.go:130] >     },
	I0729 19:10:10.696221 1091282 command_runner.go:130] >     {
	I0729 19:10:10.696232 1091282 command_runner.go:130] >       "id": "3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2",
	I0729 19:10:10.696238 1091282 command_runner.go:130] >       "repoTags": [
	I0729 19:10:10.696243 1091282 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.3"
	I0729 19:10:10.696248 1091282 command_runner.go:130] >       ],
	I0729 19:10:10.696253 1091282 command_runner.go:130] >       "repoDigests": [
	I0729 19:10:10.696262 1091282 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:1738178fb116d10e7cde2cfc3671f5dfdad518d773677af740483f2dfe674266",
	I0729 19:10:10.696270 1091282 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4"
	I0729 19:10:10.696276 1091282 command_runner.go:130] >       ],
	I0729 19:10:10.696279 1091282 command_runner.go:130] >       "size": "63051080",
	I0729 19:10:10.696283 1091282 command_runner.go:130] >       "uid": {
	I0729 19:10:10.696289 1091282 command_runner.go:130] >         "value": "0"
	I0729 19:10:10.696293 1091282 command_runner.go:130] >       },
	I0729 19:10:10.696299 1091282 command_runner.go:130] >       "username": "",
	I0729 19:10:10.696302 1091282 command_runner.go:130] >       "spec": null,
	I0729 19:10:10.696308 1091282 command_runner.go:130] >       "pinned": false
	I0729 19:10:10.696312 1091282 command_runner.go:130] >     },
	I0729 19:10:10.696317 1091282 command_runner.go:130] >     {
	I0729 19:10:10.696323 1091282 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0729 19:10:10.696328 1091282 command_runner.go:130] >       "repoTags": [
	I0729 19:10:10.696333 1091282 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0729 19:10:10.696339 1091282 command_runner.go:130] >       ],
	I0729 19:10:10.696343 1091282 command_runner.go:130] >       "repoDigests": [
	I0729 19:10:10.696351 1091282 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0729 19:10:10.696357 1091282 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0729 19:10:10.696363 1091282 command_runner.go:130] >       ],
	I0729 19:10:10.696366 1091282 command_runner.go:130] >       "size": "750414",
	I0729 19:10:10.696370 1091282 command_runner.go:130] >       "uid": {
	I0729 19:10:10.696376 1091282 command_runner.go:130] >         "value": "65535"
	I0729 19:10:10.696382 1091282 command_runner.go:130] >       },
	I0729 19:10:10.696388 1091282 command_runner.go:130] >       "username": "",
	I0729 19:10:10.696391 1091282 command_runner.go:130] >       "spec": null,
	I0729 19:10:10.696397 1091282 command_runner.go:130] >       "pinned": true
	I0729 19:10:10.696400 1091282 command_runner.go:130] >     }
	I0729 19:10:10.696404 1091282 command_runner.go:130] >   ]
	I0729 19:10:10.696407 1091282 command_runner.go:130] > }
	I0729 19:10:10.696531 1091282 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 19:10:10.696542 1091282 cache_images.go:84] Images are preloaded, skipping loading
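	(Editor's note, not part of the captured log: the JSON listing above has the shape of the CRI image inventory as returned by crictl's JSON output, which is how the preload check appears to enumerate images on the guest. A minimal sketch for reproducing that inventory by hand, assuming the crictl CLI is present on the node, jq is available on the host, and using the multinode-370772 profile named later in this log:

	    # Query the CRI-O image store on the guest and summarize it (assumptions:
	    # crictl on the node, jq on the host, profile name taken from this log).
	    $ minikube -p multinode-370772 ssh -- sudo crictl images -o json \
	        | jq '.images[] | {repoTags, size, pinned}'

	The fields shown by this sketch (repoTags, size, pinned) are the same ones visible in the dump above.)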
	I0729 19:10:10.696550 1091282 kubeadm.go:934] updating node { 192.168.39.180 8443 v1.30.3 crio true true} ...
	I0729 19:10:10.696665 1091282 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-370772 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.180
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:multinode-370772 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
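	(Editor's note, not part of the captured log: the [Unit]/[Service]/[Install] snippet above is the kubelet systemd drop-in that minikube renders for this node. A rough sketch for inspecting it on the guest, assuming it is written to the conventional kubeadm drop-in path; the exact path is not shown in this log, so verify it on the node before relying on it:

	    # Inspect the rendered kubelet drop-in and pick up changes (path assumed,
	    # not confirmed by this log; profile name taken from the ExecStart line).
	    $ minikube -p multinode-370772 ssh -- \
	        sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	    $ minikube -p multinode-370772 ssh -- \
	        "sudo systemctl daemon-reload && sudo systemctl restart kubelet"
	)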
	I0729 19:10:10.696737 1091282 ssh_runner.go:195] Run: crio config
	I0729 19:10:10.744089 1091282 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0729 19:10:10.744121 1091282 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0729 19:10:10.744131 1091282 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0729 19:10:10.744136 1091282 command_runner.go:130] > #
	I0729 19:10:10.744148 1091282 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0729 19:10:10.744158 1091282 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0729 19:10:10.744167 1091282 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0729 19:10:10.744174 1091282 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0729 19:10:10.744178 1091282 command_runner.go:130] > # reload'.
	I0729 19:10:10.744183 1091282 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0729 19:10:10.744190 1091282 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0729 19:10:10.744196 1091282 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0729 19:10:10.744202 1091282 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0729 19:10:10.744212 1091282 command_runner.go:130] > [crio]
	I0729 19:10:10.744225 1091282 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0729 19:10:10.744236 1091282 command_runner.go:130] > # containers images, in this directory.
	I0729 19:10:10.744245 1091282 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0729 19:10:10.744263 1091282 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0729 19:10:10.744562 1091282 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0729 19:10:10.744580 1091282 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0729 19:10:10.744916 1091282 command_runner.go:130] > # imagestore = ""
	I0729 19:10:10.744937 1091282 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0729 19:10:10.744949 1091282 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0729 19:10:10.745004 1091282 command_runner.go:130] > storage_driver = "overlay"
	I0729 19:10:10.745019 1091282 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0729 19:10:10.745032 1091282 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0729 19:10:10.745040 1091282 command_runner.go:130] > storage_option = [
	I0729 19:10:10.745175 1091282 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0729 19:10:10.745189 1091282 command_runner.go:130] > ]
	I0729 19:10:10.745200 1091282 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0729 19:10:10.745222 1091282 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0729 19:10:10.745415 1091282 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0729 19:10:10.745435 1091282 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0729 19:10:10.745444 1091282 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0729 19:10:10.745456 1091282 command_runner.go:130] > # always happen on a node reboot
	I0729 19:10:10.745711 1091282 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0729 19:10:10.745735 1091282 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0729 19:10:10.745747 1091282 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0729 19:10:10.745755 1091282 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0729 19:10:10.745844 1091282 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0729 19:10:10.745863 1091282 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0729 19:10:10.745877 1091282 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0729 19:10:10.746072 1091282 command_runner.go:130] > # internal_wipe = true
	I0729 19:10:10.746084 1091282 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0729 19:10:10.746090 1091282 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0729 19:10:10.746304 1091282 command_runner.go:130] > # internal_repair = false
	I0729 19:10:10.746318 1091282 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0729 19:10:10.746328 1091282 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0729 19:10:10.746337 1091282 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0729 19:10:10.746517 1091282 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0729 19:10:10.746533 1091282 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0729 19:10:10.746539 1091282 command_runner.go:130] > [crio.api]
	I0729 19:10:10.746547 1091282 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0729 19:10:10.746786 1091282 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0729 19:10:10.746801 1091282 command_runner.go:130] > # IP address on which the stream server will listen.
	I0729 19:10:10.747076 1091282 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0729 19:10:10.747092 1091282 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0729 19:10:10.747100 1091282 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0729 19:10:10.747347 1091282 command_runner.go:130] > # stream_port = "0"
	I0729 19:10:10.747362 1091282 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0729 19:10:10.747571 1091282 command_runner.go:130] > # stream_enable_tls = false
	I0729 19:10:10.747587 1091282 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0729 19:10:10.747886 1091282 command_runner.go:130] > # stream_idle_timeout = ""
	I0729 19:10:10.747902 1091282 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0729 19:10:10.747912 1091282 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0729 19:10:10.747918 1091282 command_runner.go:130] > # minutes.
	I0729 19:10:10.748059 1091282 command_runner.go:130] > # stream_tls_cert = ""
	I0729 19:10:10.748081 1091282 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0729 19:10:10.748092 1091282 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0729 19:10:10.748256 1091282 command_runner.go:130] > # stream_tls_key = ""
	I0729 19:10:10.748266 1091282 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0729 19:10:10.748272 1091282 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0729 19:10:10.748295 1091282 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0729 19:10:10.748470 1091282 command_runner.go:130] > # stream_tls_ca = ""
	I0729 19:10:10.748482 1091282 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0729 19:10:10.748570 1091282 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0729 19:10:10.748582 1091282 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0729 19:10:10.748792 1091282 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0729 19:10:10.748802 1091282 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0729 19:10:10.748807 1091282 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0729 19:10:10.748811 1091282 command_runner.go:130] > [crio.runtime]
	I0729 19:10:10.748818 1091282 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0729 19:10:10.748823 1091282 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0729 19:10:10.748829 1091282 command_runner.go:130] > # "nofile=1024:2048"
	I0729 19:10:10.748835 1091282 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0729 19:10:10.748908 1091282 command_runner.go:130] > # default_ulimits = [
	I0729 19:10:10.749065 1091282 command_runner.go:130] > # ]
	I0729 19:10:10.749086 1091282 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0729 19:10:10.749267 1091282 command_runner.go:130] > # no_pivot = false
	I0729 19:10:10.749278 1091282 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0729 19:10:10.749284 1091282 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0729 19:10:10.749703 1091282 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0729 19:10:10.749713 1091282 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0729 19:10:10.749718 1091282 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0729 19:10:10.749726 1091282 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0729 19:10:10.749732 1091282 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0729 19:10:10.749745 1091282 command_runner.go:130] > # Cgroup setting for conmon
	I0729 19:10:10.749756 1091282 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0729 19:10:10.749763 1091282 command_runner.go:130] > conmon_cgroup = "pod"
	I0729 19:10:10.749769 1091282 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0729 19:10:10.749776 1091282 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0729 19:10:10.749782 1091282 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0729 19:10:10.749789 1091282 command_runner.go:130] > conmon_env = [
	I0729 19:10:10.749794 1091282 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0729 19:10:10.749803 1091282 command_runner.go:130] > ]
	I0729 19:10:10.749810 1091282 command_runner.go:130] > # Additional environment variables to set for all the
	I0729 19:10:10.749821 1091282 command_runner.go:130] > # containers. These are overridden if set in the
	I0729 19:10:10.749833 1091282 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0729 19:10:10.749841 1091282 command_runner.go:130] > # default_env = [
	I0729 19:10:10.749849 1091282 command_runner.go:130] > # ]
	I0729 19:10:10.749859 1091282 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0729 19:10:10.749872 1091282 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I0729 19:10:10.749881 1091282 command_runner.go:130] > # selinux = false
	I0729 19:10:10.749887 1091282 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0729 19:10:10.749899 1091282 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0729 19:10:10.749911 1091282 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0729 19:10:10.749921 1091282 command_runner.go:130] > # seccomp_profile = ""
	I0729 19:10:10.749930 1091282 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0729 19:10:10.749942 1091282 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0729 19:10:10.749952 1091282 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0729 19:10:10.749960 1091282 command_runner.go:130] > # which might increase security.
	I0729 19:10:10.749964 1091282 command_runner.go:130] > # This option is currently deprecated,
	I0729 19:10:10.749972 1091282 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0729 19:10:10.749977 1091282 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0729 19:10:10.749986 1091282 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0729 19:10:10.749999 1091282 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0729 19:10:10.750013 1091282 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0729 19:10:10.750023 1091282 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0729 19:10:10.750034 1091282 command_runner.go:130] > # This option supports live configuration reload.
	I0729 19:10:10.750044 1091282 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0729 19:10:10.750053 1091282 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0729 19:10:10.750063 1091282 command_runner.go:130] > # the cgroup blockio controller.
	I0729 19:10:10.750071 1091282 command_runner.go:130] > # blockio_config_file = ""
	I0729 19:10:10.750083 1091282 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0729 19:10:10.750093 1091282 command_runner.go:130] > # blockio parameters.
	I0729 19:10:10.750101 1091282 command_runner.go:130] > # blockio_reload = false
	I0729 19:10:10.750114 1091282 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0729 19:10:10.750123 1091282 command_runner.go:130] > # irqbalance daemon.
	I0729 19:10:10.750131 1091282 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0729 19:10:10.750143 1091282 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0729 19:10:10.750155 1091282 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0729 19:10:10.750168 1091282 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0729 19:10:10.750188 1091282 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0729 19:10:10.750200 1091282 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0729 19:10:10.750207 1091282 command_runner.go:130] > # This option supports live configuration reload.
	I0729 19:10:10.750215 1091282 command_runner.go:130] > # rdt_config_file = ""
	I0729 19:10:10.750223 1091282 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0729 19:10:10.750233 1091282 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0729 19:10:10.750276 1091282 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0729 19:10:10.750289 1091282 command_runner.go:130] > # separate_pull_cgroup = ""
	I0729 19:10:10.750298 1091282 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0729 19:10:10.750308 1091282 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0729 19:10:10.750317 1091282 command_runner.go:130] > # will be added.
	I0729 19:10:10.750324 1091282 command_runner.go:130] > # default_capabilities = [
	I0729 19:10:10.750333 1091282 command_runner.go:130] > # 	"CHOWN",
	I0729 19:10:10.750340 1091282 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0729 19:10:10.750348 1091282 command_runner.go:130] > # 	"FSETID",
	I0729 19:10:10.750355 1091282 command_runner.go:130] > # 	"FOWNER",
	I0729 19:10:10.750364 1091282 command_runner.go:130] > # 	"SETGID",
	I0729 19:10:10.750370 1091282 command_runner.go:130] > # 	"SETUID",
	I0729 19:10:10.750379 1091282 command_runner.go:130] > # 	"SETPCAP",
	I0729 19:10:10.750386 1091282 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0729 19:10:10.750394 1091282 command_runner.go:130] > # 	"KILL",
	I0729 19:10:10.750398 1091282 command_runner.go:130] > # ]
	I0729 19:10:10.750404 1091282 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0729 19:10:10.750413 1091282 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0729 19:10:10.750418 1091282 command_runner.go:130] > # add_inheritable_capabilities = false
	I0729 19:10:10.750426 1091282 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0729 19:10:10.750432 1091282 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0729 19:10:10.750437 1091282 command_runner.go:130] > default_sysctls = [
	I0729 19:10:10.750443 1091282 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0729 19:10:10.750451 1091282 command_runner.go:130] > ]
	I0729 19:10:10.750458 1091282 command_runner.go:130] > # List of devices on the host that a
	I0729 19:10:10.750471 1091282 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0729 19:10:10.750478 1091282 command_runner.go:130] > # allowed_devices = [
	I0729 19:10:10.750485 1091282 command_runner.go:130] > # 	"/dev/fuse",
	I0729 19:10:10.750490 1091282 command_runner.go:130] > # ]
	I0729 19:10:10.750501 1091282 command_runner.go:130] > # List of additional devices. specified as
	I0729 19:10:10.750515 1091282 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0729 19:10:10.750526 1091282 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0729 19:10:10.750534 1091282 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0729 19:10:10.750543 1091282 command_runner.go:130] > # additional_devices = [
	I0729 19:10:10.750551 1091282 command_runner.go:130] > # ]
	I0729 19:10:10.750561 1091282 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0729 19:10:10.750572 1091282 command_runner.go:130] > # cdi_spec_dirs = [
	I0729 19:10:10.750581 1091282 command_runner.go:130] > # 	"/etc/cdi",
	I0729 19:10:10.750588 1091282 command_runner.go:130] > # 	"/var/run/cdi",
	I0729 19:10:10.750595 1091282 command_runner.go:130] > # ]
	I0729 19:10:10.750615 1091282 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0729 19:10:10.750626 1091282 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0729 19:10:10.750635 1091282 command_runner.go:130] > # Defaults to false.
	I0729 19:10:10.750643 1091282 command_runner.go:130] > # device_ownership_from_security_context = false
	I0729 19:10:10.750656 1091282 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0729 19:10:10.750664 1091282 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0729 19:10:10.750670 1091282 command_runner.go:130] > # hooks_dir = [
	I0729 19:10:10.750680 1091282 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0729 19:10:10.750689 1091282 command_runner.go:130] > # ]
	I0729 19:10:10.750700 1091282 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0729 19:10:10.750713 1091282 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0729 19:10:10.750724 1091282 command_runner.go:130] > # its default mounts from the following two files:
	I0729 19:10:10.750731 1091282 command_runner.go:130] > #
	I0729 19:10:10.750741 1091282 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0729 19:10:10.750753 1091282 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0729 19:10:10.750764 1091282 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0729 19:10:10.750773 1091282 command_runner.go:130] > #
	I0729 19:10:10.750783 1091282 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0729 19:10:10.750796 1091282 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0729 19:10:10.750808 1091282 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0729 19:10:10.750819 1091282 command_runner.go:130] > #      only add mounts it finds in this file.
	I0729 19:10:10.750827 1091282 command_runner.go:130] > #
	I0729 19:10:10.750834 1091282 command_runner.go:130] > # default_mounts_file = ""
	I0729 19:10:10.750856 1091282 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0729 19:10:10.750871 1091282 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0729 19:10:10.750881 1091282 command_runner.go:130] > pids_limit = 1024
	I0729 19:10:10.750891 1091282 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I0729 19:10:10.750903 1091282 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0729 19:10:10.750916 1091282 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0729 19:10:10.750931 1091282 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0729 19:10:10.750940 1091282 command_runner.go:130] > # log_size_max = -1
	I0729 19:10:10.750951 1091282 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0729 19:10:10.750961 1091282 command_runner.go:130] > # log_to_journald = false
	I0729 19:10:10.750970 1091282 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0729 19:10:10.750980 1091282 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0729 19:10:10.750991 1091282 command_runner.go:130] > # Path to directory for container attach sockets.
	I0729 19:10:10.751003 1091282 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0729 19:10:10.751012 1091282 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0729 19:10:10.751021 1091282 command_runner.go:130] > # bind_mount_prefix = ""
	I0729 19:10:10.751029 1091282 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0729 19:10:10.751038 1091282 command_runner.go:130] > # read_only = false
	I0729 19:10:10.751047 1091282 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0729 19:10:10.751062 1091282 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0729 19:10:10.751070 1091282 command_runner.go:130] > # live configuration reload.
	I0729 19:10:10.751079 1091282 command_runner.go:130] > # log_level = "info"
	I0729 19:10:10.751088 1091282 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0729 19:10:10.751098 1091282 command_runner.go:130] > # This option supports live configuration reload.
	I0729 19:10:10.751107 1091282 command_runner.go:130] > # log_filter = ""
	I0729 19:10:10.751122 1091282 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0729 19:10:10.751134 1091282 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0729 19:10:10.751143 1091282 command_runner.go:130] > # separated by comma.
	I0729 19:10:10.751155 1091282 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0729 19:10:10.751164 1091282 command_runner.go:130] > # uid_mappings = ""
	I0729 19:10:10.751170 1091282 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0729 19:10:10.751177 1091282 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0729 19:10:10.751181 1091282 command_runner.go:130] > # separated by comma.
	I0729 19:10:10.751188 1091282 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0729 19:10:10.751194 1091282 command_runner.go:130] > # gid_mappings = ""
	I0729 19:10:10.751199 1091282 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0729 19:10:10.751206 1091282 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0729 19:10:10.751212 1091282 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0729 19:10:10.751222 1091282 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0729 19:10:10.751228 1091282 command_runner.go:130] > # minimum_mappable_uid = -1
	I0729 19:10:10.751237 1091282 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0729 19:10:10.751250 1091282 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0729 19:10:10.751263 1091282 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0729 19:10:10.751274 1091282 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0729 19:10:10.751283 1091282 command_runner.go:130] > # minimum_mappable_gid = -1
	I0729 19:10:10.751293 1091282 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0729 19:10:10.751305 1091282 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0729 19:10:10.751317 1091282 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0729 19:10:10.751330 1091282 command_runner.go:130] > # ctr_stop_timeout = 30
	I0729 19:10:10.751340 1091282 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0729 19:10:10.751353 1091282 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0729 19:10:10.751364 1091282 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0729 19:10:10.751371 1091282 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0729 19:10:10.751380 1091282 command_runner.go:130] > drop_infra_ctr = false
	I0729 19:10:10.751390 1091282 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0729 19:10:10.751402 1091282 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0729 19:10:10.751416 1091282 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0729 19:10:10.751425 1091282 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0729 19:10:10.751436 1091282 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0729 19:10:10.751448 1091282 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0729 19:10:10.751459 1091282 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0729 19:10:10.751467 1091282 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0729 19:10:10.751477 1091282 command_runner.go:130] > # shared_cpuset = ""
	I0729 19:10:10.751486 1091282 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0729 19:10:10.751497 1091282 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0729 19:10:10.751506 1091282 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0729 19:10:10.751517 1091282 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0729 19:10:10.751527 1091282 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0729 19:10:10.751536 1091282 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0729 19:10:10.751548 1091282 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0729 19:10:10.751558 1091282 command_runner.go:130] > # enable_criu_support = false
	I0729 19:10:10.751566 1091282 command_runner.go:130] > # Enable/disable the generation of the container,
	I0729 19:10:10.751577 1091282 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0729 19:10:10.751586 1091282 command_runner.go:130] > # enable_pod_events = false
	I0729 19:10:10.751605 1091282 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0729 19:10:10.751613 1091282 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0729 19:10:10.751619 1091282 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0729 19:10:10.751625 1091282 command_runner.go:130] > # default_runtime = "runc"
	I0729 19:10:10.751630 1091282 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0729 19:10:10.751639 1091282 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0729 19:10:10.751648 1091282 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0729 19:10:10.751655 1091282 command_runner.go:130] > # creation as a file is not desired either.
	I0729 19:10:10.751663 1091282 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0729 19:10:10.751670 1091282 command_runner.go:130] > # the hostname is being managed dynamically.
	I0729 19:10:10.751678 1091282 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0729 19:10:10.751685 1091282 command_runner.go:130] > # ]
	I0729 19:10:10.751697 1091282 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0729 19:10:10.751710 1091282 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0729 19:10:10.751723 1091282 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0729 19:10:10.751732 1091282 command_runner.go:130] > # Each entry in the table should follow the format:
	I0729 19:10:10.751740 1091282 command_runner.go:130] > #
	I0729 19:10:10.751747 1091282 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0729 19:10:10.751758 1091282 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0729 19:10:10.751783 1091282 command_runner.go:130] > # runtime_type = "oci"
	I0729 19:10:10.751796 1091282 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0729 19:10:10.751804 1091282 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0729 19:10:10.751811 1091282 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0729 19:10:10.751818 1091282 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0729 19:10:10.751823 1091282 command_runner.go:130] > # monitor_env = []
	I0729 19:10:10.751831 1091282 command_runner.go:130] > # privileged_without_host_devices = false
	I0729 19:10:10.751839 1091282 command_runner.go:130] > # allowed_annotations = []
	I0729 19:10:10.751847 1091282 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0729 19:10:10.751855 1091282 command_runner.go:130] > # Where:
	I0729 19:10:10.751863 1091282 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0729 19:10:10.751876 1091282 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0729 19:10:10.751887 1091282 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0729 19:10:10.751899 1091282 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0729 19:10:10.751909 1091282 command_runner.go:130] > #   in $PATH.
	I0729 19:10:10.751922 1091282 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0729 19:10:10.751930 1091282 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0729 19:10:10.751944 1091282 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0729 19:10:10.751953 1091282 command_runner.go:130] > #   state.
	I0729 19:10:10.751963 1091282 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0729 19:10:10.751976 1091282 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0729 19:10:10.751988 1091282 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0729 19:10:10.751997 1091282 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0729 19:10:10.752003 1091282 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0729 19:10:10.752015 1091282 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0729 19:10:10.752026 1091282 command_runner.go:130] > #   The currently recognized values are:
	I0729 19:10:10.752037 1091282 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0729 19:10:10.752051 1091282 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0729 19:10:10.752067 1091282 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0729 19:10:10.752078 1091282 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0729 19:10:10.752091 1091282 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0729 19:10:10.752104 1091282 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0729 19:10:10.752117 1091282 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0729 19:10:10.752130 1091282 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0729 19:10:10.752141 1091282 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0729 19:10:10.752152 1091282 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0729 19:10:10.752161 1091282 command_runner.go:130] > #   deprecated option "conmon".
	I0729 19:10:10.752172 1091282 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0729 19:10:10.752182 1091282 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0729 19:10:10.752195 1091282 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0729 19:10:10.752206 1091282 command_runner.go:130] > #   should be moved to the container's cgroup
	I0729 19:10:10.752218 1091282 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the montior.
	I0729 19:10:10.752228 1091282 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0729 19:10:10.752240 1091282 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0729 19:10:10.752254 1091282 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0729 19:10:10.752259 1091282 command_runner.go:130] > #
	I0729 19:10:10.752270 1091282 command_runner.go:130] > # Using the seccomp notifier feature:
	I0729 19:10:10.752279 1091282 command_runner.go:130] > #
	I0729 19:10:10.752289 1091282 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0729 19:10:10.752301 1091282 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0729 19:10:10.752309 1091282 command_runner.go:130] > #
	I0729 19:10:10.752317 1091282 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0729 19:10:10.752336 1091282 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0729 19:10:10.752346 1091282 command_runner.go:130] > #
	I0729 19:10:10.752357 1091282 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0729 19:10:10.752365 1091282 command_runner.go:130] > # feature.
	I0729 19:10:10.752370 1091282 command_runner.go:130] > #
	I0729 19:10:10.752381 1091282 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0729 19:10:10.752391 1091282 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0729 19:10:10.752398 1091282 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0729 19:10:10.752409 1091282 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0729 19:10:10.752420 1091282 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0729 19:10:10.752427 1091282 command_runner.go:130] > #
	I0729 19:10:10.752436 1091282 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0729 19:10:10.752456 1091282 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0729 19:10:10.752463 1091282 command_runner.go:130] > #
	I0729 19:10:10.752473 1091282 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0729 19:10:10.752484 1091282 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0729 19:10:10.752493 1091282 command_runner.go:130] > #
	I0729 19:10:10.752502 1091282 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0729 19:10:10.752515 1091282 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0729 19:10:10.752521 1091282 command_runner.go:130] > # limitation.
	I0729 19:10:10.752532 1091282 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0729 19:10:10.752539 1091282 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0729 19:10:10.752548 1091282 command_runner.go:130] > runtime_type = "oci"
	I0729 19:10:10.752555 1091282 command_runner.go:130] > runtime_root = "/run/runc"
	I0729 19:10:10.752564 1091282 command_runner.go:130] > runtime_config_path = ""
	I0729 19:10:10.752574 1091282 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0729 19:10:10.752580 1091282 command_runner.go:130] > monitor_cgroup = "pod"
	I0729 19:10:10.752585 1091282 command_runner.go:130] > monitor_exec_cgroup = ""
	I0729 19:10:10.752591 1091282 command_runner.go:130] > monitor_env = [
	I0729 19:10:10.752603 1091282 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0729 19:10:10.752612 1091282 command_runner.go:130] > ]
	I0729 19:10:10.752620 1091282 command_runner.go:130] > privileged_without_host_devices = false
	I0729 19:10:10.752631 1091282 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0729 19:10:10.752642 1091282 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0729 19:10:10.752655 1091282 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0729 19:10:10.752669 1091282 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0729 19:10:10.752683 1091282 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0729 19:10:10.752692 1091282 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0729 19:10:10.752709 1091282 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0729 19:10:10.752724 1091282 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0729 19:10:10.752737 1091282 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0729 19:10:10.752748 1091282 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0729 19:10:10.752754 1091282 command_runner.go:130] > # Example:
	I0729 19:10:10.752761 1091282 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0729 19:10:10.752773 1091282 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0729 19:10:10.752781 1091282 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0729 19:10:10.752788 1091282 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0729 19:10:10.752792 1091282 command_runner.go:130] > # cpuset = 0
	I0729 19:10:10.752796 1091282 command_runner.go:130] > # cpushares = "0-1"
	I0729 19:10:10.752801 1091282 command_runner.go:130] > # Where:
	I0729 19:10:10.752811 1091282 command_runner.go:130] > # The workload name is workload-type.
	I0729 19:10:10.752823 1091282 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0729 19:10:10.752832 1091282 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0729 19:10:10.752841 1091282 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0729 19:10:10.752853 1091282 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0729 19:10:10.752862 1091282 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0729 19:10:10.752870 1091282 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0729 19:10:10.752878 1091282 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0729 19:10:10.752882 1091282 command_runner.go:130] > # Default value is set to true
	I0729 19:10:10.752888 1091282 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0729 19:10:10.752896 1091282 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0729 19:10:10.752904 1091282 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0729 19:10:10.752911 1091282 command_runner.go:130] > # Default value is set to 'false'
	I0729 19:10:10.752918 1091282 command_runner.go:130] > # disable_hostport_mapping = false
	I0729 19:10:10.752928 1091282 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0729 19:10:10.752936 1091282 command_runner.go:130] > #
	I0729 19:10:10.752945 1091282 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0729 19:10:10.752957 1091282 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0729 19:10:10.752966 1091282 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0729 19:10:10.752978 1091282 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0729 19:10:10.752990 1091282 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0729 19:10:10.752999 1091282 command_runner.go:130] > [crio.image]
	I0729 19:10:10.753012 1091282 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0729 19:10:10.753024 1091282 command_runner.go:130] > # default_transport = "docker://"
	I0729 19:10:10.753036 1091282 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0729 19:10:10.753049 1091282 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0729 19:10:10.753057 1091282 command_runner.go:130] > # global_auth_file = ""
	I0729 19:10:10.753062 1091282 command_runner.go:130] > # The image used to instantiate infra containers.
	I0729 19:10:10.753072 1091282 command_runner.go:130] > # This option supports live configuration reload.
	I0729 19:10:10.753084 1091282 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.9"
	I0729 19:10:10.753097 1091282 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0729 19:10:10.753109 1091282 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0729 19:10:10.753120 1091282 command_runner.go:130] > # This option supports live configuration reload.
	I0729 19:10:10.753130 1091282 command_runner.go:130] > # pause_image_auth_file = ""
	I0729 19:10:10.753141 1091282 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0729 19:10:10.753151 1091282 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0729 19:10:10.753169 1091282 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0729 19:10:10.753182 1091282 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0729 19:10:10.753192 1091282 command_runner.go:130] > # pause_command = "/pause"
	I0729 19:10:10.753204 1091282 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0729 19:10:10.753217 1091282 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0729 19:10:10.753229 1091282 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0729 19:10:10.753241 1091282 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0729 19:10:10.753249 1091282 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0729 19:10:10.753259 1091282 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0729 19:10:10.753269 1091282 command_runner.go:130] > # pinned_images = [
	I0729 19:10:10.753277 1091282 command_runner.go:130] > # ]
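Note: as a sketch of the three match styles described above (exact, glob with a trailing wildcard, keyword with wildcards on both ends), a hypothetical pinned_images entry could look like this; the image names are placeholders.

	  pinned_images = [
	  	"registry.k8s.io/pause:3.9",        # exact match (must match the entire name)
	  	"registry.k8s.io/kube-apiserver*",  # glob match (wildcard at the end)
	  	"*busybox*",                        # keyword match (wildcards on both ends)
	  ]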
	I0729 19:10:10.753289 1091282 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0729 19:10:10.753302 1091282 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0729 19:10:10.753313 1091282 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0729 19:10:10.753325 1091282 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0729 19:10:10.753333 1091282 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0729 19:10:10.753342 1091282 command_runner.go:130] > # signature_policy = ""
	I0729 19:10:10.753353 1091282 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0729 19:10:10.753367 1091282 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0729 19:10:10.753380 1091282 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0729 19:10:10.753397 1091282 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I0729 19:10:10.753409 1091282 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0729 19:10:10.753420 1091282 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
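Note: as a worked example of the lookup described above, assuming the default signature_policy_dir and a hypothetical pod namespace "team-a":

	  namespace "team-a"  ->  /etc/crio/policies/team-a.json
	  file not present    ->  fall back to signature_policy, else the system-wide /etc/containers/policy.json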
	I0729 19:10:10.753432 1091282 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0729 19:10:10.753443 1091282 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0729 19:10:10.753452 1091282 command_runner.go:130] > # changing them here.
	I0729 19:10:10.753462 1091282 command_runner.go:130] > # insecure_registries = [
	I0729 19:10:10.753467 1091282 command_runner.go:130] > # ]
	I0729 19:10:10.753480 1091282 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0729 19:10:10.753491 1091282 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0729 19:10:10.753501 1091282 command_runner.go:130] > # image_volumes = "mkdir"
	I0729 19:10:10.753511 1091282 command_runner.go:130] > # Temporary directory to use for storing big files
	I0729 19:10:10.753521 1091282 command_runner.go:130] > # big_files_temporary_dir = ""
	I0729 19:10:10.753531 1091282 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of

	I0729 19:10:10.753537 1091282 command_runner.go:130] > # CNI plugins.
	I0729 19:10:10.753542 1091282 command_runner.go:130] > [crio.network]
	I0729 19:10:10.753555 1091282 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0729 19:10:10.753570 1091282 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0729 19:10:10.753580 1091282 command_runner.go:130] > # cni_default_network = ""
	I0729 19:10:10.753590 1091282 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0729 19:10:10.753604 1091282 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0729 19:10:10.753615 1091282 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0729 19:10:10.753625 1091282 command_runner.go:130] > # plugin_dirs = [
	I0729 19:10:10.753631 1091282 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0729 19:10:10.753635 1091282 command_runner.go:130] > # ]
	I0729 19:10:10.753640 1091282 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0729 19:10:10.753649 1091282 command_runner.go:130] > [crio.metrics]
	I0729 19:10:10.753657 1091282 command_runner.go:130] > # Globally enable or disable metrics support.
	I0729 19:10:10.753667 1091282 command_runner.go:130] > enable_metrics = true
	I0729 19:10:10.753674 1091282 command_runner.go:130] > # Specify enabled metrics collectors.
	I0729 19:10:10.753685 1091282 command_runner.go:130] > # Per default all metrics are enabled.
	I0729 19:10:10.753698 1091282 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I0729 19:10:10.753710 1091282 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0729 19:10:10.753722 1091282 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0729 19:10:10.753732 1091282 command_runner.go:130] > # metrics_collectors = [
	I0729 19:10:10.753738 1091282 command_runner.go:130] > # 	"operations",
	I0729 19:10:10.753743 1091282 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0729 19:10:10.753752 1091282 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0729 19:10:10.753764 1091282 command_runner.go:130] > # 	"operations_errors",
	I0729 19:10:10.753774 1091282 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0729 19:10:10.753784 1091282 command_runner.go:130] > # 	"image_pulls_by_name",
	I0729 19:10:10.753794 1091282 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0729 19:10:10.753804 1091282 command_runner.go:130] > # 	"image_pulls_failures",
	I0729 19:10:10.753814 1091282 command_runner.go:130] > # 	"image_pulls_successes",
	I0729 19:10:10.753821 1091282 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0729 19:10:10.753828 1091282 command_runner.go:130] > # 	"image_layer_reuse",
	I0729 19:10:10.753832 1091282 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0729 19:10:10.753841 1091282 command_runner.go:130] > # 	"containers_oom_total",
	I0729 19:10:10.753848 1091282 command_runner.go:130] > # 	"containers_oom",
	I0729 19:10:10.753858 1091282 command_runner.go:130] > # 	"processes_defunct",
	I0729 19:10:10.753864 1091282 command_runner.go:130] > # 	"operations_total",
	I0729 19:10:10.753874 1091282 command_runner.go:130] > # 	"operations_latency_seconds",
	I0729 19:10:10.753882 1091282 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0729 19:10:10.753891 1091282 command_runner.go:130] > # 	"operations_errors_total",
	I0729 19:10:10.753899 1091282 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0729 19:10:10.753908 1091282 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0729 19:10:10.753915 1091282 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0729 19:10:10.753924 1091282 command_runner.go:130] > # 	"image_pulls_success_total",
	I0729 19:10:10.753929 1091282 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0729 19:10:10.753937 1091282 command_runner.go:130] > # 	"containers_oom_count_total",
	I0729 19:10:10.753945 1091282 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0729 19:10:10.753955 1091282 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0729 19:10:10.753963 1091282 command_runner.go:130] > # ]
	I0729 19:10:10.753973 1091282 command_runner.go:130] > # The port on which the metrics server will listen.
	I0729 19:10:10.753982 1091282 command_runner.go:130] > # metrics_port = 9090
	I0729 19:10:10.753992 1091282 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0729 19:10:10.754001 1091282 command_runner.go:130] > # metrics_socket = ""
	I0729 19:10:10.754009 1091282 command_runner.go:130] > # The certificate for the secure metrics server.
	I0729 19:10:10.754021 1091282 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0729 19:10:10.754033 1091282 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0729 19:10:10.754043 1091282 command_runner.go:130] > # certificate on any modification event.
	I0729 19:10:10.754052 1091282 command_runner.go:130] > # metrics_cert = ""
	I0729 19:10:10.754064 1091282 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0729 19:10:10.754075 1091282 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0729 19:10:10.754083 1091282 command_runner.go:130] > # metrics_key = ""
	I0729 19:10:10.754095 1091282 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0729 19:10:10.754104 1091282 command_runner.go:130] > [crio.tracing]
	I0729 19:10:10.754110 1091282 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0729 19:10:10.754116 1091282 command_runner.go:130] > # enable_tracing = false
	I0729 19:10:10.754121 1091282 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0729 19:10:10.754129 1091282 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0729 19:10:10.754135 1091282 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0729 19:10:10.754140 1091282 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
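Note: per the commented defaults above, enabling OpenTelemetry export is a matter of flipping these keys; the endpoint shown is the commented default, and 1000000 samples per million means every span is sampled. A minimal sketch:

	  [crio.tracing]
	  enable_tracing = true
	  tracing_endpoint = "0.0.0.0:4317"             # gRPC trace collector address (default shown above)
	  tracing_sampling_rate_per_million = 1000000   # always sample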
	I0729 19:10:10.754145 1091282 command_runner.go:130] > # CRI-O NRI configuration.
	I0729 19:10:10.754150 1091282 command_runner.go:130] > [crio.nri]
	I0729 19:10:10.754155 1091282 command_runner.go:130] > # Globally enable or disable NRI.
	I0729 19:10:10.754159 1091282 command_runner.go:130] > # enable_nri = false
	I0729 19:10:10.754163 1091282 command_runner.go:130] > # NRI socket to listen on.
	I0729 19:10:10.754167 1091282 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0729 19:10:10.754173 1091282 command_runner.go:130] > # NRI plugin directory to use.
	I0729 19:10:10.754178 1091282 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0729 19:10:10.754183 1091282 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0729 19:10:10.754187 1091282 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0729 19:10:10.754194 1091282 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0729 19:10:10.754199 1091282 command_runner.go:130] > # nri_disable_connections = false
	I0729 19:10:10.754206 1091282 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0729 19:10:10.754210 1091282 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0729 19:10:10.754218 1091282 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0729 19:10:10.754222 1091282 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0729 19:10:10.754230 1091282 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0729 19:10:10.754234 1091282 command_runner.go:130] > [crio.stats]
	I0729 19:10:10.754239 1091282 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0729 19:10:10.754246 1091282 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0729 19:10:10.754250 1091282 command_runner.go:130] > # stats_collection_period = 0
	I0729 19:10:10.754283 1091282 command_runner.go:130] ! time="2024-07-29 19:10:10.712761984Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0729 19:10:10.754299 1091282 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0729 19:10:10.754418 1091282 cni.go:84] Creating CNI manager for ""
	I0729 19:10:10.754426 1091282 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0729 19:10:10.754436 1091282 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 19:10:10.754471 1091282 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.180 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-370772 NodeName:multinode-370772 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.180"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.180 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 19:10:10.754592 1091282 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.180
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-370772"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.180
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.180"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0729 19:10:10.754660 1091282 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0729 19:10:10.765285 1091282 command_runner.go:130] > kubeadm
	I0729 19:10:10.765300 1091282 command_runner.go:130] > kubectl
	I0729 19:10:10.765304 1091282 command_runner.go:130] > kubelet
	I0729 19:10:10.765322 1091282 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 19:10:10.765388 1091282 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 19:10:10.775120 1091282 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0729 19:10:10.792144 1091282 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 19:10:10.808790 1091282 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I0729 19:10:10.825457 1091282 ssh_runner.go:195] Run: grep 192.168.39.180	control-plane.minikube.internal$ /etc/hosts
	I0729 19:10:10.829618 1091282 command_runner.go:130] > 192.168.39.180	control-plane.minikube.internal
	I0729 19:10:10.829698 1091282 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 19:10:10.966511 1091282 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 19:10:10.982251 1091282 certs.go:68] Setting up /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/multinode-370772 for IP: 192.168.39.180
	I0729 19:10:10.982283 1091282 certs.go:194] generating shared ca certs ...
	I0729 19:10:10.982305 1091282 certs.go:226] acquiring lock for ca certs: {Name:mkd1f0b3d7e82ac23e713dd6b75409e103935b02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 19:10:10.982513 1091282 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.key
	I0729 19:10:10.982584 1091282 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/proxy-client-ca.key
	I0729 19:10:10.982604 1091282 certs.go:256] generating profile certs ...
	I0729 19:10:10.982726 1091282 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/multinode-370772/client.key
	I0729 19:10:10.982802 1091282 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/multinode-370772/apiserver.key.86ff478d
	I0729 19:10:10.982840 1091282 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/multinode-370772/proxy-client.key
	I0729 19:10:10.982871 1091282 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0729 19:10:10.982895 1091282 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0729 19:10:10.982913 1091282 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1055011/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0729 19:10:10.982930 1091282 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1055011/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0729 19:10:10.982947 1091282 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/multinode-370772/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0729 19:10:10.982961 1091282 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/multinode-370772/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0729 19:10:10.982990 1091282 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/multinode-370772/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0729 19:10:10.983012 1091282 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/multinode-370772/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0729 19:10:10.983082 1091282 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/1062272.pem (1338 bytes)
	W0729 19:10:10.983114 1091282 certs.go:480] ignoring /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/1062272_empty.pem, impossibly tiny 0 bytes
	I0729 19:10:10.983123 1091282 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 19:10:10.983144 1091282 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem (1082 bytes)
	I0729 19:10:10.983167 1091282 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/cert.pem (1123 bytes)
	I0729 19:10:10.983191 1091282 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/key.pem (1679 bytes)
	I0729 19:10:10.983229 1091282 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/files/etc/ssl/certs/10622722.pem (1708 bytes)
	I0729 19:10:10.983255 1091282 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0729 19:10:10.983268 1091282 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/1062272.pem -> /usr/share/ca-certificates/1062272.pem
	I0729 19:10:10.983283 1091282 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1055011/.minikube/files/etc/ssl/certs/10622722.pem -> /usr/share/ca-certificates/10622722.pem
	I0729 19:10:10.983953 1091282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 19:10:11.007668 1091282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0729 19:10:11.031040 1091282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 19:10:11.053953 1091282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0729 19:10:11.077660 1091282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/multinode-370772/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0729 19:10:11.101599 1091282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/multinode-370772/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0729 19:10:11.124480 1091282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/multinode-370772/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 19:10:11.148926 1091282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/multinode-370772/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0729 19:10:11.173286 1091282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 19:10:11.196133 1091282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/1062272.pem --> /usr/share/ca-certificates/1062272.pem (1338 bytes)
	I0729 19:10:11.219306 1091282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/files/etc/ssl/certs/10622722.pem --> /usr/share/ca-certificates/10622722.pem (1708 bytes)
	I0729 19:10:11.242710 1091282 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 19:10:11.259031 1091282 ssh_runner.go:195] Run: openssl version
	I0729 19:10:11.265245 1091282 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0729 19:10:11.265463 1091282 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 19:10:11.276411 1091282 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 19:10:11.280722 1091282 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jul 29 18:17 /usr/share/ca-certificates/minikubeCA.pem
	I0729 19:10:11.280918 1091282 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 18:17 /usr/share/ca-certificates/minikubeCA.pem
	I0729 19:10:11.280970 1091282 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 19:10:11.286405 1091282 command_runner.go:130] > b5213941
	I0729 19:10:11.286601 1091282 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 19:10:11.295475 1091282 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1062272.pem && ln -fs /usr/share/ca-certificates/1062272.pem /etc/ssl/certs/1062272.pem"
	I0729 19:10:11.305627 1091282 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1062272.pem
	I0729 19:10:11.309810 1091282 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jul 29 18:30 /usr/share/ca-certificates/1062272.pem
	I0729 19:10:11.309865 1091282 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 18:30 /usr/share/ca-certificates/1062272.pem
	I0729 19:10:11.309906 1091282 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1062272.pem
	I0729 19:10:11.315057 1091282 command_runner.go:130] > 51391683
	I0729 19:10:11.315267 1091282 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1062272.pem /etc/ssl/certs/51391683.0"
	I0729 19:10:11.324346 1091282 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10622722.pem && ln -fs /usr/share/ca-certificates/10622722.pem /etc/ssl/certs/10622722.pem"
	I0729 19:10:11.334545 1091282 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10622722.pem
	I0729 19:10:11.338578 1091282 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jul 29 18:30 /usr/share/ca-certificates/10622722.pem
	I0729 19:10:11.338677 1091282 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 18:30 /usr/share/ca-certificates/10622722.pem
	I0729 19:10:11.338714 1091282 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10622722.pem
	I0729 19:10:11.343946 1091282 command_runner.go:130] > 3ec20f2e
	I0729 19:10:11.344122 1091282 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10622722.pem /etc/ssl/certs/3ec20f2e.0"
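Note: the sequence above follows the standard OpenSSL CA-directory layout: the subject hash of each CA certificate is computed with "openssl x509 -hash", and /etc/ssl/certs/<hash>.0 is symlinked to the certificate so OpenSSL-based clients can find it. A minimal manual sketch of the same step for the minikube CA:

	  HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	  sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"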
	I0729 19:10:11.353492 1091282 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 19:10:11.357992 1091282 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 19:10:11.358010 1091282 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0729 19:10:11.358019 1091282 command_runner.go:130] > Device: 253,1	Inode: 1056811     Links: 1
	I0729 19:10:11.358029 1091282 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0729 19:10:11.358038 1091282 command_runner.go:130] > Access: 2024-07-29 19:03:20.928284301 +0000
	I0729 19:10:11.358047 1091282 command_runner.go:130] > Modify: 2024-07-29 19:03:20.928284301 +0000
	I0729 19:10:11.358060 1091282 command_runner.go:130] > Change: 2024-07-29 19:03:20.928284301 +0000
	I0729 19:10:11.358068 1091282 command_runner.go:130] >  Birth: 2024-07-29 19:03:20.928284301 +0000
	I0729 19:10:11.358116 1091282 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 19:10:11.364132 1091282 command_runner.go:130] > Certificate will not expire
	I0729 19:10:11.364258 1091282 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 19:10:11.370257 1091282 command_runner.go:130] > Certificate will not expire
	I0729 19:10:11.370322 1091282 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 19:10:11.376088 1091282 command_runner.go:130] > Certificate will not expire
	I0729 19:10:11.376319 1091282 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 19:10:11.381994 1091282 command_runner.go:130] > Certificate will not expire
	I0729 19:10:11.382191 1091282 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 19:10:11.387878 1091282 command_runner.go:130] > Certificate will not expire
	I0729 19:10:11.388069 1091282 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0729 19:10:11.393955 1091282 command_runner.go:130] > Certificate will not expire
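Note: each check above relies on openssl's -checkend flag, which prints "Certificate will not expire" and exits 0 only if the certificate stays valid for at least the given number of seconds (86400 s = 24 h). For example, to repeat the kubelet client certificate check by hand:

	  # exit status 0 => the certificate is valid for at least the next 24 hours
	  openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400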
	I0729 19:10:11.394021 1091282 kubeadm.go:392] StartCluster: {Name:multinode-370772 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.
3 ClusterName:multinode-370772 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.180 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.127 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.8 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false i
nspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOp
timizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 19:10:11.394131 1091282 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 19:10:11.394177 1091282 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 19:10:11.439112 1091282 command_runner.go:130] > 4babe0c565be1e6fa8480f9fc9753ee74bc23f85a683300df83c8ece2f828073
	I0729 19:10:11.439188 1091282 command_runner.go:130] > 407c792dec1054059ff64a06121e558e84cc492420f3c66d9e8a80fa848020ae
	I0729 19:10:11.439200 1091282 command_runner.go:130] > b166de409b40287b11b80f1b14461bce3d61644be1a725239c9617ce590910a6
	I0729 19:10:11.439301 1091282 command_runner.go:130] > 4944b9573bde78fe5eaf9e5ec0ad98fced5a293c191668dd51995256cd8d3582
	I0729 19:10:11.439318 1091282 command_runner.go:130] > 96d2ade2d0aaa7df83af0a2a9958d310822e925fee9165d886c937def865afce
	I0729 19:10:11.439324 1091282 command_runner.go:130] > 32d758bd7641c5a70e44d51b46ecbefa08d470dcf62f4faf7df1c4e156e2c43a
	I0729 19:10:11.439340 1091282 command_runner.go:130] > 30a85e5c7caf06d03ee41c455bc520b59f5bd6c3c80de77cf2bacb8b5abacde3
	I0729 19:10:11.439388 1091282 command_runner.go:130] > 27450337c36d2565080914b6b1c2595886eedb69670bf62c8b53a4389b6fc2d8
	I0729 19:10:11.441034 1091282 cri.go:89] found id: "4babe0c565be1e6fa8480f9fc9753ee74bc23f85a683300df83c8ece2f828073"
	I0729 19:10:11.441051 1091282 cri.go:89] found id: "407c792dec1054059ff64a06121e558e84cc492420f3c66d9e8a80fa848020ae"
	I0729 19:10:11.441055 1091282 cri.go:89] found id: "b166de409b40287b11b80f1b14461bce3d61644be1a725239c9617ce590910a6"
	I0729 19:10:11.441058 1091282 cri.go:89] found id: "4944b9573bde78fe5eaf9e5ec0ad98fced5a293c191668dd51995256cd8d3582"
	I0729 19:10:11.441060 1091282 cri.go:89] found id: "96d2ade2d0aaa7df83af0a2a9958d310822e925fee9165d886c937def865afce"
	I0729 19:10:11.441063 1091282 cri.go:89] found id: "32d758bd7641c5a70e44d51b46ecbefa08d470dcf62f4faf7df1c4e156e2c43a"
	I0729 19:10:11.441066 1091282 cri.go:89] found id: "30a85e5c7caf06d03ee41c455bc520b59f5bd6c3c80de77cf2bacb8b5abacde3"
	I0729 19:10:11.441068 1091282 cri.go:89] found id: "27450337c36d2565080914b6b1c2595886eedb69670bf62c8b53a4389b6fc2d8"
	I0729 19:10:11.441071 1091282 cri.go:89] found id: ""
	I0729 19:10:11.441110 1091282 ssh_runner.go:195] Run: sudo runc list -f json
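Note: the IDs above come from crictl's label filter (see the command a few lines up). To turn an ID back into a readable view on the node, crictl's non-quiet listing and inspect commands can be used; the ID below is simply the first one found above.

	  sudo crictl ps -a --label io.kubernetes.pod.namespace=kube-system
	  sudo crictl inspect 4babe0c565be1e6fa8480f9fc9753ee74bc23f85a683300df83c8ece2f828073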
	
	
	==> CRI-O <==
	Jul 29 19:11:55 multinode-370772 crio[2866]: time="2024-07-29 19:11:55.288668972Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722280315288646695,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143052,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=241bfca5-adbf-422c-8a54-07f1872633f4 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 19:11:55 multinode-370772 crio[2866]: time="2024-07-29 19:11:55.289222647Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=59df50b0-a8bb-441e-86b0-0fde7857af34 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 19:11:55 multinode-370772 crio[2866]: time="2024-07-29 19:11:55.289343955Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=59df50b0-a8bb-441e-86b0-0fde7857af34 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 19:11:55 multinode-370772 crio[2866]: time="2024-07-29 19:11:55.289665302Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9fabe309783f98159fe460daff1940bf9e0f3b977a15561611591722b05bc2ed,PodSandboxId:9cf42e004106e89439e4d7d5beb73e4b282f51b9fcce0ce03d7dedc29f348459,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722280252315130140,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-6l2ht,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 35fbaee9-23c6-47ce-9b54-e6e523cda069,},Annotations:map[string]string{io.kubernetes.container.hash: c3472d00,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5db30bce927d2bf819cbd670b88f4d1dff2155a3359b62800f301889e856470,PodSandboxId:82e697a6c2c5ac4a6f63d91a24d595605b2dd5152e5a99f54d387a08e287b995,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722280218808988025,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nz959,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd1040ab-4ee3-42dc-8a86-9ecd40578a48,},Annotations:map[string]string{io.kubernetes.container.hash: cf6509a5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:268098a6ccd0d51de2ae99f3fd4d621ce76d79d03d18bfdffaea2ab59357fc08,PodSandboxId:a8280acb8804e60121a33fb95b84725d4fbdac1c0fb469003e6933ebcfed6d5a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722280218699161220,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-h6x45,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a210787-5503-4b35-899f-53cc1
5e43d4b,},Annotations:map[string]string{io.kubernetes.container.hash: 689d770d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32e4808743b1dbd780808545426c81bd19bae1c6b2a7e9b3839323b61e599e6f,PodSandboxId:0cd46ada9baca68a96c79ecaad4c017a438d14da23812b62b321e89995f8fcd4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722280218579814263,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de00f063-7d28-45e2-aa3a-39b8e8084dc8,},An
notations:map[string]string{io.kubernetes.container.hash: 83ae6e6f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da78244f3f52e1f5b4d3af779690e2fabc289355b16c6706defffcd97313591b,PodSandboxId:796e1654a90c85809a975bda393e42a89ded74847a6b76bb8b79b43c40b68f17,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722280218496003752,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zzfbl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98b96d50-7bc4-4e38-a093-ee0d26a7db01,},Annotations:map[string]string{io.ku
bernetes.container.hash: b7134565,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:765c9a17f9f9c845e7c67699aa93befeed4366aca86cf47f51dd6931cda3fb33,PodSandboxId:154a76a952d577fe1f9811e848861bc4366f192bb4a218b159eb75af65cf470d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722280213761219611,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-370772,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61352aa6e536e34fe2ff2b41c58d94cf,},Annotations:map[string]string{io.kubernetes.container.hash: 71745cd,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea775c68cb9d2023cc148bf66a598c5e8c29175277e5d0c301bf3e038e4c2d65,PodSandboxId:55dd1b5e3f405b26fe47b416e8a972b662a5aa2c8b85e90af8fb72d16b9d6ce4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722280213733897642,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-370772,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df8c45d744808cffd58039a2da77666e,},Annotations:map[string]string{io.kubernetes.container.hash: d6fd6d3b,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e12bd9484ce29cfcde0e840b7dc6523157dc92fffa04df50346d0608ab8faaf5,PodSandboxId:972bd1bfab27db0cfd2aa196f6f95e490f9b921577492b4af92e01a47ce6e23d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722280213757979202,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-370772,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7d52bca566a2be556cde5910d0fc25c,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1
,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f59a71174cf945be2c931a9e645a3105ce1b2581f75dd8a830877c6ac5037a18,PodSandboxId:8aadf1c9db855105a6530f15003180702f95dcbab503afb9095377c2029466b1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722280213670203398,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-370772,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e3e8e3fb96e74f7443136a2dbdb1f0e,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:034df8dc5d4e74d0ed03817490f2186d5aa22aee5923c82cbb0cf221ee25cdec,PodSandboxId:a1a136d7b4b8b7c06deb1a0fa6aeadaa909aa3ab0b02400ccb603cd81c600632,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722279888451509637,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-6l2ht,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 35fbaee9-23c6-47ce-9b54-e6e523cda069,},Annotations:map[string]string{io.kubernetes.container.hash: c3472d00,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4babe0c565be1e6fa8480f9fc9753ee74bc23f85a683300df83c8ece2f828073,PodSandboxId:09d1493d303473b6fcd525b0df2c9efd27c99887f00eb574643ac4cec2bcab57,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722279838726166902,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de00f063-7d28-45e2-aa3a-39b8e8084dc8,},Annotations:map[string]string{io.kubernetes.container.hash: 83ae6e6f,io.kubernetes.container.restartCount: 0,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:407c792dec1054059ff64a06121e558e84cc492420f3c66d9e8a80fa848020ae,PodSandboxId:3f895614c86f93e95757a412a84f83f46d817c14c153e238f8f2cae1471bd057,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722279838720414652,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nz959,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd1040ab-4ee3-42dc-8a86-9ecd40578a48,},Annotations:map[string]string{io.kubernetes.container.hash: cf6509a5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"
protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b166de409b40287b11b80f1b14461bce3d61644be1a725239c9617ce590910a6,PodSandboxId:9b7ed6edaf967d1da623dc9ac4cbcffd316e512d002cf38759fba27766280708,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722279826711913000,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-h6x45,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: 4a210787-5503-4b35-899f-53cc15e43d4b,},Annotations:map[string]string{io.kubernetes.container.hash: 689d770d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4944b9573bde78fe5eaf9e5ec0ad98fced5a293c191668dd51995256cd8d3582,PodSandboxId:98dd4aa9e2c91423ca93c27227ef25e88226ac8e8426a56f7da5f3d117ce6419,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722279824599647318,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zzfbl,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 98b96d50-7bc4-4e38-a093-ee0d26a7db01,},Annotations:map[string]string{io.kubernetes.container.hash: b7134565,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96d2ade2d0aaa7df83af0a2a9958d310822e925fee9165d886c937def865afce,PodSandboxId:622e655ecebe1b9e3235d977f3e384dc3da5a89a0992f8a148978aa3fc3084cf,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722279804776306303,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-370772,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61352aa6e536e34fe2ff2b41c58d94c
f,},Annotations:map[string]string{io.kubernetes.container.hash: 71745cd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32d758bd7641c5a70e44d51b46ecbefa08d470dcf62f4faf7df1c4e156e2c43a,PodSandboxId:55d1c89e7e7475a0597776e0059eac7854e219845d98585a5aefcaebed0033dd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722279804751166718,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-370772,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e3e8e3fb96e74f74431
36a2dbdb1f0e,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30a85e5c7caf06d03ee41c455bc520b59f5bd6c3c80de77cf2bacb8b5abacde3,PodSandboxId:a96b2f02f21bc44eb5c2d491a6a282426237c067787d5169367f1458b0afab45,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722279804717976373,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-370772,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7d52bca566a2be556cde5910d0fc25c,},
Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27450337c36d2565080914b6b1c2595886eedb69670bf62c8b53a4389b6fc2d8,PodSandboxId:f57101c3bec9b82d3d13b707350538b88589bd69a5d2175541dc98d4a61a07d4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722279804698597191,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-370772,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df8c45d744808cffd58039a2da77666e,},Annotations:map
[string]string{io.kubernetes.container.hash: d6fd6d3b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=59df50b0-a8bb-441e-86b0-0fde7857af34 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 19:11:55 multinode-370772 crio[2866]: time="2024-07-29 19:11:55.330132756Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=fd71d8e0-712d-4e70-8e18-0616b337a4e2 name=/runtime.v1.RuntimeService/Version
	Jul 29 19:11:55 multinode-370772 crio[2866]: time="2024-07-29 19:11:55.330216504Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=fd71d8e0-712d-4e70-8e18-0616b337a4e2 name=/runtime.v1.RuntimeService/Version
	Jul 29 19:11:55 multinode-370772 crio[2866]: time="2024-07-29 19:11:55.331612673Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0fb89bd3-bca9-4316-b105-3620fb7220dc name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 19:11:55 multinode-370772 crio[2866]: time="2024-07-29 19:11:55.332162307Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722280315332139097,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143052,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0fb89bd3-bca9-4316-b105-3620fb7220dc name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 19:11:55 multinode-370772 crio[2866]: time="2024-07-29 19:11:55.332762353Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7f824081-b257-4fc7-a083-de1d5b2581d8 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 19:11:55 multinode-370772 crio[2866]: time="2024-07-29 19:11:55.332818653Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7f824081-b257-4fc7-a083-de1d5b2581d8 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 19:11:55 multinode-370772 crio[2866]: time="2024-07-29 19:11:55.333213025Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9fabe309783f98159fe460daff1940bf9e0f3b977a15561611591722b05bc2ed,PodSandboxId:9cf42e004106e89439e4d7d5beb73e4b282f51b9fcce0ce03d7dedc29f348459,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722280252315130140,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-6l2ht,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 35fbaee9-23c6-47ce-9b54-e6e523cda069,},Annotations:map[string]string{io.kubernetes.container.hash: c3472d00,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5db30bce927d2bf819cbd670b88f4d1dff2155a3359b62800f301889e856470,PodSandboxId:82e697a6c2c5ac4a6f63d91a24d595605b2dd5152e5a99f54d387a08e287b995,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722280218808988025,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nz959,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd1040ab-4ee3-42dc-8a86-9ecd40578a48,},Annotations:map[string]string{io.kubernetes.container.hash: cf6509a5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:268098a6ccd0d51de2ae99f3fd4d621ce76d79d03d18bfdffaea2ab59357fc08,PodSandboxId:a8280acb8804e60121a33fb95b84725d4fbdac1c0fb469003e6933ebcfed6d5a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722280218699161220,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-h6x45,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a210787-5503-4b35-899f-53cc1
5e43d4b,},Annotations:map[string]string{io.kubernetes.container.hash: 689d770d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32e4808743b1dbd780808545426c81bd19bae1c6b2a7e9b3839323b61e599e6f,PodSandboxId:0cd46ada9baca68a96c79ecaad4c017a438d14da23812b62b321e89995f8fcd4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722280218579814263,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de00f063-7d28-45e2-aa3a-39b8e8084dc8,},An
notations:map[string]string{io.kubernetes.container.hash: 83ae6e6f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da78244f3f52e1f5b4d3af779690e2fabc289355b16c6706defffcd97313591b,PodSandboxId:796e1654a90c85809a975bda393e42a89ded74847a6b76bb8b79b43c40b68f17,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722280218496003752,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zzfbl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98b96d50-7bc4-4e38-a093-ee0d26a7db01,},Annotations:map[string]string{io.ku
bernetes.container.hash: b7134565,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:765c9a17f9f9c845e7c67699aa93befeed4366aca86cf47f51dd6931cda3fb33,PodSandboxId:154a76a952d577fe1f9811e848861bc4366f192bb4a218b159eb75af65cf470d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722280213761219611,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-370772,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61352aa6e536e34fe2ff2b41c58d94cf,},Annotations:map[string]string{io.kubernetes.container.hash: 71745cd,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea775c68cb9d2023cc148bf66a598c5e8c29175277e5d0c301bf3e038e4c2d65,PodSandboxId:55dd1b5e3f405b26fe47b416e8a972b662a5aa2c8b85e90af8fb72d16b9d6ce4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722280213733897642,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-370772,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df8c45d744808cffd58039a2da77666e,},Annotations:map[string]string{io.kubernetes.container.hash: d6fd6d3b,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e12bd9484ce29cfcde0e840b7dc6523157dc92fffa04df50346d0608ab8faaf5,PodSandboxId:972bd1bfab27db0cfd2aa196f6f95e490f9b921577492b4af92e01a47ce6e23d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722280213757979202,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-370772,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7d52bca566a2be556cde5910d0fc25c,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1
,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f59a71174cf945be2c931a9e645a3105ce1b2581f75dd8a830877c6ac5037a18,PodSandboxId:8aadf1c9db855105a6530f15003180702f95dcbab503afb9095377c2029466b1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722280213670203398,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-370772,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e3e8e3fb96e74f7443136a2dbdb1f0e,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:034df8dc5d4e74d0ed03817490f2186d5aa22aee5923c82cbb0cf221ee25cdec,PodSandboxId:a1a136d7b4b8b7c06deb1a0fa6aeadaa909aa3ab0b02400ccb603cd81c600632,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722279888451509637,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-6l2ht,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 35fbaee9-23c6-47ce-9b54-e6e523cda069,},Annotations:map[string]string{io.kubernetes.container.hash: c3472d00,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4babe0c565be1e6fa8480f9fc9753ee74bc23f85a683300df83c8ece2f828073,PodSandboxId:09d1493d303473b6fcd525b0df2c9efd27c99887f00eb574643ac4cec2bcab57,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722279838726166902,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de00f063-7d28-45e2-aa3a-39b8e8084dc8,},Annotations:map[string]string{io.kubernetes.container.hash: 83ae6e6f,io.kubernetes.container.restartCount: 0,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:407c792dec1054059ff64a06121e558e84cc492420f3c66d9e8a80fa848020ae,PodSandboxId:3f895614c86f93e95757a412a84f83f46d817c14c153e238f8f2cae1471bd057,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722279838720414652,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nz959,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd1040ab-4ee3-42dc-8a86-9ecd40578a48,},Annotations:map[string]string{io.kubernetes.container.hash: cf6509a5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"
protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b166de409b40287b11b80f1b14461bce3d61644be1a725239c9617ce590910a6,PodSandboxId:9b7ed6edaf967d1da623dc9ac4cbcffd316e512d002cf38759fba27766280708,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722279826711913000,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-h6x45,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: 4a210787-5503-4b35-899f-53cc15e43d4b,},Annotations:map[string]string{io.kubernetes.container.hash: 689d770d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4944b9573bde78fe5eaf9e5ec0ad98fced5a293c191668dd51995256cd8d3582,PodSandboxId:98dd4aa9e2c91423ca93c27227ef25e88226ac8e8426a56f7da5f3d117ce6419,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722279824599647318,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zzfbl,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 98b96d50-7bc4-4e38-a093-ee0d26a7db01,},Annotations:map[string]string{io.kubernetes.container.hash: b7134565,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96d2ade2d0aaa7df83af0a2a9958d310822e925fee9165d886c937def865afce,PodSandboxId:622e655ecebe1b9e3235d977f3e384dc3da5a89a0992f8a148978aa3fc3084cf,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722279804776306303,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-370772,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61352aa6e536e34fe2ff2b41c58d94c
f,},Annotations:map[string]string{io.kubernetes.container.hash: 71745cd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32d758bd7641c5a70e44d51b46ecbefa08d470dcf62f4faf7df1c4e156e2c43a,PodSandboxId:55d1c89e7e7475a0597776e0059eac7854e219845d98585a5aefcaebed0033dd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722279804751166718,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-370772,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e3e8e3fb96e74f74431
36a2dbdb1f0e,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30a85e5c7caf06d03ee41c455bc520b59f5bd6c3c80de77cf2bacb8b5abacde3,PodSandboxId:a96b2f02f21bc44eb5c2d491a6a282426237c067787d5169367f1458b0afab45,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722279804717976373,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-370772,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7d52bca566a2be556cde5910d0fc25c,},
Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27450337c36d2565080914b6b1c2595886eedb69670bf62c8b53a4389b6fc2d8,PodSandboxId:f57101c3bec9b82d3d13b707350538b88589bd69a5d2175541dc98d4a61a07d4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722279804698597191,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-370772,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df8c45d744808cffd58039a2da77666e,},Annotations:map
[string]string{io.kubernetes.container.hash: d6fd6d3b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7f824081-b257-4fc7-a083-de1d5b2581d8 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 19:11:55 multinode-370772 crio[2866]: time="2024-07-29 19:11:55.371748569Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1e1ff4ff-61c3-4f2a-98b7-70d38a49c9c4 name=/runtime.v1.RuntimeService/Version
	Jul 29 19:11:55 multinode-370772 crio[2866]: time="2024-07-29 19:11:55.371830634Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1e1ff4ff-61c3-4f2a-98b7-70d38a49c9c4 name=/runtime.v1.RuntimeService/Version
	Jul 29 19:11:55 multinode-370772 crio[2866]: time="2024-07-29 19:11:55.373007943Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=70dc4d72-a73d-43f6-aa04-b7998f6e0d71 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 19:11:55 multinode-370772 crio[2866]: time="2024-07-29 19:11:55.373563359Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722280315373538484,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143052,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=70dc4d72-a73d-43f6-aa04-b7998f6e0d71 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 19:11:55 multinode-370772 crio[2866]: time="2024-07-29 19:11:55.374050936Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8300a0c1-a6d3-4e6d-84a5-85faba74ed3b name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 19:11:55 multinode-370772 crio[2866]: time="2024-07-29 19:11:55.374102946Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8300a0c1-a6d3-4e6d-84a5-85faba74ed3b name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 19:11:55 multinode-370772 crio[2866]: time="2024-07-29 19:11:55.374519738Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9fabe309783f98159fe460daff1940bf9e0f3b977a15561611591722b05bc2ed,PodSandboxId:9cf42e004106e89439e4d7d5beb73e4b282f51b9fcce0ce03d7dedc29f348459,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722280252315130140,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-6l2ht,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 35fbaee9-23c6-47ce-9b54-e6e523cda069,},Annotations:map[string]string{io.kubernetes.container.hash: c3472d00,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5db30bce927d2bf819cbd670b88f4d1dff2155a3359b62800f301889e856470,PodSandboxId:82e697a6c2c5ac4a6f63d91a24d595605b2dd5152e5a99f54d387a08e287b995,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722280218808988025,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nz959,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd1040ab-4ee3-42dc-8a86-9ecd40578a48,},Annotations:map[string]string{io.kubernetes.container.hash: cf6509a5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:268098a6ccd0d51de2ae99f3fd4d621ce76d79d03d18bfdffaea2ab59357fc08,PodSandboxId:a8280acb8804e60121a33fb95b84725d4fbdac1c0fb469003e6933ebcfed6d5a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722280218699161220,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-h6x45,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a210787-5503-4b35-899f-53cc1
5e43d4b,},Annotations:map[string]string{io.kubernetes.container.hash: 689d770d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32e4808743b1dbd780808545426c81bd19bae1c6b2a7e9b3839323b61e599e6f,PodSandboxId:0cd46ada9baca68a96c79ecaad4c017a438d14da23812b62b321e89995f8fcd4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722280218579814263,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de00f063-7d28-45e2-aa3a-39b8e8084dc8,},An
notations:map[string]string{io.kubernetes.container.hash: 83ae6e6f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da78244f3f52e1f5b4d3af779690e2fabc289355b16c6706defffcd97313591b,PodSandboxId:796e1654a90c85809a975bda393e42a89ded74847a6b76bb8b79b43c40b68f17,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722280218496003752,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zzfbl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98b96d50-7bc4-4e38-a093-ee0d26a7db01,},Annotations:map[string]string{io.ku
bernetes.container.hash: b7134565,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:765c9a17f9f9c845e7c67699aa93befeed4366aca86cf47f51dd6931cda3fb33,PodSandboxId:154a76a952d577fe1f9811e848861bc4366f192bb4a218b159eb75af65cf470d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722280213761219611,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-370772,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61352aa6e536e34fe2ff2b41c58d94cf,},Annotations:map[string]string{io.kubernetes.container.hash: 71745cd,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea775c68cb9d2023cc148bf66a598c5e8c29175277e5d0c301bf3e038e4c2d65,PodSandboxId:55dd1b5e3f405b26fe47b416e8a972b662a5aa2c8b85e90af8fb72d16b9d6ce4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722280213733897642,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-370772,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df8c45d744808cffd58039a2da77666e,},Annotations:map[string]string{io.kubernetes.container.hash: d6fd6d3b,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e12bd9484ce29cfcde0e840b7dc6523157dc92fffa04df50346d0608ab8faaf5,PodSandboxId:972bd1bfab27db0cfd2aa196f6f95e490f9b921577492b4af92e01a47ce6e23d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722280213757979202,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-370772,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7d52bca566a2be556cde5910d0fc25c,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1
,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f59a71174cf945be2c931a9e645a3105ce1b2581f75dd8a830877c6ac5037a18,PodSandboxId:8aadf1c9db855105a6530f15003180702f95dcbab503afb9095377c2029466b1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722280213670203398,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-370772,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e3e8e3fb96e74f7443136a2dbdb1f0e,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:034df8dc5d4e74d0ed03817490f2186d5aa22aee5923c82cbb0cf221ee25cdec,PodSandboxId:a1a136d7b4b8b7c06deb1a0fa6aeadaa909aa3ab0b02400ccb603cd81c600632,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722279888451509637,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-6l2ht,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 35fbaee9-23c6-47ce-9b54-e6e523cda069,},Annotations:map[string]string{io.kubernetes.container.hash: c3472d00,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4babe0c565be1e6fa8480f9fc9753ee74bc23f85a683300df83c8ece2f828073,PodSandboxId:09d1493d303473b6fcd525b0df2c9efd27c99887f00eb574643ac4cec2bcab57,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722279838726166902,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de00f063-7d28-45e2-aa3a-39b8e8084dc8,},Annotations:map[string]string{io.kubernetes.container.hash: 83ae6e6f,io.kubernetes.container.restartCount: 0,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:407c792dec1054059ff64a06121e558e84cc492420f3c66d9e8a80fa848020ae,PodSandboxId:3f895614c86f93e95757a412a84f83f46d817c14c153e238f8f2cae1471bd057,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722279838720414652,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nz959,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd1040ab-4ee3-42dc-8a86-9ecd40578a48,},Annotations:map[string]string{io.kubernetes.container.hash: cf6509a5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"
protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b166de409b40287b11b80f1b14461bce3d61644be1a725239c9617ce590910a6,PodSandboxId:9b7ed6edaf967d1da623dc9ac4cbcffd316e512d002cf38759fba27766280708,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722279826711913000,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-h6x45,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: 4a210787-5503-4b35-899f-53cc15e43d4b,},Annotations:map[string]string{io.kubernetes.container.hash: 689d770d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4944b9573bde78fe5eaf9e5ec0ad98fced5a293c191668dd51995256cd8d3582,PodSandboxId:98dd4aa9e2c91423ca93c27227ef25e88226ac8e8426a56f7da5f3d117ce6419,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722279824599647318,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zzfbl,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 98b96d50-7bc4-4e38-a093-ee0d26a7db01,},Annotations:map[string]string{io.kubernetes.container.hash: b7134565,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96d2ade2d0aaa7df83af0a2a9958d310822e925fee9165d886c937def865afce,PodSandboxId:622e655ecebe1b9e3235d977f3e384dc3da5a89a0992f8a148978aa3fc3084cf,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722279804776306303,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-370772,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61352aa6e536e34fe2ff2b41c58d94c
f,},Annotations:map[string]string{io.kubernetes.container.hash: 71745cd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32d758bd7641c5a70e44d51b46ecbefa08d470dcf62f4faf7df1c4e156e2c43a,PodSandboxId:55d1c89e7e7475a0597776e0059eac7854e219845d98585a5aefcaebed0033dd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722279804751166718,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-370772,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e3e8e3fb96e74f74431
36a2dbdb1f0e,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30a85e5c7caf06d03ee41c455bc520b59f5bd6c3c80de77cf2bacb8b5abacde3,PodSandboxId:a96b2f02f21bc44eb5c2d491a6a282426237c067787d5169367f1458b0afab45,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722279804717976373,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-370772,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7d52bca566a2be556cde5910d0fc25c,},
Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27450337c36d2565080914b6b1c2595886eedb69670bf62c8b53a4389b6fc2d8,PodSandboxId:f57101c3bec9b82d3d13b707350538b88589bd69a5d2175541dc98d4a61a07d4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722279804698597191,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-370772,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df8c45d744808cffd58039a2da77666e,},Annotations:map
[string]string{io.kubernetes.container.hash: d6fd6d3b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8300a0c1-a6d3-4e6d-84a5-85faba74ed3b name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 19:11:55 multinode-370772 crio[2866]: time="2024-07-29 19:11:55.414279898Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1e2c7667-275c-4542-b843-67bdef20c117 name=/runtime.v1.RuntimeService/Version
	Jul 29 19:11:55 multinode-370772 crio[2866]: time="2024-07-29 19:11:55.414388620Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1e2c7667-275c-4542-b843-67bdef20c117 name=/runtime.v1.RuntimeService/Version
	Jul 29 19:11:55 multinode-370772 crio[2866]: time="2024-07-29 19:11:55.415616520Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=37dfbb91-416a-4131-9ae0-643725f08bea name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 19:11:55 multinode-370772 crio[2866]: time="2024-07-29 19:11:55.416012252Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722280315415991334,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143052,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=37dfbb91-416a-4131-9ae0-643725f08bea name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 19:11:55 multinode-370772 crio[2866]: time="2024-07-29 19:11:55.416648639Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4e8a0a69-be29-457f-9235-288e36116b68 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 19:11:55 multinode-370772 crio[2866]: time="2024-07-29 19:11:55.416702624Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4e8a0a69-be29-457f-9235-288e36116b68 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 19:11:55 multinode-370772 crio[2866]: time="2024-07-29 19:11:55.417077353Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9fabe309783f98159fe460daff1940bf9e0f3b977a15561611591722b05bc2ed,PodSandboxId:9cf42e004106e89439e4d7d5beb73e4b282f51b9fcce0ce03d7dedc29f348459,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722280252315130140,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-6l2ht,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 35fbaee9-23c6-47ce-9b54-e6e523cda069,},Annotations:map[string]string{io.kubernetes.container.hash: c3472d00,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5db30bce927d2bf819cbd670b88f4d1dff2155a3359b62800f301889e856470,PodSandboxId:82e697a6c2c5ac4a6f63d91a24d595605b2dd5152e5a99f54d387a08e287b995,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722280218808988025,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nz959,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd1040ab-4ee3-42dc-8a86-9ecd40578a48,},Annotations:map[string]string{io.kubernetes.container.hash: cf6509a5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:268098a6ccd0d51de2ae99f3fd4d621ce76d79d03d18bfdffaea2ab59357fc08,PodSandboxId:a8280acb8804e60121a33fb95b84725d4fbdac1c0fb469003e6933ebcfed6d5a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722280218699161220,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-h6x45,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a210787-5503-4b35-899f-53cc1
5e43d4b,},Annotations:map[string]string{io.kubernetes.container.hash: 689d770d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32e4808743b1dbd780808545426c81bd19bae1c6b2a7e9b3839323b61e599e6f,PodSandboxId:0cd46ada9baca68a96c79ecaad4c017a438d14da23812b62b321e89995f8fcd4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722280218579814263,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de00f063-7d28-45e2-aa3a-39b8e8084dc8,},An
notations:map[string]string{io.kubernetes.container.hash: 83ae6e6f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da78244f3f52e1f5b4d3af779690e2fabc289355b16c6706defffcd97313591b,PodSandboxId:796e1654a90c85809a975bda393e42a89ded74847a6b76bb8b79b43c40b68f17,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722280218496003752,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zzfbl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98b96d50-7bc4-4e38-a093-ee0d26a7db01,},Annotations:map[string]string{io.ku
bernetes.container.hash: b7134565,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:765c9a17f9f9c845e7c67699aa93befeed4366aca86cf47f51dd6931cda3fb33,PodSandboxId:154a76a952d577fe1f9811e848861bc4366f192bb4a218b159eb75af65cf470d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722280213761219611,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-370772,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61352aa6e536e34fe2ff2b41c58d94cf,},Annotations:map[string]string{io.kubernetes.container.hash: 71745cd,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea775c68cb9d2023cc148bf66a598c5e8c29175277e5d0c301bf3e038e4c2d65,PodSandboxId:55dd1b5e3f405b26fe47b416e8a972b662a5aa2c8b85e90af8fb72d16b9d6ce4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722280213733897642,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-370772,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df8c45d744808cffd58039a2da77666e,},Annotations:map[string]string{io.kubernetes.container.hash: d6fd6d3b,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e12bd9484ce29cfcde0e840b7dc6523157dc92fffa04df50346d0608ab8faaf5,PodSandboxId:972bd1bfab27db0cfd2aa196f6f95e490f9b921577492b4af92e01a47ce6e23d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722280213757979202,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-370772,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7d52bca566a2be556cde5910d0fc25c,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1
,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f59a71174cf945be2c931a9e645a3105ce1b2581f75dd8a830877c6ac5037a18,PodSandboxId:8aadf1c9db855105a6530f15003180702f95dcbab503afb9095377c2029466b1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722280213670203398,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-370772,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e3e8e3fb96e74f7443136a2dbdb1f0e,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:034df8dc5d4e74d0ed03817490f2186d5aa22aee5923c82cbb0cf221ee25cdec,PodSandboxId:a1a136d7b4b8b7c06deb1a0fa6aeadaa909aa3ab0b02400ccb603cd81c600632,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722279888451509637,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-6l2ht,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 35fbaee9-23c6-47ce-9b54-e6e523cda069,},Annotations:map[string]string{io.kubernetes.container.hash: c3472d00,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4babe0c565be1e6fa8480f9fc9753ee74bc23f85a683300df83c8ece2f828073,PodSandboxId:09d1493d303473b6fcd525b0df2c9efd27c99887f00eb574643ac4cec2bcab57,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722279838726166902,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de00f063-7d28-45e2-aa3a-39b8e8084dc8,},Annotations:map[string]string{io.kubernetes.container.hash: 83ae6e6f,io.kubernetes.container.restartCount: 0,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:407c792dec1054059ff64a06121e558e84cc492420f3c66d9e8a80fa848020ae,PodSandboxId:3f895614c86f93e95757a412a84f83f46d817c14c153e238f8f2cae1471bd057,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722279838720414652,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nz959,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd1040ab-4ee3-42dc-8a86-9ecd40578a48,},Annotations:map[string]string{io.kubernetes.container.hash: cf6509a5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"
protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b166de409b40287b11b80f1b14461bce3d61644be1a725239c9617ce590910a6,PodSandboxId:9b7ed6edaf967d1da623dc9ac4cbcffd316e512d002cf38759fba27766280708,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722279826711913000,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-h6x45,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: 4a210787-5503-4b35-899f-53cc15e43d4b,},Annotations:map[string]string{io.kubernetes.container.hash: 689d770d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4944b9573bde78fe5eaf9e5ec0ad98fced5a293c191668dd51995256cd8d3582,PodSandboxId:98dd4aa9e2c91423ca93c27227ef25e88226ac8e8426a56f7da5f3d117ce6419,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722279824599647318,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zzfbl,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 98b96d50-7bc4-4e38-a093-ee0d26a7db01,},Annotations:map[string]string{io.kubernetes.container.hash: b7134565,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96d2ade2d0aaa7df83af0a2a9958d310822e925fee9165d886c937def865afce,PodSandboxId:622e655ecebe1b9e3235d977f3e384dc3da5a89a0992f8a148978aa3fc3084cf,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722279804776306303,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-370772,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61352aa6e536e34fe2ff2b41c58d94c
f,},Annotations:map[string]string{io.kubernetes.container.hash: 71745cd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32d758bd7641c5a70e44d51b46ecbefa08d470dcf62f4faf7df1c4e156e2c43a,PodSandboxId:55d1c89e7e7475a0597776e0059eac7854e219845d98585a5aefcaebed0033dd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722279804751166718,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-370772,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e3e8e3fb96e74f74431
36a2dbdb1f0e,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30a85e5c7caf06d03ee41c455bc520b59f5bd6c3c80de77cf2bacb8b5abacde3,PodSandboxId:a96b2f02f21bc44eb5c2d491a6a282426237c067787d5169367f1458b0afab45,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722279804717976373,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-370772,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7d52bca566a2be556cde5910d0fc25c,},
Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27450337c36d2565080914b6b1c2595886eedb69670bf62c8b53a4389b6fc2d8,PodSandboxId:f57101c3bec9b82d3d13b707350538b88589bd69a5d2175541dc98d4a61a07d4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722279804698597191,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-370772,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df8c45d744808cffd58039a2da77666e,},Annotations:map
[string]string{io.kubernetes.container.hash: d6fd6d3b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4e8a0a69-be29-457f-9235-288e36116b68 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	9fabe309783f9       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      About a minute ago   Running             busybox                   1                   9cf42e004106e       busybox-fc5497c4f-6l2ht
	d5db30bce927d       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      About a minute ago   Running             coredns                   1                   82e697a6c2c5a       coredns-7db6d8ff4d-nz959
	268098a6ccd0d       6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46                                      About a minute ago   Running             kindnet-cni               1                   a8280acb8804e       kindnet-h6x45
	32e4808743b1d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       1                   0cd46ada9baca       storage-provisioner
	da78244f3f52e       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      About a minute ago   Running             kube-proxy                1                   796e1654a90c8       kube-proxy-zzfbl
	765c9a17f9f9c       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      About a minute ago   Running             etcd                      1                   154a76a952d57       etcd-multinode-370772
	e12bd9484ce29       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      About a minute ago   Running             kube-scheduler            1                   972bd1bfab27d       kube-scheduler-multinode-370772
	ea775c68cb9d2       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      About a minute ago   Running             kube-apiserver            1                   55dd1b5e3f405       kube-apiserver-multinode-370772
	f59a71174cf94       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      About a minute ago   Running             kube-controller-manager   1                   8aadf1c9db855       kube-controller-manager-multinode-370772
	034df8dc5d4e7       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   7 minutes ago        Exited              busybox                   0                   a1a136d7b4b8b       busybox-fc5497c4f-6l2ht
	4babe0c565be1       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      7 minutes ago        Exited              storage-provisioner       0                   09d1493d30347       storage-provisioner
	407c792dec105       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      7 minutes ago        Exited              coredns                   0                   3f895614c86f9       coredns-7db6d8ff4d-nz959
	b166de409b402       docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9    8 minutes ago        Exited              kindnet-cni               0                   9b7ed6edaf967       kindnet-h6x45
	4944b9573bde7       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      8 minutes ago        Exited              kube-proxy                0                   98dd4aa9e2c91       kube-proxy-zzfbl
	96d2ade2d0aaa       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      8 minutes ago        Exited              etcd                      0                   622e655ecebe1       etcd-multinode-370772
	32d758bd7641c       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      8 minutes ago        Exited              kube-controller-manager   0                   55d1c89e7e747       kube-controller-manager-multinode-370772
	30a85e5c7caf0       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      8 minutes ago        Exited              kube-scheduler            0                   a96b2f02f21bc       kube-scheduler-multinode-370772
	27450337c36d2       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      8 minutes ago        Exited              kube-apiserver            0                   f57101c3bec9b       kube-apiserver-multinode-370772
	
	
	==> coredns [407c792dec1054059ff64a06121e558e84cc492420f3c66d9e8a80fa848020ae] <==
	[INFO] 10.244.1.2:41068 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001529988s
	[INFO] 10.244.1.2:55196 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000139147s
	[INFO] 10.244.1.2:44299 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000075804s
	[INFO] 10.244.1.2:58773 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001166902s
	[INFO] 10.244.1.2:58285 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000149462s
	[INFO] 10.244.1.2:49655 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000077238s
	[INFO] 10.244.1.2:58613 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000119513s
	[INFO] 10.244.0.3:51408 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000144014s
	[INFO] 10.244.0.3:38041 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000097912s
	[INFO] 10.244.0.3:48464 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000062721s
	[INFO] 10.244.0.3:37512 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000111129s
	[INFO] 10.244.1.2:35062 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000140156s
	[INFO] 10.244.1.2:55518 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000117604s
	[INFO] 10.244.1.2:56074 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000087961s
	[INFO] 10.244.1.2:55503 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00009786s
	[INFO] 10.244.0.3:52955 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000095643s
	[INFO] 10.244.0.3:37712 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000120793s
	[INFO] 10.244.0.3:36162 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000064248s
	[INFO] 10.244.0.3:39705 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000082967s
	[INFO] 10.244.1.2:49699 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000120284s
	[INFO] 10.244.1.2:46867 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00009324s
	[INFO] 10.244.1.2:49125 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000091653s
	[INFO] 10.244.1.2:46593 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000073597s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [d5db30bce927d2bf819cbd670b88f4d1dff2155a3359b62800f301889e856470] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:54134 - 61304 "HINFO IN 5850425017373081415.7069960909407928461. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.010506641s
	
	
	==> describe nodes <==
	Name:               multinode-370772
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-370772
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ff26f50543a99741e8a988a34136ffea2b41dbf0
	                    minikube.k8s.io/name=multinode-370772
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_29T19_03_30_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 19:03:27 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-370772
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 19:11:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 19:10:17 +0000   Mon, 29 Jul 2024 19:03:25 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 19:10:17 +0000   Mon, 29 Jul 2024 19:03:25 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 19:10:17 +0000   Mon, 29 Jul 2024 19:03:25 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 19:10:17 +0000   Mon, 29 Jul 2024 19:03:58 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.180
	  Hostname:    multinode-370772
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 2f1e7655c85e424c98f2c0316ed4fc96
	  System UUID:                2f1e7655-c85e-424c-98f2-c0316ed4fc96
	  Boot ID:                    40af5d61-b051-4ec0-89e6-77a27c6cf00f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-6l2ht                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m8s
	  kube-system                 coredns-7db6d8ff4d-nz959                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     8m12s
	  kube-system                 etcd-multinode-370772                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         8m26s
	  kube-system                 kindnet-h6x45                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      8m11s
	  kube-system                 kube-apiserver-multinode-370772             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m26s
	  kube-system                 kube-controller-manager-multinode-370772    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m26s
	  kube-system                 kube-proxy-zzfbl                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m12s
	  kube-system                 kube-scheduler-multinode-370772             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m26s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m10s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 8m10s                kube-proxy       
	  Normal  Starting                 96s                  kube-proxy       
	  Normal  NodeHasSufficientPID     8m26s                kubelet          Node multinode-370772 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m26s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  8m26s                kubelet          Node multinode-370772 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m26s                kubelet          Node multinode-370772 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 8m26s                kubelet          Starting kubelet.
	  Normal  RegisteredNode           8m12s                node-controller  Node multinode-370772 event: Registered Node multinode-370772 in Controller
	  Normal  NodeReady                7m57s                kubelet          Node multinode-370772 status is now: NodeReady
	  Normal  Starting                 103s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  102s (x8 over 102s)  kubelet          Node multinode-370772 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    102s (x8 over 102s)  kubelet          Node multinode-370772 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     102s (x7 over 102s)  kubelet          Node multinode-370772 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  102s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           85s                  node-controller  Node multinode-370772 event: Registered Node multinode-370772 in Controller
	
	
	Name:               multinode-370772-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-370772-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ff26f50543a99741e8a988a34136ffea2b41dbf0
	                    minikube.k8s.io/name=multinode-370772
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_29T19_10_57_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 19:10:57 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-370772-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 19:11:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 19:11:28 +0000   Mon, 29 Jul 2024 19:10:57 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 19:11:28 +0000   Mon, 29 Jul 2024 19:10:57 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 19:11:28 +0000   Mon, 29 Jul 2024 19:10:57 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 19:11:28 +0000   Mon, 29 Jul 2024 19:11:15 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.127
	  Hostname:    multinode-370772-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 017dc354e7894a138c268cb45894183e
	  System UUID:                017dc354-e789-4a13-8c26-8cb45894183e
	  Boot ID:                    049c8386-8f64-4514-b571-ea81423e0505
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-grv5f    0 (0%)        0 (0%)      0 (0%)           0 (0%)         62s
	  kube-system                 kindnet-txzpl              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m29s
	  kube-system                 kube-proxy-vhc6b           0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m29s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 7m24s                  kube-proxy  
	  Normal  Starting                 53s                    kube-proxy  
	  Normal  NodeHasSufficientMemory  7m29s (x2 over 7m29s)  kubelet     Node multinode-370772-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m29s (x2 over 7m29s)  kubelet     Node multinode-370772-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m29s (x2 over 7m29s)  kubelet     Node multinode-370772-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m29s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                7m10s                  kubelet     Node multinode-370772-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  58s (x2 over 58s)      kubelet     Node multinode-370772-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    58s (x2 over 58s)      kubelet     Node multinode-370772-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     58s (x2 over 58s)      kubelet     Node multinode-370772-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  58s                    kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                40s                    kubelet     Node multinode-370772-m02 status is now: NodeReady
	
	
	Name:               multinode-370772-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-370772-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ff26f50543a99741e8a988a34136ffea2b41dbf0
	                    minikube.k8s.io/name=multinode-370772
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_29T19_11_35_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 19:11:34 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-370772-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 19:11:55 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 19:11:52 +0000   Mon, 29 Jul 2024 19:11:34 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 19:11:52 +0000   Mon, 29 Jul 2024 19:11:34 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 19:11:52 +0000   Mon, 29 Jul 2024 19:11:34 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 19:11:52 +0000   Mon, 29 Jul 2024 19:11:52 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.8
	  Hostname:    multinode-370772-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 086363edb2a24ab98ca4a65d262ecb09
	  System UUID:                086363ed-b2a2-4ab9-8ca4-a65d262ecb09
	  Boot ID:                    929b0591-8317-4030-92c9-487232fe6b91
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-99pr7       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m35s
	  kube-system                 kube-proxy-9n7cj    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m35s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 6m30s                  kube-proxy  
	  Normal  Starting                 16s                    kube-proxy  
	  Normal  Starting                 5m43s                  kube-proxy  
	  Normal  NodeHasSufficientMemory  6m36s (x2 over 6m36s)  kubelet     Node multinode-370772-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m36s (x2 over 6m36s)  kubelet     Node multinode-370772-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m36s (x2 over 6m36s)  kubelet     Node multinode-370772-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m35s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                6m17s                  kubelet     Node multinode-370772-m03 status is now: NodeReady
	  Normal  NodeHasSufficientPID     5m48s (x2 over 5m48s)  kubelet     Node multinode-370772-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m48s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    5m48s (x2 over 5m48s)  kubelet     Node multinode-370772-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  5m48s (x2 over 5m48s)  kubelet     Node multinode-370772-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeReady                5m30s                  kubelet     Node multinode-370772-m03 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  21s (x2 over 21s)      kubelet     Node multinode-370772-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21s (x2 over 21s)      kubelet     Node multinode-370772-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21s (x2 over 21s)      kubelet     Node multinode-370772-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  21s                    kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                3s                     kubelet     Node multinode-370772-m03 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.053920] systemd-fstab-generator[616]: Ignoring "noauto" option for root device
	[  +0.181558] systemd-fstab-generator[630]: Ignoring "noauto" option for root device
	[  +0.112290] systemd-fstab-generator[642]: Ignoring "noauto" option for root device
	[  +0.249622] systemd-fstab-generator[672]: Ignoring "noauto" option for root device
	[  +4.034721] systemd-fstab-generator[769]: Ignoring "noauto" option for root device
	[  +4.060386] systemd-fstab-generator[950]: Ignoring "noauto" option for root device
	[  +0.060700] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.498371] systemd-fstab-generator[1277]: Ignoring "noauto" option for root device
	[  +0.073242] kauditd_printk_skb: 69 callbacks suppressed
	[ +14.624140] systemd-fstab-generator[1475]: Ignoring "noauto" option for root device
	[  +0.119350] kauditd_printk_skb: 21 callbacks suppressed
	[ +14.208734] kauditd_printk_skb: 60 callbacks suppressed
	[Jul29 19:04] kauditd_printk_skb: 12 callbacks suppressed
	[Jul29 19:10] systemd-fstab-generator[2784]: Ignoring "noauto" option for root device
	[  +0.152318] systemd-fstab-generator[2796]: Ignoring "noauto" option for root device
	[  +0.200146] systemd-fstab-generator[2810]: Ignoring "noauto" option for root device
	[  +0.153129] systemd-fstab-generator[2822]: Ignoring "noauto" option for root device
	[  +0.305855] systemd-fstab-generator[2852]: Ignoring "noauto" option for root device
	[  +7.551905] systemd-fstab-generator[2952]: Ignoring "noauto" option for root device
	[  +0.082833] kauditd_printk_skb: 100 callbacks suppressed
	[  +1.810222] systemd-fstab-generator[3075]: Ignoring "noauto" option for root device
	[  +5.660343] kauditd_printk_skb: 74 callbacks suppressed
	[ +11.451749] kauditd_printk_skb: 32 callbacks suppressed
	[  +3.253840] systemd-fstab-generator[3917]: Ignoring "noauto" option for root device
	[ +19.141114] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [765c9a17f9f9c845e7c67699aa93befeed4366aca86cf47f51dd6931cda3fb33] <==
	{"level":"info","ts":"2024-07-29T19:10:14.421613Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-29T19:10:14.425288Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-29T19:10:14.421529Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b38c55c42a3b698 switched to configuration voters=(808613133158692504)"}
	{"level":"info","ts":"2024-07-29T19:10:14.42563Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"5a7d3c553a64e690","local-member-id":"b38c55c42a3b698","added-peer-id":"b38c55c42a3b698","added-peer-peer-urls":["https://192.168.39.180:2380"]}
	{"level":"info","ts":"2024-07-29T19:10:14.425896Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"5a7d3c553a64e690","local-member-id":"b38c55c42a3b698","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T19:10:14.428402Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T19:10:14.426913Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-29T19:10:14.426935Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.180:2380"}
	{"level":"info","ts":"2024-07-29T19:10:14.430415Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.180:2380"}
	{"level":"info","ts":"2024-07-29T19:10:14.431478Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"b38c55c42a3b698","initial-advertise-peer-urls":["https://192.168.39.180:2380"],"listen-peer-urls":["https://192.168.39.180:2380"],"advertise-client-urls":["https://192.168.39.180:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.180:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-29T19:10:14.43153Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-29T19:10:15.737398Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b38c55c42a3b698 is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-29T19:10:15.737454Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b38c55c42a3b698 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-29T19:10:15.737506Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b38c55c42a3b698 received MsgPreVoteResp from b38c55c42a3b698 at term 2"}
	{"level":"info","ts":"2024-07-29T19:10:15.737528Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b38c55c42a3b698 became candidate at term 3"}
	{"level":"info","ts":"2024-07-29T19:10:15.737533Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b38c55c42a3b698 received MsgVoteResp from b38c55c42a3b698 at term 3"}
	{"level":"info","ts":"2024-07-29T19:10:15.737551Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b38c55c42a3b698 became leader at term 3"}
	{"level":"info","ts":"2024-07-29T19:10:15.73758Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b38c55c42a3b698 elected leader b38c55c42a3b698 at term 3"}
	{"level":"info","ts":"2024-07-29T19:10:15.742985Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T19:10:15.742939Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"b38c55c42a3b698","local-member-attributes":"{Name:multinode-370772 ClientURLs:[https://192.168.39.180:2379]}","request-path":"/0/members/b38c55c42a3b698/attributes","cluster-id":"5a7d3c553a64e690","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-29T19:10:15.744331Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T19:10:15.744556Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-29T19:10:15.744591Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-29T19:10:15.745182Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.180:2379"}
	{"level":"info","ts":"2024-07-29T19:10:15.746224Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> etcd [96d2ade2d0aaa7df83af0a2a9958d310822e925fee9165d886c937def865afce] <==
	{"level":"info","ts":"2024-07-29T19:03:25.294453Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T19:03:25.295919Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.180:2379"}
	{"level":"info","ts":"2024-07-29T19:03:25.307289Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"warn","ts":"2024-07-29T19:04:26.82617Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"200.005278ms","expected-duration":"100ms","prefix":"","request":"header:<ID:13157425740016295527 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/multinode-370772-m02.17e6c469e2c71d7a\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/multinode-370772-m02.17e6c469e2c71d7a\" value_size:646 lease:3934053703161519156 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2024-07-29T19:04:26.826464Z","caller":"traceutil/trace.go:171","msg":"trace[1051897117] transaction","detail":"{read_only:false; response_revision:440; number_of_response:1; }","duration":"156.332217ms","start":"2024-07-29T19:04:26.670105Z","end":"2024-07-29T19:04:26.826438Z","steps":["trace[1051897117] 'process raft request'  (duration: 156.286754ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-29T19:04:26.826615Z","caller":"traceutil/trace.go:171","msg":"trace[1619266642] linearizableReadLoop","detail":"{readStateIndex:459; appliedIndex:458; }","duration":"231.032561ms","start":"2024-07-29T19:04:26.595546Z","end":"2024-07-29T19:04:26.826579Z","steps":["trace[1619266642] 'read index received'  (duration: 29.987072ms)","trace[1619266642] 'applied index is now lower than readState.Index'  (duration: 201.044031ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-29T19:04:26.826686Z","caller":"traceutil/trace.go:171","msg":"trace[2058037069] transaction","detail":"{read_only:false; response_revision:439; number_of_response:1; }","duration":"233.726298ms","start":"2024-07-29T19:04:26.592949Z","end":"2024-07-29T19:04:26.826675Z","steps":["trace[2058037069] 'process raft request'  (duration: 32.622288ms)","trace[2058037069] 'compare'  (duration: 199.80316ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-29T19:04:26.826995Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"231.443036ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-370772-m02\" ","response":"range_response_count:1 size:1926"}
	{"level":"info","ts":"2024-07-29T19:04:26.827044Z","caller":"traceutil/trace.go:171","msg":"trace[1256018404] range","detail":"{range_begin:/registry/minions/multinode-370772-m02; range_end:; response_count:1; response_revision:440; }","duration":"231.526652ms","start":"2024-07-29T19:04:26.595509Z","end":"2024-07-29T19:04:26.827036Z","steps":["trace[1256018404] 'agreement among raft nodes before linearized reading'  (duration: 231.447693ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-29T19:05:20.084378Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"133.54965ms","expected-duration":"100ms","prefix":"","request":"header:<ID:13157425740016295966 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/multinode-370772-m03.17e6c47648f2c9c4\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/multinode-370772-m03.17e6c47648f2c9c4\" value_size:646 lease:3934053703161519803 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2024-07-29T19:05:20.084579Z","caller":"traceutil/trace.go:171","msg":"trace[799750307] linearizableReadLoop","detail":"{readStateIndex:609; appliedIndex:607; }","duration":"150.096656ms","start":"2024-07-29T19:05:19.934451Z","end":"2024-07-29T19:05:20.084548Z","steps":["trace[799750307] 'read index received'  (duration: 16.311551ms)","trace[799750307] 'applied index is now lower than readState.Index'  (duration: 133.784481ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-29T19:05:20.084661Z","caller":"traceutil/trace.go:171","msg":"trace[1611532323] transaction","detail":"{read_only:false; response_revision:576; number_of_response:1; }","duration":"187.152723ms","start":"2024-07-29T19:05:19.897499Z","end":"2024-07-29T19:05:20.084652Z","steps":["trace[1611532323] 'process raft request'  (duration: 186.993905ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-29T19:05:20.084838Z","caller":"traceutil/trace.go:171","msg":"trace[1858634684] transaction","detail":"{read_only:false; response_revision:575; number_of_response:1; }","duration":"239.1928ms","start":"2024-07-29T19:05:19.845631Z","end":"2024-07-29T19:05:20.084824Z","steps":["trace[1858634684] 'process raft request'  (duration: 105.123203ms)","trace[1858634684] 'compare'  (duration: 133.379279ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-29T19:05:20.084847Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"150.379062ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/\" range_end:\"/registry/leases0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-07-29T19:05:20.085011Z","caller":"traceutil/trace.go:171","msg":"trace[1179851407] range","detail":"{range_begin:/registry/leases/; range_end:/registry/leases0; response_count:0; response_revision:576; }","duration":"150.58537ms","start":"2024-07-29T19:05:19.934417Z","end":"2024-07-29T19:05:20.085003Z","steps":["trace[1179851407] 'agreement among raft nodes before linearized reading'  (duration: 150.347497ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-29T19:08:31.2264Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-07-29T19:08:31.226519Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"multinode-370772","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.180:2380"],"advertise-client-urls":["https://192.168.39.180:2379"]}
	{"level":"warn","ts":"2024-07-29T19:08:31.226651Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-29T19:08:31.226738Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-29T19:08:31.326611Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.180:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-29T19:08:31.326666Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.180:2379: use of closed network connection"}
	{"level":"info","ts":"2024-07-29T19:08:31.326728Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"b38c55c42a3b698","current-leader-member-id":"b38c55c42a3b698"}
	{"level":"info","ts":"2024-07-29T19:08:31.328886Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.180:2380"}
	{"level":"info","ts":"2024-07-29T19:08:31.329037Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.180:2380"}
	{"level":"info","ts":"2024-07-29T19:08:31.329065Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"multinode-370772","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.180:2380"],"advertise-client-urls":["https://192.168.39.180:2379"]}
	
	
	==> kernel <==
	 19:11:55 up 9 min,  0 users,  load average: 0.49, 0.20, 0.09
	Linux multinode-370772 5.10.207 #1 SMP Tue Jul 23 04:25:44 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [268098a6ccd0d51de2ae99f3fd4d621ce76d79d03d18bfdffaea2ab59357fc08] <==
	I0729 19:11:09.822874       1 main.go:322] Node multinode-370772-m03 has CIDR [10.244.3.0/24] 
	I0729 19:11:19.822418       1 main.go:295] Handling node with IPs: map[192.168.39.180:{}]
	I0729 19:11:19.822526       1 main.go:299] handling current node
	I0729 19:11:19.822554       1 main.go:295] Handling node with IPs: map[192.168.39.127:{}]
	I0729 19:11:19.822572       1 main.go:322] Node multinode-370772-m02 has CIDR [10.244.1.0/24] 
	I0729 19:11:19.822710       1 main.go:295] Handling node with IPs: map[192.168.39.8:{}]
	I0729 19:11:19.822732       1 main.go:322] Node multinode-370772-m03 has CIDR [10.244.3.0/24] 
	I0729 19:11:29.821993       1 main.go:295] Handling node with IPs: map[192.168.39.127:{}]
	I0729 19:11:29.822122       1 main.go:322] Node multinode-370772-m02 has CIDR [10.244.1.0/24] 
	I0729 19:11:29.822448       1 main.go:295] Handling node with IPs: map[192.168.39.8:{}]
	I0729 19:11:29.822519       1 main.go:322] Node multinode-370772-m03 has CIDR [10.244.3.0/24] 
	I0729 19:11:29.822857       1 main.go:295] Handling node with IPs: map[192.168.39.180:{}]
	I0729 19:11:29.824546       1 main.go:299] handling current node
	I0729 19:11:39.824444       1 main.go:295] Handling node with IPs: map[192.168.39.180:{}]
	I0729 19:11:39.824530       1 main.go:299] handling current node
	I0729 19:11:39.824575       1 main.go:295] Handling node with IPs: map[192.168.39.127:{}]
	I0729 19:11:39.824584       1 main.go:322] Node multinode-370772-m02 has CIDR [10.244.1.0/24] 
	I0729 19:11:39.824792       1 main.go:295] Handling node with IPs: map[192.168.39.8:{}]
	I0729 19:11:39.824825       1 main.go:322] Node multinode-370772-m03 has CIDR [10.244.2.0/24] 
	I0729 19:11:49.825286       1 main.go:295] Handling node with IPs: map[192.168.39.180:{}]
	I0729 19:11:49.825396       1 main.go:299] handling current node
	I0729 19:11:49.825413       1 main.go:295] Handling node with IPs: map[192.168.39.127:{}]
	I0729 19:11:49.825420       1 main.go:322] Node multinode-370772-m02 has CIDR [10.244.1.0/24] 
	I0729 19:11:49.825828       1 main.go:295] Handling node with IPs: map[192.168.39.8:{}]
	I0729 19:11:49.825838       1 main.go:322] Node multinode-370772-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kindnet [b166de409b40287b11b80f1b14461bce3d61644be1a725239c9617ce590910a6] <==
	I0729 19:07:47.922825       1 main.go:322] Node multinode-370772-m03 has CIDR [10.244.3.0/24] 
	I0729 19:07:57.923414       1 main.go:295] Handling node with IPs: map[192.168.39.180:{}]
	I0729 19:07:57.923517       1 main.go:299] handling current node
	I0729 19:07:57.923567       1 main.go:295] Handling node with IPs: map[192.168.39.127:{}]
	I0729 19:07:57.923572       1 main.go:322] Node multinode-370772-m02 has CIDR [10.244.1.0/24] 
	I0729 19:07:57.923781       1 main.go:295] Handling node with IPs: map[192.168.39.8:{}]
	I0729 19:07:57.923805       1 main.go:322] Node multinode-370772-m03 has CIDR [10.244.3.0/24] 
	I0729 19:08:07.929945       1 main.go:295] Handling node with IPs: map[192.168.39.180:{}]
	I0729 19:08:07.930095       1 main.go:299] handling current node
	I0729 19:08:07.930140       1 main.go:295] Handling node with IPs: map[192.168.39.127:{}]
	I0729 19:08:07.930161       1 main.go:322] Node multinode-370772-m02 has CIDR [10.244.1.0/24] 
	I0729 19:08:07.930365       1 main.go:295] Handling node with IPs: map[192.168.39.8:{}]
	I0729 19:08:07.930392       1 main.go:322] Node multinode-370772-m03 has CIDR [10.244.3.0/24] 
	I0729 19:08:17.931034       1 main.go:295] Handling node with IPs: map[192.168.39.180:{}]
	I0729 19:08:17.931196       1 main.go:299] handling current node
	I0729 19:08:17.931306       1 main.go:295] Handling node with IPs: map[192.168.39.127:{}]
	I0729 19:08:17.931397       1 main.go:322] Node multinode-370772-m02 has CIDR [10.244.1.0/24] 
	I0729 19:08:17.931529       1 main.go:295] Handling node with IPs: map[192.168.39.8:{}]
	I0729 19:08:17.931618       1 main.go:322] Node multinode-370772-m03 has CIDR [10.244.3.0/24] 
	I0729 19:08:27.930590       1 main.go:295] Handling node with IPs: map[192.168.39.180:{}]
	I0729 19:08:27.930692       1 main.go:299] handling current node
	I0729 19:08:27.930736       1 main.go:295] Handling node with IPs: map[192.168.39.127:{}]
	I0729 19:08:27.930755       1 main.go:322] Node multinode-370772-m02 has CIDR [10.244.1.0/24] 
	I0729 19:08:27.930920       1 main.go:295] Handling node with IPs: map[192.168.39.8:{}]
	I0729 19:08:27.930978       1 main.go:322] Node multinode-370772-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [27450337c36d2565080914b6b1c2595886eedb69670bf62c8b53a4389b6fc2d8] <==
	E0729 19:04:51.351360       1 conn.go:339] Error on socket receive: read tcp 192.168.39.180:8443->192.168.39.1:55156: use of closed network connection
	I0729 19:08:31.236700       1 controller.go:128] Shutting down kubernetes service endpoint reconciler
	E0729 19:08:31.243945       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0729 19:08:31.244091       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0729 19:08:31.243927       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0729 19:08:31.244112       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0729 19:08:31.244157       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0729 19:08:31.245135       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0729 19:08:31.254841       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0729 19:08:31.254932       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0729 19:08:31.254970       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0729 19:08:31.255059       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0729 19:08:31.255293       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0729 19:08:31.255869       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0729 19:08:31.256937       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0729 19:08:31.258536       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0729 19:08:31.258943       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0729 19:08:31.258982       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0729 19:08:31.259011       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0729 19:08:31.259041       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0729 19:08:31.259070       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0729 19:08:31.259097       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0729 19:08:31.259122       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0729 19:08:31.260305       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0729 19:08:31.260416       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	
	
	==> kube-apiserver [ea775c68cb9d2023cc148bf66a598c5e8c29175277e5d0c301bf3e038e4c2d65] <==
	I0729 19:10:17.039990       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0729 19:10:17.041085       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0729 19:10:17.058998       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0729 19:10:17.060414       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0729 19:10:17.060648       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0729 19:10:17.064275       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0729 19:10:17.064321       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0729 19:10:17.064458       1 shared_informer.go:320] Caches are synced for configmaps
	I0729 19:10:17.068302       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0729 19:10:17.068345       1 policy_source.go:224] refreshing policies
	I0729 19:10:17.075299       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0729 19:10:17.076894       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0729 19:10:17.078060       1 aggregator.go:165] initial CRD sync complete...
	I0729 19:10:17.078160       1 autoregister_controller.go:141] Starting autoregister controller
	I0729 19:10:17.078192       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0729 19:10:17.078215       1 cache.go:39] Caches are synced for autoregister controller
	E0729 19:10:17.108285       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0729 19:10:17.944300       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0729 19:10:19.408731       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0729 19:10:19.524092       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0729 19:10:19.548205       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0729 19:10:19.611671       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0729 19:10:19.617925       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0729 19:10:29.835760       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0729 19:10:29.880822       1 controller.go:615] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [32d758bd7641c5a70e44d51b46ecbefa08d470dcf62f4faf7df1c4e156e2c43a] <==
	I0729 19:04:26.844755       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-370772-m02" podCIDRs=["10.244.1.0/24"]
	I0729 19:04:28.849704       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-370772-m02"
	I0729 19:04:45.050671       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-370772-m02"
	I0729 19:04:47.168364       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="55.247879ms"
	I0729 19:04:47.184676       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="16.109306ms"
	I0729 19:04:47.185099       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="42.739µs"
	I0729 19:04:47.185647       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="20.32µs"
	I0729 19:04:47.186304       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="80.566µs"
	I0729 19:04:48.687736       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="6.750633ms"
	I0729 19:04:48.689520       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="38.569µs"
	I0729 19:04:49.232208       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="4.86866ms"
	I0729 19:04:49.232327       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="29.272µs"
	I0729 19:05:20.089529       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-370772-m03\" does not exist"
	I0729 19:05:20.090666       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-370772-m02"
	I0729 19:05:20.144117       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-370772-m03" podCIDRs=["10.244.2.0/24"]
	I0729 19:05:23.869179       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-370772-m03"
	I0729 19:05:38.495183       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-370772-m02"
	I0729 19:06:06.792618       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-370772-m02"
	I0729 19:06:07.954712       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-370772-m02"
	I0729 19:06:07.955523       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-370772-m03\" does not exist"
	I0729 19:06:07.963283       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-370772-m03" podCIDRs=["10.244.3.0/24"]
	I0729 19:06:25.670788       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-370772-m03"
	I0729 19:07:08.924754       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-370772-m02"
	I0729 19:07:14.015486       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="9.886458ms"
	I0729 19:07:14.015687       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="36.628µs"
	
	
	==> kube-controller-manager [f59a71174cf945be2c931a9e645a3105ce1b2581f75dd8a830877c6ac5037a18] <==
	I0729 19:10:30.567027       1 shared_informer.go:320] Caches are synced for garbage collector
	I0729 19:10:30.567068       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0729 19:10:53.432837       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="39.623524ms"
	I0729 19:10:53.443924       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="10.963479ms"
	I0729 19:10:53.455909       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="11.954878ms"
	I0729 19:10:53.456001       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="47.643µs"
	I0729 19:10:57.556832       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-370772-m02\" does not exist"
	I0729 19:10:57.582618       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-370772-m02" podCIDRs=["10.244.1.0/24"]
	I0729 19:10:59.450144       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="49.84µs"
	I0729 19:10:59.471722       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="51.212µs"
	I0729 19:10:59.483622       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="41.756µs"
	I0729 19:10:59.507309       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="39.447µs"
	I0729 19:10:59.513670       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="34.201µs"
	I0729 19:10:59.518833       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="39.387µs"
	I0729 19:11:00.829471       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="51.313µs"
	I0729 19:11:15.451420       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-370772-m02"
	I0729 19:11:15.465916       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="51.653µs"
	I0729 19:11:15.478906       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="43.72µs"
	I0729 19:11:17.098309       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="7.931761ms"
	I0729 19:11:17.099833       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="47.505µs"
	I0729 19:11:33.682337       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-370772-m02"
	I0729 19:11:34.983851       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-370772-m03\" does not exist"
	I0729 19:11:34.986306       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-370772-m02"
	I0729 19:11:34.995202       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-370772-m03" podCIDRs=["10.244.2.0/24"]
	I0729 19:11:52.548405       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-370772-m02"
	
	
	==> kube-proxy [4944b9573bde78fe5eaf9e5ec0ad98fced5a293c191668dd51995256cd8d3582] <==
	I0729 19:03:45.035417       1 server_linux.go:69] "Using iptables proxy"
	I0729 19:03:45.094930       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.180"]
	I0729 19:03:45.318797       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0729 19:03:45.318837       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0729 19:03:45.318853       1 server_linux.go:165] "Using iptables Proxier"
	I0729 19:03:45.323815       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0729 19:03:45.324169       1 server.go:872] "Version info" version="v1.30.3"
	I0729 19:03:45.324472       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 19:03:45.326781       1 config.go:192] "Starting service config controller"
	I0729 19:03:45.327795       1 config.go:101] "Starting endpoint slice config controller"
	I0729 19:03:45.327924       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0729 19:03:45.328783       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0729 19:03:45.333509       1 shared_informer.go:320] Caches are synced for service config
	I0729 19:03:45.329409       1 config.go:319] "Starting node config controller"
	I0729 19:03:45.339102       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0729 19:03:45.339109       1 shared_informer.go:320] Caches are synced for node config
	I0729 19:03:45.429647       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [da78244f3f52e1f5b4d3af779690e2fabc289355b16c6706defffcd97313591b] <==
	I0729 19:10:18.869943       1 server_linux.go:69] "Using iptables proxy"
	I0729 19:10:18.893464       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.180"]
	I0729 19:10:18.999368       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0729 19:10:18.999421       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0729 19:10:18.999439       1 server_linux.go:165] "Using iptables Proxier"
	I0729 19:10:19.005646       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0729 19:10:19.005873       1 server.go:872] "Version info" version="v1.30.3"
	I0729 19:10:19.005886       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 19:10:19.016643       1 config.go:192] "Starting service config controller"
	I0729 19:10:19.016673       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0729 19:10:19.016700       1 config.go:101] "Starting endpoint slice config controller"
	I0729 19:10:19.016704       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0729 19:10:19.017112       1 config.go:319] "Starting node config controller"
	I0729 19:10:19.017118       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0729 19:10:19.117769       1 shared_informer.go:320] Caches are synced for node config
	I0729 19:10:19.117809       1 shared_informer.go:320] Caches are synced for service config
	I0729 19:10:19.117832       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [30a85e5c7caf06d03ee41c455bc520b59f5bd6c3c80de77cf2bacb8b5abacde3] <==
	E0729 19:03:27.272917       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0729 19:03:27.273023       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0729 19:03:27.273053       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0729 19:03:28.103972       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0729 19:03:28.104127       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0729 19:03:28.267509       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0729 19:03:28.267593       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0729 19:03:28.269209       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0729 19:03:28.269363       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0729 19:03:28.300741       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0729 19:03:28.300786       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0729 19:03:28.319360       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0729 19:03:28.319456       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0729 19:03:28.345779       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0729 19:03:28.345896       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0729 19:03:28.408402       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0729 19:03:28.408451       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0729 19:03:28.431153       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0729 19:03:28.431272       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0729 19:03:28.442652       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0729 19:03:28.442754       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0729 19:03:28.487407       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0729 19:03:28.487928       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0729 19:03:30.953426       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0729 19:08:31.232159       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [e12bd9484ce29cfcde0e840b7dc6523157dc92fffa04df50346d0608ab8faaf5] <==
	I0729 19:10:14.952356       1 serving.go:380] Generated self-signed cert in-memory
	W0729 19:10:17.005640       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0729 19:10:17.005717       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0729 19:10:17.005745       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0729 19:10:17.005770       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0729 19:10:17.058837       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0729 19:10:17.058924       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 19:10:17.062589       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0729 19:10:17.063335       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0729 19:10:17.064059       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0729 19:10:17.064560       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0729 19:10:17.163725       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 29 19:10:14 multinode-370772 kubelet[3082]: E0729 19:10:14.110518    3082 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.180:8443: connect: connection refused
	Jul 29 19:10:14 multinode-370772 kubelet[3082]: I0729 19:10:14.516083    3082 kubelet_node_status.go:73] "Attempting to register node" node="multinode-370772"
	Jul 29 19:10:17 multinode-370772 kubelet[3082]: I0729 19:10:17.152646    3082 kubelet_node_status.go:112] "Node was previously registered" node="multinode-370772"
	Jul 29 19:10:17 multinode-370772 kubelet[3082]: I0729 19:10:17.152753    3082 kubelet_node_status.go:76] "Successfully registered node" node="multinode-370772"
	Jul 29 19:10:17 multinode-370772 kubelet[3082]: I0729 19:10:17.154281    3082 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Jul 29 19:10:17 multinode-370772 kubelet[3082]: I0729 19:10:17.155556    3082 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Jul 29 19:10:17 multinode-370772 kubelet[3082]: E0729 19:10:17.727659    3082 kubelet.go:1937] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-multinode-370772\" already exists" pod="kube-system/kube-controller-manager-multinode-370772"
	Jul 29 19:10:18 multinode-370772 kubelet[3082]: I0729 19:10:18.000964    3082 apiserver.go:52] "Watching apiserver"
	Jul 29 19:10:18 multinode-370772 kubelet[3082]: I0729 19:10:18.003832    3082 topology_manager.go:215] "Topology Admit Handler" podUID="98b96d50-7bc4-4e38-a093-ee0d26a7db01" podNamespace="kube-system" podName="kube-proxy-zzfbl"
	Jul 29 19:10:18 multinode-370772 kubelet[3082]: I0729 19:10:18.003974    3082 topology_manager.go:215] "Topology Admit Handler" podUID="bd1040ab-4ee3-42dc-8a86-9ecd40578a48" podNamespace="kube-system" podName="coredns-7db6d8ff4d-nz959"
	Jul 29 19:10:18 multinode-370772 kubelet[3082]: I0729 19:10:18.004383    3082 topology_manager.go:215] "Topology Admit Handler" podUID="4a210787-5503-4b35-899f-53cc15e43d4b" podNamespace="kube-system" podName="kindnet-h6x45"
	Jul 29 19:10:18 multinode-370772 kubelet[3082]: I0729 19:10:18.004539    3082 topology_manager.go:215] "Topology Admit Handler" podUID="de00f063-7d28-45e2-aa3a-39b8e8084dc8" podNamespace="kube-system" podName="storage-provisioner"
	Jul 29 19:10:18 multinode-370772 kubelet[3082]: I0729 19:10:18.004634    3082 topology_manager.go:215] "Topology Admit Handler" podUID="35fbaee9-23c6-47ce-9b54-e6e523cda069" podNamespace="default" podName="busybox-fc5497c4f-6l2ht"
	Jul 29 19:10:18 multinode-370772 kubelet[3082]: I0729 19:10:18.014311    3082 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Jul 29 19:10:18 multinode-370772 kubelet[3082]: I0729 19:10:18.068069    3082 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/4a210787-5503-4b35-899f-53cc15e43d4b-cni-cfg\") pod \"kindnet-h6x45\" (UID: \"4a210787-5503-4b35-899f-53cc15e43d4b\") " pod="kube-system/kindnet-h6x45"
	Jul 29 19:10:18 multinode-370772 kubelet[3082]: I0729 19:10:18.068187    3082 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/98b96d50-7bc4-4e38-a093-ee0d26a7db01-xtables-lock\") pod \"kube-proxy-zzfbl\" (UID: \"98b96d50-7bc4-4e38-a093-ee0d26a7db01\") " pod="kube-system/kube-proxy-zzfbl"
	Jul 29 19:10:18 multinode-370772 kubelet[3082]: I0729 19:10:18.068290    3082 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4a210787-5503-4b35-899f-53cc15e43d4b-xtables-lock\") pod \"kindnet-h6x45\" (UID: \"4a210787-5503-4b35-899f-53cc15e43d4b\") " pod="kube-system/kindnet-h6x45"
	Jul 29 19:10:18 multinode-370772 kubelet[3082]: I0729 19:10:18.069085    3082 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4a210787-5503-4b35-899f-53cc15e43d4b-lib-modules\") pod \"kindnet-h6x45\" (UID: \"4a210787-5503-4b35-899f-53cc15e43d4b\") " pod="kube-system/kindnet-h6x45"
	Jul 29 19:10:18 multinode-370772 kubelet[3082]: I0729 19:10:18.069202    3082 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/de00f063-7d28-45e2-aa3a-39b8e8084dc8-tmp\") pod \"storage-provisioner\" (UID: \"de00f063-7d28-45e2-aa3a-39b8e8084dc8\") " pod="kube-system/storage-provisioner"
	Jul 29 19:10:18 multinode-370772 kubelet[3082]: I0729 19:10:18.069453    3082 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/98b96d50-7bc4-4e38-a093-ee0d26a7db01-lib-modules\") pod \"kube-proxy-zzfbl\" (UID: \"98b96d50-7bc4-4e38-a093-ee0d26a7db01\") " pod="kube-system/kube-proxy-zzfbl"
	Jul 29 19:11:13 multinode-370772 kubelet[3082]: E0729 19:11:13.054377    3082 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 19:11:13 multinode-370772 kubelet[3082]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 19:11:13 multinode-370772 kubelet[3082]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 19:11:13 multinode-370772 kubelet[3082]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 19:11:13 multinode-370772 kubelet[3082]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0729 19:11:55.004847 1092420 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19312-1055011/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-370772 -n multinode-370772
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-370772 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (328.47s)

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (141.31s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-370772 stop
E0729 19:13:00.969125 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/functional-728029/client.crt: no such file or directory
multinode_test.go:345: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-370772 stop: exit status 82 (2m0.457520894s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-370772-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:347: failed to stop cluster. args "out/minikube-linux-amd64 -p multinode-370772 stop": exit status 82
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-370772 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-370772 status: exit status 3 (18.777338813s)

                                                
                                                
-- stdout --
	multinode-370772
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-370772-m02
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0729 19:14:18.231218 1093102 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.127:22: connect: no route to host
	E0729 19:14:18.231287 1093102 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.127:22: connect: no route to host

                                                
                                                
** /stderr **
multinode_test.go:354: failed to run minikube status. args "out/minikube-linux-amd64 -p multinode-370772 status" : exit status 3
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-370772 -n multinode-370772
helpers_test.go:244: <<< TestMultiNode/serial/StopMultiNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/StopMultiNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-370772 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-370772 logs -n 25: (1.435033393s)
helpers_test.go:252: TestMultiNode/serial/StopMultiNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-370772 ssh -n                                                                 | multinode-370772 | jenkins | v1.33.1 | 29 Jul 24 19:05 UTC | 29 Jul 24 19:05 UTC |
	|         | multinode-370772-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-370772 cp multinode-370772-m02:/home/docker/cp-test.txt                       | multinode-370772 | jenkins | v1.33.1 | 29 Jul 24 19:05 UTC | 29 Jul 24 19:05 UTC |
	|         | multinode-370772:/home/docker/cp-test_multinode-370772-m02_multinode-370772.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-370772 ssh -n                                                                 | multinode-370772 | jenkins | v1.33.1 | 29 Jul 24 19:05 UTC | 29 Jul 24 19:05 UTC |
	|         | multinode-370772-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-370772 ssh -n multinode-370772 sudo cat                                       | multinode-370772 | jenkins | v1.33.1 | 29 Jul 24 19:05 UTC | 29 Jul 24 19:05 UTC |
	|         | /home/docker/cp-test_multinode-370772-m02_multinode-370772.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-370772 cp multinode-370772-m02:/home/docker/cp-test.txt                       | multinode-370772 | jenkins | v1.33.1 | 29 Jul 24 19:05 UTC | 29 Jul 24 19:05 UTC |
	|         | multinode-370772-m03:/home/docker/cp-test_multinode-370772-m02_multinode-370772-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-370772 ssh -n                                                                 | multinode-370772 | jenkins | v1.33.1 | 29 Jul 24 19:05 UTC | 29 Jul 24 19:05 UTC |
	|         | multinode-370772-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-370772 ssh -n multinode-370772-m03 sudo cat                                   | multinode-370772 | jenkins | v1.33.1 | 29 Jul 24 19:05 UTC | 29 Jul 24 19:05 UTC |
	|         | /home/docker/cp-test_multinode-370772-m02_multinode-370772-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-370772 cp testdata/cp-test.txt                                                | multinode-370772 | jenkins | v1.33.1 | 29 Jul 24 19:05 UTC | 29 Jul 24 19:05 UTC |
	|         | multinode-370772-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-370772 ssh -n                                                                 | multinode-370772 | jenkins | v1.33.1 | 29 Jul 24 19:05 UTC | 29 Jul 24 19:05 UTC |
	|         | multinode-370772-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-370772 cp multinode-370772-m03:/home/docker/cp-test.txt                       | multinode-370772 | jenkins | v1.33.1 | 29 Jul 24 19:05 UTC | 29 Jul 24 19:05 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile623438728/001/cp-test_multinode-370772-m03.txt          |                  |         |         |                     |                     |
	| ssh     | multinode-370772 ssh -n                                                                 | multinode-370772 | jenkins | v1.33.1 | 29 Jul 24 19:05 UTC | 29 Jul 24 19:05 UTC |
	|         | multinode-370772-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-370772 cp multinode-370772-m03:/home/docker/cp-test.txt                       | multinode-370772 | jenkins | v1.33.1 | 29 Jul 24 19:05 UTC | 29 Jul 24 19:05 UTC |
	|         | multinode-370772:/home/docker/cp-test_multinode-370772-m03_multinode-370772.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-370772 ssh -n                                                                 | multinode-370772 | jenkins | v1.33.1 | 29 Jul 24 19:05 UTC | 29 Jul 24 19:05 UTC |
	|         | multinode-370772-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-370772 ssh -n multinode-370772 sudo cat                                       | multinode-370772 | jenkins | v1.33.1 | 29 Jul 24 19:05 UTC | 29 Jul 24 19:05 UTC |
	|         | /home/docker/cp-test_multinode-370772-m03_multinode-370772.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-370772 cp multinode-370772-m03:/home/docker/cp-test.txt                       | multinode-370772 | jenkins | v1.33.1 | 29 Jul 24 19:05 UTC | 29 Jul 24 19:05 UTC |
	|         | multinode-370772-m02:/home/docker/cp-test_multinode-370772-m03_multinode-370772-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-370772 ssh -n                                                                 | multinode-370772 | jenkins | v1.33.1 | 29 Jul 24 19:05 UTC | 29 Jul 24 19:05 UTC |
	|         | multinode-370772-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-370772 ssh -n multinode-370772-m02 sudo cat                                   | multinode-370772 | jenkins | v1.33.1 | 29 Jul 24 19:05 UTC | 29 Jul 24 19:05 UTC |
	|         | /home/docker/cp-test_multinode-370772-m03_multinode-370772-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-370772 node stop m03                                                          | multinode-370772 | jenkins | v1.33.1 | 29 Jul 24 19:05 UTC | 29 Jul 24 19:05 UTC |
	| node    | multinode-370772 node start                                                             | multinode-370772 | jenkins | v1.33.1 | 29 Jul 24 19:05 UTC | 29 Jul 24 19:06 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-370772                                                                | multinode-370772 | jenkins | v1.33.1 | 29 Jul 24 19:06 UTC |                     |
	| stop    | -p multinode-370772                                                                     | multinode-370772 | jenkins | v1.33.1 | 29 Jul 24 19:06 UTC |                     |
	| start   | -p multinode-370772                                                                     | multinode-370772 | jenkins | v1.33.1 | 29 Jul 24 19:08 UTC | 29 Jul 24 19:11 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-370772                                                                | multinode-370772 | jenkins | v1.33.1 | 29 Jul 24 19:11 UTC |                     |
	| node    | multinode-370772 node delete                                                            | multinode-370772 | jenkins | v1.33.1 | 29 Jul 24 19:11 UTC | 29 Jul 24 19:11 UTC |
	|         | m03                                                                                     |                  |         |         |                     |                     |
	| stop    | multinode-370772 stop                                                                   | multinode-370772 | jenkins | v1.33.1 | 29 Jul 24 19:11 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 19:08:30
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 19:08:30.241287 1091282 out.go:291] Setting OutFile to fd 1 ...
	I0729 19:08:30.241382 1091282 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 19:08:30.241389 1091282 out.go:304] Setting ErrFile to fd 2...
	I0729 19:08:30.241393 1091282 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 19:08:30.241591 1091282 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19312-1055011/.minikube/bin
	I0729 19:08:30.242109 1091282 out.go:298] Setting JSON to false
	I0729 19:08:30.243038 1091282 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":10262,"bootTime":1722269848,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 19:08:30.243095 1091282 start.go:139] virtualization: kvm guest
	I0729 19:08:30.245216 1091282 out.go:177] * [multinode-370772] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 19:08:30.246585 1091282 out.go:177]   - MINIKUBE_LOCATION=19312
	I0729 19:08:30.246648 1091282 notify.go:220] Checking for updates...
	I0729 19:08:30.248713 1091282 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 19:08:30.249743 1091282 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19312-1055011/kubeconfig
	I0729 19:08:30.250709 1091282 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19312-1055011/.minikube
	I0729 19:08:30.251700 1091282 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 19:08:30.252668 1091282 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 19:08:30.254065 1091282 config.go:182] Loaded profile config "multinode-370772": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 19:08:30.254161 1091282 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 19:08:30.254539 1091282 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:08:30.254594 1091282 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:08:30.269788 1091282 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37185
	I0729 19:08:30.270299 1091282 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:08:30.270974 1091282 main.go:141] libmachine: Using API Version  1
	I0729 19:08:30.271001 1091282 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:08:30.271376 1091282 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:08:30.271583 1091282 main.go:141] libmachine: (multinode-370772) Calling .DriverName
	I0729 19:08:30.306421 1091282 out.go:177] * Using the kvm2 driver based on existing profile
	I0729 19:08:30.307527 1091282 start.go:297] selected driver: kvm2
	I0729 19:08:30.307539 1091282 start.go:901] validating driver "kvm2" against &{Name:multinode-370772 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-370772 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.180 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.127 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.8 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 19:08:30.307683 1091282 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 19:08:30.308027 1091282 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 19:08:30.308096 1091282 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19312-1055011/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 19:08:30.322662 1091282 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0729 19:08:30.323423 1091282 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 19:08:30.323457 1091282 cni.go:84] Creating CNI manager for ""
	I0729 19:08:30.323463 1091282 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0729 19:08:30.323517 1091282 start.go:340] cluster config:
	{Name:multinode-370772 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-370772 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.180 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.127 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.8 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 19:08:30.323639 1091282 iso.go:125] acquiring lock: {Name:mk0af61c0fec1fd47930e548d03010a532c687b7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 19:08:30.325025 1091282 out.go:177] * Starting "multinode-370772" primary control-plane node in "multinode-370772" cluster
	I0729 19:08:30.325909 1091282 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 19:08:30.325940 1091282 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0729 19:08:30.325949 1091282 cache.go:56] Caching tarball of preloaded images
	I0729 19:08:30.326025 1091282 preload.go:172] Found /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 19:08:30.326044 1091282 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0729 19:08:30.326155 1091282 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/multinode-370772/config.json ...
	I0729 19:08:30.326327 1091282 start.go:360] acquireMachinesLock for multinode-370772: {Name:mk0d8d947666df844b5fc2c0e0eebbfed69b4140 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 19:08:30.326364 1091282 start.go:364] duration metric: took 22.127µs to acquireMachinesLock for "multinode-370772"
	I0729 19:08:30.326378 1091282 start.go:96] Skipping create...Using existing machine configuration
	I0729 19:08:30.326386 1091282 fix.go:54] fixHost starting: 
	I0729 19:08:30.326641 1091282 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:08:30.326671 1091282 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:08:30.340392 1091282 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39453
	I0729 19:08:30.340809 1091282 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:08:30.341252 1091282 main.go:141] libmachine: Using API Version  1
	I0729 19:08:30.341272 1091282 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:08:30.341546 1091282 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:08:30.341688 1091282 main.go:141] libmachine: (multinode-370772) Calling .DriverName
	I0729 19:08:30.341816 1091282 main.go:141] libmachine: (multinode-370772) Calling .GetState
	I0729 19:08:30.343221 1091282 fix.go:112] recreateIfNeeded on multinode-370772: state=Running err=<nil>
	W0729 19:08:30.343243 1091282 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 19:08:30.345667 1091282 out.go:177] * Updating the running kvm2 "multinode-370772" VM ...
	I0729 19:08:30.346894 1091282 machine.go:94] provisionDockerMachine start ...
	I0729 19:08:30.346914 1091282 main.go:141] libmachine: (multinode-370772) Calling .DriverName
	I0729 19:08:30.347133 1091282 main.go:141] libmachine: (multinode-370772) Calling .GetSSHHostname
	I0729 19:08:30.349540 1091282 main.go:141] libmachine: (multinode-370772) DBG | domain multinode-370772 has defined MAC address 52:54:00:0a:42:f8 in network mk-multinode-370772
	I0729 19:08:30.350038 1091282 main.go:141] libmachine: (multinode-370772) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:42:f8", ip: ""} in network mk-multinode-370772: {Iface:virbr1 ExpiryTime:2024-07-29 20:03:03 +0000 UTC Type:0 Mac:52:54:00:0a:42:f8 Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:multinode-370772 Clientid:01:52:54:00:0a:42:f8}
	I0729 19:08:30.350064 1091282 main.go:141] libmachine: (multinode-370772) DBG | domain multinode-370772 has defined IP address 192.168.39.180 and MAC address 52:54:00:0a:42:f8 in network mk-multinode-370772
	I0729 19:08:30.350185 1091282 main.go:141] libmachine: (multinode-370772) Calling .GetSSHPort
	I0729 19:08:30.350357 1091282 main.go:141] libmachine: (multinode-370772) Calling .GetSSHKeyPath
	I0729 19:08:30.350498 1091282 main.go:141] libmachine: (multinode-370772) Calling .GetSSHKeyPath
	I0729 19:08:30.350658 1091282 main.go:141] libmachine: (multinode-370772) Calling .GetSSHUsername
	I0729 19:08:30.350833 1091282 main.go:141] libmachine: Using SSH client type: native
	I0729 19:08:30.351094 1091282 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.180 22 <nil> <nil>}
	I0729 19:08:30.351111 1091282 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 19:08:30.463508 1091282 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-370772
	
	I0729 19:08:30.463533 1091282 main.go:141] libmachine: (multinode-370772) Calling .GetMachineName
	I0729 19:08:30.463777 1091282 buildroot.go:166] provisioning hostname "multinode-370772"
	I0729 19:08:30.463807 1091282 main.go:141] libmachine: (multinode-370772) Calling .GetMachineName
	I0729 19:08:30.463969 1091282 main.go:141] libmachine: (multinode-370772) Calling .GetSSHHostname
	I0729 19:08:30.466880 1091282 main.go:141] libmachine: (multinode-370772) DBG | domain multinode-370772 has defined MAC address 52:54:00:0a:42:f8 in network mk-multinode-370772
	I0729 19:08:30.467521 1091282 main.go:141] libmachine: (multinode-370772) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:42:f8", ip: ""} in network mk-multinode-370772: {Iface:virbr1 ExpiryTime:2024-07-29 20:03:03 +0000 UTC Type:0 Mac:52:54:00:0a:42:f8 Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:multinode-370772 Clientid:01:52:54:00:0a:42:f8}
	I0729 19:08:30.467545 1091282 main.go:141] libmachine: (multinode-370772) DBG | domain multinode-370772 has defined IP address 192.168.39.180 and MAC address 52:54:00:0a:42:f8 in network mk-multinode-370772
	I0729 19:08:30.467706 1091282 main.go:141] libmachine: (multinode-370772) Calling .GetSSHPort
	I0729 19:08:30.467895 1091282 main.go:141] libmachine: (multinode-370772) Calling .GetSSHKeyPath
	I0729 19:08:30.468050 1091282 main.go:141] libmachine: (multinode-370772) Calling .GetSSHKeyPath
	I0729 19:08:30.468190 1091282 main.go:141] libmachine: (multinode-370772) Calling .GetSSHUsername
	I0729 19:08:30.468313 1091282 main.go:141] libmachine: Using SSH client type: native
	I0729 19:08:30.468473 1091282 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.180 22 <nil> <nil>}
	I0729 19:08:30.468485 1091282 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-370772 && echo "multinode-370772" | sudo tee /etc/hostname
	I0729 19:08:30.593960 1091282 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-370772
	
	I0729 19:08:30.593985 1091282 main.go:141] libmachine: (multinode-370772) Calling .GetSSHHostname
	I0729 19:08:30.596639 1091282 main.go:141] libmachine: (multinode-370772) DBG | domain multinode-370772 has defined MAC address 52:54:00:0a:42:f8 in network mk-multinode-370772
	I0729 19:08:30.596956 1091282 main.go:141] libmachine: (multinode-370772) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:42:f8", ip: ""} in network mk-multinode-370772: {Iface:virbr1 ExpiryTime:2024-07-29 20:03:03 +0000 UTC Type:0 Mac:52:54:00:0a:42:f8 Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:multinode-370772 Clientid:01:52:54:00:0a:42:f8}
	I0729 19:08:30.597000 1091282 main.go:141] libmachine: (multinode-370772) DBG | domain multinode-370772 has defined IP address 192.168.39.180 and MAC address 52:54:00:0a:42:f8 in network mk-multinode-370772
	I0729 19:08:30.597182 1091282 main.go:141] libmachine: (multinode-370772) Calling .GetSSHPort
	I0729 19:08:30.597387 1091282 main.go:141] libmachine: (multinode-370772) Calling .GetSSHKeyPath
	I0729 19:08:30.597617 1091282 main.go:141] libmachine: (multinode-370772) Calling .GetSSHKeyPath
	I0729 19:08:30.597779 1091282 main.go:141] libmachine: (multinode-370772) Calling .GetSSHUsername
	I0729 19:08:30.597958 1091282 main.go:141] libmachine: Using SSH client type: native
	I0729 19:08:30.598156 1091282 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.180 22 <nil> <nil>}
	I0729 19:08:30.598179 1091282 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-370772' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-370772/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-370772' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 19:08:30.711614 1091282 main.go:141] libmachine: SSH cmd err, output: <nil>: 
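The two SSH commands above set the guest hostname and then make sure it resolves locally. A minimal stand-alone sketch of the same sequence, run directly on the guest (the profile name multinode-370772 is taken from this log; substitute any other machine name):

    #!/usr/bin/env bash
    # Sketch of minikube's hostname provisioning step; NAME is the profile name from this log.
    set -euo pipefail
    NAME="multinode-370772"
    # Set the kernel hostname and persist it for the next boot.
    sudo hostname "${NAME}" && echo "${NAME}" | sudo tee /etc/hostname
    # Ensure the name resolves via /etc/hosts: reuse an existing 127.0.1.1 entry if present,
    # otherwise append one.
    if ! grep -q "[[:space:]]${NAME}\$" /etc/hosts; then
      if grep -q '^127\.0\.1\.1[[:space:]]' /etc/hosts; then
        sudo sed -i "s/^127\.0\.1\.1[[:space:]].*/127.0.1.1 ${NAME}/" /etc/hosts
      else
        echo "127.0.1.1 ${NAME}" | sudo tee -a /etc/hosts
      fi
    fi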
	I0729 19:08:30.711650 1091282 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19312-1055011/.minikube CaCertPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19312-1055011/.minikube}
	I0729 19:08:30.711709 1091282 buildroot.go:174] setting up certificates
	I0729 19:08:30.711739 1091282 provision.go:84] configureAuth start
	I0729 19:08:30.711759 1091282 main.go:141] libmachine: (multinode-370772) Calling .GetMachineName
	I0729 19:08:30.712039 1091282 main.go:141] libmachine: (multinode-370772) Calling .GetIP
	I0729 19:08:30.714542 1091282 main.go:141] libmachine: (multinode-370772) DBG | domain multinode-370772 has defined MAC address 52:54:00:0a:42:f8 in network mk-multinode-370772
	I0729 19:08:30.714972 1091282 main.go:141] libmachine: (multinode-370772) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:42:f8", ip: ""} in network mk-multinode-370772: {Iface:virbr1 ExpiryTime:2024-07-29 20:03:03 +0000 UTC Type:0 Mac:52:54:00:0a:42:f8 Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:multinode-370772 Clientid:01:52:54:00:0a:42:f8}
	I0729 19:08:30.714997 1091282 main.go:141] libmachine: (multinode-370772) DBG | domain multinode-370772 has defined IP address 192.168.39.180 and MAC address 52:54:00:0a:42:f8 in network mk-multinode-370772
	I0729 19:08:30.715161 1091282 main.go:141] libmachine: (multinode-370772) Calling .GetSSHHostname
	I0729 19:08:30.717196 1091282 main.go:141] libmachine: (multinode-370772) DBG | domain multinode-370772 has defined MAC address 52:54:00:0a:42:f8 in network mk-multinode-370772
	I0729 19:08:30.717509 1091282 main.go:141] libmachine: (multinode-370772) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:42:f8", ip: ""} in network mk-multinode-370772: {Iface:virbr1 ExpiryTime:2024-07-29 20:03:03 +0000 UTC Type:0 Mac:52:54:00:0a:42:f8 Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:multinode-370772 Clientid:01:52:54:00:0a:42:f8}
	I0729 19:08:30.717549 1091282 main.go:141] libmachine: (multinode-370772) DBG | domain multinode-370772 has defined IP address 192.168.39.180 and MAC address 52:54:00:0a:42:f8 in network mk-multinode-370772
	I0729 19:08:30.717655 1091282 provision.go:143] copyHostCerts
	I0729 19:08:30.717698 1091282 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19312-1055011/.minikube/cert.pem
	I0729 19:08:30.717745 1091282 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-1055011/.minikube/cert.pem, removing ...
	I0729 19:08:30.717762 1091282 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-1055011/.minikube/cert.pem
	I0729 19:08:30.717836 1091282 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19312-1055011/.minikube/cert.pem (1123 bytes)
	I0729 19:08:30.717947 1091282 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19312-1055011/.minikube/key.pem
	I0729 19:08:30.717972 1091282 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-1055011/.minikube/key.pem, removing ...
	I0729 19:08:30.717979 1091282 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-1055011/.minikube/key.pem
	I0729 19:08:30.718021 1091282 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19312-1055011/.minikube/key.pem (1679 bytes)
	I0729 19:08:30.718104 1091282 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.pem
	I0729 19:08:30.718127 1091282 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.pem, removing ...
	I0729 19:08:30.718134 1091282 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.pem
	I0729 19:08:30.718169 1091282 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.pem (1082 bytes)
	I0729 19:08:30.718249 1091282 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca-key.pem org=jenkins.multinode-370772 san=[127.0.0.1 192.168.39.180 localhost minikube multinode-370772]
	I0729 19:08:30.941171 1091282 provision.go:177] copyRemoteCerts
	I0729 19:08:30.941238 1091282 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 19:08:30.941269 1091282 main.go:141] libmachine: (multinode-370772) Calling .GetSSHHostname
	I0729 19:08:30.943844 1091282 main.go:141] libmachine: (multinode-370772) DBG | domain multinode-370772 has defined MAC address 52:54:00:0a:42:f8 in network mk-multinode-370772
	I0729 19:08:30.944166 1091282 main.go:141] libmachine: (multinode-370772) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:42:f8", ip: ""} in network mk-multinode-370772: {Iface:virbr1 ExpiryTime:2024-07-29 20:03:03 +0000 UTC Type:0 Mac:52:54:00:0a:42:f8 Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:multinode-370772 Clientid:01:52:54:00:0a:42:f8}
	I0729 19:08:30.944192 1091282 main.go:141] libmachine: (multinode-370772) DBG | domain multinode-370772 has defined IP address 192.168.39.180 and MAC address 52:54:00:0a:42:f8 in network mk-multinode-370772
	I0729 19:08:30.944388 1091282 main.go:141] libmachine: (multinode-370772) Calling .GetSSHPort
	I0729 19:08:30.944569 1091282 main.go:141] libmachine: (multinode-370772) Calling .GetSSHKeyPath
	I0729 19:08:30.944756 1091282 main.go:141] libmachine: (multinode-370772) Calling .GetSSHUsername
	I0729 19:08:30.944902 1091282 sshutil.go:53] new ssh client: &{IP:192.168.39.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/multinode-370772/id_rsa Username:docker}
	I0729 19:08:31.028288 1091282 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0729 19:08:31.028355 1091282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0729 19:08:31.053810 1091282 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0729 19:08:31.053864 1091282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0729 19:08:31.076555 1091282 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0729 19:08:31.076612 1091282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0729 19:08:31.099283 1091282 provision.go:87] duration metric: took 387.527287ms to configureAuth
	I0729 19:08:31.099315 1091282 buildroot.go:189] setting minikube options for container-runtime
	I0729 19:08:31.099541 1091282 config.go:182] Loaded profile config "multinode-370772": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 19:08:31.099614 1091282 main.go:141] libmachine: (multinode-370772) Calling .GetSSHHostname
	I0729 19:08:31.102119 1091282 main.go:141] libmachine: (multinode-370772) DBG | domain multinode-370772 has defined MAC address 52:54:00:0a:42:f8 in network mk-multinode-370772
	I0729 19:08:31.102490 1091282 main.go:141] libmachine: (multinode-370772) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:42:f8", ip: ""} in network mk-multinode-370772: {Iface:virbr1 ExpiryTime:2024-07-29 20:03:03 +0000 UTC Type:0 Mac:52:54:00:0a:42:f8 Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:multinode-370772 Clientid:01:52:54:00:0a:42:f8}
	I0729 19:08:31.102518 1091282 main.go:141] libmachine: (multinode-370772) DBG | domain multinode-370772 has defined IP address 192.168.39.180 and MAC address 52:54:00:0a:42:f8 in network mk-multinode-370772
	I0729 19:08:31.102667 1091282 main.go:141] libmachine: (multinode-370772) Calling .GetSSHPort
	I0729 19:08:31.102897 1091282 main.go:141] libmachine: (multinode-370772) Calling .GetSSHKeyPath
	I0729 19:08:31.103072 1091282 main.go:141] libmachine: (multinode-370772) Calling .GetSSHKeyPath
	I0729 19:08:31.103218 1091282 main.go:141] libmachine: (multinode-370772) Calling .GetSSHUsername
	I0729 19:08:31.103370 1091282 main.go:141] libmachine: Using SSH client type: native
	I0729 19:08:31.103531 1091282 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.180 22 <nil> <nil>}
	I0729 19:08:31.103544 1091282 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 19:10:01.843905 1091282 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 19:10:01.843952 1091282 machine.go:97] duration metric: took 1m31.497040064s to provisionDockerMachine
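The container-runtime options step just above writes a one-line sysconfig drop-in and restarts CRI-O; the %!s(MISSING) in the logged command is Go's fmt package reporting a %s verb that had no argument when the command template was logged. Reproducing the step by hand on the guest looks like this:

    #!/usr/bin/env bash
    # Write the CRI-O minikube options drop-in and restart the runtime,
    # mirroring the SSH command in the log above.
    set -euo pipefail
    sudo mkdir -p /etc/sysconfig
    printf "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '\n" \
      | sudo tee /etc/sysconfig/crio.minikube
    sudo systemctl restart crio

In this run that crio restart is what consumed most of the 1m31s provisionDockerMachine time (the command was issued at 19:08:31 and returned at 19:10:01).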
	I0729 19:10:01.843973 1091282 start.go:293] postStartSetup for "multinode-370772" (driver="kvm2")
	I0729 19:10:01.843988 1091282 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 19:10:01.844014 1091282 main.go:141] libmachine: (multinode-370772) Calling .DriverName
	I0729 19:10:01.844451 1091282 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 19:10:01.844490 1091282 main.go:141] libmachine: (multinode-370772) Calling .GetSSHHostname
	I0729 19:10:01.847610 1091282 main.go:141] libmachine: (multinode-370772) DBG | domain multinode-370772 has defined MAC address 52:54:00:0a:42:f8 in network mk-multinode-370772
	I0729 19:10:01.848026 1091282 main.go:141] libmachine: (multinode-370772) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:42:f8", ip: ""} in network mk-multinode-370772: {Iface:virbr1 ExpiryTime:2024-07-29 20:03:03 +0000 UTC Type:0 Mac:52:54:00:0a:42:f8 Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:multinode-370772 Clientid:01:52:54:00:0a:42:f8}
	I0729 19:10:01.848066 1091282 main.go:141] libmachine: (multinode-370772) DBG | domain multinode-370772 has defined IP address 192.168.39.180 and MAC address 52:54:00:0a:42:f8 in network mk-multinode-370772
	I0729 19:10:01.848289 1091282 main.go:141] libmachine: (multinode-370772) Calling .GetSSHPort
	I0729 19:10:01.848491 1091282 main.go:141] libmachine: (multinode-370772) Calling .GetSSHKeyPath
	I0729 19:10:01.848645 1091282 main.go:141] libmachine: (multinode-370772) Calling .GetSSHUsername
	I0729 19:10:01.848758 1091282 sshutil.go:53] new ssh client: &{IP:192.168.39.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/multinode-370772/id_rsa Username:docker}
	I0729 19:10:01.935310 1091282 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 19:10:01.939699 1091282 command_runner.go:130] > NAME=Buildroot
	I0729 19:10:01.939721 1091282 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0729 19:10:01.939725 1091282 command_runner.go:130] > ID=buildroot
	I0729 19:10:01.939729 1091282 command_runner.go:130] > VERSION_ID=2023.02.9
	I0729 19:10:01.939734 1091282 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0729 19:10:01.939864 1091282 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 19:10:01.939893 1091282 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-1055011/.minikube/addons for local assets ...
	I0729 19:10:01.939953 1091282 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-1055011/.minikube/files for local assets ...
	I0729 19:10:01.940029 1091282 filesync.go:149] local asset: /home/jenkins/minikube-integration/19312-1055011/.minikube/files/etc/ssl/certs/10622722.pem -> 10622722.pem in /etc/ssl/certs
	I0729 19:10:01.940040 1091282 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1055011/.minikube/files/etc/ssl/certs/10622722.pem -> /etc/ssl/certs/10622722.pem
	I0729 19:10:01.940163 1091282 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 19:10:01.949916 1091282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/files/etc/ssl/certs/10622722.pem --> /etc/ssl/certs/10622722.pem (1708 bytes)
	I0729 19:10:01.975645 1091282 start.go:296] duration metric: took 131.654111ms for postStartSetup
	I0729 19:10:01.975695 1091282 fix.go:56] duration metric: took 1m31.649308965s for fixHost
	I0729 19:10:01.975719 1091282 main.go:141] libmachine: (multinode-370772) Calling .GetSSHHostname
	I0729 19:10:01.978504 1091282 main.go:141] libmachine: (multinode-370772) DBG | domain multinode-370772 has defined MAC address 52:54:00:0a:42:f8 in network mk-multinode-370772
	I0729 19:10:01.978887 1091282 main.go:141] libmachine: (multinode-370772) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:42:f8", ip: ""} in network mk-multinode-370772: {Iface:virbr1 ExpiryTime:2024-07-29 20:03:03 +0000 UTC Type:0 Mac:52:54:00:0a:42:f8 Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:multinode-370772 Clientid:01:52:54:00:0a:42:f8}
	I0729 19:10:01.978925 1091282 main.go:141] libmachine: (multinode-370772) DBG | domain multinode-370772 has defined IP address 192.168.39.180 and MAC address 52:54:00:0a:42:f8 in network mk-multinode-370772
	I0729 19:10:01.979032 1091282 main.go:141] libmachine: (multinode-370772) Calling .GetSSHPort
	I0729 19:10:01.979249 1091282 main.go:141] libmachine: (multinode-370772) Calling .GetSSHKeyPath
	I0729 19:10:01.979435 1091282 main.go:141] libmachine: (multinode-370772) Calling .GetSSHKeyPath
	I0729 19:10:01.979611 1091282 main.go:141] libmachine: (multinode-370772) Calling .GetSSHUsername
	I0729 19:10:01.979765 1091282 main.go:141] libmachine: Using SSH client type: native
	I0729 19:10:01.979948 1091282 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.180 22 <nil> <nil>}
	I0729 19:10:01.979959 1091282 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0729 19:10:02.096857 1091282 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722280202.073473206
	
	I0729 19:10:02.096893 1091282 fix.go:216] guest clock: 1722280202.073473206
	I0729 19:10:02.096900 1091282 fix.go:229] Guest: 2024-07-29 19:10:02.073473206 +0000 UTC Remote: 2024-07-29 19:10:01.975700043 +0000 UTC m=+91.769380968 (delta=97.773163ms)
	I0729 19:10:02.096948 1091282 fix.go:200] guest clock delta is within tolerance: 97.773163ms
	I0729 19:10:02.096958 1091282 start.go:83] releasing machines lock for "multinode-370772", held for 1m31.770584081s
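The fixHost step finishes by reading the guest clock over SSH (date +%s.%N; the verbs again surface as %!s(MISSING)/%!N(MISSING) in the logged template) and comparing it with the host clock; here the ~98 ms delta is inside the tolerance, so no resync is attempted. A tiny host-side check in the same spirit, assuming the SSH key and guest IP from this log:

    #!/usr/bin/env bash
    # Compare guest and host wall clocks, roughly what the guest-clock check does.
    set -euo pipefail
    KEY="/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/multinode-370772/id_rsa"
    GUEST="docker@192.168.39.180"
    guest_ts=$(ssh -i "${KEY}" "${GUEST}" 'date +%s.%N')
    host_ts=$(date +%s.%N)
    awk -v g="${guest_ts}" -v h="${host_ts}" 'BEGIN{printf "guest-host clock delta: %+.3f s\n", g-h}'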
	I0729 19:10:02.096983 1091282 main.go:141] libmachine: (multinode-370772) Calling .DriverName
	I0729 19:10:02.097300 1091282 main.go:141] libmachine: (multinode-370772) Calling .GetIP
	I0729 19:10:02.099811 1091282 main.go:141] libmachine: (multinode-370772) DBG | domain multinode-370772 has defined MAC address 52:54:00:0a:42:f8 in network mk-multinode-370772
	I0729 19:10:02.100138 1091282 main.go:141] libmachine: (multinode-370772) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:42:f8", ip: ""} in network mk-multinode-370772: {Iface:virbr1 ExpiryTime:2024-07-29 20:03:03 +0000 UTC Type:0 Mac:52:54:00:0a:42:f8 Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:multinode-370772 Clientid:01:52:54:00:0a:42:f8}
	I0729 19:10:02.100172 1091282 main.go:141] libmachine: (multinode-370772) DBG | domain multinode-370772 has defined IP address 192.168.39.180 and MAC address 52:54:00:0a:42:f8 in network mk-multinode-370772
	I0729 19:10:02.100304 1091282 main.go:141] libmachine: (multinode-370772) Calling .DriverName
	I0729 19:10:02.100880 1091282 main.go:141] libmachine: (multinode-370772) Calling .DriverName
	I0729 19:10:02.101050 1091282 main.go:141] libmachine: (multinode-370772) Calling .DriverName
	I0729 19:10:02.101172 1091282 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 19:10:02.101231 1091282 main.go:141] libmachine: (multinode-370772) Calling .GetSSHHostname
	I0729 19:10:02.101252 1091282 ssh_runner.go:195] Run: cat /version.json
	I0729 19:10:02.101277 1091282 main.go:141] libmachine: (multinode-370772) Calling .GetSSHHostname
	I0729 19:10:02.103702 1091282 main.go:141] libmachine: (multinode-370772) DBG | domain multinode-370772 has defined MAC address 52:54:00:0a:42:f8 in network mk-multinode-370772
	I0729 19:10:02.104007 1091282 main.go:141] libmachine: (multinode-370772) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:42:f8", ip: ""} in network mk-multinode-370772: {Iface:virbr1 ExpiryTime:2024-07-29 20:03:03 +0000 UTC Type:0 Mac:52:54:00:0a:42:f8 Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:multinode-370772 Clientid:01:52:54:00:0a:42:f8}
	I0729 19:10:02.104065 1091282 main.go:141] libmachine: (multinode-370772) DBG | domain multinode-370772 has defined IP address 192.168.39.180 and MAC address 52:54:00:0a:42:f8 in network mk-multinode-370772
	I0729 19:10:02.104088 1091282 main.go:141] libmachine: (multinode-370772) DBG | domain multinode-370772 has defined MAC address 52:54:00:0a:42:f8 in network mk-multinode-370772
	I0729 19:10:02.104234 1091282 main.go:141] libmachine: (multinode-370772) Calling .GetSSHPort
	I0729 19:10:02.104427 1091282 main.go:141] libmachine: (multinode-370772) Calling .GetSSHKeyPath
	I0729 19:10:02.104507 1091282 main.go:141] libmachine: (multinode-370772) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:42:f8", ip: ""} in network mk-multinode-370772: {Iface:virbr1 ExpiryTime:2024-07-29 20:03:03 +0000 UTC Type:0 Mac:52:54:00:0a:42:f8 Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:multinode-370772 Clientid:01:52:54:00:0a:42:f8}
	I0729 19:10:02.104530 1091282 main.go:141] libmachine: (multinode-370772) DBG | domain multinode-370772 has defined IP address 192.168.39.180 and MAC address 52:54:00:0a:42:f8 in network mk-multinode-370772
	I0729 19:10:02.104587 1091282 main.go:141] libmachine: (multinode-370772) Calling .GetSSHUsername
	I0729 19:10:02.104700 1091282 main.go:141] libmachine: (multinode-370772) Calling .GetSSHPort
	I0729 19:10:02.104812 1091282 sshutil.go:53] new ssh client: &{IP:192.168.39.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/multinode-370772/id_rsa Username:docker}
	I0729 19:10:02.104910 1091282 main.go:141] libmachine: (multinode-370772) Calling .GetSSHKeyPath
	I0729 19:10:02.105043 1091282 main.go:141] libmachine: (multinode-370772) Calling .GetSSHUsername
	I0729 19:10:02.105199 1091282 sshutil.go:53] new ssh client: &{IP:192.168.39.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/multinode-370772/id_rsa Username:docker}
	I0729 19:10:02.204632 1091282 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0729 19:10:02.204685 1091282 command_runner.go:130] > {"iso_version": "v1.33.1-1721690939-19319", "kicbase_version": "v0.0.44-1721687125-19319", "minikube_version": "v1.33.1", "commit": "92810d69359a527ae6920427bb5751eaaa3842e4"}
	I0729 19:10:02.204811 1091282 ssh_runner.go:195] Run: systemctl --version
	I0729 19:10:02.210543 1091282 command_runner.go:130] > systemd 252 (252)
	I0729 19:10:02.210582 1091282 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0729 19:10:02.210823 1091282 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 19:10:02.368790 1091282 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0729 19:10:02.377127 1091282 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0729 19:10:02.377275 1091282 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 19:10:02.377340 1091282 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 19:10:02.387561 1091282 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0729 19:10:02.387591 1091282 start.go:495] detecting cgroup driver to use...
	I0729 19:10:02.387686 1091282 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 19:10:02.407264 1091282 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 19:10:02.421723 1091282 docker.go:217] disabling cri-docker service (if available) ...
	I0729 19:10:02.421784 1091282 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 19:10:02.436673 1091282 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 19:10:02.451028 1091282 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 19:10:02.608282 1091282 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 19:10:02.768607 1091282 docker.go:233] disabling docker service ...
	I0729 19:10:02.768686 1091282 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 19:10:02.789405 1091282 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 19:10:02.804259 1091282 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 19:10:02.957069 1091282 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 19:10:03.112477 1091282 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
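Before reconfiguring CRI-O, the start path stops and masks the competing runtimes (cri-dockerd and docker) so that only CRI-O owns the CRI socket. The same sequence, runnable on the guest (the || true guards only matter on images where a unit is missing):

    #!/usr/bin/env bash
    # Stop, disable and mask cri-dockerd and docker, as in the log above.
    sudo systemctl stop -f cri-docker.socket || true
    sudo systemctl stop -f cri-docker.service || true
    sudo systemctl disable cri-docker.socket || true
    sudo systemctl mask cri-docker.service || true
    sudo systemctl stop -f docker.socket || true
    sudo systemctl stop -f docker.service || true
    sudo systemctl disable docker.socket || true
    sudo systemctl mask docker.service || true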
	I0729 19:10:03.131536 1091282 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 19:10:03.152169 1091282 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0729 19:10:03.152499 1091282 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0729 19:10:03.152566 1091282 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:10:03.163932 1091282 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 19:10:03.164024 1091282 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:10:03.175642 1091282 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:10:03.186453 1091282 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:10:03.197315 1091282 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 19:10:03.211230 1091282 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:10:03.223794 1091282 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:10:03.235154 1091282 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:10:03.247196 1091282 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 19:10:03.257120 1091282 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0729 19:10:03.257199 1091282 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 19:10:03.266840 1091282 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 19:10:03.407938 1091282 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 19:10:10.493783 1091282 ssh_runner.go:235] Completed: sudo systemctl restart crio: (7.085804296s)
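The block from 19:10:03.13 to 19:10:10.49 points crictl at the CRI-O socket and rewrites /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroupfs cgroup manager, per-pod conmon cgroup, unprivileged-port sysctl) before restarting CRI-O, which alone takes about 7 s here. A condensed sketch of the same edits:

    #!/usr/bin/env bash
    # Condensed version of the CRI-O configuration edits performed over SSH above.
    set -euo pipefail
    CONF=/etc/crio/crio.conf.d/02-crio.conf
    # Point crictl at the CRI-O socket.
    printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml
    # Pin the pause image and use the cgroupfs cgroup manager with a per-pod conmon cgroup.
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' "$CONF"
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$CONF"
    sudo sed -i '/conmon_cgroup = .*/d' "$CONF"
    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$CONF"
    # Let pods bind privileged ports without extra capabilities.
    sudo grep -q '^ *default_sysctls' "$CONF" || \
      sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' "$CONF"
    sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' "$CONF"
    # Networking prerequisites checked/set in the same pass.
    sudo sysctl net.bridge.bridge-nf-call-iptables
    sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
    sudo systemctl daemon-reload
    sudo systemctl restart crio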
	I0729 19:10:10.493820 1091282 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 19:10:10.493868 1091282 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 19:10:10.499199 1091282 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0729 19:10:10.499220 1091282 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0729 19:10:10.499245 1091282 command_runner.go:130] > Device: 0,22	Inode: 1328        Links: 1
	I0729 19:10:10.499257 1091282 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0729 19:10:10.499262 1091282 command_runner.go:130] > Access: 2024-07-29 19:10:10.370314773 +0000
	I0729 19:10:10.499268 1091282 command_runner.go:130] > Modify: 2024-07-29 19:10:10.370314773 +0000
	I0729 19:10:10.499273 1091282 command_runner.go:130] > Change: 2024-07-29 19:10:10.370314773 +0000
	I0729 19:10:10.499276 1091282 command_runner.go:130] >  Birth: -
	I0729 19:10:10.499296 1091282 start.go:563] Will wait 60s for crictl version
	I0729 19:10:10.499348 1091282 ssh_runner.go:195] Run: which crictl
	I0729 19:10:10.503038 1091282 command_runner.go:130] > /usr/bin/crictl
	I0729 19:10:10.503111 1091282 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 19:10:10.542894 1091282 command_runner.go:130] > Version:  0.1.0
	I0729 19:10:10.542915 1091282 command_runner.go:130] > RuntimeName:  cri-o
	I0729 19:10:10.542920 1091282 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0729 19:10:10.542926 1091282 command_runner.go:130] > RuntimeApiVersion:  v1
	I0729 19:10:10.543934 1091282 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
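After the restart, the start code waits up to 60 s for the CRI socket to reappear and then asks crictl for the runtime version (cri-o 1.29.1 here). A small stand-in loop for the same wait:

    #!/usr/bin/env bash
    # Poll for the CRI-O socket (60s budget, as in the log), then report the runtime version.
    SOCK=/var/run/crio/crio.sock
    for _ in $(seq 1 60); do
      stat "$SOCK" >/dev/null 2>&1 && break
      sleep 1
    done
    sudo "$(which crictl)" version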
	I0729 19:10:10.544014 1091282 ssh_runner.go:195] Run: crio --version
	I0729 19:10:10.574191 1091282 command_runner.go:130] > crio version 1.29.1
	I0729 19:10:10.574214 1091282 command_runner.go:130] > Version:        1.29.1
	I0729 19:10:10.574221 1091282 command_runner.go:130] > GitCommit:      unknown
	I0729 19:10:10.574225 1091282 command_runner.go:130] > GitCommitDate:  unknown
	I0729 19:10:10.574230 1091282 command_runner.go:130] > GitTreeState:   clean
	I0729 19:10:10.574235 1091282 command_runner.go:130] > BuildDate:      2024-07-23T05:10:02Z
	I0729 19:10:10.574240 1091282 command_runner.go:130] > GoVersion:      go1.21.6
	I0729 19:10:10.574244 1091282 command_runner.go:130] > Compiler:       gc
	I0729 19:10:10.574251 1091282 command_runner.go:130] > Platform:       linux/amd64
	I0729 19:10:10.574257 1091282 command_runner.go:130] > Linkmode:       dynamic
	I0729 19:10:10.574264 1091282 command_runner.go:130] > BuildTags:      
	I0729 19:10:10.574274 1091282 command_runner.go:130] >   containers_image_ostree_stub
	I0729 19:10:10.574280 1091282 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0729 19:10:10.574290 1091282 command_runner.go:130] >   btrfs_noversion
	I0729 19:10:10.574295 1091282 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0729 19:10:10.574299 1091282 command_runner.go:130] >   libdm_no_deferred_remove
	I0729 19:10:10.574308 1091282 command_runner.go:130] >   seccomp
	I0729 19:10:10.574313 1091282 command_runner.go:130] > LDFlags:          unknown
	I0729 19:10:10.574320 1091282 command_runner.go:130] > SeccompEnabled:   true
	I0729 19:10:10.574323 1091282 command_runner.go:130] > AppArmorEnabled:  false
	I0729 19:10:10.575520 1091282 ssh_runner.go:195] Run: crio --version
	I0729 19:10:10.601753 1091282 command_runner.go:130] > crio version 1.29.1
	I0729 19:10:10.601777 1091282 command_runner.go:130] > Version:        1.29.1
	I0729 19:10:10.601783 1091282 command_runner.go:130] > GitCommit:      unknown
	I0729 19:10:10.601795 1091282 command_runner.go:130] > GitCommitDate:  unknown
	I0729 19:10:10.601799 1091282 command_runner.go:130] > GitTreeState:   clean
	I0729 19:10:10.601805 1091282 command_runner.go:130] > BuildDate:      2024-07-23T05:10:02Z
	I0729 19:10:10.601811 1091282 command_runner.go:130] > GoVersion:      go1.21.6
	I0729 19:10:10.601816 1091282 command_runner.go:130] > Compiler:       gc
	I0729 19:10:10.601823 1091282 command_runner.go:130] > Platform:       linux/amd64
	I0729 19:10:10.601829 1091282 command_runner.go:130] > Linkmode:       dynamic
	I0729 19:10:10.601840 1091282 command_runner.go:130] > BuildTags:      
	I0729 19:10:10.601847 1091282 command_runner.go:130] >   containers_image_ostree_stub
	I0729 19:10:10.601853 1091282 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0729 19:10:10.601863 1091282 command_runner.go:130] >   btrfs_noversion
	I0729 19:10:10.601869 1091282 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0729 19:10:10.601875 1091282 command_runner.go:130] >   libdm_no_deferred_remove
	I0729 19:10:10.601879 1091282 command_runner.go:130] >   seccomp
	I0729 19:10:10.601883 1091282 command_runner.go:130] > LDFlags:          unknown
	I0729 19:10:10.601887 1091282 command_runner.go:130] > SeccompEnabled:   true
	I0729 19:10:10.601891 1091282 command_runner.go:130] > AppArmorEnabled:  false
	I0729 19:10:10.604715 1091282 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0729 19:10:10.605920 1091282 main.go:141] libmachine: (multinode-370772) Calling .GetIP
	I0729 19:10:10.608528 1091282 main.go:141] libmachine: (multinode-370772) DBG | domain multinode-370772 has defined MAC address 52:54:00:0a:42:f8 in network mk-multinode-370772
	I0729 19:10:10.608826 1091282 main.go:141] libmachine: (multinode-370772) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:42:f8", ip: ""} in network mk-multinode-370772: {Iface:virbr1 ExpiryTime:2024-07-29 20:03:03 +0000 UTC Type:0 Mac:52:54:00:0a:42:f8 Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:multinode-370772 Clientid:01:52:54:00:0a:42:f8}
	I0729 19:10:10.608847 1091282 main.go:141] libmachine: (multinode-370772) DBG | domain multinode-370772 has defined IP address 192.168.39.180 and MAC address 52:54:00:0a:42:f8 in network mk-multinode-370772
	I0729 19:10:10.609055 1091282 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0729 19:10:10.613115 1091282 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0729 19:10:10.613229 1091282 kubeadm.go:883] updating cluster {Name:multinode-370772 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-370772 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.180 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.127 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.8 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 19:10:10.613367 1091282 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 19:10:10.613410 1091282 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 19:10:10.659628 1091282 command_runner.go:130] > {
	I0729 19:10:10.659654 1091282 command_runner.go:130] >   "images": [
	I0729 19:10:10.659662 1091282 command_runner.go:130] >     {
	I0729 19:10:10.659685 1091282 command_runner.go:130] >       "id": "5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f",
	I0729 19:10:10.659692 1091282 command_runner.go:130] >       "repoTags": [
	I0729 19:10:10.659701 1091282 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240715-585640e9"
	I0729 19:10:10.659707 1091282 command_runner.go:130] >       ],
	I0729 19:10:10.659715 1091282 command_runner.go:130] >       "repoDigests": [
	I0729 19:10:10.659728 1091282 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115",
	I0729 19:10:10.659743 1091282 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493"
	I0729 19:10:10.659751 1091282 command_runner.go:130] >       ],
	I0729 19:10:10.659757 1091282 command_runner.go:130] >       "size": "87165492",
	I0729 19:10:10.659767 1091282 command_runner.go:130] >       "uid": null,
	I0729 19:10:10.659774 1091282 command_runner.go:130] >       "username": "",
	I0729 19:10:10.659790 1091282 command_runner.go:130] >       "spec": null,
	I0729 19:10:10.659798 1091282 command_runner.go:130] >       "pinned": false
	I0729 19:10:10.659807 1091282 command_runner.go:130] >     },
	I0729 19:10:10.659815 1091282 command_runner.go:130] >     {
	I0729 19:10:10.659828 1091282 command_runner.go:130] >       "id": "6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46",
	I0729 19:10:10.659837 1091282 command_runner.go:130] >       "repoTags": [
	I0729 19:10:10.659848 1091282 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240719-e7903573"
	I0729 19:10:10.659856 1091282 command_runner.go:130] >       ],
	I0729 19:10:10.659863 1091282 command_runner.go:130] >       "repoDigests": [
	I0729 19:10:10.659877 1091282 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9",
	I0729 19:10:10.659891 1091282 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:da8ad203ec15a72c313015e5609db44bfad7c95d8ce63e87ff97c66363b5680a"
	I0729 19:10:10.659899 1091282 command_runner.go:130] >       ],
	I0729 19:10:10.659908 1091282 command_runner.go:130] >       "size": "87174707",
	I0729 19:10:10.659917 1091282 command_runner.go:130] >       "uid": null,
	I0729 19:10:10.659930 1091282 command_runner.go:130] >       "username": "",
	I0729 19:10:10.659938 1091282 command_runner.go:130] >       "spec": null,
	I0729 19:10:10.659948 1091282 command_runner.go:130] >       "pinned": false
	I0729 19:10:10.659957 1091282 command_runner.go:130] >     },
	I0729 19:10:10.659965 1091282 command_runner.go:130] >     {
	I0729 19:10:10.659978 1091282 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0729 19:10:10.659986 1091282 command_runner.go:130] >       "repoTags": [
	I0729 19:10:10.659994 1091282 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0729 19:10:10.659997 1091282 command_runner.go:130] >       ],
	I0729 19:10:10.660002 1091282 command_runner.go:130] >       "repoDigests": [
	I0729 19:10:10.660009 1091282 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0729 19:10:10.660026 1091282 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0729 19:10:10.660032 1091282 command_runner.go:130] >       ],
	I0729 19:10:10.660036 1091282 command_runner.go:130] >       "size": "1363676",
	I0729 19:10:10.660040 1091282 command_runner.go:130] >       "uid": null,
	I0729 19:10:10.660046 1091282 command_runner.go:130] >       "username": "",
	I0729 19:10:10.660050 1091282 command_runner.go:130] >       "spec": null,
	I0729 19:10:10.660056 1091282 command_runner.go:130] >       "pinned": false
	I0729 19:10:10.660059 1091282 command_runner.go:130] >     },
	I0729 19:10:10.660064 1091282 command_runner.go:130] >     {
	I0729 19:10:10.660070 1091282 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0729 19:10:10.660076 1091282 command_runner.go:130] >       "repoTags": [
	I0729 19:10:10.660081 1091282 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0729 19:10:10.660086 1091282 command_runner.go:130] >       ],
	I0729 19:10:10.660090 1091282 command_runner.go:130] >       "repoDigests": [
	I0729 19:10:10.660100 1091282 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0729 19:10:10.660115 1091282 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0729 19:10:10.660121 1091282 command_runner.go:130] >       ],
	I0729 19:10:10.660125 1091282 command_runner.go:130] >       "size": "31470524",
	I0729 19:10:10.660129 1091282 command_runner.go:130] >       "uid": null,
	I0729 19:10:10.660135 1091282 command_runner.go:130] >       "username": "",
	I0729 19:10:10.660139 1091282 command_runner.go:130] >       "spec": null,
	I0729 19:10:10.660145 1091282 command_runner.go:130] >       "pinned": false
	I0729 19:10:10.660148 1091282 command_runner.go:130] >     },
	I0729 19:10:10.660154 1091282 command_runner.go:130] >     {
	I0729 19:10:10.660162 1091282 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0729 19:10:10.660168 1091282 command_runner.go:130] >       "repoTags": [
	I0729 19:10:10.660174 1091282 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0729 19:10:10.660179 1091282 command_runner.go:130] >       ],
	I0729 19:10:10.660183 1091282 command_runner.go:130] >       "repoDigests": [
	I0729 19:10:10.660192 1091282 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0729 19:10:10.660201 1091282 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0729 19:10:10.660206 1091282 command_runner.go:130] >       ],
	I0729 19:10:10.660210 1091282 command_runner.go:130] >       "size": "61245718",
	I0729 19:10:10.660213 1091282 command_runner.go:130] >       "uid": null,
	I0729 19:10:10.660219 1091282 command_runner.go:130] >       "username": "nonroot",
	I0729 19:10:10.660223 1091282 command_runner.go:130] >       "spec": null,
	I0729 19:10:10.660234 1091282 command_runner.go:130] >       "pinned": false
	I0729 19:10:10.660240 1091282 command_runner.go:130] >     },
	I0729 19:10:10.660243 1091282 command_runner.go:130] >     {
	I0729 19:10:10.660251 1091282 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0729 19:10:10.660255 1091282 command_runner.go:130] >       "repoTags": [
	I0729 19:10:10.660260 1091282 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0729 19:10:10.660265 1091282 command_runner.go:130] >       ],
	I0729 19:10:10.660273 1091282 command_runner.go:130] >       "repoDigests": [
	I0729 19:10:10.660282 1091282 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0729 19:10:10.660291 1091282 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0729 19:10:10.660297 1091282 command_runner.go:130] >       ],
	I0729 19:10:10.660301 1091282 command_runner.go:130] >       "size": "150779692",
	I0729 19:10:10.660307 1091282 command_runner.go:130] >       "uid": {
	I0729 19:10:10.660312 1091282 command_runner.go:130] >         "value": "0"
	I0729 19:10:10.660317 1091282 command_runner.go:130] >       },
	I0729 19:10:10.660321 1091282 command_runner.go:130] >       "username": "",
	I0729 19:10:10.660327 1091282 command_runner.go:130] >       "spec": null,
	I0729 19:10:10.660330 1091282 command_runner.go:130] >       "pinned": false
	I0729 19:10:10.660335 1091282 command_runner.go:130] >     },
	I0729 19:10:10.660339 1091282 command_runner.go:130] >     {
	I0729 19:10:10.660347 1091282 command_runner.go:130] >       "id": "1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d",
	I0729 19:10:10.660353 1091282 command_runner.go:130] >       "repoTags": [
	I0729 19:10:10.660357 1091282 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.3"
	I0729 19:10:10.660363 1091282 command_runner.go:130] >       ],
	I0729 19:10:10.660366 1091282 command_runner.go:130] >       "repoDigests": [
	I0729 19:10:10.660375 1091282 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c",
	I0729 19:10:10.660384 1091282 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a3a6c80030a6e720734ae3291448388f70b6f1d463f103e4f06f358f8a170315"
	I0729 19:10:10.660389 1091282 command_runner.go:130] >       ],
	I0729 19:10:10.660393 1091282 command_runner.go:130] >       "size": "117609954",
	I0729 19:10:10.660399 1091282 command_runner.go:130] >       "uid": {
	I0729 19:10:10.660402 1091282 command_runner.go:130] >         "value": "0"
	I0729 19:10:10.660408 1091282 command_runner.go:130] >       },
	I0729 19:10:10.660411 1091282 command_runner.go:130] >       "username": "",
	I0729 19:10:10.660416 1091282 command_runner.go:130] >       "spec": null,
	I0729 19:10:10.660420 1091282 command_runner.go:130] >       "pinned": false
	I0729 19:10:10.660425 1091282 command_runner.go:130] >     },
	I0729 19:10:10.660433 1091282 command_runner.go:130] >     {
	I0729 19:10:10.660441 1091282 command_runner.go:130] >       "id": "76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e",
	I0729 19:10:10.660446 1091282 command_runner.go:130] >       "repoTags": [
	I0729 19:10:10.660451 1091282 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.3"
	I0729 19:10:10.660456 1091282 command_runner.go:130] >       ],
	I0729 19:10:10.660460 1091282 command_runner.go:130] >       "repoDigests": [
	I0729 19:10:10.660482 1091282 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7",
	I0729 19:10:10.660492 1091282 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:fa179d147c6bacddd1586f6d12ff79a844e951c7b159fdcb92cdf56f3033d91e"
	I0729 19:10:10.660497 1091282 command_runner.go:130] >       ],
	I0729 19:10:10.660502 1091282 command_runner.go:130] >       "size": "112198984",
	I0729 19:10:10.660507 1091282 command_runner.go:130] >       "uid": {
	I0729 19:10:10.660511 1091282 command_runner.go:130] >         "value": "0"
	I0729 19:10:10.660514 1091282 command_runner.go:130] >       },
	I0729 19:10:10.660518 1091282 command_runner.go:130] >       "username": "",
	I0729 19:10:10.660523 1091282 command_runner.go:130] >       "spec": null,
	I0729 19:10:10.660526 1091282 command_runner.go:130] >       "pinned": false
	I0729 19:10:10.660529 1091282 command_runner.go:130] >     },
	I0729 19:10:10.660532 1091282 command_runner.go:130] >     {
	I0729 19:10:10.660537 1091282 command_runner.go:130] >       "id": "55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1",
	I0729 19:10:10.660541 1091282 command_runner.go:130] >       "repoTags": [
	I0729 19:10:10.660545 1091282 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.3"
	I0729 19:10:10.660548 1091282 command_runner.go:130] >       ],
	I0729 19:10:10.660552 1091282 command_runner.go:130] >       "repoDigests": [
	I0729 19:10:10.660561 1091282 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:8c178447597867a03bbcdf0d1ce43fc8f6807ead2321bd1ec0e845a2f12dad80",
	I0729 19:10:10.660567 1091282 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65"
	I0729 19:10:10.660571 1091282 command_runner.go:130] >       ],
	I0729 19:10:10.660577 1091282 command_runner.go:130] >       "size": "85953945",
	I0729 19:10:10.660582 1091282 command_runner.go:130] >       "uid": null,
	I0729 19:10:10.660588 1091282 command_runner.go:130] >       "username": "",
	I0729 19:10:10.660593 1091282 command_runner.go:130] >       "spec": null,
	I0729 19:10:10.660599 1091282 command_runner.go:130] >       "pinned": false
	I0729 19:10:10.660603 1091282 command_runner.go:130] >     },
	I0729 19:10:10.660607 1091282 command_runner.go:130] >     {
	I0729 19:10:10.660616 1091282 command_runner.go:130] >       "id": "3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2",
	I0729 19:10:10.660624 1091282 command_runner.go:130] >       "repoTags": [
	I0729 19:10:10.660635 1091282 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.3"
	I0729 19:10:10.660652 1091282 command_runner.go:130] >       ],
	I0729 19:10:10.660674 1091282 command_runner.go:130] >       "repoDigests": [
	I0729 19:10:10.660708 1091282 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:1738178fb116d10e7cde2cfc3671f5dfdad518d773677af740483f2dfe674266",
	I0729 19:10:10.660721 1091282 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4"
	I0729 19:10:10.660727 1091282 command_runner.go:130] >       ],
	I0729 19:10:10.660731 1091282 command_runner.go:130] >       "size": "63051080",
	I0729 19:10:10.660737 1091282 command_runner.go:130] >       "uid": {
	I0729 19:10:10.660741 1091282 command_runner.go:130] >         "value": "0"
	I0729 19:10:10.660747 1091282 command_runner.go:130] >       },
	I0729 19:10:10.660751 1091282 command_runner.go:130] >       "username": "",
	I0729 19:10:10.660757 1091282 command_runner.go:130] >       "spec": null,
	I0729 19:10:10.660761 1091282 command_runner.go:130] >       "pinned": false
	I0729 19:10:10.660766 1091282 command_runner.go:130] >     },
	I0729 19:10:10.660770 1091282 command_runner.go:130] >     {
	I0729 19:10:10.660776 1091282 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0729 19:10:10.660783 1091282 command_runner.go:130] >       "repoTags": [
	I0729 19:10:10.660787 1091282 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0729 19:10:10.660793 1091282 command_runner.go:130] >       ],
	I0729 19:10:10.660797 1091282 command_runner.go:130] >       "repoDigests": [
	I0729 19:10:10.660805 1091282 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0729 19:10:10.660814 1091282 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0729 19:10:10.660819 1091282 command_runner.go:130] >       ],
	I0729 19:10:10.660824 1091282 command_runner.go:130] >       "size": "750414",
	I0729 19:10:10.660829 1091282 command_runner.go:130] >       "uid": {
	I0729 19:10:10.660833 1091282 command_runner.go:130] >         "value": "65535"
	I0729 19:10:10.660838 1091282 command_runner.go:130] >       },
	I0729 19:10:10.660842 1091282 command_runner.go:130] >       "username": "",
	I0729 19:10:10.660848 1091282 command_runner.go:130] >       "spec": null,
	I0729 19:10:10.660852 1091282 command_runner.go:130] >       "pinned": true
	I0729 19:10:10.660855 1091282 command_runner.go:130] >     }
	I0729 19:10:10.660858 1091282 command_runner.go:130] >   ]
	I0729 19:10:10.660861 1091282 command_runner.go:130] > }
	I0729 19:10:10.661054 1091282 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 19:10:10.661067 1091282 crio.go:433] Images already preloaded, skipping extraction
	I0729 19:10:10.661118 1091282 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 19:10:10.695353 1091282 command_runner.go:130] > {
	I0729 19:10:10.695378 1091282 command_runner.go:130] >   "images": [
	I0729 19:10:10.695383 1091282 command_runner.go:130] >     {
	I0729 19:10:10.695396 1091282 command_runner.go:130] >       "id": "5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f",
	I0729 19:10:10.695403 1091282 command_runner.go:130] >       "repoTags": [
	I0729 19:10:10.695413 1091282 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240715-585640e9"
	I0729 19:10:10.695422 1091282 command_runner.go:130] >       ],
	I0729 19:10:10.695428 1091282 command_runner.go:130] >       "repoDigests": [
	I0729 19:10:10.695440 1091282 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115",
	I0729 19:10:10.695450 1091282 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493"
	I0729 19:10:10.695456 1091282 command_runner.go:130] >       ],
	I0729 19:10:10.695464 1091282 command_runner.go:130] >       "size": "87165492",
	I0729 19:10:10.695471 1091282 command_runner.go:130] >       "uid": null,
	I0729 19:10:10.695478 1091282 command_runner.go:130] >       "username": "",
	I0729 19:10:10.695489 1091282 command_runner.go:130] >       "spec": null,
	I0729 19:10:10.695496 1091282 command_runner.go:130] >       "pinned": false
	I0729 19:10:10.695502 1091282 command_runner.go:130] >     },
	I0729 19:10:10.695511 1091282 command_runner.go:130] >     {
	I0729 19:10:10.695517 1091282 command_runner.go:130] >       "id": "6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46",
	I0729 19:10:10.695520 1091282 command_runner.go:130] >       "repoTags": [
	I0729 19:10:10.695525 1091282 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240719-e7903573"
	I0729 19:10:10.695529 1091282 command_runner.go:130] >       ],
	I0729 19:10:10.695533 1091282 command_runner.go:130] >       "repoDigests": [
	I0729 19:10:10.695539 1091282 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9",
	I0729 19:10:10.695546 1091282 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:da8ad203ec15a72c313015e5609db44bfad7c95d8ce63e87ff97c66363b5680a"
	I0729 19:10:10.695549 1091282 command_runner.go:130] >       ],
	I0729 19:10:10.695553 1091282 command_runner.go:130] >       "size": "87174707",
	I0729 19:10:10.695556 1091282 command_runner.go:130] >       "uid": null,
	I0729 19:10:10.695568 1091282 command_runner.go:130] >       "username": "",
	I0729 19:10:10.695574 1091282 command_runner.go:130] >       "spec": null,
	I0729 19:10:10.695577 1091282 command_runner.go:130] >       "pinned": false
	I0729 19:10:10.695580 1091282 command_runner.go:130] >     },
	I0729 19:10:10.695584 1091282 command_runner.go:130] >     {
	I0729 19:10:10.695597 1091282 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0729 19:10:10.695603 1091282 command_runner.go:130] >       "repoTags": [
	I0729 19:10:10.695607 1091282 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0729 19:10:10.695618 1091282 command_runner.go:130] >       ],
	I0729 19:10:10.695624 1091282 command_runner.go:130] >       "repoDigests": [
	I0729 19:10:10.695631 1091282 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0729 19:10:10.695640 1091282 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0729 19:10:10.695643 1091282 command_runner.go:130] >       ],
	I0729 19:10:10.695647 1091282 command_runner.go:130] >       "size": "1363676",
	I0729 19:10:10.695651 1091282 command_runner.go:130] >       "uid": null,
	I0729 19:10:10.695654 1091282 command_runner.go:130] >       "username": "",
	I0729 19:10:10.695661 1091282 command_runner.go:130] >       "spec": null,
	I0729 19:10:10.695665 1091282 command_runner.go:130] >       "pinned": false
	I0729 19:10:10.695669 1091282 command_runner.go:130] >     },
	I0729 19:10:10.695673 1091282 command_runner.go:130] >     {
	I0729 19:10:10.695679 1091282 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0729 19:10:10.695688 1091282 command_runner.go:130] >       "repoTags": [
	I0729 19:10:10.695694 1091282 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0729 19:10:10.695698 1091282 command_runner.go:130] >       ],
	I0729 19:10:10.695703 1091282 command_runner.go:130] >       "repoDigests": [
	I0729 19:10:10.695710 1091282 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0729 19:10:10.695726 1091282 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0729 19:10:10.695732 1091282 command_runner.go:130] >       ],
	I0729 19:10:10.695736 1091282 command_runner.go:130] >       "size": "31470524",
	I0729 19:10:10.695740 1091282 command_runner.go:130] >       "uid": null,
	I0729 19:10:10.695744 1091282 command_runner.go:130] >       "username": "",
	I0729 19:10:10.695747 1091282 command_runner.go:130] >       "spec": null,
	I0729 19:10:10.695751 1091282 command_runner.go:130] >       "pinned": false
	I0729 19:10:10.695754 1091282 command_runner.go:130] >     },
	I0729 19:10:10.695757 1091282 command_runner.go:130] >     {
	I0729 19:10:10.695763 1091282 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0729 19:10:10.695768 1091282 command_runner.go:130] >       "repoTags": [
	I0729 19:10:10.695773 1091282 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0729 19:10:10.695777 1091282 command_runner.go:130] >       ],
	I0729 19:10:10.695781 1091282 command_runner.go:130] >       "repoDigests": [
	I0729 19:10:10.695790 1091282 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0729 19:10:10.695799 1091282 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0729 19:10:10.695804 1091282 command_runner.go:130] >       ],
	I0729 19:10:10.695809 1091282 command_runner.go:130] >       "size": "61245718",
	I0729 19:10:10.695819 1091282 command_runner.go:130] >       "uid": null,
	I0729 19:10:10.695826 1091282 command_runner.go:130] >       "username": "nonroot",
	I0729 19:10:10.695829 1091282 command_runner.go:130] >       "spec": null,
	I0729 19:10:10.695836 1091282 command_runner.go:130] >       "pinned": false
	I0729 19:10:10.695842 1091282 command_runner.go:130] >     },
	I0729 19:10:10.695845 1091282 command_runner.go:130] >     {
	I0729 19:10:10.695853 1091282 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0729 19:10:10.695859 1091282 command_runner.go:130] >       "repoTags": [
	I0729 19:10:10.695864 1091282 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0729 19:10:10.695869 1091282 command_runner.go:130] >       ],
	I0729 19:10:10.695873 1091282 command_runner.go:130] >       "repoDigests": [
	I0729 19:10:10.695882 1091282 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0729 19:10:10.695890 1091282 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0729 19:10:10.695896 1091282 command_runner.go:130] >       ],
	I0729 19:10:10.695900 1091282 command_runner.go:130] >       "size": "150779692",
	I0729 19:10:10.695907 1091282 command_runner.go:130] >       "uid": {
	I0729 19:10:10.695910 1091282 command_runner.go:130] >         "value": "0"
	I0729 19:10:10.695919 1091282 command_runner.go:130] >       },
	I0729 19:10:10.695922 1091282 command_runner.go:130] >       "username": "",
	I0729 19:10:10.695928 1091282 command_runner.go:130] >       "spec": null,
	I0729 19:10:10.695932 1091282 command_runner.go:130] >       "pinned": false
	I0729 19:10:10.695938 1091282 command_runner.go:130] >     },
	I0729 19:10:10.695941 1091282 command_runner.go:130] >     {
	I0729 19:10:10.695948 1091282 command_runner.go:130] >       "id": "1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d",
	I0729 19:10:10.695952 1091282 command_runner.go:130] >       "repoTags": [
	I0729 19:10:10.695959 1091282 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.3"
	I0729 19:10:10.695963 1091282 command_runner.go:130] >       ],
	I0729 19:10:10.695966 1091282 command_runner.go:130] >       "repoDigests": [
	I0729 19:10:10.695975 1091282 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c",
	I0729 19:10:10.695984 1091282 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a3a6c80030a6e720734ae3291448388f70b6f1d463f103e4f06f358f8a170315"
	I0729 19:10:10.695990 1091282 command_runner.go:130] >       ],
	I0729 19:10:10.695994 1091282 command_runner.go:130] >       "size": "117609954",
	I0729 19:10:10.696000 1091282 command_runner.go:130] >       "uid": {
	I0729 19:10:10.696004 1091282 command_runner.go:130] >         "value": "0"
	I0729 19:10:10.696010 1091282 command_runner.go:130] >       },
	I0729 19:10:10.696014 1091282 command_runner.go:130] >       "username": "",
	I0729 19:10:10.696024 1091282 command_runner.go:130] >       "spec": null,
	I0729 19:10:10.696030 1091282 command_runner.go:130] >       "pinned": false
	I0729 19:10:10.696033 1091282 command_runner.go:130] >     },
	I0729 19:10:10.696038 1091282 command_runner.go:130] >     {
	I0729 19:10:10.696044 1091282 command_runner.go:130] >       "id": "76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e",
	I0729 19:10:10.696050 1091282 command_runner.go:130] >       "repoTags": [
	I0729 19:10:10.696054 1091282 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.3"
	I0729 19:10:10.696060 1091282 command_runner.go:130] >       ],
	I0729 19:10:10.696064 1091282 command_runner.go:130] >       "repoDigests": [
	I0729 19:10:10.696088 1091282 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7",
	I0729 19:10:10.696098 1091282 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:fa179d147c6bacddd1586f6d12ff79a844e951c7b159fdcb92cdf56f3033d91e"
	I0729 19:10:10.696101 1091282 command_runner.go:130] >       ],
	I0729 19:10:10.696105 1091282 command_runner.go:130] >       "size": "112198984",
	I0729 19:10:10.696109 1091282 command_runner.go:130] >       "uid": {
	I0729 19:10:10.696112 1091282 command_runner.go:130] >         "value": "0"
	I0729 19:10:10.696116 1091282 command_runner.go:130] >       },
	I0729 19:10:10.696121 1091282 command_runner.go:130] >       "username": "",
	I0729 19:10:10.696127 1091282 command_runner.go:130] >       "spec": null,
	I0729 19:10:10.696131 1091282 command_runner.go:130] >       "pinned": false
	I0729 19:10:10.696136 1091282 command_runner.go:130] >     },
	I0729 19:10:10.696139 1091282 command_runner.go:130] >     {
	I0729 19:10:10.696148 1091282 command_runner.go:130] >       "id": "55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1",
	I0729 19:10:10.696152 1091282 command_runner.go:130] >       "repoTags": [
	I0729 19:10:10.696156 1091282 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.3"
	I0729 19:10:10.696161 1091282 command_runner.go:130] >       ],
	I0729 19:10:10.696165 1091282 command_runner.go:130] >       "repoDigests": [
	I0729 19:10:10.696174 1091282 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:8c178447597867a03bbcdf0d1ce43fc8f6807ead2321bd1ec0e845a2f12dad80",
	I0729 19:10:10.696185 1091282 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65"
	I0729 19:10:10.696190 1091282 command_runner.go:130] >       ],
	I0729 19:10:10.696194 1091282 command_runner.go:130] >       "size": "85953945",
	I0729 19:10:10.696198 1091282 command_runner.go:130] >       "uid": null,
	I0729 19:10:10.696203 1091282 command_runner.go:130] >       "username": "",
	I0729 19:10:10.696207 1091282 command_runner.go:130] >       "spec": null,
	I0729 19:10:10.696212 1091282 command_runner.go:130] >       "pinned": false
	I0729 19:10:10.696216 1091282 command_runner.go:130] >     },
	I0729 19:10:10.696221 1091282 command_runner.go:130] >     {
	I0729 19:10:10.696232 1091282 command_runner.go:130] >       "id": "3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2",
	I0729 19:10:10.696238 1091282 command_runner.go:130] >       "repoTags": [
	I0729 19:10:10.696243 1091282 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.3"
	I0729 19:10:10.696248 1091282 command_runner.go:130] >       ],
	I0729 19:10:10.696253 1091282 command_runner.go:130] >       "repoDigests": [
	I0729 19:10:10.696262 1091282 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:1738178fb116d10e7cde2cfc3671f5dfdad518d773677af740483f2dfe674266",
	I0729 19:10:10.696270 1091282 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4"
	I0729 19:10:10.696276 1091282 command_runner.go:130] >       ],
	I0729 19:10:10.696279 1091282 command_runner.go:130] >       "size": "63051080",
	I0729 19:10:10.696283 1091282 command_runner.go:130] >       "uid": {
	I0729 19:10:10.696289 1091282 command_runner.go:130] >         "value": "0"
	I0729 19:10:10.696293 1091282 command_runner.go:130] >       },
	I0729 19:10:10.696299 1091282 command_runner.go:130] >       "username": "",
	I0729 19:10:10.696302 1091282 command_runner.go:130] >       "spec": null,
	I0729 19:10:10.696308 1091282 command_runner.go:130] >       "pinned": false
	I0729 19:10:10.696312 1091282 command_runner.go:130] >     },
	I0729 19:10:10.696317 1091282 command_runner.go:130] >     {
	I0729 19:10:10.696323 1091282 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0729 19:10:10.696328 1091282 command_runner.go:130] >       "repoTags": [
	I0729 19:10:10.696333 1091282 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0729 19:10:10.696339 1091282 command_runner.go:130] >       ],
	I0729 19:10:10.696343 1091282 command_runner.go:130] >       "repoDigests": [
	I0729 19:10:10.696351 1091282 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0729 19:10:10.696357 1091282 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0729 19:10:10.696363 1091282 command_runner.go:130] >       ],
	I0729 19:10:10.696366 1091282 command_runner.go:130] >       "size": "750414",
	I0729 19:10:10.696370 1091282 command_runner.go:130] >       "uid": {
	I0729 19:10:10.696376 1091282 command_runner.go:130] >         "value": "65535"
	I0729 19:10:10.696382 1091282 command_runner.go:130] >       },
	I0729 19:10:10.696388 1091282 command_runner.go:130] >       "username": "",
	I0729 19:10:10.696391 1091282 command_runner.go:130] >       "spec": null,
	I0729 19:10:10.696397 1091282 command_runner.go:130] >       "pinned": true
	I0729 19:10:10.696400 1091282 command_runner.go:130] >     }
	I0729 19:10:10.696404 1091282 command_runner.go:130] >   ]
	I0729 19:10:10.696407 1091282 command_runner.go:130] > }
	I0729 19:10:10.696531 1091282 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 19:10:10.696542 1091282 cache_images.go:84] Images are preloaded, skipping loading
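	The two JSON listings above are the raw output of `sudo crictl images --output json`; minikube inspects the listed tags before concluding that the preloaded images do not need to be extracted or loaded again. As a minimal illustrative sketch (not part of the captured log, and not minikube's own parsing code), the fields shown in that output could be decoded in Go roughly like this:

	// sketch: decode the crictl image listing shown in the log above
	package main

	import (
		"encoding/json"
		"fmt"
		"os"
	)

	type imageList struct {
		Images []struct {
			ID          string   `json:"id"`
			RepoTags    []string `json:"repoTags"`
			RepoDigests []string `json:"repoDigests"`
			Size        string   `json:"size"` // decimal string, bytes
			Pinned      bool     `json:"pinned"`
		} `json:"images"`
	}

	func main() {
		// e.g. pipe in the output of: sudo crictl images --output json
		var list imageList
		if err := json.NewDecoder(os.Stdin).Decode(&list); err != nil {
			fmt.Fprintln(os.Stderr, "decode:", err)
			os.Exit(1)
		}
		for _, img := range list.Images {
			if len(img.RepoTags) > 0 {
				fmt.Printf("%s  %s bytes  pinned=%v\n", img.RepoTags[0], img.Size, img.Pinned)
			}
		}
	}
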
	I0729 19:10:10.696550 1091282 kubeadm.go:934] updating node { 192.168.39.180 8443 v1.30.3 crio true true} ...
	I0729 19:10:10.696665 1091282 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-370772 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.180
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:multinode-370772 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 19:10:10.696737 1091282 ssh_runner.go:195] Run: crio config
	I0729 19:10:10.744089 1091282 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0729 19:10:10.744121 1091282 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0729 19:10:10.744131 1091282 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0729 19:10:10.744136 1091282 command_runner.go:130] > #
	I0729 19:10:10.744148 1091282 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0729 19:10:10.744158 1091282 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0729 19:10:10.744167 1091282 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0729 19:10:10.744174 1091282 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0729 19:10:10.744178 1091282 command_runner.go:130] > # reload'.
	I0729 19:10:10.744183 1091282 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0729 19:10:10.744190 1091282 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0729 19:10:10.744196 1091282 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0729 19:10:10.744202 1091282 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0729 19:10:10.744212 1091282 command_runner.go:130] > [crio]
	I0729 19:10:10.744225 1091282 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0729 19:10:10.744236 1091282 command_runner.go:130] > # containers images, in this directory.
	I0729 19:10:10.744245 1091282 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0729 19:10:10.744263 1091282 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0729 19:10:10.744562 1091282 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0729 19:10:10.744580 1091282 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0729 19:10:10.744916 1091282 command_runner.go:130] > # imagestore = ""
	I0729 19:10:10.744937 1091282 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0729 19:10:10.744949 1091282 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0729 19:10:10.745004 1091282 command_runner.go:130] > storage_driver = "overlay"
	I0729 19:10:10.745019 1091282 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0729 19:10:10.745032 1091282 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0729 19:10:10.745040 1091282 command_runner.go:130] > storage_option = [
	I0729 19:10:10.745175 1091282 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0729 19:10:10.745189 1091282 command_runner.go:130] > ]
	I0729 19:10:10.745200 1091282 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0729 19:10:10.745222 1091282 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0729 19:10:10.745415 1091282 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0729 19:10:10.745435 1091282 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0729 19:10:10.745444 1091282 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0729 19:10:10.745456 1091282 command_runner.go:130] > # always happen on a node reboot
	I0729 19:10:10.745711 1091282 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0729 19:10:10.745735 1091282 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0729 19:10:10.745747 1091282 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0729 19:10:10.745755 1091282 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0729 19:10:10.745844 1091282 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0729 19:10:10.745863 1091282 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0729 19:10:10.745877 1091282 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0729 19:10:10.746072 1091282 command_runner.go:130] > # internal_wipe = true
	I0729 19:10:10.746084 1091282 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0729 19:10:10.746090 1091282 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0729 19:10:10.746304 1091282 command_runner.go:130] > # internal_repair = false
	I0729 19:10:10.746318 1091282 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0729 19:10:10.746328 1091282 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0729 19:10:10.746337 1091282 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0729 19:10:10.746517 1091282 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0729 19:10:10.746533 1091282 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0729 19:10:10.746539 1091282 command_runner.go:130] > [crio.api]
	I0729 19:10:10.746547 1091282 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0729 19:10:10.746786 1091282 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0729 19:10:10.746801 1091282 command_runner.go:130] > # IP address on which the stream server will listen.
	I0729 19:10:10.747076 1091282 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0729 19:10:10.747092 1091282 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0729 19:10:10.747100 1091282 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0729 19:10:10.747347 1091282 command_runner.go:130] > # stream_port = "0"
	I0729 19:10:10.747362 1091282 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0729 19:10:10.747571 1091282 command_runner.go:130] > # stream_enable_tls = false
	I0729 19:10:10.747587 1091282 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0729 19:10:10.747886 1091282 command_runner.go:130] > # stream_idle_timeout = ""
	I0729 19:10:10.747902 1091282 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0729 19:10:10.747912 1091282 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0729 19:10:10.747918 1091282 command_runner.go:130] > # minutes.
	I0729 19:10:10.748059 1091282 command_runner.go:130] > # stream_tls_cert = ""
	I0729 19:10:10.748081 1091282 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0729 19:10:10.748092 1091282 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0729 19:10:10.748256 1091282 command_runner.go:130] > # stream_tls_key = ""
	I0729 19:10:10.748266 1091282 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0729 19:10:10.748272 1091282 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0729 19:10:10.748295 1091282 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0729 19:10:10.748470 1091282 command_runner.go:130] > # stream_tls_ca = ""
	I0729 19:10:10.748482 1091282 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0729 19:10:10.748570 1091282 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0729 19:10:10.748582 1091282 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0729 19:10:10.748792 1091282 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0729 19:10:10.748802 1091282 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0729 19:10:10.748807 1091282 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0729 19:10:10.748811 1091282 command_runner.go:130] > [crio.runtime]
	I0729 19:10:10.748818 1091282 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0729 19:10:10.748823 1091282 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0729 19:10:10.748829 1091282 command_runner.go:130] > # "nofile=1024:2048"
	I0729 19:10:10.748835 1091282 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0729 19:10:10.748908 1091282 command_runner.go:130] > # default_ulimits = [
	I0729 19:10:10.749065 1091282 command_runner.go:130] > # ]
	I0729 19:10:10.749086 1091282 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0729 19:10:10.749267 1091282 command_runner.go:130] > # no_pivot = false
	I0729 19:10:10.749278 1091282 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0729 19:10:10.749284 1091282 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0729 19:10:10.749703 1091282 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0729 19:10:10.749713 1091282 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0729 19:10:10.749718 1091282 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0729 19:10:10.749726 1091282 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0729 19:10:10.749732 1091282 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0729 19:10:10.749745 1091282 command_runner.go:130] > # Cgroup setting for conmon
	I0729 19:10:10.749756 1091282 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0729 19:10:10.749763 1091282 command_runner.go:130] > conmon_cgroup = "pod"
	I0729 19:10:10.749769 1091282 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0729 19:10:10.749776 1091282 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0729 19:10:10.749782 1091282 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0729 19:10:10.749789 1091282 command_runner.go:130] > conmon_env = [
	I0729 19:10:10.749794 1091282 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0729 19:10:10.749803 1091282 command_runner.go:130] > ]
	I0729 19:10:10.749810 1091282 command_runner.go:130] > # Additional environment variables to set for all the
	I0729 19:10:10.749821 1091282 command_runner.go:130] > # containers. These are overridden if set in the
	I0729 19:10:10.749833 1091282 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0729 19:10:10.749841 1091282 command_runner.go:130] > # default_env = [
	I0729 19:10:10.749849 1091282 command_runner.go:130] > # ]
	I0729 19:10:10.749859 1091282 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0729 19:10:10.749872 1091282 command_runner.go:130] > # This option is deprecated, and will be interpreted from whether SELinux is enabled on the host in the future.
	I0729 19:10:10.749881 1091282 command_runner.go:130] > # selinux = false
	I0729 19:10:10.749887 1091282 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0729 19:10:10.749899 1091282 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0729 19:10:10.749911 1091282 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0729 19:10:10.749921 1091282 command_runner.go:130] > # seccomp_profile = ""
	I0729 19:10:10.749930 1091282 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0729 19:10:10.749942 1091282 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0729 19:10:10.749952 1091282 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0729 19:10:10.749960 1091282 command_runner.go:130] > # which might increase security.
	I0729 19:10:10.749964 1091282 command_runner.go:130] > # This option is currently deprecated,
	I0729 19:10:10.749972 1091282 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0729 19:10:10.749977 1091282 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0729 19:10:10.749986 1091282 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0729 19:10:10.749999 1091282 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0729 19:10:10.750013 1091282 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0729 19:10:10.750023 1091282 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0729 19:10:10.750034 1091282 command_runner.go:130] > # This option supports live configuration reload.
	I0729 19:10:10.750044 1091282 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0729 19:10:10.750053 1091282 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0729 19:10:10.750063 1091282 command_runner.go:130] > # the cgroup blockio controller.
	I0729 19:10:10.750071 1091282 command_runner.go:130] > # blockio_config_file = ""
	I0729 19:10:10.750083 1091282 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0729 19:10:10.750093 1091282 command_runner.go:130] > # blockio parameters.
	I0729 19:10:10.750101 1091282 command_runner.go:130] > # blockio_reload = false
	I0729 19:10:10.750114 1091282 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0729 19:10:10.750123 1091282 command_runner.go:130] > # irqbalance daemon.
	I0729 19:10:10.750131 1091282 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0729 19:10:10.750143 1091282 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0729 19:10:10.750155 1091282 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0729 19:10:10.750168 1091282 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0729 19:10:10.750188 1091282 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0729 19:10:10.750200 1091282 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0729 19:10:10.750207 1091282 command_runner.go:130] > # This option supports live configuration reload.
	I0729 19:10:10.750215 1091282 command_runner.go:130] > # rdt_config_file = ""
	I0729 19:10:10.750223 1091282 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0729 19:10:10.750233 1091282 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0729 19:10:10.750276 1091282 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0729 19:10:10.750289 1091282 command_runner.go:130] > # separate_pull_cgroup = ""
	I0729 19:10:10.750298 1091282 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0729 19:10:10.750308 1091282 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0729 19:10:10.750317 1091282 command_runner.go:130] > # will be added.
	I0729 19:10:10.750324 1091282 command_runner.go:130] > # default_capabilities = [
	I0729 19:10:10.750333 1091282 command_runner.go:130] > # 	"CHOWN",
	I0729 19:10:10.750340 1091282 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0729 19:10:10.750348 1091282 command_runner.go:130] > # 	"FSETID",
	I0729 19:10:10.750355 1091282 command_runner.go:130] > # 	"FOWNER",
	I0729 19:10:10.750364 1091282 command_runner.go:130] > # 	"SETGID",
	I0729 19:10:10.750370 1091282 command_runner.go:130] > # 	"SETUID",
	I0729 19:10:10.750379 1091282 command_runner.go:130] > # 	"SETPCAP",
	I0729 19:10:10.750386 1091282 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0729 19:10:10.750394 1091282 command_runner.go:130] > # 	"KILL",
	I0729 19:10:10.750398 1091282 command_runner.go:130] > # ]
	I0729 19:10:10.750404 1091282 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0729 19:10:10.750413 1091282 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0729 19:10:10.750418 1091282 command_runner.go:130] > # add_inheritable_capabilities = false
	I0729 19:10:10.750426 1091282 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0729 19:10:10.750432 1091282 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0729 19:10:10.750437 1091282 command_runner.go:130] > default_sysctls = [
	I0729 19:10:10.750443 1091282 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0729 19:10:10.750451 1091282 command_runner.go:130] > ]
	I0729 19:10:10.750458 1091282 command_runner.go:130] > # List of devices on the host that a
	I0729 19:10:10.750471 1091282 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0729 19:10:10.750478 1091282 command_runner.go:130] > # allowed_devices = [
	I0729 19:10:10.750485 1091282 command_runner.go:130] > # 	"/dev/fuse",
	I0729 19:10:10.750490 1091282 command_runner.go:130] > # ]
	I0729 19:10:10.750501 1091282 command_runner.go:130] > # List of additional devices. specified as
	I0729 19:10:10.750515 1091282 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0729 19:10:10.750526 1091282 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0729 19:10:10.750534 1091282 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0729 19:10:10.750543 1091282 command_runner.go:130] > # additional_devices = [
	I0729 19:10:10.750551 1091282 command_runner.go:130] > # ]
	I0729 19:10:10.750561 1091282 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0729 19:10:10.750572 1091282 command_runner.go:130] > # cdi_spec_dirs = [
	I0729 19:10:10.750581 1091282 command_runner.go:130] > # 	"/etc/cdi",
	I0729 19:10:10.750588 1091282 command_runner.go:130] > # 	"/var/run/cdi",
	I0729 19:10:10.750595 1091282 command_runner.go:130] > # ]
	I0729 19:10:10.750615 1091282 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0729 19:10:10.750626 1091282 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0729 19:10:10.750635 1091282 command_runner.go:130] > # Defaults to false.
	I0729 19:10:10.750643 1091282 command_runner.go:130] > # device_ownership_from_security_context = false
	I0729 19:10:10.750656 1091282 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0729 19:10:10.750664 1091282 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0729 19:10:10.750670 1091282 command_runner.go:130] > # hooks_dir = [
	I0729 19:10:10.750680 1091282 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0729 19:10:10.750689 1091282 command_runner.go:130] > # ]
	I0729 19:10:10.750700 1091282 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0729 19:10:10.750713 1091282 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0729 19:10:10.750724 1091282 command_runner.go:130] > # its default mounts from the following two files:
	I0729 19:10:10.750731 1091282 command_runner.go:130] > #
	I0729 19:10:10.750741 1091282 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0729 19:10:10.750753 1091282 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0729 19:10:10.750764 1091282 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0729 19:10:10.750773 1091282 command_runner.go:130] > #
	I0729 19:10:10.750783 1091282 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0729 19:10:10.750796 1091282 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0729 19:10:10.750808 1091282 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0729 19:10:10.750819 1091282 command_runner.go:130] > #      only add mounts it finds in this file.
	I0729 19:10:10.750827 1091282 command_runner.go:130] > #
	I0729 19:10:10.750834 1091282 command_runner.go:130] > # default_mounts_file = ""
	I0729 19:10:10.750856 1091282 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0729 19:10:10.750871 1091282 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0729 19:10:10.750881 1091282 command_runner.go:130] > pids_limit = 1024
	I0729 19:10:10.750891 1091282 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0729 19:10:10.750903 1091282 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0729 19:10:10.750916 1091282 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0729 19:10:10.750931 1091282 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0729 19:10:10.750940 1091282 command_runner.go:130] > # log_size_max = -1
	I0729 19:10:10.750951 1091282 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0729 19:10:10.750961 1091282 command_runner.go:130] > # log_to_journald = false
	I0729 19:10:10.750970 1091282 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0729 19:10:10.750980 1091282 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0729 19:10:10.750991 1091282 command_runner.go:130] > # Path to directory for container attach sockets.
	I0729 19:10:10.751003 1091282 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0729 19:10:10.751012 1091282 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0729 19:10:10.751021 1091282 command_runner.go:130] > # bind_mount_prefix = ""
	I0729 19:10:10.751029 1091282 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0729 19:10:10.751038 1091282 command_runner.go:130] > # read_only = false
	I0729 19:10:10.751047 1091282 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0729 19:10:10.751062 1091282 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0729 19:10:10.751070 1091282 command_runner.go:130] > # live configuration reload.
	I0729 19:10:10.751079 1091282 command_runner.go:130] > # log_level = "info"
	I0729 19:10:10.751088 1091282 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0729 19:10:10.751098 1091282 command_runner.go:130] > # This option supports live configuration reload.
	I0729 19:10:10.751107 1091282 command_runner.go:130] > # log_filter = ""
	I0729 19:10:10.751122 1091282 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0729 19:10:10.751134 1091282 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0729 19:10:10.751143 1091282 command_runner.go:130] > # separated by comma.
	I0729 19:10:10.751155 1091282 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0729 19:10:10.751164 1091282 command_runner.go:130] > # uid_mappings = ""
	I0729 19:10:10.751170 1091282 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0729 19:10:10.751177 1091282 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0729 19:10:10.751181 1091282 command_runner.go:130] > # separated by comma.
	I0729 19:10:10.751188 1091282 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0729 19:10:10.751194 1091282 command_runner.go:130] > # gid_mappings = ""
	I0729 19:10:10.751199 1091282 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0729 19:10:10.751206 1091282 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0729 19:10:10.751212 1091282 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0729 19:10:10.751222 1091282 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0729 19:10:10.751228 1091282 command_runner.go:130] > # minimum_mappable_uid = -1
	I0729 19:10:10.751237 1091282 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0729 19:10:10.751250 1091282 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0729 19:10:10.751263 1091282 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0729 19:10:10.751274 1091282 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0729 19:10:10.751283 1091282 command_runner.go:130] > # minimum_mappable_gid = -1
	I0729 19:10:10.751293 1091282 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0729 19:10:10.751305 1091282 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0729 19:10:10.751317 1091282 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0729 19:10:10.751330 1091282 command_runner.go:130] > # ctr_stop_timeout = 30
	I0729 19:10:10.751340 1091282 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0729 19:10:10.751353 1091282 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0729 19:10:10.751364 1091282 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0729 19:10:10.751371 1091282 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0729 19:10:10.751380 1091282 command_runner.go:130] > drop_infra_ctr = false
	I0729 19:10:10.751390 1091282 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0729 19:10:10.751402 1091282 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0729 19:10:10.751416 1091282 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0729 19:10:10.751425 1091282 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0729 19:10:10.751436 1091282 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0729 19:10:10.751448 1091282 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0729 19:10:10.751459 1091282 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0729 19:10:10.751467 1091282 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0729 19:10:10.751477 1091282 command_runner.go:130] > # shared_cpuset = ""
	I0729 19:10:10.751486 1091282 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0729 19:10:10.751497 1091282 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0729 19:10:10.751506 1091282 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0729 19:10:10.751517 1091282 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0729 19:10:10.751527 1091282 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0729 19:10:10.751536 1091282 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0729 19:10:10.751548 1091282 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0729 19:10:10.751558 1091282 command_runner.go:130] > # enable_criu_support = false
	I0729 19:10:10.751566 1091282 command_runner.go:130] > # Enable/disable the generation of the container,
	I0729 19:10:10.751577 1091282 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0729 19:10:10.751586 1091282 command_runner.go:130] > # enable_pod_events = false
	I0729 19:10:10.751605 1091282 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0729 19:10:10.751613 1091282 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0729 19:10:10.751619 1091282 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0729 19:10:10.751625 1091282 command_runner.go:130] > # default_runtime = "runc"
	I0729 19:10:10.751630 1091282 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0729 19:10:10.751639 1091282 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0729 19:10:10.751648 1091282 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0729 19:10:10.751655 1091282 command_runner.go:130] > # creation as a file is not desired either.
	I0729 19:10:10.751663 1091282 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0729 19:10:10.751670 1091282 command_runner.go:130] > # the hostname is being managed dynamically.
	I0729 19:10:10.751678 1091282 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0729 19:10:10.751685 1091282 command_runner.go:130] > # ]
	I0729 19:10:10.751697 1091282 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0729 19:10:10.751710 1091282 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0729 19:10:10.751723 1091282 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0729 19:10:10.751732 1091282 command_runner.go:130] > # Each entry in the table should follow the format:
	I0729 19:10:10.751740 1091282 command_runner.go:130] > #
	I0729 19:10:10.751747 1091282 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0729 19:10:10.751758 1091282 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0729 19:10:10.751783 1091282 command_runner.go:130] > # runtime_type = "oci"
	I0729 19:10:10.751796 1091282 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0729 19:10:10.751804 1091282 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0729 19:10:10.751811 1091282 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0729 19:10:10.751818 1091282 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0729 19:10:10.751823 1091282 command_runner.go:130] > # monitor_env = []
	I0729 19:10:10.751831 1091282 command_runner.go:130] > # privileged_without_host_devices = false
	I0729 19:10:10.751839 1091282 command_runner.go:130] > # allowed_annotations = []
	I0729 19:10:10.751847 1091282 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0729 19:10:10.751855 1091282 command_runner.go:130] > # Where:
	I0729 19:10:10.751863 1091282 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0729 19:10:10.751876 1091282 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0729 19:10:10.751887 1091282 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0729 19:10:10.751899 1091282 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0729 19:10:10.751909 1091282 command_runner.go:130] > #   in $PATH.
	I0729 19:10:10.751922 1091282 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0729 19:10:10.751930 1091282 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0729 19:10:10.751944 1091282 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0729 19:10:10.751953 1091282 command_runner.go:130] > #   state.
	I0729 19:10:10.751963 1091282 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0729 19:10:10.751976 1091282 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0729 19:10:10.751988 1091282 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0729 19:10:10.751997 1091282 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0729 19:10:10.752003 1091282 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0729 19:10:10.752015 1091282 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0729 19:10:10.752026 1091282 command_runner.go:130] > #   The currently recognized values are:
	I0729 19:10:10.752037 1091282 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0729 19:10:10.752051 1091282 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0729 19:10:10.752067 1091282 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0729 19:10:10.752078 1091282 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0729 19:10:10.752091 1091282 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0729 19:10:10.752104 1091282 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0729 19:10:10.752117 1091282 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0729 19:10:10.752130 1091282 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0729 19:10:10.752141 1091282 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0729 19:10:10.752152 1091282 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0729 19:10:10.752161 1091282 command_runner.go:130] > #   deprecated option "conmon".
	I0729 19:10:10.752172 1091282 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0729 19:10:10.752182 1091282 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0729 19:10:10.752195 1091282 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0729 19:10:10.752206 1091282 command_runner.go:130] > #   should be moved to the container's cgroup
	I0729 19:10:10.752218 1091282 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0729 19:10:10.752228 1091282 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0729 19:10:10.752240 1091282 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0729 19:10:10.752254 1091282 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0729 19:10:10.752259 1091282 command_runner.go:130] > #
	I0729 19:10:10.752270 1091282 command_runner.go:130] > # Using the seccomp notifier feature:
	I0729 19:10:10.752279 1091282 command_runner.go:130] > #
	I0729 19:10:10.752289 1091282 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0729 19:10:10.752301 1091282 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0729 19:10:10.752309 1091282 command_runner.go:130] > #
	I0729 19:10:10.752317 1091282 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0729 19:10:10.752336 1091282 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0729 19:10:10.752346 1091282 command_runner.go:130] > #
	I0729 19:10:10.752357 1091282 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0729 19:10:10.752365 1091282 command_runner.go:130] > # feature.
	I0729 19:10:10.752370 1091282 command_runner.go:130] > #
	I0729 19:10:10.752381 1091282 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0729 19:10:10.752391 1091282 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0729 19:10:10.752398 1091282 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0729 19:10:10.752409 1091282 command_runner.go:130] > # a blocked syscall and will then terminate the workload after a timeout of 5
	I0729 19:10:10.752420 1091282 command_runner.go:130] > # seconds if the annotation is set to "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0729 19:10:10.752427 1091282 command_runner.go:130] > #
	I0729 19:10:10.752436 1091282 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0729 19:10:10.752456 1091282 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0729 19:10:10.752463 1091282 command_runner.go:130] > #
	I0729 19:10:10.752473 1091282 command_runner.go:130] > # This also means that the Pod's "restartPolicy" has to be set to "Never",
	I0729 19:10:10.752484 1091282 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0729 19:10:10.752493 1091282 command_runner.go:130] > #
	I0729 19:10:10.752502 1091282 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0729 19:10:10.752515 1091282 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0729 19:10:10.752521 1091282 command_runner.go:130] > # limitation.
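The notifier described above needs only two pieces of configuration: the runtime handler must list "io.kubernetes.cri-o.seccompNotifierAction" in its allowed_annotations, and the pod must carry that annotation (with restartPolicy Never, as noted above). A minimal sketch, assuming a hypothetical drop-in file /etc/crio/crio.conf.d/10-seccomp-notifier.conf and an illustrative pod name, neither of which comes from this run:

	# hypothetical drop-in allowing the annotation for the runc handler (sketch only)
	cat <<-'EOF' | sudo tee /etc/crio/crio.conf.d/10-seccomp-notifier.conf
	[crio.runtime.runtimes.runc]
	allowed_annotations = ["io.kubernetes.cri-o.seccompNotifierAction"]
	EOF
	sudo systemctl restart crio

	# illustrative pod opting in; "stop" terminates the workload on a blocked syscall
	kubectl apply -f - <<-'EOF'
	apiVersion: v1
	kind: Pod
	metadata:
	  name: seccomp-notifier-demo
	  annotations:
	    io.kubernetes.cri-o.seccompNotifierAction: "stop"
	spec:
	  restartPolicy: Never
	  containers:
	  - name: demo
	    image: busybox
	    command: ["sleep", "3600"]
	    securityContext:
	      seccompProfile:
	        type: RuntimeDefault
	EOF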
	I0729 19:10:10.752532 1091282 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0729 19:10:10.752539 1091282 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0729 19:10:10.752548 1091282 command_runner.go:130] > runtime_type = "oci"
	I0729 19:10:10.752555 1091282 command_runner.go:130] > runtime_root = "/run/runc"
	I0729 19:10:10.752564 1091282 command_runner.go:130] > runtime_config_path = ""
	I0729 19:10:10.752574 1091282 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0729 19:10:10.752580 1091282 command_runner.go:130] > monitor_cgroup = "pod"
	I0729 19:10:10.752585 1091282 command_runner.go:130] > monitor_exec_cgroup = ""
	I0729 19:10:10.752591 1091282 command_runner.go:130] > monitor_env = [
	I0729 19:10:10.752603 1091282 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0729 19:10:10.752612 1091282 command_runner.go:130] > ]
	I0729 19:10:10.752620 1091282 command_runner.go:130] > privileged_without_host_devices = false
	I0729 19:10:10.752631 1091282 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0729 19:10:10.752642 1091282 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0729 19:10:10.752655 1091282 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0729 19:10:10.752669 1091282 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0729 19:10:10.752683 1091282 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0729 19:10:10.752692 1091282 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0729 19:10:10.752709 1091282 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0729 19:10:10.752724 1091282 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0729 19:10:10.752737 1091282 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0729 19:10:10.752748 1091282 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0729 19:10:10.752754 1091282 command_runner.go:130] > # Example:
	I0729 19:10:10.752761 1091282 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0729 19:10:10.752773 1091282 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0729 19:10:10.752781 1091282 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0729 19:10:10.752788 1091282 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0729 19:10:10.752792 1091282 command_runner.go:130] > # cpuset = 0
	I0729 19:10:10.752796 1091282 command_runner.go:130] > # cpushares = "0-1"
	I0729 19:10:10.752801 1091282 command_runner.go:130] > # Where:
	I0729 19:10:10.752811 1091282 command_runner.go:130] > # The workload name is workload-type.
	I0729 19:10:10.752823 1091282 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0729 19:10:10.752832 1091282 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0729 19:10:10.752841 1091282 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0729 19:10:10.752853 1091282 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0729 19:10:10.752862 1091282 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
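Tying the workload example above to a pod: the activation annotation is a key-only match, and per-container overrides follow the $annotation_prefix.$resource/$ctrName form shown. A sketch with illustrative pod/container names and an illustrative cpushares value (none of these appear in this run):

	kubectl apply -f - <<-'EOF'
	apiVersion: v1
	kind: Pod
	metadata:
	  name: workload-demo
	  annotations:
	    io.crio/workload: ""                          # activation annotation (key only)
	    io.crio.workload-type/app: '{"cpushares": "512"}'
	spec:
	  containers:
	  - name: app
	    image: busybox
	    command: ["sleep", "3600"]
	EOF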
	I0729 19:10:10.752870 1091282 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0729 19:10:10.752878 1091282 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0729 19:10:10.752882 1091282 command_runner.go:130] > # Default value is set to true
	I0729 19:10:10.752888 1091282 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0729 19:10:10.752896 1091282 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0729 19:10:10.752904 1091282 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0729 19:10:10.752911 1091282 command_runner.go:130] > # Default value is set to 'false'
	I0729 19:10:10.752918 1091282 command_runner.go:130] > # disable_hostport_mapping = false
	I0729 19:10:10.752928 1091282 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0729 19:10:10.752936 1091282 command_runner.go:130] > #
	I0729 19:10:10.752945 1091282 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0729 19:10:10.752957 1091282 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0729 19:10:10.752966 1091282 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0729 19:10:10.752978 1091282 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0729 19:10:10.752990 1091282 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0729 19:10:10.752999 1091282 command_runner.go:130] > [crio.image]
	I0729 19:10:10.753012 1091282 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0729 19:10:10.753024 1091282 command_runner.go:130] > # default_transport = "docker://"
	I0729 19:10:10.753036 1091282 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0729 19:10:10.753049 1091282 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0729 19:10:10.753057 1091282 command_runner.go:130] > # global_auth_file = ""
	I0729 19:10:10.753062 1091282 command_runner.go:130] > # The image used to instantiate infra containers.
	I0729 19:10:10.753072 1091282 command_runner.go:130] > # This option supports live configuration reload.
	I0729 19:10:10.753084 1091282 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.9"
	I0729 19:10:10.753097 1091282 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0729 19:10:10.753109 1091282 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0729 19:10:10.753120 1091282 command_runner.go:130] > # This option supports live configuration reload.
	I0729 19:10:10.753130 1091282 command_runner.go:130] > # pause_image_auth_file = ""
	I0729 19:10:10.753141 1091282 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0729 19:10:10.753151 1091282 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0729 19:10:10.753169 1091282 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0729 19:10:10.753182 1091282 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0729 19:10:10.753192 1091282 command_runner.go:130] > # pause_command = "/pause"
	I0729 19:10:10.753204 1091282 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0729 19:10:10.753217 1091282 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0729 19:10:10.753229 1091282 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0729 19:10:10.753241 1091282 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0729 19:10:10.753249 1091282 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0729 19:10:10.753259 1091282 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0729 19:10:10.753269 1091282 command_runner.go:130] > # pinned_images = [
	I0729 19:10:10.753277 1091282 command_runner.go:130] > # ]
	I0729 19:10:10.753289 1091282 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0729 19:10:10.753302 1091282 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0729 19:10:10.753313 1091282 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0729 19:10:10.753325 1091282 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0729 19:10:10.753333 1091282 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0729 19:10:10.753342 1091282 command_runner.go:130] > # signature_policy = ""
	I0729 19:10:10.753353 1091282 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0729 19:10:10.753367 1091282 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0729 19:10:10.753380 1091282 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0729 19:10:10.753397 1091282 command_runner.go:130] > # or the concatenated path is non-existent, then the signature_policy or system
	I0729 19:10:10.753409 1091282 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0729 19:10:10.753420 1091282 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0729 19:10:10.753432 1091282 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0729 19:10:10.753443 1091282 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0729 19:10:10.753452 1091282 command_runner.go:130] > # changing them here.
	I0729 19:10:10.753462 1091282 command_runner.go:130] > # insecure_registries = [
	I0729 19:10:10.753467 1091282 command_runner.go:130] > # ]
	I0729 19:10:10.753480 1091282 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0729 19:10:10.753491 1091282 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0729 19:10:10.753501 1091282 command_runner.go:130] > # image_volumes = "mkdir"
	I0729 19:10:10.753511 1091282 command_runner.go:130] > # Temporary directory to use for storing big files
	I0729 19:10:10.753521 1091282 command_runner.go:130] > # big_files_temporary_dir = ""
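As a concrete illustration of the overrides described above, a hypothetical CRI-O-only drop-in could pin the pause image and add an insecure registry; the registry host here is purely illustrative, and system-wide registry defaults would normally stay in /etc/containers/registries.conf as the comments recommend:

	cat <<-'EOF' | sudo tee /etc/crio/crio.conf.d/20-image.conf
	[crio.image]
	pause_image = "registry.k8s.io/pause:3.9"
	insecure_registries = ["registry.local:5000"]
	EOF
	sudo systemctl restart crio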
	I0729 19:10:10.753531 1091282 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0729 19:10:10.753537 1091282 command_runner.go:130] > # CNI plugins.
	I0729 19:10:10.753542 1091282 command_runner.go:130] > [crio.network]
	I0729 19:10:10.753555 1091282 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0729 19:10:10.753570 1091282 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0729 19:10:10.753580 1091282 command_runner.go:130] > # cni_default_network = ""
	I0729 19:10:10.753590 1091282 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0729 19:10:10.753604 1091282 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0729 19:10:10.753615 1091282 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0729 19:10:10.753625 1091282 command_runner.go:130] > # plugin_dirs = [
	I0729 19:10:10.753631 1091282 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0729 19:10:10.753635 1091282 command_runner.go:130] > # ]
	I0729 19:10:10.753640 1091282 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0729 19:10:10.753649 1091282 command_runner.go:130] > [crio.metrics]
	I0729 19:10:10.753657 1091282 command_runner.go:130] > # Globally enable or disable metrics support.
	I0729 19:10:10.753667 1091282 command_runner.go:130] > enable_metrics = true
	I0729 19:10:10.753674 1091282 command_runner.go:130] > # Specify enabled metrics collectors.
	I0729 19:10:10.753685 1091282 command_runner.go:130] > # Per default all metrics are enabled.
	I0729 19:10:10.753698 1091282 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I0729 19:10:10.753710 1091282 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0729 19:10:10.753722 1091282 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0729 19:10:10.753732 1091282 command_runner.go:130] > # metrics_collectors = [
	I0729 19:10:10.753738 1091282 command_runner.go:130] > # 	"operations",
	I0729 19:10:10.753743 1091282 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0729 19:10:10.753752 1091282 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0729 19:10:10.753764 1091282 command_runner.go:130] > # 	"operations_errors",
	I0729 19:10:10.753774 1091282 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0729 19:10:10.753784 1091282 command_runner.go:130] > # 	"image_pulls_by_name",
	I0729 19:10:10.753794 1091282 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0729 19:10:10.753804 1091282 command_runner.go:130] > # 	"image_pulls_failures",
	I0729 19:10:10.753814 1091282 command_runner.go:130] > # 	"image_pulls_successes",
	I0729 19:10:10.753821 1091282 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0729 19:10:10.753828 1091282 command_runner.go:130] > # 	"image_layer_reuse",
	I0729 19:10:10.753832 1091282 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0729 19:10:10.753841 1091282 command_runner.go:130] > # 	"containers_oom_total",
	I0729 19:10:10.753848 1091282 command_runner.go:130] > # 	"containers_oom",
	I0729 19:10:10.753858 1091282 command_runner.go:130] > # 	"processes_defunct",
	I0729 19:10:10.753864 1091282 command_runner.go:130] > # 	"operations_total",
	I0729 19:10:10.753874 1091282 command_runner.go:130] > # 	"operations_latency_seconds",
	I0729 19:10:10.753882 1091282 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0729 19:10:10.753891 1091282 command_runner.go:130] > # 	"operations_errors_total",
	I0729 19:10:10.753899 1091282 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0729 19:10:10.753908 1091282 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0729 19:10:10.753915 1091282 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0729 19:10:10.753924 1091282 command_runner.go:130] > # 	"image_pulls_success_total",
	I0729 19:10:10.753929 1091282 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0729 19:10:10.753937 1091282 command_runner.go:130] > # 	"containers_oom_count_total",
	I0729 19:10:10.753945 1091282 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0729 19:10:10.753955 1091282 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0729 19:10:10.753963 1091282 command_runner.go:130] > # ]
	I0729 19:10:10.753973 1091282 command_runner.go:130] > # The port on which the metrics server will listen.
	I0729 19:10:10.753982 1091282 command_runner.go:130] > # metrics_port = 9090
	I0729 19:10:10.753992 1091282 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0729 19:10:10.754001 1091282 command_runner.go:130] > # metrics_socket = ""
	I0729 19:10:10.754009 1091282 command_runner.go:130] > # The certificate for the secure metrics server.
	I0729 19:10:10.754021 1091282 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0729 19:10:10.754033 1091282 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0729 19:10:10.754043 1091282 command_runner.go:130] > # certificate on any modification event.
	I0729 19:10:10.754052 1091282 command_runner.go:130] > # metrics_cert = ""
	I0729 19:10:10.754064 1091282 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0729 19:10:10.754075 1091282 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0729 19:10:10.754083 1091282 command_runner.go:130] > # metrics_key = ""
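Since enable_metrics = true in this config, CRI-O serves Prometheus-format metrics on metrics_port (9090 by default, as commented above). A quick check from the node might look like the following sketch; the grep pattern simply picks out the crio_-prefixed collectors mentioned above:

	curl -s http://127.0.0.1:9090/metrics | grep '^crio_' | head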
	I0729 19:10:10.754095 1091282 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0729 19:10:10.754104 1091282 command_runner.go:130] > [crio.tracing]
	I0729 19:10:10.754110 1091282 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0729 19:10:10.754116 1091282 command_runner.go:130] > # enable_tracing = false
	I0729 19:10:10.754121 1091282 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I0729 19:10:10.754129 1091282 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0729 19:10:10.754135 1091282 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0729 19:10:10.754140 1091282 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0729 19:10:10.754145 1091282 command_runner.go:130] > # CRI-O NRI configuration.
	I0729 19:10:10.754150 1091282 command_runner.go:130] > [crio.nri]
	I0729 19:10:10.754155 1091282 command_runner.go:130] > # Globally enable or disable NRI.
	I0729 19:10:10.754159 1091282 command_runner.go:130] > # enable_nri = false
	I0729 19:10:10.754163 1091282 command_runner.go:130] > # NRI socket to listen on.
	I0729 19:10:10.754167 1091282 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0729 19:10:10.754173 1091282 command_runner.go:130] > # NRI plugin directory to use.
	I0729 19:10:10.754178 1091282 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0729 19:10:10.754183 1091282 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0729 19:10:10.754187 1091282 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0729 19:10:10.754194 1091282 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0729 19:10:10.754199 1091282 command_runner.go:130] > # nri_disable_connections = false
	I0729 19:10:10.754206 1091282 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0729 19:10:10.754210 1091282 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0729 19:10:10.754218 1091282 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0729 19:10:10.754222 1091282 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0729 19:10:10.754230 1091282 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0729 19:10:10.754234 1091282 command_runner.go:130] > [crio.stats]
	I0729 19:10:10.754239 1091282 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0729 19:10:10.754246 1091282 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0729 19:10:10.754250 1091282 command_runner.go:130] > # stats_collection_period = 0
	I0729 19:10:10.754283 1091282 command_runner.go:130] ! time="2024-07-29 19:10:10.712761984Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0729 19:10:10.754299 1091282 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0729 19:10:10.754418 1091282 cni.go:84] Creating CNI manager for ""
	I0729 19:10:10.754426 1091282 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0729 19:10:10.754436 1091282 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 19:10:10.754471 1091282 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.180 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-370772 NodeName:multinode-370772 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.180"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.180 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 19:10:10.754592 1091282 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.180
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-370772"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.180
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.180"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
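	The rendered configuration above is what gets written to /var/tmp/minikube/kubeadm.yaml.new a few lines below. As a sketch, it could be sanity-checked on the node with the kubeadm binary found under /var/lib/minikube/binaries, assuming this kubeadm version supports the config validate subcommand:

	sudo /var/lib/minikube/binaries/v1.30.3/kubeadm config validate \
	  --config /var/tmp/minikube/kubeadm.yaml.new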
	
	I0729 19:10:10.754660 1091282 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0729 19:10:10.765285 1091282 command_runner.go:130] > kubeadm
	I0729 19:10:10.765300 1091282 command_runner.go:130] > kubectl
	I0729 19:10:10.765304 1091282 command_runner.go:130] > kubelet
	I0729 19:10:10.765322 1091282 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 19:10:10.765388 1091282 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 19:10:10.775120 1091282 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0729 19:10:10.792144 1091282 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 19:10:10.808790 1091282 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I0729 19:10:10.825457 1091282 ssh_runner.go:195] Run: grep 192.168.39.180	control-plane.minikube.internal$ /etc/hosts
	I0729 19:10:10.829618 1091282 command_runner.go:130] > 192.168.39.180	control-plane.minikube.internal
	I0729 19:10:10.829698 1091282 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 19:10:10.966511 1091282 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 19:10:10.982251 1091282 certs.go:68] Setting up /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/multinode-370772 for IP: 192.168.39.180
	I0729 19:10:10.982283 1091282 certs.go:194] generating shared ca certs ...
	I0729 19:10:10.982305 1091282 certs.go:226] acquiring lock for ca certs: {Name:mkd1f0b3d7e82ac23e713dd6b75409e103935b02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 19:10:10.982513 1091282 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.key
	I0729 19:10:10.982584 1091282 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/proxy-client-ca.key
	I0729 19:10:10.982604 1091282 certs.go:256] generating profile certs ...
	I0729 19:10:10.982726 1091282 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/multinode-370772/client.key
	I0729 19:10:10.982802 1091282 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/multinode-370772/apiserver.key.86ff478d
	I0729 19:10:10.982840 1091282 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/multinode-370772/proxy-client.key
	I0729 19:10:10.982871 1091282 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0729 19:10:10.982895 1091282 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0729 19:10:10.982913 1091282 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1055011/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0729 19:10:10.982930 1091282 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1055011/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0729 19:10:10.982947 1091282 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/multinode-370772/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0729 19:10:10.982961 1091282 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/multinode-370772/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0729 19:10:10.982990 1091282 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/multinode-370772/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0729 19:10:10.983012 1091282 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/multinode-370772/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0729 19:10:10.983082 1091282 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/1062272.pem (1338 bytes)
	W0729 19:10:10.983114 1091282 certs.go:480] ignoring /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/1062272_empty.pem, impossibly tiny 0 bytes
	I0729 19:10:10.983123 1091282 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 19:10:10.983144 1091282 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem (1082 bytes)
	I0729 19:10:10.983167 1091282 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/cert.pem (1123 bytes)
	I0729 19:10:10.983191 1091282 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/key.pem (1679 bytes)
	I0729 19:10:10.983229 1091282 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/files/etc/ssl/certs/10622722.pem (1708 bytes)
	I0729 19:10:10.983255 1091282 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0729 19:10:10.983268 1091282 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/1062272.pem -> /usr/share/ca-certificates/1062272.pem
	I0729 19:10:10.983283 1091282 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1055011/.minikube/files/etc/ssl/certs/10622722.pem -> /usr/share/ca-certificates/10622722.pem
	I0729 19:10:10.983953 1091282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 19:10:11.007668 1091282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0729 19:10:11.031040 1091282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 19:10:11.053953 1091282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0729 19:10:11.077660 1091282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/multinode-370772/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0729 19:10:11.101599 1091282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/multinode-370772/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0729 19:10:11.124480 1091282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/multinode-370772/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 19:10:11.148926 1091282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/multinode-370772/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0729 19:10:11.173286 1091282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 19:10:11.196133 1091282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/1062272.pem --> /usr/share/ca-certificates/1062272.pem (1338 bytes)
	I0729 19:10:11.219306 1091282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/files/etc/ssl/certs/10622722.pem --> /usr/share/ca-certificates/10622722.pem (1708 bytes)
	I0729 19:10:11.242710 1091282 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 19:10:11.259031 1091282 ssh_runner.go:195] Run: openssl version
	I0729 19:10:11.265245 1091282 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0729 19:10:11.265463 1091282 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 19:10:11.276411 1091282 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 19:10:11.280722 1091282 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jul 29 18:17 /usr/share/ca-certificates/minikubeCA.pem
	I0729 19:10:11.280918 1091282 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 18:17 /usr/share/ca-certificates/minikubeCA.pem
	I0729 19:10:11.280970 1091282 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 19:10:11.286405 1091282 command_runner.go:130] > b5213941
	I0729 19:10:11.286601 1091282 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 19:10:11.295475 1091282 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1062272.pem && ln -fs /usr/share/ca-certificates/1062272.pem /etc/ssl/certs/1062272.pem"
	I0729 19:10:11.305627 1091282 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1062272.pem
	I0729 19:10:11.309810 1091282 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jul 29 18:30 /usr/share/ca-certificates/1062272.pem
	I0729 19:10:11.309865 1091282 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 18:30 /usr/share/ca-certificates/1062272.pem
	I0729 19:10:11.309906 1091282 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1062272.pem
	I0729 19:10:11.315057 1091282 command_runner.go:130] > 51391683
	I0729 19:10:11.315267 1091282 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1062272.pem /etc/ssl/certs/51391683.0"
	I0729 19:10:11.324346 1091282 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10622722.pem && ln -fs /usr/share/ca-certificates/10622722.pem /etc/ssl/certs/10622722.pem"
	I0729 19:10:11.334545 1091282 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10622722.pem
	I0729 19:10:11.338578 1091282 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jul 29 18:30 /usr/share/ca-certificates/10622722.pem
	I0729 19:10:11.338677 1091282 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 18:30 /usr/share/ca-certificates/10622722.pem
	I0729 19:10:11.338714 1091282 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10622722.pem
	I0729 19:10:11.343946 1091282 command_runner.go:130] > 3ec20f2e
	I0729 19:10:11.344122 1091282 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10622722.pem /etc/ssl/certs/3ec20f2e.0"
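The three blocks above repeat the same pattern for each CA bundle: link it into /etc/ssl/certs, hash the certificate subject with openssl, and create the /etc/ssl/certs/<hash>.0 symlink so OpenSSL-based clients can find it. Condensed into a sketch for the first file, using the hash reported above:

	pem=/usr/share/ca-certificates/minikubeCA.pem
	sudo ln -fs "$pem" /etc/ssl/certs/minikubeCA.pem
	hash=$(openssl x509 -hash -noout -in "$pem")        # b5213941 in this run
	sudo test -L "/etc/ssl/certs/${hash}.0" || \
	  sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"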
	I0729 19:10:11.353492 1091282 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 19:10:11.357992 1091282 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 19:10:11.358010 1091282 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0729 19:10:11.358019 1091282 command_runner.go:130] > Device: 253,1	Inode: 1056811     Links: 1
	I0729 19:10:11.358029 1091282 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0729 19:10:11.358038 1091282 command_runner.go:130] > Access: 2024-07-29 19:03:20.928284301 +0000
	I0729 19:10:11.358047 1091282 command_runner.go:130] > Modify: 2024-07-29 19:03:20.928284301 +0000
	I0729 19:10:11.358060 1091282 command_runner.go:130] > Change: 2024-07-29 19:03:20.928284301 +0000
	I0729 19:10:11.358068 1091282 command_runner.go:130] >  Birth: 2024-07-29 19:03:20.928284301 +0000
	I0729 19:10:11.358116 1091282 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 19:10:11.364132 1091282 command_runner.go:130] > Certificate will not expire
	I0729 19:10:11.364258 1091282 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 19:10:11.370257 1091282 command_runner.go:130] > Certificate will not expire
	I0729 19:10:11.370322 1091282 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 19:10:11.376088 1091282 command_runner.go:130] > Certificate will not expire
	I0729 19:10:11.376319 1091282 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 19:10:11.381994 1091282 command_runner.go:130] > Certificate will not expire
	I0729 19:10:11.382191 1091282 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 19:10:11.387878 1091282 command_runner.go:130] > Certificate will not expire
	I0729 19:10:11.388069 1091282 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0729 19:10:11.393955 1091282 command_runner.go:130] > Certificate will not expire
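Each check above asks openssl whether the certificate expires within the next 86400 seconds (24 hours); -checkend exits non-zero if it would. The same six checks, condensed into a sketch with the paths taken from the commands above:

	for c in apiserver-etcd-client apiserver-kubelet-client etcd/server \
	         etcd/healthcheck-client etcd/peer front-proxy-client; do
	  sudo openssl x509 -noout -checkend 86400 -in "/var/lib/minikube/certs/${c}.crt" \
	    && echo "$c: will not expire within 24h" \
	    || echo "$c: expires within 24h"
	done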
	I0729 19:10:11.394021 1091282 kubeadm.go:392] StartCluster: {Name:multinode-370772 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.
3 ClusterName:multinode-370772 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.180 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.127 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.8 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false i
nspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOp
timizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 19:10:11.394131 1091282 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 19:10:11.394177 1091282 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 19:10:11.439112 1091282 command_runner.go:130] > 4babe0c565be1e6fa8480f9fc9753ee74bc23f85a683300df83c8ece2f828073
	I0729 19:10:11.439188 1091282 command_runner.go:130] > 407c792dec1054059ff64a06121e558e84cc492420f3c66d9e8a80fa848020ae
	I0729 19:10:11.439200 1091282 command_runner.go:130] > b166de409b40287b11b80f1b14461bce3d61644be1a725239c9617ce590910a6
	I0729 19:10:11.439301 1091282 command_runner.go:130] > 4944b9573bde78fe5eaf9e5ec0ad98fced5a293c191668dd51995256cd8d3582
	I0729 19:10:11.439318 1091282 command_runner.go:130] > 96d2ade2d0aaa7df83af0a2a9958d310822e925fee9165d886c937def865afce
	I0729 19:10:11.439324 1091282 command_runner.go:130] > 32d758bd7641c5a70e44d51b46ecbefa08d470dcf62f4faf7df1c4e156e2c43a
	I0729 19:10:11.439340 1091282 command_runner.go:130] > 30a85e5c7caf06d03ee41c455bc520b59f5bd6c3c80de77cf2bacb8b5abacde3
	I0729 19:10:11.439388 1091282 command_runner.go:130] > 27450337c36d2565080914b6b1c2595886eedb69670bf62c8b53a4389b6fc2d8
	I0729 19:10:11.441034 1091282 cri.go:89] found id: "4babe0c565be1e6fa8480f9fc9753ee74bc23f85a683300df83c8ece2f828073"
	I0729 19:10:11.441051 1091282 cri.go:89] found id: "407c792dec1054059ff64a06121e558e84cc492420f3c66d9e8a80fa848020ae"
	I0729 19:10:11.441055 1091282 cri.go:89] found id: "b166de409b40287b11b80f1b14461bce3d61644be1a725239c9617ce590910a6"
	I0729 19:10:11.441058 1091282 cri.go:89] found id: "4944b9573bde78fe5eaf9e5ec0ad98fced5a293c191668dd51995256cd8d3582"
	I0729 19:10:11.441060 1091282 cri.go:89] found id: "96d2ade2d0aaa7df83af0a2a9958d310822e925fee9165d886c937def865afce"
	I0729 19:10:11.441063 1091282 cri.go:89] found id: "32d758bd7641c5a70e44d51b46ecbefa08d470dcf62f4faf7df1c4e156e2c43a"
	I0729 19:10:11.441066 1091282 cri.go:89] found id: "30a85e5c7caf06d03ee41c455bc520b59f5bd6c3c80de77cf2bacb8b5abacde3"
	I0729 19:10:11.441068 1091282 cri.go:89] found id: "27450337c36d2565080914b6b1c2595886eedb69670bf62c8b53a4389b6fc2d8"
	I0729 19:10:11.441071 1091282 cri.go:89] found id: ""
	I0729 19:10:11.441110 1091282 ssh_runner.go:195] Run: sudo runc list -f json
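	The container IDs listed above come from the crictl invocation a few lines earlier; the same filter can be run standalone on the node, with or without --quiet, to see names and states rather than bare IDs:

	sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	sudo crictl ps -a --label io.kubernetes.pod.namespace=kube-system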
	
	
	==> CRI-O <==
	Jul 29 19:14:18 multinode-370772 crio[2866]: time="2024-07-29 19:14:18.847171491Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722280458847150949,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143052,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1a286ec8-9c99-4b8b-97cb-947234172c4f name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 19:14:18 multinode-370772 crio[2866]: time="2024-07-29 19:14:18.848165305Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f2fb358b-d754-4ecf-9436-6a2ba2d82226 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 19:14:18 multinode-370772 crio[2866]: time="2024-07-29 19:14:18.848345513Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f2fb358b-d754-4ecf-9436-6a2ba2d82226 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 19:14:18 multinode-370772 crio[2866]: time="2024-07-29 19:14:18.848685391Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9fabe309783f98159fe460daff1940bf9e0f3b977a15561611591722b05bc2ed,PodSandboxId:9cf42e004106e89439e4d7d5beb73e4b282f51b9fcce0ce03d7dedc29f348459,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722280252315130140,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-6l2ht,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 35fbaee9-23c6-47ce-9b54-e6e523cda069,},Annotations:map[string]string{io.kubernetes.container.hash: c3472d00,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5db30bce927d2bf819cbd670b88f4d1dff2155a3359b62800f301889e856470,PodSandboxId:82e697a6c2c5ac4a6f63d91a24d595605b2dd5152e5a99f54d387a08e287b995,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722280218808988025,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nz959,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd1040ab-4ee3-42dc-8a86-9ecd40578a48,},Annotations:map[string]string{io.kubernetes.container.hash: cf6509a5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:268098a6ccd0d51de2ae99f3fd4d621ce76d79d03d18bfdffaea2ab59357fc08,PodSandboxId:a8280acb8804e60121a33fb95b84725d4fbdac1c0fb469003e6933ebcfed6d5a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722280218699161220,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-h6x45,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a210787-5503-4b35-899f-53cc1
5e43d4b,},Annotations:map[string]string{io.kubernetes.container.hash: 689d770d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32e4808743b1dbd780808545426c81bd19bae1c6b2a7e9b3839323b61e599e6f,PodSandboxId:0cd46ada9baca68a96c79ecaad4c017a438d14da23812b62b321e89995f8fcd4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722280218579814263,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de00f063-7d28-45e2-aa3a-39b8e8084dc8,},An
notations:map[string]string{io.kubernetes.container.hash: 83ae6e6f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da78244f3f52e1f5b4d3af779690e2fabc289355b16c6706defffcd97313591b,PodSandboxId:796e1654a90c85809a975bda393e42a89ded74847a6b76bb8b79b43c40b68f17,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722280218496003752,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zzfbl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98b96d50-7bc4-4e38-a093-ee0d26a7db01,},Annotations:map[string]string{io.ku
bernetes.container.hash: b7134565,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:765c9a17f9f9c845e7c67699aa93befeed4366aca86cf47f51dd6931cda3fb33,PodSandboxId:154a76a952d577fe1f9811e848861bc4366f192bb4a218b159eb75af65cf470d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722280213761219611,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-370772,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61352aa6e536e34fe2ff2b41c58d94cf,},Annotations:map[string]string{io.kubernetes.container.hash: 71745cd,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea775c68cb9d2023cc148bf66a598c5e8c29175277e5d0c301bf3e038e4c2d65,PodSandboxId:55dd1b5e3f405b26fe47b416e8a972b662a5aa2c8b85e90af8fb72d16b9d6ce4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722280213733897642,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-370772,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df8c45d744808cffd58039a2da77666e,},Annotations:map[string]string{io.kubernetes.container.hash: d6fd6d3b,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e12bd9484ce29cfcde0e840b7dc6523157dc92fffa04df50346d0608ab8faaf5,PodSandboxId:972bd1bfab27db0cfd2aa196f6f95e490f9b921577492b4af92e01a47ce6e23d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722280213757979202,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-370772,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7d52bca566a2be556cde5910d0fc25c,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1
,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f59a71174cf945be2c931a9e645a3105ce1b2581f75dd8a830877c6ac5037a18,PodSandboxId:8aadf1c9db855105a6530f15003180702f95dcbab503afb9095377c2029466b1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722280213670203398,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-370772,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e3e8e3fb96e74f7443136a2dbdb1f0e,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:034df8dc5d4e74d0ed03817490f2186d5aa22aee5923c82cbb0cf221ee25cdec,PodSandboxId:a1a136d7b4b8b7c06deb1a0fa6aeadaa909aa3ab0b02400ccb603cd81c600632,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722279888451509637,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-6l2ht,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 35fbaee9-23c6-47ce-9b54-e6e523cda069,},Annotations:map[string]string{io.kubernetes.container.hash: c3472d00,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4babe0c565be1e6fa8480f9fc9753ee74bc23f85a683300df83c8ece2f828073,PodSandboxId:09d1493d303473b6fcd525b0df2c9efd27c99887f00eb574643ac4cec2bcab57,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722279838726166902,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de00f063-7d28-45e2-aa3a-39b8e8084dc8,},Annotations:map[string]string{io.kubernetes.container.hash: 83ae6e6f,io.kubernetes.container.restartCount: 0,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:407c792dec1054059ff64a06121e558e84cc492420f3c66d9e8a80fa848020ae,PodSandboxId:3f895614c86f93e95757a412a84f83f46d817c14c153e238f8f2cae1471bd057,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722279838720414652,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nz959,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd1040ab-4ee3-42dc-8a86-9ecd40578a48,},Annotations:map[string]string{io.kubernetes.container.hash: cf6509a5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"
protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b166de409b40287b11b80f1b14461bce3d61644be1a725239c9617ce590910a6,PodSandboxId:9b7ed6edaf967d1da623dc9ac4cbcffd316e512d002cf38759fba27766280708,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722279826711913000,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-h6x45,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: 4a210787-5503-4b35-899f-53cc15e43d4b,},Annotations:map[string]string{io.kubernetes.container.hash: 689d770d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4944b9573bde78fe5eaf9e5ec0ad98fced5a293c191668dd51995256cd8d3582,PodSandboxId:98dd4aa9e2c91423ca93c27227ef25e88226ac8e8426a56f7da5f3d117ce6419,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722279824599647318,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zzfbl,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 98b96d50-7bc4-4e38-a093-ee0d26a7db01,},Annotations:map[string]string{io.kubernetes.container.hash: b7134565,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96d2ade2d0aaa7df83af0a2a9958d310822e925fee9165d886c937def865afce,PodSandboxId:622e655ecebe1b9e3235d977f3e384dc3da5a89a0992f8a148978aa3fc3084cf,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722279804776306303,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-370772,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61352aa6e536e34fe2ff2b41c58d94c
f,},Annotations:map[string]string{io.kubernetes.container.hash: 71745cd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32d758bd7641c5a70e44d51b46ecbefa08d470dcf62f4faf7df1c4e156e2c43a,PodSandboxId:55d1c89e7e7475a0597776e0059eac7854e219845d98585a5aefcaebed0033dd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722279804751166718,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-370772,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e3e8e3fb96e74f74431
36a2dbdb1f0e,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30a85e5c7caf06d03ee41c455bc520b59f5bd6c3c80de77cf2bacb8b5abacde3,PodSandboxId:a96b2f02f21bc44eb5c2d491a6a282426237c067787d5169367f1458b0afab45,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722279804717976373,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-370772,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7d52bca566a2be556cde5910d0fc25c,},
Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27450337c36d2565080914b6b1c2595886eedb69670bf62c8b53a4389b6fc2d8,PodSandboxId:f57101c3bec9b82d3d13b707350538b88589bd69a5d2175541dc98d4a61a07d4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722279804698597191,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-370772,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df8c45d744808cffd58039a2da77666e,},Annotations:map
[string]string{io.kubernetes.container.hash: d6fd6d3b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f2fb358b-d754-4ecf-9436-6a2ba2d82226 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 19:14:18 multinode-370772 crio[2866]: time="2024-07-29 19:14:18.888874155Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4279405c-6c71-4eef-9461-901c1c9e552f name=/runtime.v1.RuntimeService/Version
	Jul 29 19:14:18 multinode-370772 crio[2866]: time="2024-07-29 19:14:18.888960203Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4279405c-6c71-4eef-9461-901c1c9e552f name=/runtime.v1.RuntimeService/Version
	Jul 29 19:14:18 multinode-370772 crio[2866]: time="2024-07-29 19:14:18.890368394Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=48824b17-0588-4c6e-bc49-de174de1bc55 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 19:14:18 multinode-370772 crio[2866]: time="2024-07-29 19:14:18.890769412Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722280458890749172,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143052,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=48824b17-0588-4c6e-bc49-de174de1bc55 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 19:14:18 multinode-370772 crio[2866]: time="2024-07-29 19:14:18.891349936Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=90735f2e-2372-4f3e-9f58-f592989408ed name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 19:14:18 multinode-370772 crio[2866]: time="2024-07-29 19:14:18.891400285Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=90735f2e-2372-4f3e-9f58-f592989408ed name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 19:14:18 multinode-370772 crio[2866]: time="2024-07-29 19:14:18.891901600Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9fabe309783f98159fe460daff1940bf9e0f3b977a15561611591722b05bc2ed,PodSandboxId:9cf42e004106e89439e4d7d5beb73e4b282f51b9fcce0ce03d7dedc29f348459,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722280252315130140,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-6l2ht,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 35fbaee9-23c6-47ce-9b54-e6e523cda069,},Annotations:map[string]string{io.kubernetes.container.hash: c3472d00,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5db30bce927d2bf819cbd670b88f4d1dff2155a3359b62800f301889e856470,PodSandboxId:82e697a6c2c5ac4a6f63d91a24d595605b2dd5152e5a99f54d387a08e287b995,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722280218808988025,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nz959,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd1040ab-4ee3-42dc-8a86-9ecd40578a48,},Annotations:map[string]string{io.kubernetes.container.hash: cf6509a5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:268098a6ccd0d51de2ae99f3fd4d621ce76d79d03d18bfdffaea2ab59357fc08,PodSandboxId:a8280acb8804e60121a33fb95b84725d4fbdac1c0fb469003e6933ebcfed6d5a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722280218699161220,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-h6x45,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a210787-5503-4b35-899f-53cc1
5e43d4b,},Annotations:map[string]string{io.kubernetes.container.hash: 689d770d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32e4808743b1dbd780808545426c81bd19bae1c6b2a7e9b3839323b61e599e6f,PodSandboxId:0cd46ada9baca68a96c79ecaad4c017a438d14da23812b62b321e89995f8fcd4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722280218579814263,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de00f063-7d28-45e2-aa3a-39b8e8084dc8,},An
notations:map[string]string{io.kubernetes.container.hash: 83ae6e6f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da78244f3f52e1f5b4d3af779690e2fabc289355b16c6706defffcd97313591b,PodSandboxId:796e1654a90c85809a975bda393e42a89ded74847a6b76bb8b79b43c40b68f17,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722280218496003752,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zzfbl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98b96d50-7bc4-4e38-a093-ee0d26a7db01,},Annotations:map[string]string{io.ku
bernetes.container.hash: b7134565,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:765c9a17f9f9c845e7c67699aa93befeed4366aca86cf47f51dd6931cda3fb33,PodSandboxId:154a76a952d577fe1f9811e848861bc4366f192bb4a218b159eb75af65cf470d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722280213761219611,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-370772,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61352aa6e536e34fe2ff2b41c58d94cf,},Annotations:map[string]string{io.kubernetes.container.hash: 71745cd,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea775c68cb9d2023cc148bf66a598c5e8c29175277e5d0c301bf3e038e4c2d65,PodSandboxId:55dd1b5e3f405b26fe47b416e8a972b662a5aa2c8b85e90af8fb72d16b9d6ce4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722280213733897642,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-370772,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df8c45d744808cffd58039a2da77666e,},Annotations:map[string]string{io.kubernetes.container.hash: d6fd6d3b,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e12bd9484ce29cfcde0e840b7dc6523157dc92fffa04df50346d0608ab8faaf5,PodSandboxId:972bd1bfab27db0cfd2aa196f6f95e490f9b921577492b4af92e01a47ce6e23d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722280213757979202,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-370772,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7d52bca566a2be556cde5910d0fc25c,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1
,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f59a71174cf945be2c931a9e645a3105ce1b2581f75dd8a830877c6ac5037a18,PodSandboxId:8aadf1c9db855105a6530f15003180702f95dcbab503afb9095377c2029466b1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722280213670203398,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-370772,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e3e8e3fb96e74f7443136a2dbdb1f0e,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:034df8dc5d4e74d0ed03817490f2186d5aa22aee5923c82cbb0cf221ee25cdec,PodSandboxId:a1a136d7b4b8b7c06deb1a0fa6aeadaa909aa3ab0b02400ccb603cd81c600632,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722279888451509637,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-6l2ht,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 35fbaee9-23c6-47ce-9b54-e6e523cda069,},Annotations:map[string]string{io.kubernetes.container.hash: c3472d00,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4babe0c565be1e6fa8480f9fc9753ee74bc23f85a683300df83c8ece2f828073,PodSandboxId:09d1493d303473b6fcd525b0df2c9efd27c99887f00eb574643ac4cec2bcab57,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722279838726166902,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de00f063-7d28-45e2-aa3a-39b8e8084dc8,},Annotations:map[string]string{io.kubernetes.container.hash: 83ae6e6f,io.kubernetes.container.restartCount: 0,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:407c792dec1054059ff64a06121e558e84cc492420f3c66d9e8a80fa848020ae,PodSandboxId:3f895614c86f93e95757a412a84f83f46d817c14c153e238f8f2cae1471bd057,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722279838720414652,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nz959,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd1040ab-4ee3-42dc-8a86-9ecd40578a48,},Annotations:map[string]string{io.kubernetes.container.hash: cf6509a5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"
protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b166de409b40287b11b80f1b14461bce3d61644be1a725239c9617ce590910a6,PodSandboxId:9b7ed6edaf967d1da623dc9ac4cbcffd316e512d002cf38759fba27766280708,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722279826711913000,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-h6x45,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: 4a210787-5503-4b35-899f-53cc15e43d4b,},Annotations:map[string]string{io.kubernetes.container.hash: 689d770d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4944b9573bde78fe5eaf9e5ec0ad98fced5a293c191668dd51995256cd8d3582,PodSandboxId:98dd4aa9e2c91423ca93c27227ef25e88226ac8e8426a56f7da5f3d117ce6419,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722279824599647318,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zzfbl,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 98b96d50-7bc4-4e38-a093-ee0d26a7db01,},Annotations:map[string]string{io.kubernetes.container.hash: b7134565,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96d2ade2d0aaa7df83af0a2a9958d310822e925fee9165d886c937def865afce,PodSandboxId:622e655ecebe1b9e3235d977f3e384dc3da5a89a0992f8a148978aa3fc3084cf,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722279804776306303,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-370772,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61352aa6e536e34fe2ff2b41c58d94c
f,},Annotations:map[string]string{io.kubernetes.container.hash: 71745cd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32d758bd7641c5a70e44d51b46ecbefa08d470dcf62f4faf7df1c4e156e2c43a,PodSandboxId:55d1c89e7e7475a0597776e0059eac7854e219845d98585a5aefcaebed0033dd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722279804751166718,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-370772,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e3e8e3fb96e74f74431
36a2dbdb1f0e,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30a85e5c7caf06d03ee41c455bc520b59f5bd6c3c80de77cf2bacb8b5abacde3,PodSandboxId:a96b2f02f21bc44eb5c2d491a6a282426237c067787d5169367f1458b0afab45,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722279804717976373,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-370772,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7d52bca566a2be556cde5910d0fc25c,},
Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27450337c36d2565080914b6b1c2595886eedb69670bf62c8b53a4389b6fc2d8,PodSandboxId:f57101c3bec9b82d3d13b707350538b88589bd69a5d2175541dc98d4a61a07d4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722279804698597191,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-370772,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df8c45d744808cffd58039a2da77666e,},Annotations:map
[string]string{io.kubernetes.container.hash: d6fd6d3b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=90735f2e-2372-4f3e-9f58-f592989408ed name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 19:14:18 multinode-370772 crio[2866]: time="2024-07-29 19:14:18.933315001Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3ac58e0f-793b-4e57-b290-149b27885ecb name=/runtime.v1.RuntimeService/Version
	Jul 29 19:14:18 multinode-370772 crio[2866]: time="2024-07-29 19:14:18.933393250Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3ac58e0f-793b-4e57-b290-149b27885ecb name=/runtime.v1.RuntimeService/Version
	Jul 29 19:14:18 multinode-370772 crio[2866]: time="2024-07-29 19:14:18.934738509Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=99a9dc98-f269-4faf-abee-30405bf17f81 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 19:14:18 multinode-370772 crio[2866]: time="2024-07-29 19:14:18.935169319Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722280458935148854,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143052,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=99a9dc98-f269-4faf-abee-30405bf17f81 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 19:14:18 multinode-370772 crio[2866]: time="2024-07-29 19:14:18.935728581Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=59c3768b-bbd4-47cb-aeed-ffa2dfeef214 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 19:14:18 multinode-370772 crio[2866]: time="2024-07-29 19:14:18.935787479Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=59c3768b-bbd4-47cb-aeed-ffa2dfeef214 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 19:14:18 multinode-370772 crio[2866]: time="2024-07-29 19:14:18.936110200Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9fabe309783f98159fe460daff1940bf9e0f3b977a15561611591722b05bc2ed,PodSandboxId:9cf42e004106e89439e4d7d5beb73e4b282f51b9fcce0ce03d7dedc29f348459,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722280252315130140,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-6l2ht,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 35fbaee9-23c6-47ce-9b54-e6e523cda069,},Annotations:map[string]string{io.kubernetes.container.hash: c3472d00,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5db30bce927d2bf819cbd670b88f4d1dff2155a3359b62800f301889e856470,PodSandboxId:82e697a6c2c5ac4a6f63d91a24d595605b2dd5152e5a99f54d387a08e287b995,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722280218808988025,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nz959,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd1040ab-4ee3-42dc-8a86-9ecd40578a48,},Annotations:map[string]string{io.kubernetes.container.hash: cf6509a5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:268098a6ccd0d51de2ae99f3fd4d621ce76d79d03d18bfdffaea2ab59357fc08,PodSandboxId:a8280acb8804e60121a33fb95b84725d4fbdac1c0fb469003e6933ebcfed6d5a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722280218699161220,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-h6x45,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a210787-5503-4b35-899f-53cc1
5e43d4b,},Annotations:map[string]string{io.kubernetes.container.hash: 689d770d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32e4808743b1dbd780808545426c81bd19bae1c6b2a7e9b3839323b61e599e6f,PodSandboxId:0cd46ada9baca68a96c79ecaad4c017a438d14da23812b62b321e89995f8fcd4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722280218579814263,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de00f063-7d28-45e2-aa3a-39b8e8084dc8,},An
notations:map[string]string{io.kubernetes.container.hash: 83ae6e6f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da78244f3f52e1f5b4d3af779690e2fabc289355b16c6706defffcd97313591b,PodSandboxId:796e1654a90c85809a975bda393e42a89ded74847a6b76bb8b79b43c40b68f17,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722280218496003752,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zzfbl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98b96d50-7bc4-4e38-a093-ee0d26a7db01,},Annotations:map[string]string{io.ku
bernetes.container.hash: b7134565,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:765c9a17f9f9c845e7c67699aa93befeed4366aca86cf47f51dd6931cda3fb33,PodSandboxId:154a76a952d577fe1f9811e848861bc4366f192bb4a218b159eb75af65cf470d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722280213761219611,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-370772,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61352aa6e536e34fe2ff2b41c58d94cf,},Annotations:map[string]string{io.kubernetes.container.hash: 71745cd,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea775c68cb9d2023cc148bf66a598c5e8c29175277e5d0c301bf3e038e4c2d65,PodSandboxId:55dd1b5e3f405b26fe47b416e8a972b662a5aa2c8b85e90af8fb72d16b9d6ce4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722280213733897642,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-370772,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df8c45d744808cffd58039a2da77666e,},Annotations:map[string]string{io.kubernetes.container.hash: d6fd6d3b,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e12bd9484ce29cfcde0e840b7dc6523157dc92fffa04df50346d0608ab8faaf5,PodSandboxId:972bd1bfab27db0cfd2aa196f6f95e490f9b921577492b4af92e01a47ce6e23d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722280213757979202,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-370772,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7d52bca566a2be556cde5910d0fc25c,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1
,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f59a71174cf945be2c931a9e645a3105ce1b2581f75dd8a830877c6ac5037a18,PodSandboxId:8aadf1c9db855105a6530f15003180702f95dcbab503afb9095377c2029466b1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722280213670203398,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-370772,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e3e8e3fb96e74f7443136a2dbdb1f0e,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:034df8dc5d4e74d0ed03817490f2186d5aa22aee5923c82cbb0cf221ee25cdec,PodSandboxId:a1a136d7b4b8b7c06deb1a0fa6aeadaa909aa3ab0b02400ccb603cd81c600632,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722279888451509637,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-6l2ht,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 35fbaee9-23c6-47ce-9b54-e6e523cda069,},Annotations:map[string]string{io.kubernetes.container.hash: c3472d00,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4babe0c565be1e6fa8480f9fc9753ee74bc23f85a683300df83c8ece2f828073,PodSandboxId:09d1493d303473b6fcd525b0df2c9efd27c99887f00eb574643ac4cec2bcab57,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722279838726166902,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de00f063-7d28-45e2-aa3a-39b8e8084dc8,},Annotations:map[string]string{io.kubernetes.container.hash: 83ae6e6f,io.kubernetes.container.restartCount: 0,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:407c792dec1054059ff64a06121e558e84cc492420f3c66d9e8a80fa848020ae,PodSandboxId:3f895614c86f93e95757a412a84f83f46d817c14c153e238f8f2cae1471bd057,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722279838720414652,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nz959,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd1040ab-4ee3-42dc-8a86-9ecd40578a48,},Annotations:map[string]string{io.kubernetes.container.hash: cf6509a5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"
protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b166de409b40287b11b80f1b14461bce3d61644be1a725239c9617ce590910a6,PodSandboxId:9b7ed6edaf967d1da623dc9ac4cbcffd316e512d002cf38759fba27766280708,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722279826711913000,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-h6x45,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: 4a210787-5503-4b35-899f-53cc15e43d4b,},Annotations:map[string]string{io.kubernetes.container.hash: 689d770d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4944b9573bde78fe5eaf9e5ec0ad98fced5a293c191668dd51995256cd8d3582,PodSandboxId:98dd4aa9e2c91423ca93c27227ef25e88226ac8e8426a56f7da5f3d117ce6419,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722279824599647318,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zzfbl,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 98b96d50-7bc4-4e38-a093-ee0d26a7db01,},Annotations:map[string]string{io.kubernetes.container.hash: b7134565,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96d2ade2d0aaa7df83af0a2a9958d310822e925fee9165d886c937def865afce,PodSandboxId:622e655ecebe1b9e3235d977f3e384dc3da5a89a0992f8a148978aa3fc3084cf,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722279804776306303,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-370772,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61352aa6e536e34fe2ff2b41c58d94c
f,},Annotations:map[string]string{io.kubernetes.container.hash: 71745cd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32d758bd7641c5a70e44d51b46ecbefa08d470dcf62f4faf7df1c4e156e2c43a,PodSandboxId:55d1c89e7e7475a0597776e0059eac7854e219845d98585a5aefcaebed0033dd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722279804751166718,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-370772,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e3e8e3fb96e74f74431
36a2dbdb1f0e,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30a85e5c7caf06d03ee41c455bc520b59f5bd6c3c80de77cf2bacb8b5abacde3,PodSandboxId:a96b2f02f21bc44eb5c2d491a6a282426237c067787d5169367f1458b0afab45,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722279804717976373,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-370772,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7d52bca566a2be556cde5910d0fc25c,},
Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27450337c36d2565080914b6b1c2595886eedb69670bf62c8b53a4389b6fc2d8,PodSandboxId:f57101c3bec9b82d3d13b707350538b88589bd69a5d2175541dc98d4a61a07d4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722279804698597191,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-370772,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df8c45d744808cffd58039a2da77666e,},Annotations:map
[string]string{io.kubernetes.container.hash: d6fd6d3b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=59c3768b-bbd4-47cb-aeed-ffa2dfeef214 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 19:14:18 multinode-370772 crio[2866]: time="2024-07-29 19:14:18.980658843Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8baf7cc6-7694-40eb-a2a0-5c21ed276e39 name=/runtime.v1.RuntimeService/Version
	Jul 29 19:14:18 multinode-370772 crio[2866]: time="2024-07-29 19:14:18.980740764Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8baf7cc6-7694-40eb-a2a0-5c21ed276e39 name=/runtime.v1.RuntimeService/Version
	Jul 29 19:14:18 multinode-370772 crio[2866]: time="2024-07-29 19:14:18.981793575Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6e53b1b9-7072-46d3-ba24-e5de895b4144 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 19:14:18 multinode-370772 crio[2866]: time="2024-07-29 19:14:18.982210942Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722280458982190036,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143052,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6e53b1b9-7072-46d3-ba24-e5de895b4144 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 19:14:18 multinode-370772 crio[2866]: time="2024-07-29 19:14:18.983022087Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6cb5d1b6-82e8-49d1-9df5-4c0e66425684 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 19:14:18 multinode-370772 crio[2866]: time="2024-07-29 19:14:18.983088982Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6cb5d1b6-82e8-49d1-9df5-4c0e66425684 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 19:14:18 multinode-370772 crio[2866]: time="2024-07-29 19:14:18.983465152Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9fabe309783f98159fe460daff1940bf9e0f3b977a15561611591722b05bc2ed,PodSandboxId:9cf42e004106e89439e4d7d5beb73e4b282f51b9fcce0ce03d7dedc29f348459,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722280252315130140,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-6l2ht,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 35fbaee9-23c6-47ce-9b54-e6e523cda069,},Annotations:map[string]string{io.kubernetes.container.hash: c3472d00,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5db30bce927d2bf819cbd670b88f4d1dff2155a3359b62800f301889e856470,PodSandboxId:82e697a6c2c5ac4a6f63d91a24d595605b2dd5152e5a99f54d387a08e287b995,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722280218808988025,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nz959,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd1040ab-4ee3-42dc-8a86-9ecd40578a48,},Annotations:map[string]string{io.kubernetes.container.hash: cf6509a5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:268098a6ccd0d51de2ae99f3fd4d621ce76d79d03d18bfdffaea2ab59357fc08,PodSandboxId:a8280acb8804e60121a33fb95b84725d4fbdac1c0fb469003e6933ebcfed6d5a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722280218699161220,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-h6x45,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a210787-5503-4b35-899f-53cc1
5e43d4b,},Annotations:map[string]string{io.kubernetes.container.hash: 689d770d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32e4808743b1dbd780808545426c81bd19bae1c6b2a7e9b3839323b61e599e6f,PodSandboxId:0cd46ada9baca68a96c79ecaad4c017a438d14da23812b62b321e89995f8fcd4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722280218579814263,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de00f063-7d28-45e2-aa3a-39b8e8084dc8,},An
notations:map[string]string{io.kubernetes.container.hash: 83ae6e6f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da78244f3f52e1f5b4d3af779690e2fabc289355b16c6706defffcd97313591b,PodSandboxId:796e1654a90c85809a975bda393e42a89ded74847a6b76bb8b79b43c40b68f17,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722280218496003752,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zzfbl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98b96d50-7bc4-4e38-a093-ee0d26a7db01,},Annotations:map[string]string{io.ku
bernetes.container.hash: b7134565,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:765c9a17f9f9c845e7c67699aa93befeed4366aca86cf47f51dd6931cda3fb33,PodSandboxId:154a76a952d577fe1f9811e848861bc4366f192bb4a218b159eb75af65cf470d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722280213761219611,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-370772,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61352aa6e536e34fe2ff2b41c58d94cf,},Annotations:map[string]string{io.kubernetes.container.hash: 71745cd,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea775c68cb9d2023cc148bf66a598c5e8c29175277e5d0c301bf3e038e4c2d65,PodSandboxId:55dd1b5e3f405b26fe47b416e8a972b662a5aa2c8b85e90af8fb72d16b9d6ce4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722280213733897642,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-370772,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df8c45d744808cffd58039a2da77666e,},Annotations:map[string]string{io.kubernetes.container.hash: d6fd6d3b,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e12bd9484ce29cfcde0e840b7dc6523157dc92fffa04df50346d0608ab8faaf5,PodSandboxId:972bd1bfab27db0cfd2aa196f6f95e490f9b921577492b4af92e01a47ce6e23d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722280213757979202,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-370772,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7d52bca566a2be556cde5910d0fc25c,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1
,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f59a71174cf945be2c931a9e645a3105ce1b2581f75dd8a830877c6ac5037a18,PodSandboxId:8aadf1c9db855105a6530f15003180702f95dcbab503afb9095377c2029466b1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722280213670203398,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-370772,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e3e8e3fb96e74f7443136a2dbdb1f0e,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:034df8dc5d4e74d0ed03817490f2186d5aa22aee5923c82cbb0cf221ee25cdec,PodSandboxId:a1a136d7b4b8b7c06deb1a0fa6aeadaa909aa3ab0b02400ccb603cd81c600632,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722279888451509637,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-6l2ht,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 35fbaee9-23c6-47ce-9b54-e6e523cda069,},Annotations:map[string]string{io.kubernetes.container.hash: c3472d00,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4babe0c565be1e6fa8480f9fc9753ee74bc23f85a683300df83c8ece2f828073,PodSandboxId:09d1493d303473b6fcd525b0df2c9efd27c99887f00eb574643ac4cec2bcab57,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722279838726166902,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de00f063-7d28-45e2-aa3a-39b8e8084dc8,},Annotations:map[string]string{io.kubernetes.container.hash: 83ae6e6f,io.kubernetes.container.restartCount: 0,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:407c792dec1054059ff64a06121e558e84cc492420f3c66d9e8a80fa848020ae,PodSandboxId:3f895614c86f93e95757a412a84f83f46d817c14c153e238f8f2cae1471bd057,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722279838720414652,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nz959,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd1040ab-4ee3-42dc-8a86-9ecd40578a48,},Annotations:map[string]string{io.kubernetes.container.hash: cf6509a5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"
protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b166de409b40287b11b80f1b14461bce3d61644be1a725239c9617ce590910a6,PodSandboxId:9b7ed6edaf967d1da623dc9ac4cbcffd316e512d002cf38759fba27766280708,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722279826711913000,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-h6x45,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: 4a210787-5503-4b35-899f-53cc15e43d4b,},Annotations:map[string]string{io.kubernetes.container.hash: 689d770d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4944b9573bde78fe5eaf9e5ec0ad98fced5a293c191668dd51995256cd8d3582,PodSandboxId:98dd4aa9e2c91423ca93c27227ef25e88226ac8e8426a56f7da5f3d117ce6419,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722279824599647318,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zzfbl,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 98b96d50-7bc4-4e38-a093-ee0d26a7db01,},Annotations:map[string]string{io.kubernetes.container.hash: b7134565,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96d2ade2d0aaa7df83af0a2a9958d310822e925fee9165d886c937def865afce,PodSandboxId:622e655ecebe1b9e3235d977f3e384dc3da5a89a0992f8a148978aa3fc3084cf,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722279804776306303,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-370772,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61352aa6e536e34fe2ff2b41c58d94c
f,},Annotations:map[string]string{io.kubernetes.container.hash: 71745cd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32d758bd7641c5a70e44d51b46ecbefa08d470dcf62f4faf7df1c4e156e2c43a,PodSandboxId:55d1c89e7e7475a0597776e0059eac7854e219845d98585a5aefcaebed0033dd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722279804751166718,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-370772,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e3e8e3fb96e74f74431
36a2dbdb1f0e,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30a85e5c7caf06d03ee41c455bc520b59f5bd6c3c80de77cf2bacb8b5abacde3,PodSandboxId:a96b2f02f21bc44eb5c2d491a6a282426237c067787d5169367f1458b0afab45,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722279804717976373,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-370772,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7d52bca566a2be556cde5910d0fc25c,},
Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27450337c36d2565080914b6b1c2595886eedb69670bf62c8b53a4389b6fc2d8,PodSandboxId:f57101c3bec9b82d3d13b707350538b88589bd69a5d2175541dc98d4a61a07d4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722279804698597191,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-370772,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df8c45d744808cffd58039a2da77666e,},Annotations:map
[string]string{io.kubernetes.container.hash: d6fd6d3b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6cb5d1b6-82e8-49d1-9df5-4c0e66425684 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	9fabe309783f9       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      3 minutes ago       Running             busybox                   1                   9cf42e004106e       busybox-fc5497c4f-6l2ht
	d5db30bce927d       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      4 minutes ago       Running             coredns                   1                   82e697a6c2c5a       coredns-7db6d8ff4d-nz959
	268098a6ccd0d       6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46                                      4 minutes ago       Running             kindnet-cni               1                   a8280acb8804e       kindnet-h6x45
	32e4808743b1d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      4 minutes ago       Running             storage-provisioner       1                   0cd46ada9baca       storage-provisioner
	da78244f3f52e       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      4 minutes ago       Running             kube-proxy                1                   796e1654a90c8       kube-proxy-zzfbl
	765c9a17f9f9c       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      4 minutes ago       Running             etcd                      1                   154a76a952d57       etcd-multinode-370772
	e12bd9484ce29       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      4 minutes ago       Running             kube-scheduler            1                   972bd1bfab27d       kube-scheduler-multinode-370772
	ea775c68cb9d2       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      4 minutes ago       Running             kube-apiserver            1                   55dd1b5e3f405       kube-apiserver-multinode-370772
	f59a71174cf94       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      4 minutes ago       Running             kube-controller-manager   1                   8aadf1c9db855       kube-controller-manager-multinode-370772
	034df8dc5d4e7       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   9 minutes ago       Exited              busybox                   0                   a1a136d7b4b8b       busybox-fc5497c4f-6l2ht
	4babe0c565be1       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      10 minutes ago      Exited              storage-provisioner       0                   09d1493d30347       storage-provisioner
	407c792dec105       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      10 minutes ago      Exited              coredns                   0                   3f895614c86f9       coredns-7db6d8ff4d-nz959
	b166de409b402       docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9    10 minutes ago      Exited              kindnet-cni               0                   9b7ed6edaf967       kindnet-h6x45
	4944b9573bde7       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      10 minutes ago      Exited              kube-proxy                0                   98dd4aa9e2c91       kube-proxy-zzfbl
	96d2ade2d0aaa       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      10 minutes ago      Exited              etcd                      0                   622e655ecebe1       etcd-multinode-370772
	32d758bd7641c       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      10 minutes ago      Exited              kube-controller-manager   0                   55d1c89e7e747       kube-controller-manager-multinode-370772
	30a85e5c7caf0       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      10 minutes ago      Exited              kube-scheduler            0                   a96b2f02f21bc       kube-scheduler-multinode-370772
	27450337c36d2       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      10 minutes ago      Exited              kube-apiserver            0                   f57101c3bec9b       kube-apiserver-multinode-370772
	
	
	==> coredns [407c792dec1054059ff64a06121e558e84cc492420f3c66d9e8a80fa848020ae] <==
	[INFO] 10.244.1.2:41068 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001529988s
	[INFO] 10.244.1.2:55196 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000139147s
	[INFO] 10.244.1.2:44299 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000075804s
	[INFO] 10.244.1.2:58773 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001166902s
	[INFO] 10.244.1.2:58285 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000149462s
	[INFO] 10.244.1.2:49655 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000077238s
	[INFO] 10.244.1.2:58613 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000119513s
	[INFO] 10.244.0.3:51408 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000144014s
	[INFO] 10.244.0.3:38041 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000097912s
	[INFO] 10.244.0.3:48464 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000062721s
	[INFO] 10.244.0.3:37512 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000111129s
	[INFO] 10.244.1.2:35062 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000140156s
	[INFO] 10.244.1.2:55518 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000117604s
	[INFO] 10.244.1.2:56074 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000087961s
	[INFO] 10.244.1.2:55503 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00009786s
	[INFO] 10.244.0.3:52955 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000095643s
	[INFO] 10.244.0.3:37712 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000120793s
	[INFO] 10.244.0.3:36162 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000064248s
	[INFO] 10.244.0.3:39705 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000082967s
	[INFO] 10.244.1.2:49699 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000120284s
	[INFO] 10.244.1.2:46867 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00009324s
	[INFO] 10.244.1.2:49125 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000091653s
	[INFO] 10.244.1.2:46593 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000073597s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [d5db30bce927d2bf819cbd670b88f4d1dff2155a3359b62800f301889e856470] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:54134 - 61304 "HINFO IN 5850425017373081415.7069960909407928461. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.010506641s
	
	
	==> describe nodes <==
	Name:               multinode-370772
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-370772
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ff26f50543a99741e8a988a34136ffea2b41dbf0
	                    minikube.k8s.io/name=multinode-370772
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_29T19_03_30_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 19:03:27 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-370772
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 19:14:11 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 19:10:17 +0000   Mon, 29 Jul 2024 19:03:25 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 19:10:17 +0000   Mon, 29 Jul 2024 19:03:25 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 19:10:17 +0000   Mon, 29 Jul 2024 19:03:25 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 19:10:17 +0000   Mon, 29 Jul 2024 19:03:58 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.180
	  Hostname:    multinode-370772
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 2f1e7655c85e424c98f2c0316ed4fc96
	  System UUID:                2f1e7655-c85e-424c-98f2-c0316ed4fc96
	  Boot ID:                    40af5d61-b051-4ec0-89e6-77a27c6cf00f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-6l2ht                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m32s
	  kube-system                 coredns-7db6d8ff4d-nz959                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     10m
	  kube-system                 etcd-multinode-370772                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         10m
	  kube-system                 kindnet-h6x45                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-apiserver-multinode-370772             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-multinode-370772    200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-zzfbl                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-multinode-370772             100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 10m                  kube-proxy       
	  Normal  Starting                 4m                   kube-proxy       
	  Normal  NodeHasSufficientPID     10m                  kubelet          Node multinode-370772 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10m                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  10m                  kubelet          Node multinode-370772 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m                  kubelet          Node multinode-370772 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 10m                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           10m                  node-controller  Node multinode-370772 event: Registered Node multinode-370772 in Controller
	  Normal  NodeReady                10m                  kubelet          Node multinode-370772 status is now: NodeReady
	  Normal  Starting                 4m7s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m6s (x8 over 4m6s)  kubelet          Node multinode-370772 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m6s (x8 over 4m6s)  kubelet          Node multinode-370772 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m6s (x7 over 4m6s)  kubelet          Node multinode-370772 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m6s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m49s                node-controller  Node multinode-370772 event: Registered Node multinode-370772 in Controller
	
	
	Name:               multinode-370772-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-370772-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ff26f50543a99741e8a988a34136ffea2b41dbf0
	                    minikube.k8s.io/name=multinode-370772
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_29T19_10_57_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 19:10:57 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-370772-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 19:11:59 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 29 Jul 2024 19:11:28 +0000   Mon, 29 Jul 2024 19:12:40 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 29 Jul 2024 19:11:28 +0000   Mon, 29 Jul 2024 19:12:40 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 29 Jul 2024 19:11:28 +0000   Mon, 29 Jul 2024 19:12:40 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 29 Jul 2024 19:11:28 +0000   Mon, 29 Jul 2024 19:12:40 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.127
	  Hostname:    multinode-370772-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 017dc354e7894a138c268cb45894183e
	  System UUID:                017dc354-e789-4a13-8c26-8cb45894183e
	  Boot ID:                    049c8386-8f64-4514-b571-ea81423e0505
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-grv5f    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m26s
	  kube-system                 kindnet-txzpl              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      9m53s
	  kube-system                 kube-proxy-vhc6b           0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m53s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m17s                  kube-proxy       
	  Normal  Starting                 9m47s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  9m53s (x2 over 9m53s)  kubelet          Node multinode-370772-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m53s (x2 over 9m53s)  kubelet          Node multinode-370772-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m53s (x2 over 9m53s)  kubelet          Node multinode-370772-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m53s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                9m34s                  kubelet          Node multinode-370772-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  3m22s (x2 over 3m22s)  kubelet          Node multinode-370772-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m22s (x2 over 3m22s)  kubelet          Node multinode-370772-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m22s (x2 over 3m22s)  kubelet          Node multinode-370772-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m22s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                3m4s                   kubelet          Node multinode-370772-m02 status is now: NodeReady
	  Normal  NodeNotReady             99s                    node-controller  Node multinode-370772-m02 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +0.053920] systemd-fstab-generator[616]: Ignoring "noauto" option for root device
	[  +0.181558] systemd-fstab-generator[630]: Ignoring "noauto" option for root device
	[  +0.112290] systemd-fstab-generator[642]: Ignoring "noauto" option for root device
	[  +0.249622] systemd-fstab-generator[672]: Ignoring "noauto" option for root device
	[  +4.034721] systemd-fstab-generator[769]: Ignoring "noauto" option for root device
	[  +4.060386] systemd-fstab-generator[950]: Ignoring "noauto" option for root device
	[  +0.060700] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.498371] systemd-fstab-generator[1277]: Ignoring "noauto" option for root device
	[  +0.073242] kauditd_printk_skb: 69 callbacks suppressed
	[ +14.624140] systemd-fstab-generator[1475]: Ignoring "noauto" option for root device
	[  +0.119350] kauditd_printk_skb: 21 callbacks suppressed
	[ +14.208734] kauditd_printk_skb: 60 callbacks suppressed
	[Jul29 19:04] kauditd_printk_skb: 12 callbacks suppressed
	[Jul29 19:10] systemd-fstab-generator[2784]: Ignoring "noauto" option for root device
	[  +0.152318] systemd-fstab-generator[2796]: Ignoring "noauto" option for root device
	[  +0.200146] systemd-fstab-generator[2810]: Ignoring "noauto" option for root device
	[  +0.153129] systemd-fstab-generator[2822]: Ignoring "noauto" option for root device
	[  +0.305855] systemd-fstab-generator[2852]: Ignoring "noauto" option for root device
	[  +7.551905] systemd-fstab-generator[2952]: Ignoring "noauto" option for root device
	[  +0.082833] kauditd_printk_skb: 100 callbacks suppressed
	[  +1.810222] systemd-fstab-generator[3075]: Ignoring "noauto" option for root device
	[  +5.660343] kauditd_printk_skb: 74 callbacks suppressed
	[ +11.451749] kauditd_printk_skb: 32 callbacks suppressed
	[  +3.253840] systemd-fstab-generator[3917]: Ignoring "noauto" option for root device
	[ +19.141114] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [765c9a17f9f9c845e7c67699aa93befeed4366aca86cf47f51dd6931cda3fb33] <==
	{"level":"info","ts":"2024-07-29T19:10:14.421613Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-29T19:10:14.425288Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-29T19:10:14.421529Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b38c55c42a3b698 switched to configuration voters=(808613133158692504)"}
	{"level":"info","ts":"2024-07-29T19:10:14.42563Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"5a7d3c553a64e690","local-member-id":"b38c55c42a3b698","added-peer-id":"b38c55c42a3b698","added-peer-peer-urls":["https://192.168.39.180:2380"]}
	{"level":"info","ts":"2024-07-29T19:10:14.425896Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"5a7d3c553a64e690","local-member-id":"b38c55c42a3b698","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T19:10:14.428402Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T19:10:14.426913Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-29T19:10:14.426935Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.180:2380"}
	{"level":"info","ts":"2024-07-29T19:10:14.430415Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.180:2380"}
	{"level":"info","ts":"2024-07-29T19:10:14.431478Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"b38c55c42a3b698","initial-advertise-peer-urls":["https://192.168.39.180:2380"],"listen-peer-urls":["https://192.168.39.180:2380"],"advertise-client-urls":["https://192.168.39.180:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.180:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-29T19:10:14.43153Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-29T19:10:15.737398Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b38c55c42a3b698 is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-29T19:10:15.737454Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b38c55c42a3b698 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-29T19:10:15.737506Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b38c55c42a3b698 received MsgPreVoteResp from b38c55c42a3b698 at term 2"}
	{"level":"info","ts":"2024-07-29T19:10:15.737528Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b38c55c42a3b698 became candidate at term 3"}
	{"level":"info","ts":"2024-07-29T19:10:15.737533Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b38c55c42a3b698 received MsgVoteResp from b38c55c42a3b698 at term 3"}
	{"level":"info","ts":"2024-07-29T19:10:15.737551Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b38c55c42a3b698 became leader at term 3"}
	{"level":"info","ts":"2024-07-29T19:10:15.73758Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b38c55c42a3b698 elected leader b38c55c42a3b698 at term 3"}
	{"level":"info","ts":"2024-07-29T19:10:15.742985Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T19:10:15.742939Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"b38c55c42a3b698","local-member-attributes":"{Name:multinode-370772 ClientURLs:[https://192.168.39.180:2379]}","request-path":"/0/members/b38c55c42a3b698/attributes","cluster-id":"5a7d3c553a64e690","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-29T19:10:15.744331Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T19:10:15.744556Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-29T19:10:15.744591Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-29T19:10:15.745182Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.180:2379"}
	{"level":"info","ts":"2024-07-29T19:10:15.746224Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> etcd [96d2ade2d0aaa7df83af0a2a9958d310822e925fee9165d886c937def865afce] <==
	{"level":"info","ts":"2024-07-29T19:03:25.294453Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T19:03:25.295919Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.180:2379"}
	{"level":"info","ts":"2024-07-29T19:03:25.307289Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"warn","ts":"2024-07-29T19:04:26.82617Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"200.005278ms","expected-duration":"100ms","prefix":"","request":"header:<ID:13157425740016295527 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/multinode-370772-m02.17e6c469e2c71d7a\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/multinode-370772-m02.17e6c469e2c71d7a\" value_size:646 lease:3934053703161519156 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2024-07-29T19:04:26.826464Z","caller":"traceutil/trace.go:171","msg":"trace[1051897117] transaction","detail":"{read_only:false; response_revision:440; number_of_response:1; }","duration":"156.332217ms","start":"2024-07-29T19:04:26.670105Z","end":"2024-07-29T19:04:26.826438Z","steps":["trace[1051897117] 'process raft request'  (duration: 156.286754ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-29T19:04:26.826615Z","caller":"traceutil/trace.go:171","msg":"trace[1619266642] linearizableReadLoop","detail":"{readStateIndex:459; appliedIndex:458; }","duration":"231.032561ms","start":"2024-07-29T19:04:26.595546Z","end":"2024-07-29T19:04:26.826579Z","steps":["trace[1619266642] 'read index received'  (duration: 29.987072ms)","trace[1619266642] 'applied index is now lower than readState.Index'  (duration: 201.044031ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-29T19:04:26.826686Z","caller":"traceutil/trace.go:171","msg":"trace[2058037069] transaction","detail":"{read_only:false; response_revision:439; number_of_response:1; }","duration":"233.726298ms","start":"2024-07-29T19:04:26.592949Z","end":"2024-07-29T19:04:26.826675Z","steps":["trace[2058037069] 'process raft request'  (duration: 32.622288ms)","trace[2058037069] 'compare'  (duration: 199.80316ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-29T19:04:26.826995Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"231.443036ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-370772-m02\" ","response":"range_response_count:1 size:1926"}
	{"level":"info","ts":"2024-07-29T19:04:26.827044Z","caller":"traceutil/trace.go:171","msg":"trace[1256018404] range","detail":"{range_begin:/registry/minions/multinode-370772-m02; range_end:; response_count:1; response_revision:440; }","duration":"231.526652ms","start":"2024-07-29T19:04:26.595509Z","end":"2024-07-29T19:04:26.827036Z","steps":["trace[1256018404] 'agreement among raft nodes before linearized reading'  (duration: 231.447693ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-29T19:05:20.084378Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"133.54965ms","expected-duration":"100ms","prefix":"","request":"header:<ID:13157425740016295966 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/multinode-370772-m03.17e6c47648f2c9c4\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/multinode-370772-m03.17e6c47648f2c9c4\" value_size:646 lease:3934053703161519803 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2024-07-29T19:05:20.084579Z","caller":"traceutil/trace.go:171","msg":"trace[799750307] linearizableReadLoop","detail":"{readStateIndex:609; appliedIndex:607; }","duration":"150.096656ms","start":"2024-07-29T19:05:19.934451Z","end":"2024-07-29T19:05:20.084548Z","steps":["trace[799750307] 'read index received'  (duration: 16.311551ms)","trace[799750307] 'applied index is now lower than readState.Index'  (duration: 133.784481ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-29T19:05:20.084661Z","caller":"traceutil/trace.go:171","msg":"trace[1611532323] transaction","detail":"{read_only:false; response_revision:576; number_of_response:1; }","duration":"187.152723ms","start":"2024-07-29T19:05:19.897499Z","end":"2024-07-29T19:05:20.084652Z","steps":["trace[1611532323] 'process raft request'  (duration: 186.993905ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-29T19:05:20.084838Z","caller":"traceutil/trace.go:171","msg":"trace[1858634684] transaction","detail":"{read_only:false; response_revision:575; number_of_response:1; }","duration":"239.1928ms","start":"2024-07-29T19:05:19.845631Z","end":"2024-07-29T19:05:20.084824Z","steps":["trace[1858634684] 'process raft request'  (duration: 105.123203ms)","trace[1858634684] 'compare'  (duration: 133.379279ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-29T19:05:20.084847Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"150.379062ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/\" range_end:\"/registry/leases0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-07-29T19:05:20.085011Z","caller":"traceutil/trace.go:171","msg":"trace[1179851407] range","detail":"{range_begin:/registry/leases/; range_end:/registry/leases0; response_count:0; response_revision:576; }","duration":"150.58537ms","start":"2024-07-29T19:05:19.934417Z","end":"2024-07-29T19:05:20.085003Z","steps":["trace[1179851407] 'agreement among raft nodes before linearized reading'  (duration: 150.347497ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-29T19:08:31.2264Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-07-29T19:08:31.226519Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"multinode-370772","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.180:2380"],"advertise-client-urls":["https://192.168.39.180:2379"]}
	{"level":"warn","ts":"2024-07-29T19:08:31.226651Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-29T19:08:31.226738Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-29T19:08:31.326611Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.180:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-29T19:08:31.326666Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.180:2379: use of closed network connection"}
	{"level":"info","ts":"2024-07-29T19:08:31.326728Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"b38c55c42a3b698","current-leader-member-id":"b38c55c42a3b698"}
	{"level":"info","ts":"2024-07-29T19:08:31.328886Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.180:2380"}
	{"level":"info","ts":"2024-07-29T19:08:31.329037Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.180:2380"}
	{"level":"info","ts":"2024-07-29T19:08:31.329065Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"multinode-370772","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.180:2380"],"advertise-client-urls":["https://192.168.39.180:2379"]}
	
	
	==> kernel <==
	 19:14:19 up 11 min,  0 users,  load average: 0.08, 0.16, 0.09
	Linux multinode-370772 5.10.207 #1 SMP Tue Jul 23 04:25:44 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [268098a6ccd0d51de2ae99f3fd4d621ce76d79d03d18bfdffaea2ab59357fc08] <==
	I0729 19:13:09.830171       1 main.go:322] Node multinode-370772-m02 has CIDR [10.244.1.0/24] 
	I0729 19:13:19.822130       1 main.go:295] Handling node with IPs: map[192.168.39.180:{}]
	I0729 19:13:19.822174       1 main.go:299] handling current node
	I0729 19:13:19.822194       1 main.go:295] Handling node with IPs: map[192.168.39.127:{}]
	I0729 19:13:19.822200       1 main.go:322] Node multinode-370772-m02 has CIDR [10.244.1.0/24] 
	I0729 19:13:29.826705       1 main.go:295] Handling node with IPs: map[192.168.39.180:{}]
	I0729 19:13:29.826788       1 main.go:299] handling current node
	I0729 19:13:29.826811       1 main.go:295] Handling node with IPs: map[192.168.39.127:{}]
	I0729 19:13:29.826819       1 main.go:322] Node multinode-370772-m02 has CIDR [10.244.1.0/24] 
	I0729 19:13:39.821653       1 main.go:295] Handling node with IPs: map[192.168.39.180:{}]
	I0729 19:13:39.821813       1 main.go:299] handling current node
	I0729 19:13:39.821854       1 main.go:295] Handling node with IPs: map[192.168.39.127:{}]
	I0729 19:13:39.821860       1 main.go:322] Node multinode-370772-m02 has CIDR [10.244.1.0/24] 
	I0729 19:13:49.822326       1 main.go:295] Handling node with IPs: map[192.168.39.180:{}]
	I0729 19:13:49.822616       1 main.go:299] handling current node
	I0729 19:13:49.822658       1 main.go:295] Handling node with IPs: map[192.168.39.127:{}]
	I0729 19:13:49.822679       1 main.go:322] Node multinode-370772-m02 has CIDR [10.244.1.0/24] 
	I0729 19:13:59.830129       1 main.go:295] Handling node with IPs: map[192.168.39.180:{}]
	I0729 19:13:59.830297       1 main.go:299] handling current node
	I0729 19:13:59.830353       1 main.go:295] Handling node with IPs: map[192.168.39.127:{}]
	I0729 19:13:59.830360       1 main.go:322] Node multinode-370772-m02 has CIDR [10.244.1.0/24] 
	I0729 19:14:09.829423       1 main.go:295] Handling node with IPs: map[192.168.39.180:{}]
	I0729 19:14:09.829590       1 main.go:299] handling current node
	I0729 19:14:09.829623       1 main.go:295] Handling node with IPs: map[192.168.39.127:{}]
	I0729 19:14:09.829646       1 main.go:322] Node multinode-370772-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kindnet [b166de409b40287b11b80f1b14461bce3d61644be1a725239c9617ce590910a6] <==
	I0729 19:07:47.922825       1 main.go:322] Node multinode-370772-m03 has CIDR [10.244.3.0/24] 
	I0729 19:07:57.923414       1 main.go:295] Handling node with IPs: map[192.168.39.180:{}]
	I0729 19:07:57.923517       1 main.go:299] handling current node
	I0729 19:07:57.923567       1 main.go:295] Handling node with IPs: map[192.168.39.127:{}]
	I0729 19:07:57.923572       1 main.go:322] Node multinode-370772-m02 has CIDR [10.244.1.0/24] 
	I0729 19:07:57.923781       1 main.go:295] Handling node with IPs: map[192.168.39.8:{}]
	I0729 19:07:57.923805       1 main.go:322] Node multinode-370772-m03 has CIDR [10.244.3.0/24] 
	I0729 19:08:07.929945       1 main.go:295] Handling node with IPs: map[192.168.39.180:{}]
	I0729 19:08:07.930095       1 main.go:299] handling current node
	I0729 19:08:07.930140       1 main.go:295] Handling node with IPs: map[192.168.39.127:{}]
	I0729 19:08:07.930161       1 main.go:322] Node multinode-370772-m02 has CIDR [10.244.1.0/24] 
	I0729 19:08:07.930365       1 main.go:295] Handling node with IPs: map[192.168.39.8:{}]
	I0729 19:08:07.930392       1 main.go:322] Node multinode-370772-m03 has CIDR [10.244.3.0/24] 
	I0729 19:08:17.931034       1 main.go:295] Handling node with IPs: map[192.168.39.180:{}]
	I0729 19:08:17.931196       1 main.go:299] handling current node
	I0729 19:08:17.931306       1 main.go:295] Handling node with IPs: map[192.168.39.127:{}]
	I0729 19:08:17.931397       1 main.go:322] Node multinode-370772-m02 has CIDR [10.244.1.0/24] 
	I0729 19:08:17.931529       1 main.go:295] Handling node with IPs: map[192.168.39.8:{}]
	I0729 19:08:17.931618       1 main.go:322] Node multinode-370772-m03 has CIDR [10.244.3.0/24] 
	I0729 19:08:27.930590       1 main.go:295] Handling node with IPs: map[192.168.39.180:{}]
	I0729 19:08:27.930692       1 main.go:299] handling current node
	I0729 19:08:27.930736       1 main.go:295] Handling node with IPs: map[192.168.39.127:{}]
	I0729 19:08:27.930755       1 main.go:322] Node multinode-370772-m02 has CIDR [10.244.1.0/24] 
	I0729 19:08:27.930920       1 main.go:295] Handling node with IPs: map[192.168.39.8:{}]
	I0729 19:08:27.930978       1 main.go:322] Node multinode-370772-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [27450337c36d2565080914b6b1c2595886eedb69670bf62c8b53a4389b6fc2d8] <==
	E0729 19:04:51.351360       1 conn.go:339] Error on socket receive: read tcp 192.168.39.180:8443->192.168.39.1:55156: use of closed network connection
	I0729 19:08:31.236700       1 controller.go:128] Shutting down kubernetes service endpoint reconciler
	E0729 19:08:31.243945       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0729 19:08:31.244091       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0729 19:08:31.243927       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0729 19:08:31.244112       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0729 19:08:31.244157       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0729 19:08:31.245135       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0729 19:08:31.254841       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0729 19:08:31.254932       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0729 19:08:31.254970       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0729 19:08:31.255059       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0729 19:08:31.255293       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0729 19:08:31.255869       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0729 19:08:31.256937       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0729 19:08:31.258536       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0729 19:08:31.258943       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0729 19:08:31.258982       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0729 19:08:31.259011       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0729 19:08:31.259041       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0729 19:08:31.259070       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0729 19:08:31.259097       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0729 19:08:31.259122       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0729 19:08:31.260305       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0729 19:08:31.260416       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	
	
	==> kube-apiserver [ea775c68cb9d2023cc148bf66a598c5e8c29175277e5d0c301bf3e038e4c2d65] <==
	I0729 19:10:17.039990       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0729 19:10:17.041085       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0729 19:10:17.058998       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0729 19:10:17.060414       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0729 19:10:17.060648       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0729 19:10:17.064275       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0729 19:10:17.064321       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0729 19:10:17.064458       1 shared_informer.go:320] Caches are synced for configmaps
	I0729 19:10:17.068302       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0729 19:10:17.068345       1 policy_source.go:224] refreshing policies
	I0729 19:10:17.075299       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0729 19:10:17.076894       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0729 19:10:17.078060       1 aggregator.go:165] initial CRD sync complete...
	I0729 19:10:17.078160       1 autoregister_controller.go:141] Starting autoregister controller
	I0729 19:10:17.078192       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0729 19:10:17.078215       1 cache.go:39] Caches are synced for autoregister controller
	E0729 19:10:17.108285       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0729 19:10:17.944300       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0729 19:10:19.408731       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0729 19:10:19.524092       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0729 19:10:19.548205       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0729 19:10:19.611671       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0729 19:10:19.617925       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0729 19:10:29.835760       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0729 19:10:29.880822       1 controller.go:615] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [32d758bd7641c5a70e44d51b46ecbefa08d470dcf62f4faf7df1c4e156e2c43a] <==
	I0729 19:04:26.844755       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-370772-m02" podCIDRs=["10.244.1.0/24"]
	I0729 19:04:28.849704       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-370772-m02"
	I0729 19:04:45.050671       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-370772-m02"
	I0729 19:04:47.168364       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="55.247879ms"
	I0729 19:04:47.184676       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="16.109306ms"
	I0729 19:04:47.185099       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="42.739µs"
	I0729 19:04:47.185647       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="20.32µs"
	I0729 19:04:47.186304       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="80.566µs"
	I0729 19:04:48.687736       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="6.750633ms"
	I0729 19:04:48.689520       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="38.569µs"
	I0729 19:04:49.232208       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="4.86866ms"
	I0729 19:04:49.232327       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="29.272µs"
	I0729 19:05:20.089529       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-370772-m03\" does not exist"
	I0729 19:05:20.090666       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-370772-m02"
	I0729 19:05:20.144117       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-370772-m03" podCIDRs=["10.244.2.0/24"]
	I0729 19:05:23.869179       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-370772-m03"
	I0729 19:05:38.495183       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-370772-m02"
	I0729 19:06:06.792618       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-370772-m02"
	I0729 19:06:07.954712       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-370772-m02"
	I0729 19:06:07.955523       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-370772-m03\" does not exist"
	I0729 19:06:07.963283       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-370772-m03" podCIDRs=["10.244.3.0/24"]
	I0729 19:06:25.670788       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-370772-m03"
	I0729 19:07:08.924754       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-370772-m02"
	I0729 19:07:14.015486       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="9.886458ms"
	I0729 19:07:14.015687       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="36.628µs"
	
	
	==> kube-controller-manager [f59a71174cf945be2c931a9e645a3105ce1b2581f75dd8a830877c6ac5037a18] <==
	I0729 19:10:57.582618       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-370772-m02" podCIDRs=["10.244.1.0/24"]
	I0729 19:10:59.450144       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="49.84µs"
	I0729 19:10:59.471722       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="51.212µs"
	I0729 19:10:59.483622       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="41.756µs"
	I0729 19:10:59.507309       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="39.447µs"
	I0729 19:10:59.513670       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="34.201µs"
	I0729 19:10:59.518833       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="39.387µs"
	I0729 19:11:00.829471       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="51.313µs"
	I0729 19:11:15.451420       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-370772-m02"
	I0729 19:11:15.465916       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="51.653µs"
	I0729 19:11:15.478906       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="43.72µs"
	I0729 19:11:17.098309       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="7.931761ms"
	I0729 19:11:17.099833       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="47.505µs"
	I0729 19:11:33.682337       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-370772-m02"
	I0729 19:11:34.983851       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-370772-m03\" does not exist"
	I0729 19:11:34.986306       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-370772-m02"
	I0729 19:11:34.995202       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-370772-m03" podCIDRs=["10.244.2.0/24"]
	I0729 19:11:52.548405       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-370772-m02"
	I0729 19:11:57.891174       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-370772-m02"
	I0729 19:12:40.203173       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="10.256198ms"
	I0729 19:12:40.203562       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="119.066µs"
	I0729 19:12:49.907479       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-99pr7"
	I0729 19:12:49.928548       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-99pr7"
	I0729 19:12:49.928748       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-9n7cj"
	I0729 19:12:49.947278       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-9n7cj"
	
	
	==> kube-proxy [4944b9573bde78fe5eaf9e5ec0ad98fced5a293c191668dd51995256cd8d3582] <==
	I0729 19:03:45.035417       1 server_linux.go:69] "Using iptables proxy"
	I0729 19:03:45.094930       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.180"]
	I0729 19:03:45.318797       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0729 19:03:45.318837       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0729 19:03:45.318853       1 server_linux.go:165] "Using iptables Proxier"
	I0729 19:03:45.323815       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0729 19:03:45.324169       1 server.go:872] "Version info" version="v1.30.3"
	I0729 19:03:45.324472       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 19:03:45.326781       1 config.go:192] "Starting service config controller"
	I0729 19:03:45.327795       1 config.go:101] "Starting endpoint slice config controller"
	I0729 19:03:45.327924       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0729 19:03:45.328783       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0729 19:03:45.333509       1 shared_informer.go:320] Caches are synced for service config
	I0729 19:03:45.329409       1 config.go:319] "Starting node config controller"
	I0729 19:03:45.339102       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0729 19:03:45.339109       1 shared_informer.go:320] Caches are synced for node config
	I0729 19:03:45.429647       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [da78244f3f52e1f5b4d3af779690e2fabc289355b16c6706defffcd97313591b] <==
	I0729 19:10:18.869943       1 server_linux.go:69] "Using iptables proxy"
	I0729 19:10:18.893464       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.180"]
	I0729 19:10:18.999368       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0729 19:10:18.999421       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0729 19:10:18.999439       1 server_linux.go:165] "Using iptables Proxier"
	I0729 19:10:19.005646       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0729 19:10:19.005873       1 server.go:872] "Version info" version="v1.30.3"
	I0729 19:10:19.005886       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 19:10:19.016643       1 config.go:192] "Starting service config controller"
	I0729 19:10:19.016673       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0729 19:10:19.016700       1 config.go:101] "Starting endpoint slice config controller"
	I0729 19:10:19.016704       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0729 19:10:19.017112       1 config.go:319] "Starting node config controller"
	I0729 19:10:19.017118       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0729 19:10:19.117769       1 shared_informer.go:320] Caches are synced for node config
	I0729 19:10:19.117809       1 shared_informer.go:320] Caches are synced for service config
	I0729 19:10:19.117832       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [30a85e5c7caf06d03ee41c455bc520b59f5bd6c3c80de77cf2bacb8b5abacde3] <==
	E0729 19:03:27.272917       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0729 19:03:27.273023       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0729 19:03:27.273053       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0729 19:03:28.103972       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0729 19:03:28.104127       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0729 19:03:28.267509       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0729 19:03:28.267593       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0729 19:03:28.269209       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0729 19:03:28.269363       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0729 19:03:28.300741       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0729 19:03:28.300786       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0729 19:03:28.319360       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0729 19:03:28.319456       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0729 19:03:28.345779       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0729 19:03:28.345896       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0729 19:03:28.408402       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0729 19:03:28.408451       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0729 19:03:28.431153       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0729 19:03:28.431272       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0729 19:03:28.442652       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0729 19:03:28.442754       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0729 19:03:28.487407       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0729 19:03:28.487928       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0729 19:03:30.953426       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0729 19:08:31.232159       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [e12bd9484ce29cfcde0e840b7dc6523157dc92fffa04df50346d0608ab8faaf5] <==
	I0729 19:10:14.952356       1 serving.go:380] Generated self-signed cert in-memory
	W0729 19:10:17.005640       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0729 19:10:17.005717       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0729 19:10:17.005745       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0729 19:10:17.005770       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0729 19:10:17.058837       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0729 19:10:17.058924       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 19:10:17.062589       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0729 19:10:17.063335       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0729 19:10:17.064059       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0729 19:10:17.064560       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0729 19:10:17.163725       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 29 19:10:18 multinode-370772 kubelet[3082]: I0729 19:10:18.068187    3082 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/98b96d50-7bc4-4e38-a093-ee0d26a7db01-xtables-lock\") pod \"kube-proxy-zzfbl\" (UID: \"98b96d50-7bc4-4e38-a093-ee0d26a7db01\") " pod="kube-system/kube-proxy-zzfbl"
	Jul 29 19:10:18 multinode-370772 kubelet[3082]: I0729 19:10:18.068290    3082 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4a210787-5503-4b35-899f-53cc15e43d4b-xtables-lock\") pod \"kindnet-h6x45\" (UID: \"4a210787-5503-4b35-899f-53cc15e43d4b\") " pod="kube-system/kindnet-h6x45"
	Jul 29 19:10:18 multinode-370772 kubelet[3082]: I0729 19:10:18.069085    3082 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4a210787-5503-4b35-899f-53cc15e43d4b-lib-modules\") pod \"kindnet-h6x45\" (UID: \"4a210787-5503-4b35-899f-53cc15e43d4b\") " pod="kube-system/kindnet-h6x45"
	Jul 29 19:10:18 multinode-370772 kubelet[3082]: I0729 19:10:18.069202    3082 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/de00f063-7d28-45e2-aa3a-39b8e8084dc8-tmp\") pod \"storage-provisioner\" (UID: \"de00f063-7d28-45e2-aa3a-39b8e8084dc8\") " pod="kube-system/storage-provisioner"
	Jul 29 19:10:18 multinode-370772 kubelet[3082]: I0729 19:10:18.069453    3082 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/98b96d50-7bc4-4e38-a093-ee0d26a7db01-lib-modules\") pod \"kube-proxy-zzfbl\" (UID: \"98b96d50-7bc4-4e38-a093-ee0d26a7db01\") " pod="kube-system/kube-proxy-zzfbl"
	Jul 29 19:11:13 multinode-370772 kubelet[3082]: E0729 19:11:13.054377    3082 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 19:11:13 multinode-370772 kubelet[3082]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 19:11:13 multinode-370772 kubelet[3082]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 19:11:13 multinode-370772 kubelet[3082]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 19:11:13 multinode-370772 kubelet[3082]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 19:12:13 multinode-370772 kubelet[3082]: E0729 19:12:13.054422    3082 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 19:12:13 multinode-370772 kubelet[3082]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 19:12:13 multinode-370772 kubelet[3082]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 19:12:13 multinode-370772 kubelet[3082]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 19:12:13 multinode-370772 kubelet[3082]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 19:13:13 multinode-370772 kubelet[3082]: E0729 19:13:13.055296    3082 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 19:13:13 multinode-370772 kubelet[3082]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 19:13:13 multinode-370772 kubelet[3082]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 19:13:13 multinode-370772 kubelet[3082]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 19:13:13 multinode-370772 kubelet[3082]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 19:14:13 multinode-370772 kubelet[3082]: E0729 19:14:13.055406    3082 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 19:14:13 multinode-370772 kubelet[3082]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 19:14:13 multinode-370772 kubelet[3082]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 19:14:13 multinode-370772 kubelet[3082]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 19:14:13 multinode-370772 kubelet[3082]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0729 19:14:18.565267 1093228 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19312-1055011/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-370772 -n multinode-370772
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-370772 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/StopMultiNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/StopMultiNode (141.31s)
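Editor's note on the stderr line "failed to read file .../lastStart.txt: bufio.Scanner: token too long" above: that is the stock error Go's bufio.Scanner returns when one line exceeds its default 64 KiB token limit, which is consistent with lastStart.txt containing a single very long line. The sketch below is illustrative only — it is not minikube's code and the file path is a hypothetical stand-in — and simply shows the usual way such a file can still be read line by line, by enlarging the scanner's buffer with (*bufio.Scanner).Buffer.

	package main

	import (
		"bufio"
		"fmt"
		"os"
	)

	func main() {
		// Hypothetical path for illustration; the file named in the log lives
		// under the Jenkins minikube-integration home.
		f, err := os.Open("lastStart.txt")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		defer f.Close()

		sc := bufio.NewScanner(f)
		// The default token limit is bufio.MaxScanTokenSize (64 KiB); allow
		// tokens up to 10 MiB so one oversized line does not abort the scan
		// with "bufio.Scanner: token too long".
		sc.Buffer(make([]byte, 0, 64*1024), 10*1024*1024)
		for sc.Scan() {
			fmt.Println(sc.Text())
		}
		if err := sc.Err(); err != nil {
			fmt.Fprintln(os.Stderr, "scan error:", err)
		}
	}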

                                                
                                    
x
+
TestPreload (273.18s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-337810 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
E0729 19:20:17.183293 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/addons-685520/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-337810 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (2m11.988027854s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-337810 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-337810 image pull gcr.io/k8s-minikube/busybox: (1.039972956s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-337810
E0729 19:20:34.137087 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/addons-685520/client.crt: no such file or directory
preload_test.go:58: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p test-preload-337810: exit status 82 (2m0.454067401s)

                                                
                                                
-- stdout --
	* Stopping node "test-preload-337810"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
preload_test.go:60: out/minikube-linux-amd64 stop -p test-preload-337810 failed: exit status 82
panic.go:626: *** TestPreload FAILED at 2024-07-29 19:22:23.216903434 +0000 UTC m=+3915.451746820
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-337810 -n test-preload-337810
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-337810 -n test-preload-337810: exit status 3 (18.570651114s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0729 19:22:41.783198 1096574 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.244:22: connect: no route to host
	E0729 19:22:41.783220 1096574 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.244:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "test-preload-337810" host is not running, skipping log retrieval (state="Error")
helpers_test.go:175: Cleaning up "test-preload-337810" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-337810
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-337810: (1.129895067s)
--- FAIL: TestPreload (273.18s)
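Editor's note: the failure above follows a clear sequence — the cluster starts and pulls the busybox image, "minikube stop" then exceeds its stop timeout and exits with status 82 (GUEST_STOP_TIMEOUT) while the VM is still reported "Running", and the follow-up status check cannot reach SSH at all (no route to host on 192.168.39.244:22). Below is a minimal sketch of reproducing that start/stop sequence outside the suite, using only commands and flags that appear verbatim in the log; it is not preload_test.go itself, and the explicit timeout wrapped around "stop" is an added assumption so a hung guest surfaces as a context deadline rather than blocking indefinitely.

	package main

	import (
		"context"
		"fmt"
		"os"
		"os/exec"
		"time"
	)

	// run shells out to the same minikube binary the report invokes.
	func run(ctx context.Context, args ...string) error {
		cmd := exec.CommandContext(ctx, "out/minikube-linux-amd64", args...)
		cmd.Stdout = os.Stdout
		cmd.Stderr = os.Stderr
		return cmd.Run()
	}

	func main() {
		profile := "test-preload-337810" // profile name taken from the log above

		if err := run(context.Background(), "start", "-p", profile,
			"--memory=2200", "--preload=false", "--driver=kvm2",
			"--container-runtime=crio", "--kubernetes-version=v1.24.4"); err != nil {
			fmt.Fprintln(os.Stderr, "start failed:", err)
			os.Exit(1)
		}

		// The failed run spent ~2 minutes in stop before exiting with status 82;
		// bounding the call makes a hang explicit instead of open-ended.
		ctx, cancel := context.WithTimeout(context.Background(), 3*time.Minute)
		defer cancel()
		if err := run(ctx, "stop", "-p", profile); err != nil {
			fmt.Fprintln(os.Stderr, "stop failed:", err)
		}
	}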

                                                
                                    
x
+
TestKubernetesUpgrade (375.97s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-261955 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-261955 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109 (4m31.371882737s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-261955] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19312
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19312-1055011/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19312-1055011/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "kubernetes-upgrade-261955" primary control-plane node in "kubernetes-upgrade-261955" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 19:24:37.324139 1097679 out.go:291] Setting OutFile to fd 1 ...
	I0729 19:24:37.324307 1097679 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 19:24:37.324318 1097679 out.go:304] Setting ErrFile to fd 2...
	I0729 19:24:37.324324 1097679 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 19:24:37.324642 1097679 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19312-1055011/.minikube/bin
	I0729 19:24:37.325407 1097679 out.go:298] Setting JSON to false
	I0729 19:24:37.326804 1097679 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":11229,"bootTime":1722269848,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 19:24:37.326899 1097679 start.go:139] virtualization: kvm guest
	I0729 19:24:37.329232 1097679 out.go:177] * [kubernetes-upgrade-261955] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 19:24:37.331087 1097679 notify.go:220] Checking for updates...
	I0729 19:24:37.331810 1097679 out.go:177]   - MINIKUBE_LOCATION=19312
	I0729 19:24:37.333906 1097679 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 19:24:37.336753 1097679 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19312-1055011/kubeconfig
	I0729 19:24:37.339080 1097679 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19312-1055011/.minikube
	I0729 19:24:37.341517 1097679 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 19:24:37.343501 1097679 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 19:24:37.344794 1097679 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 19:24:37.383593 1097679 out.go:177] * Using the kvm2 driver based on user configuration
	I0729 19:24:37.384713 1097679 start.go:297] selected driver: kvm2
	I0729 19:24:37.384733 1097679 start.go:901] validating driver "kvm2" against <nil>
	I0729 19:24:37.384747 1097679 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 19:24:37.385668 1097679 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 19:24:37.385736 1097679 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19312-1055011/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 19:24:37.401608 1097679 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0729 19:24:37.401667 1097679 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 19:24:37.401990 1097679 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0729 19:24:37.402025 1097679 cni.go:84] Creating CNI manager for ""
	I0729 19:24:37.402042 1097679 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 19:24:37.402057 1097679 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 19:24:37.402131 1097679 start.go:340] cluster config:
	{Name:kubernetes-upgrade-261955 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-261955 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.
local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 19:24:37.402298 1097679 iso.go:125] acquiring lock: {Name:mk0af61c0fec1fd47930e548d03010a532c687b7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 19:24:37.403904 1097679 out.go:177] * Starting "kubernetes-upgrade-261955" primary control-plane node in "kubernetes-upgrade-261955" cluster
	I0729 19:24:37.405325 1097679 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0729 19:24:37.405370 1097679 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0729 19:24:37.405382 1097679 cache.go:56] Caching tarball of preloaded images
	I0729 19:24:37.405473 1097679 preload.go:172] Found /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 19:24:37.405486 1097679 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0729 19:24:37.405892 1097679 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/kubernetes-upgrade-261955/config.json ...
	I0729 19:24:37.405920 1097679 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/kubernetes-upgrade-261955/config.json: {Name:mk14dd749962e4836f667088e9ef74d6325c053e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 19:24:37.406069 1097679 start.go:360] acquireMachinesLock for kubernetes-upgrade-261955: {Name:mk0d8d947666df844b5fc2c0e0eebbfed69b4140 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 19:24:37.406103 1097679 start.go:364] duration metric: took 17.622µs to acquireMachinesLock for "kubernetes-upgrade-261955"
	I0729 19:24:37.406133 1097679 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-261955 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-261955 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableO
ptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 19:24:37.406180 1097679 start.go:125] createHost starting for "" (driver="kvm2")
	I0729 19:24:37.407623 1097679 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 19:24:37.407746 1097679 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:24:37.407781 1097679 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:24:37.422658 1097679 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35069
	I0729 19:24:37.423162 1097679 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:24:37.423805 1097679 main.go:141] libmachine: Using API Version  1
	I0729 19:24:37.423828 1097679 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:24:37.424188 1097679 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:24:37.424370 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) Calling .GetMachineName
	I0729 19:24:37.424537 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) Calling .DriverName
	I0729 19:24:37.424720 1097679 start.go:159] libmachine.API.Create for "kubernetes-upgrade-261955" (driver="kvm2")
	I0729 19:24:37.424748 1097679 client.go:168] LocalClient.Create starting
	I0729 19:24:37.424782 1097679 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem
	I0729 19:24:37.424828 1097679 main.go:141] libmachine: Decoding PEM data...
	I0729 19:24:37.424851 1097679 main.go:141] libmachine: Parsing certificate...
	I0729 19:24:37.424920 1097679 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/cert.pem
	I0729 19:24:37.424951 1097679 main.go:141] libmachine: Decoding PEM data...
	I0729 19:24:37.424967 1097679 main.go:141] libmachine: Parsing certificate...
	I0729 19:24:37.424992 1097679 main.go:141] libmachine: Running pre-create checks...
	I0729 19:24:37.425010 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) Calling .PreCreateCheck
	I0729 19:24:37.425332 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) Calling .GetConfigRaw
	I0729 19:24:37.425735 1097679 main.go:141] libmachine: Creating machine...
	I0729 19:24:37.425751 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) Calling .Create
	I0729 19:24:37.425876 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) Creating KVM machine...
	I0729 19:24:37.427205 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) DBG | found existing default KVM network
	I0729 19:24:37.428100 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) DBG | I0729 19:24:37.427931 1097753 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00010f1c0}
	I0729 19:24:37.428140 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) DBG | created network xml: 
	I0729 19:24:37.428154 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) DBG | <network>
	I0729 19:24:37.428167 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) DBG |   <name>mk-kubernetes-upgrade-261955</name>
	I0729 19:24:37.428177 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) DBG |   <dns enable='no'/>
	I0729 19:24:37.428187 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) DBG |   
	I0729 19:24:37.428204 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0729 19:24:37.428219 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) DBG |     <dhcp>
	I0729 19:24:37.428230 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0729 19:24:37.428241 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) DBG |     </dhcp>
	I0729 19:24:37.428250 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) DBG |   </ip>
	I0729 19:24:37.428260 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) DBG |   
	I0729 19:24:37.428272 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) DBG | </network>
	I0729 19:24:37.428282 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) DBG | 
	I0729 19:24:37.432955 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) DBG | trying to create private KVM network mk-kubernetes-upgrade-261955 192.168.39.0/24...
	I0729 19:24:37.500370 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) DBG | private KVM network mk-kubernetes-upgrade-261955 192.168.39.0/24 created
	I0729 19:24:37.500399 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) DBG | I0729 19:24:37.500344 1097753 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19312-1055011/.minikube
	I0729 19:24:37.500432 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) Setting up store path in /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/kubernetes-upgrade-261955 ...
	I0729 19:24:37.500450 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) Building disk image from file:///home/jenkins/minikube-integration/19312-1055011/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso
	I0729 19:24:37.500493 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) Downloading /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19312-1055011/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso...
	I0729 19:24:37.761046 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) DBG | I0729 19:24:37.760903 1097753 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/kubernetes-upgrade-261955/id_rsa...
	I0729 19:24:37.962927 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) DBG | I0729 19:24:37.962792 1097753 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/kubernetes-upgrade-261955/kubernetes-upgrade-261955.rawdisk...
	I0729 19:24:37.962958 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) DBG | Writing magic tar header
	I0729 19:24:37.962991 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) DBG | Writing SSH key tar header
	I0729 19:24:37.963024 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) DBG | I0729 19:24:37.962943 1097753 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/kubernetes-upgrade-261955 ...
	I0729 19:24:37.963089 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/kubernetes-upgrade-261955
	I0729 19:24:37.963104 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19312-1055011/.minikube/machines
	I0729 19:24:37.963114 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) Setting executable bit set on /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/kubernetes-upgrade-261955 (perms=drwx------)
	I0729 19:24:37.963129 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) Setting executable bit set on /home/jenkins/minikube-integration/19312-1055011/.minikube/machines (perms=drwxr-xr-x)
	I0729 19:24:37.963138 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) Setting executable bit set on /home/jenkins/minikube-integration/19312-1055011/.minikube (perms=drwxr-xr-x)
	I0729 19:24:37.963145 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19312-1055011/.minikube
	I0729 19:24:37.963156 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19312-1055011
	I0729 19:24:37.963162 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0729 19:24:37.963177 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) DBG | Checking permissions on dir: /home/jenkins
	I0729 19:24:37.963185 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) DBG | Checking permissions on dir: /home
	I0729 19:24:37.963195 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) DBG | Skipping /home - not owner
	I0729 19:24:37.963205 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) Setting executable bit set on /home/jenkins/minikube-integration/19312-1055011 (perms=drwxrwxr-x)
	I0729 19:24:37.963213 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0729 19:24:37.963223 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0729 19:24:37.963230 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) Creating domain...
	I0729 19:24:37.964281 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) define libvirt domain using xml: 
	I0729 19:24:37.964292 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) <domain type='kvm'>
	I0729 19:24:37.964300 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955)   <name>kubernetes-upgrade-261955</name>
	I0729 19:24:37.964305 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955)   <memory unit='MiB'>2200</memory>
	I0729 19:24:37.964336 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955)   <vcpu>2</vcpu>
	I0729 19:24:37.964359 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955)   <features>
	I0729 19:24:37.964373 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955)     <acpi/>
	I0729 19:24:37.964388 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955)     <apic/>
	I0729 19:24:37.964401 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955)     <pae/>
	I0729 19:24:37.964413 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955)     
	I0729 19:24:37.964427 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955)   </features>
	I0729 19:24:37.964438 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955)   <cpu mode='host-passthrough'>
	I0729 19:24:37.964450 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955)   
	I0729 19:24:37.964460 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955)   </cpu>
	I0729 19:24:37.964471 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955)   <os>
	I0729 19:24:37.964482 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955)     <type>hvm</type>
	I0729 19:24:37.964495 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955)     <boot dev='cdrom'/>
	I0729 19:24:37.964511 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955)     <boot dev='hd'/>
	I0729 19:24:37.964550 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955)     <bootmenu enable='no'/>
	I0729 19:24:37.964562 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955)   </os>
	I0729 19:24:37.964574 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955)   <devices>
	I0729 19:24:37.964586 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955)     <disk type='file' device='cdrom'>
	I0729 19:24:37.964677 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955)       <source file='/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/kubernetes-upgrade-261955/boot2docker.iso'/>
	I0729 19:24:37.964730 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955)       <target dev='hdc' bus='scsi'/>
	I0729 19:24:37.964749 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955)       <readonly/>
	I0729 19:24:37.964764 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955)     </disk>
	I0729 19:24:37.964778 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955)     <disk type='file' device='disk'>
	I0729 19:24:37.964789 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0729 19:24:37.964806 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955)       <source file='/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/kubernetes-upgrade-261955/kubernetes-upgrade-261955.rawdisk'/>
	I0729 19:24:37.964824 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955)       <target dev='hda' bus='virtio'/>
	I0729 19:24:37.964846 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955)     </disk>
	I0729 19:24:37.964859 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955)     <interface type='network'>
	I0729 19:24:37.964892 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955)       <source network='mk-kubernetes-upgrade-261955'/>
	I0729 19:24:37.964916 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955)       <model type='virtio'/>
	I0729 19:24:37.964930 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955)     </interface>
	I0729 19:24:37.964939 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955)     <interface type='network'>
	I0729 19:24:37.964951 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955)       <source network='default'/>
	I0729 19:24:37.964963 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955)       <model type='virtio'/>
	I0729 19:24:37.964976 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955)     </interface>
	I0729 19:24:37.964986 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955)     <serial type='pty'>
	I0729 19:24:37.965003 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955)       <target port='0'/>
	I0729 19:24:37.965018 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955)     </serial>
	I0729 19:24:37.965028 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955)     <console type='pty'>
	I0729 19:24:37.965040 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955)       <target type='serial' port='0'/>
	I0729 19:24:37.965052 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955)     </console>
	I0729 19:24:37.965062 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955)     <rng model='virtio'>
	I0729 19:24:37.965073 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955)       <backend model='random'>/dev/random</backend>
	I0729 19:24:37.965083 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955)     </rng>
	I0729 19:24:37.965089 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955)     
	I0729 19:24:37.965102 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955)     
	I0729 19:24:37.965120 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955)   </devices>
	I0729 19:24:37.965136 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) </domain>
	I0729 19:24:37.965151 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) 
	I0729 19:24:37.968761 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) DBG | domain kubernetes-upgrade-261955 has defined MAC address 52:54:00:86:78:c9 in network default
	I0729 19:24:37.969260 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) Ensuring networks are active...
	I0729 19:24:37.969278 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) DBG | domain kubernetes-upgrade-261955 has defined MAC address 52:54:00:00:4f:43 in network mk-kubernetes-upgrade-261955
	I0729 19:24:37.969845 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) Ensuring network default is active
	I0729 19:24:37.970067 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) Ensuring network mk-kubernetes-upgrade-261955 is active
	I0729 19:24:37.970455 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) Getting domain xml...
	I0729 19:24:37.971079 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) Creating domain...
	I0729 19:24:39.153395 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) Waiting to get IP...
	I0729 19:24:39.154141 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) DBG | domain kubernetes-upgrade-261955 has defined MAC address 52:54:00:00:4f:43 in network mk-kubernetes-upgrade-261955
	I0729 19:24:39.154492 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) DBG | unable to find current IP address of domain kubernetes-upgrade-261955 in network mk-kubernetes-upgrade-261955
	I0729 19:24:39.154540 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) DBG | I0729 19:24:39.154483 1097753 retry.go:31] will retry after 271.817741ms: waiting for machine to come up
	I0729 19:24:39.427964 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) DBG | domain kubernetes-upgrade-261955 has defined MAC address 52:54:00:00:4f:43 in network mk-kubernetes-upgrade-261955
	I0729 19:24:39.428382 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) DBG | unable to find current IP address of domain kubernetes-upgrade-261955 in network mk-kubernetes-upgrade-261955
	I0729 19:24:39.428407 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) DBG | I0729 19:24:39.428332 1097753 retry.go:31] will retry after 242.605179ms: waiting for machine to come up
	I0729 19:24:39.672642 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) DBG | domain kubernetes-upgrade-261955 has defined MAC address 52:54:00:00:4f:43 in network mk-kubernetes-upgrade-261955
	I0729 19:24:39.673041 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) DBG | unable to find current IP address of domain kubernetes-upgrade-261955 in network mk-kubernetes-upgrade-261955
	I0729 19:24:39.673069 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) DBG | I0729 19:24:39.672974 1097753 retry.go:31] will retry after 437.413662ms: waiting for machine to come up
	I0729 19:24:40.111501 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) DBG | domain kubernetes-upgrade-261955 has defined MAC address 52:54:00:00:4f:43 in network mk-kubernetes-upgrade-261955
	I0729 19:24:40.111989 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) DBG | unable to find current IP address of domain kubernetes-upgrade-261955 in network mk-kubernetes-upgrade-261955
	I0729 19:24:40.112027 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) DBG | I0729 19:24:40.111935 1097753 retry.go:31] will retry after 416.004932ms: waiting for machine to come up
	I0729 19:24:40.529548 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) DBG | domain kubernetes-upgrade-261955 has defined MAC address 52:54:00:00:4f:43 in network mk-kubernetes-upgrade-261955
	I0729 19:24:40.530006 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) DBG | unable to find current IP address of domain kubernetes-upgrade-261955 in network mk-kubernetes-upgrade-261955
	I0729 19:24:40.530038 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) DBG | I0729 19:24:40.529956 1097753 retry.go:31] will retry after 655.261813ms: waiting for machine to come up
	I0729 19:24:41.186667 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) DBG | domain kubernetes-upgrade-261955 has defined MAC address 52:54:00:00:4f:43 in network mk-kubernetes-upgrade-261955
	I0729 19:24:41.187193 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) DBG | unable to find current IP address of domain kubernetes-upgrade-261955 in network mk-kubernetes-upgrade-261955
	I0729 19:24:41.187231 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) DBG | I0729 19:24:41.187136 1097753 retry.go:31] will retry after 711.305398ms: waiting for machine to come up
	I0729 19:24:41.899866 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) DBG | domain kubernetes-upgrade-261955 has defined MAC address 52:54:00:00:4f:43 in network mk-kubernetes-upgrade-261955
	I0729 19:24:41.900209 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) DBG | unable to find current IP address of domain kubernetes-upgrade-261955 in network mk-kubernetes-upgrade-261955
	I0729 19:24:41.900246 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) DBG | I0729 19:24:41.900189 1097753 retry.go:31] will retry after 853.499297ms: waiting for machine to come up
	I0729 19:24:42.755734 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) DBG | domain kubernetes-upgrade-261955 has defined MAC address 52:54:00:00:4f:43 in network mk-kubernetes-upgrade-261955
	I0729 19:24:42.756239 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) DBG | unable to find current IP address of domain kubernetes-upgrade-261955 in network mk-kubernetes-upgrade-261955
	I0729 19:24:42.756269 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) DBG | I0729 19:24:42.756180 1097753 retry.go:31] will retry after 1.34502257s: waiting for machine to come up
	I0729 19:24:44.102714 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) DBG | domain kubernetes-upgrade-261955 has defined MAC address 52:54:00:00:4f:43 in network mk-kubernetes-upgrade-261955
	I0729 19:24:44.103085 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) DBG | unable to find current IP address of domain kubernetes-upgrade-261955 in network mk-kubernetes-upgrade-261955
	I0729 19:24:44.103116 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) DBG | I0729 19:24:44.103028 1097753 retry.go:31] will retry after 1.774180535s: waiting for machine to come up
	I0729 19:24:45.879981 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) DBG | domain kubernetes-upgrade-261955 has defined MAC address 52:54:00:00:4f:43 in network mk-kubernetes-upgrade-261955
	I0729 19:24:45.880373 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) DBG | unable to find current IP address of domain kubernetes-upgrade-261955 in network mk-kubernetes-upgrade-261955
	I0729 19:24:45.880402 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) DBG | I0729 19:24:45.880315 1097753 retry.go:31] will retry after 1.521735641s: waiting for machine to come up
	I0729 19:24:47.404002 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) DBG | domain kubernetes-upgrade-261955 has defined MAC address 52:54:00:00:4f:43 in network mk-kubernetes-upgrade-261955
	I0729 19:24:47.404562 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) DBG | unable to find current IP address of domain kubernetes-upgrade-261955 in network mk-kubernetes-upgrade-261955
	I0729 19:24:47.404603 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) DBG | I0729 19:24:47.404506 1097753 retry.go:31] will retry after 2.713545299s: waiting for machine to come up
	I0729 19:24:50.119139 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) DBG | domain kubernetes-upgrade-261955 has defined MAC address 52:54:00:00:4f:43 in network mk-kubernetes-upgrade-261955
	I0729 19:24:50.119526 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) DBG | unable to find current IP address of domain kubernetes-upgrade-261955 in network mk-kubernetes-upgrade-261955
	I0729 19:24:50.119550 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) DBG | I0729 19:24:50.119480 1097753 retry.go:31] will retry after 2.647623734s: waiting for machine to come up
	I0729 19:24:52.768934 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) DBG | domain kubernetes-upgrade-261955 has defined MAC address 52:54:00:00:4f:43 in network mk-kubernetes-upgrade-261955
	I0729 19:24:52.769491 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) DBG | unable to find current IP address of domain kubernetes-upgrade-261955 in network mk-kubernetes-upgrade-261955
	I0729 19:24:52.769512 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) DBG | I0729 19:24:52.769448 1097753 retry.go:31] will retry after 3.464763113s: waiting for machine to come up
	I0729 19:24:56.238016 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) DBG | domain kubernetes-upgrade-261955 has defined MAC address 52:54:00:00:4f:43 in network mk-kubernetes-upgrade-261955
	I0729 19:24:56.238388 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) DBG | unable to find current IP address of domain kubernetes-upgrade-261955 in network mk-kubernetes-upgrade-261955
	I0729 19:24:56.238415 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) DBG | I0729 19:24:56.238338 1097753 retry.go:31] will retry after 5.438309841s: waiting for machine to come up
	I0729 19:25:01.681710 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) DBG | domain kubernetes-upgrade-261955 has defined MAC address 52:54:00:00:4f:43 in network mk-kubernetes-upgrade-261955
	I0729 19:25:01.682194 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) DBG | domain kubernetes-upgrade-261955 has current primary IP address 192.168.39.144 and MAC address 52:54:00:00:4f:43 in network mk-kubernetes-upgrade-261955
	I0729 19:25:01.682212 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) Found IP for machine: 192.168.39.144
	I0729 19:25:01.682223 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) Reserving static IP address...
	I0729 19:25:01.682590 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) DBG | unable to find host DHCP lease matching {name: "kubernetes-upgrade-261955", mac: "52:54:00:00:4f:43", ip: "192.168.39.144"} in network mk-kubernetes-upgrade-261955
	I0729 19:25:01.757900 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) DBG | Getting to WaitForSSH function...
	I0729 19:25:01.757941 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) Reserved static IP address: 192.168.39.144
	I0729 19:25:01.757956 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) Waiting for SSH to be available...
	I0729 19:25:01.760451 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) DBG | domain kubernetes-upgrade-261955 has defined MAC address 52:54:00:00:4f:43 in network mk-kubernetes-upgrade-261955
	I0729 19:25:01.760839 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:4f:43", ip: ""} in network mk-kubernetes-upgrade-261955: {Iface:virbr1 ExpiryTime:2024-07-29 20:24:52 +0000 UTC Type:0 Mac:52:54:00:00:4f:43 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:minikube Clientid:01:52:54:00:00:4f:43}
	I0729 19:25:01.760876 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) DBG | domain kubernetes-upgrade-261955 has defined IP address 192.168.39.144 and MAC address 52:54:00:00:4f:43 in network mk-kubernetes-upgrade-261955
	I0729 19:25:01.760971 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) DBG | Using SSH client type: external
	I0729 19:25:01.760992 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) DBG | Using SSH private key: /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/kubernetes-upgrade-261955/id_rsa (-rw-------)
	I0729 19:25:01.761026 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.144 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/kubernetes-upgrade-261955/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 19:25:01.761045 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) DBG | About to run SSH command:
	I0729 19:25:01.761062 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) DBG | exit 0
	I0729 19:25:01.882895 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) DBG | SSH cmd err, output: <nil>: 
	I0729 19:25:01.883101 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) KVM machine creation complete!
	I0729 19:25:01.883509 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) Calling .GetConfigRaw
	I0729 19:25:01.884080 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) Calling .DriverName
	I0729 19:25:01.884248 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) Calling .DriverName
	I0729 19:25:01.884415 1097679 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0729 19:25:01.884429 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) Calling .GetState
	I0729 19:25:01.885587 1097679 main.go:141] libmachine: Detecting operating system of created instance...
	I0729 19:25:01.885602 1097679 main.go:141] libmachine: Waiting for SSH to be available...
	I0729 19:25:01.885610 1097679 main.go:141] libmachine: Getting to WaitForSSH function...
	I0729 19:25:01.885617 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) Calling .GetSSHHostname
	I0729 19:25:01.887797 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) DBG | domain kubernetes-upgrade-261955 has defined MAC address 52:54:00:00:4f:43 in network mk-kubernetes-upgrade-261955
	I0729 19:25:01.888103 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:4f:43", ip: ""} in network mk-kubernetes-upgrade-261955: {Iface:virbr1 ExpiryTime:2024-07-29 20:24:52 +0000 UTC Type:0 Mac:52:54:00:00:4f:43 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:kubernetes-upgrade-261955 Clientid:01:52:54:00:00:4f:43}
	I0729 19:25:01.888127 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) DBG | domain kubernetes-upgrade-261955 has defined IP address 192.168.39.144 and MAC address 52:54:00:00:4f:43 in network mk-kubernetes-upgrade-261955
	I0729 19:25:01.888250 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) Calling .GetSSHPort
	I0729 19:25:01.888423 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) Calling .GetSSHKeyPath
	I0729 19:25:01.888599 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) Calling .GetSSHKeyPath
	I0729 19:25:01.888779 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) Calling .GetSSHUsername
	I0729 19:25:01.888979 1097679 main.go:141] libmachine: Using SSH client type: native
	I0729 19:25:01.889240 1097679 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.144 22 <nil> <nil>}
	I0729 19:25:01.889259 1097679 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0729 19:25:01.990301 1097679 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 19:25:01.990330 1097679 main.go:141] libmachine: Detecting the provisioner...
	I0729 19:25:01.990341 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) Calling .GetSSHHostname
	I0729 19:25:01.992984 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) DBG | domain kubernetes-upgrade-261955 has defined MAC address 52:54:00:00:4f:43 in network mk-kubernetes-upgrade-261955
	I0729 19:25:01.993341 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:4f:43", ip: ""} in network mk-kubernetes-upgrade-261955: {Iface:virbr1 ExpiryTime:2024-07-29 20:24:52 +0000 UTC Type:0 Mac:52:54:00:00:4f:43 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:kubernetes-upgrade-261955 Clientid:01:52:54:00:00:4f:43}
	I0729 19:25:01.993386 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) DBG | domain kubernetes-upgrade-261955 has defined IP address 192.168.39.144 and MAC address 52:54:00:00:4f:43 in network mk-kubernetes-upgrade-261955
	I0729 19:25:01.993516 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) Calling .GetSSHPort
	I0729 19:25:01.993727 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) Calling .GetSSHKeyPath
	I0729 19:25:01.993890 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) Calling .GetSSHKeyPath
	I0729 19:25:01.994034 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) Calling .GetSSHUsername
	I0729 19:25:01.994194 1097679 main.go:141] libmachine: Using SSH client type: native
	I0729 19:25:01.994412 1097679 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.144 22 <nil> <nil>}
	I0729 19:25:01.994432 1097679 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0729 19:25:02.095851 1097679 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0729 19:25:02.095931 1097679 main.go:141] libmachine: found compatible host: buildroot
	I0729 19:25:02.095938 1097679 main.go:141] libmachine: Provisioning with buildroot...
	I0729 19:25:02.095948 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) Calling .GetMachineName
	I0729 19:25:02.096205 1097679 buildroot.go:166] provisioning hostname "kubernetes-upgrade-261955"
	I0729 19:25:02.096246 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) Calling .GetMachineName
	I0729 19:25:02.096433 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) Calling .GetSSHHostname
	I0729 19:25:02.099041 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) DBG | domain kubernetes-upgrade-261955 has defined MAC address 52:54:00:00:4f:43 in network mk-kubernetes-upgrade-261955
	I0729 19:25:02.099373 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:4f:43", ip: ""} in network mk-kubernetes-upgrade-261955: {Iface:virbr1 ExpiryTime:2024-07-29 20:24:52 +0000 UTC Type:0 Mac:52:54:00:00:4f:43 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:kubernetes-upgrade-261955 Clientid:01:52:54:00:00:4f:43}
	I0729 19:25:02.099400 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) DBG | domain kubernetes-upgrade-261955 has defined IP address 192.168.39.144 and MAC address 52:54:00:00:4f:43 in network mk-kubernetes-upgrade-261955
	I0729 19:25:02.099604 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) Calling .GetSSHPort
	I0729 19:25:02.099791 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) Calling .GetSSHKeyPath
	I0729 19:25:02.099901 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) Calling .GetSSHKeyPath
	I0729 19:25:02.100053 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) Calling .GetSSHUsername
	I0729 19:25:02.100237 1097679 main.go:141] libmachine: Using SSH client type: native
	I0729 19:25:02.100445 1097679 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.144 22 <nil> <nil>}
	I0729 19:25:02.100461 1097679 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-261955 && echo "kubernetes-upgrade-261955" | sudo tee /etc/hostname
	I0729 19:25:02.215463 1097679 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-261955
	
	I0729 19:25:02.215493 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) Calling .GetSSHHostname
	I0729 19:25:02.217942 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) DBG | domain kubernetes-upgrade-261955 has defined MAC address 52:54:00:00:4f:43 in network mk-kubernetes-upgrade-261955
	I0729 19:25:02.218268 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:4f:43", ip: ""} in network mk-kubernetes-upgrade-261955: {Iface:virbr1 ExpiryTime:2024-07-29 20:24:52 +0000 UTC Type:0 Mac:52:54:00:00:4f:43 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:kubernetes-upgrade-261955 Clientid:01:52:54:00:00:4f:43}
	I0729 19:25:02.218307 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) DBG | domain kubernetes-upgrade-261955 has defined IP address 192.168.39.144 and MAC address 52:54:00:00:4f:43 in network mk-kubernetes-upgrade-261955
	I0729 19:25:02.218493 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) Calling .GetSSHPort
	I0729 19:25:02.218689 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) Calling .GetSSHKeyPath
	I0729 19:25:02.218863 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) Calling .GetSSHKeyPath
	I0729 19:25:02.219001 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) Calling .GetSSHUsername
	I0729 19:25:02.219313 1097679 main.go:141] libmachine: Using SSH client type: native
	I0729 19:25:02.219494 1097679 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.144 22 <nil> <nil>}
	I0729 19:25:02.219511 1097679 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-261955' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-261955/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-261955' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 19:25:02.334348 1097679 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 19:25:02.334394 1097679 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19312-1055011/.minikube CaCertPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19312-1055011/.minikube}
	I0729 19:25:02.334415 1097679 buildroot.go:174] setting up certificates
	I0729 19:25:02.334426 1097679 provision.go:84] configureAuth start
	I0729 19:25:02.334435 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) Calling .GetMachineName
	I0729 19:25:02.334724 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) Calling .GetIP
	I0729 19:25:02.336886 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) DBG | domain kubernetes-upgrade-261955 has defined MAC address 52:54:00:00:4f:43 in network mk-kubernetes-upgrade-261955
	I0729 19:25:02.337185 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:4f:43", ip: ""} in network mk-kubernetes-upgrade-261955: {Iface:virbr1 ExpiryTime:2024-07-29 20:24:52 +0000 UTC Type:0 Mac:52:54:00:00:4f:43 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:kubernetes-upgrade-261955 Clientid:01:52:54:00:00:4f:43}
	I0729 19:25:02.337221 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) DBG | domain kubernetes-upgrade-261955 has defined IP address 192.168.39.144 and MAC address 52:54:00:00:4f:43 in network mk-kubernetes-upgrade-261955
	I0729 19:25:02.337311 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) Calling .GetSSHHostname
	I0729 19:25:02.339386 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) DBG | domain kubernetes-upgrade-261955 has defined MAC address 52:54:00:00:4f:43 in network mk-kubernetes-upgrade-261955
	I0729 19:25:02.339668 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:4f:43", ip: ""} in network mk-kubernetes-upgrade-261955: {Iface:virbr1 ExpiryTime:2024-07-29 20:24:52 +0000 UTC Type:0 Mac:52:54:00:00:4f:43 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:kubernetes-upgrade-261955 Clientid:01:52:54:00:00:4f:43}
	I0729 19:25:02.339703 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) DBG | domain kubernetes-upgrade-261955 has defined IP address 192.168.39.144 and MAC address 52:54:00:00:4f:43 in network mk-kubernetes-upgrade-261955
	I0729 19:25:02.339796 1097679 provision.go:143] copyHostCerts
	I0729 19:25:02.339847 1097679 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.pem, removing ...
	I0729 19:25:02.339856 1097679 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.pem
	I0729 19:25:02.339924 1097679 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.pem (1082 bytes)
	I0729 19:25:02.340048 1097679 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-1055011/.minikube/cert.pem, removing ...
	I0729 19:25:02.340059 1097679 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-1055011/.minikube/cert.pem
	I0729 19:25:02.340087 1097679 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19312-1055011/.minikube/cert.pem (1123 bytes)
	I0729 19:25:02.340152 1097679 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-1055011/.minikube/key.pem, removing ...
	I0729 19:25:02.340159 1097679 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-1055011/.minikube/key.pem
	I0729 19:25:02.340180 1097679 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19312-1055011/.minikube/key.pem (1679 bytes)
	I0729 19:25:02.340238 1097679 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-261955 san=[127.0.0.1 192.168.39.144 kubernetes-upgrade-261955 localhost minikube]
	I0729 19:25:02.397799 1097679 provision.go:177] copyRemoteCerts
	I0729 19:25:02.397861 1097679 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 19:25:02.397887 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) Calling .GetSSHHostname
	I0729 19:25:02.400683 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) DBG | domain kubernetes-upgrade-261955 has defined MAC address 52:54:00:00:4f:43 in network mk-kubernetes-upgrade-261955
	I0729 19:25:02.401017 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:4f:43", ip: ""} in network mk-kubernetes-upgrade-261955: {Iface:virbr1 ExpiryTime:2024-07-29 20:24:52 +0000 UTC Type:0 Mac:52:54:00:00:4f:43 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:kubernetes-upgrade-261955 Clientid:01:52:54:00:00:4f:43}
	I0729 19:25:02.401044 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) DBG | domain kubernetes-upgrade-261955 has defined IP address 192.168.39.144 and MAC address 52:54:00:00:4f:43 in network mk-kubernetes-upgrade-261955
	I0729 19:25:02.401244 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) Calling .GetSSHPort
	I0729 19:25:02.401434 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) Calling .GetSSHKeyPath
	I0729 19:25:02.401585 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) Calling .GetSSHUsername
	I0729 19:25:02.401712 1097679 sshutil.go:53] new ssh client: &{IP:192.168.39.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/kubernetes-upgrade-261955/id_rsa Username:docker}
	I0729 19:25:02.481298 1097679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0729 19:25:02.506142 1097679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0729 19:25:02.530589 1097679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0729 19:25:02.555140 1097679 provision.go:87] duration metric: took 220.70012ms to configureAuth
	I0729 19:25:02.555164 1097679 buildroot.go:189] setting minikube options for container-runtime
	I0729 19:25:02.555342 1097679 config.go:182] Loaded profile config "kubernetes-upgrade-261955": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0729 19:25:02.555446 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) Calling .GetSSHHostname
	I0729 19:25:02.558061 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) DBG | domain kubernetes-upgrade-261955 has defined MAC address 52:54:00:00:4f:43 in network mk-kubernetes-upgrade-261955
	I0729 19:25:02.558386 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:4f:43", ip: ""} in network mk-kubernetes-upgrade-261955: {Iface:virbr1 ExpiryTime:2024-07-29 20:24:52 +0000 UTC Type:0 Mac:52:54:00:00:4f:43 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:kubernetes-upgrade-261955 Clientid:01:52:54:00:00:4f:43}
	I0729 19:25:02.558415 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) DBG | domain kubernetes-upgrade-261955 has defined IP address 192.168.39.144 and MAC address 52:54:00:00:4f:43 in network mk-kubernetes-upgrade-261955
	I0729 19:25:02.558610 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) Calling .GetSSHPort
	I0729 19:25:02.558785 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) Calling .GetSSHKeyPath
	I0729 19:25:02.558974 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) Calling .GetSSHKeyPath
	I0729 19:25:02.559105 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) Calling .GetSSHUsername
	I0729 19:25:02.559261 1097679 main.go:141] libmachine: Using SSH client type: native
	I0729 19:25:02.559437 1097679 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.144 22 <nil> <nil>}
	I0729 19:25:02.559451 1097679 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 19:25:02.827888 1097679 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 19:25:02.827923 1097679 main.go:141] libmachine: Checking connection to Docker...
	I0729 19:25:02.827935 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) Calling .GetURL
	I0729 19:25:02.829199 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) DBG | Using libvirt version 6000000
	I0729 19:25:02.831301 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) DBG | domain kubernetes-upgrade-261955 has defined MAC address 52:54:00:00:4f:43 in network mk-kubernetes-upgrade-261955
	I0729 19:25:02.831669 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:4f:43", ip: ""} in network mk-kubernetes-upgrade-261955: {Iface:virbr1 ExpiryTime:2024-07-29 20:24:52 +0000 UTC Type:0 Mac:52:54:00:00:4f:43 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:kubernetes-upgrade-261955 Clientid:01:52:54:00:00:4f:43}
	I0729 19:25:02.831696 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) DBG | domain kubernetes-upgrade-261955 has defined IP address 192.168.39.144 and MAC address 52:54:00:00:4f:43 in network mk-kubernetes-upgrade-261955
	I0729 19:25:02.831849 1097679 main.go:141] libmachine: Docker is up and running!
	I0729 19:25:02.831866 1097679 main.go:141] libmachine: Reticulating splines...
	I0729 19:25:02.831873 1097679 client.go:171] duration metric: took 25.407117311s to LocalClient.Create
	I0729 19:25:02.831897 1097679 start.go:167] duration metric: took 25.407177255s to libmachine.API.Create "kubernetes-upgrade-261955"
	I0729 19:25:02.831910 1097679 start.go:293] postStartSetup for "kubernetes-upgrade-261955" (driver="kvm2")
	I0729 19:25:02.831924 1097679 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 19:25:02.831947 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) Calling .DriverName
	I0729 19:25:02.832180 1097679 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 19:25:02.832210 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) Calling .GetSSHHostname
	I0729 19:25:02.834270 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) DBG | domain kubernetes-upgrade-261955 has defined MAC address 52:54:00:00:4f:43 in network mk-kubernetes-upgrade-261955
	I0729 19:25:02.834527 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:4f:43", ip: ""} in network mk-kubernetes-upgrade-261955: {Iface:virbr1 ExpiryTime:2024-07-29 20:24:52 +0000 UTC Type:0 Mac:52:54:00:00:4f:43 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:kubernetes-upgrade-261955 Clientid:01:52:54:00:00:4f:43}
	I0729 19:25:02.834548 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) DBG | domain kubernetes-upgrade-261955 has defined IP address 192.168.39.144 and MAC address 52:54:00:00:4f:43 in network mk-kubernetes-upgrade-261955
	I0729 19:25:02.834678 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) Calling .GetSSHPort
	I0729 19:25:02.834921 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) Calling .GetSSHKeyPath
	I0729 19:25:02.835096 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) Calling .GetSSHUsername
	I0729 19:25:02.835216 1097679 sshutil.go:53] new ssh client: &{IP:192.168.39.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/kubernetes-upgrade-261955/id_rsa Username:docker}
	I0729 19:25:02.913010 1097679 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 19:25:02.917229 1097679 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 19:25:02.917267 1097679 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-1055011/.minikube/addons for local assets ...
	I0729 19:25:02.917341 1097679 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-1055011/.minikube/files for local assets ...
	I0729 19:25:02.917457 1097679 filesync.go:149] local asset: /home/jenkins/minikube-integration/19312-1055011/.minikube/files/etc/ssl/certs/10622722.pem -> 10622722.pem in /etc/ssl/certs
	I0729 19:25:02.917561 1097679 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 19:25:02.927061 1097679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/files/etc/ssl/certs/10622722.pem --> /etc/ssl/certs/10622722.pem (1708 bytes)
	I0729 19:25:02.951491 1097679 start.go:296] duration metric: took 119.563768ms for postStartSetup
	I0729 19:25:02.951556 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) Calling .GetConfigRaw
	I0729 19:25:02.952178 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) Calling .GetIP
	I0729 19:25:02.954930 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) DBG | domain kubernetes-upgrade-261955 has defined MAC address 52:54:00:00:4f:43 in network mk-kubernetes-upgrade-261955
	I0729 19:25:02.955300 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:4f:43", ip: ""} in network mk-kubernetes-upgrade-261955: {Iface:virbr1 ExpiryTime:2024-07-29 20:24:52 +0000 UTC Type:0 Mac:52:54:00:00:4f:43 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:kubernetes-upgrade-261955 Clientid:01:52:54:00:00:4f:43}
	I0729 19:25:02.955347 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) DBG | domain kubernetes-upgrade-261955 has defined IP address 192.168.39.144 and MAC address 52:54:00:00:4f:43 in network mk-kubernetes-upgrade-261955
	I0729 19:25:02.955569 1097679 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/kubernetes-upgrade-261955/config.json ...
	I0729 19:25:02.955801 1097679 start.go:128] duration metric: took 25.54961055s to createHost
	I0729 19:25:02.955840 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) Calling .GetSSHHostname
	I0729 19:25:02.957959 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) DBG | domain kubernetes-upgrade-261955 has defined MAC address 52:54:00:00:4f:43 in network mk-kubernetes-upgrade-261955
	I0729 19:25:02.958274 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:4f:43", ip: ""} in network mk-kubernetes-upgrade-261955: {Iface:virbr1 ExpiryTime:2024-07-29 20:24:52 +0000 UTC Type:0 Mac:52:54:00:00:4f:43 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:kubernetes-upgrade-261955 Clientid:01:52:54:00:00:4f:43}
	I0729 19:25:02.958303 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) DBG | domain kubernetes-upgrade-261955 has defined IP address 192.168.39.144 and MAC address 52:54:00:00:4f:43 in network mk-kubernetes-upgrade-261955
	I0729 19:25:02.958430 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) Calling .GetSSHPort
	I0729 19:25:02.958637 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) Calling .GetSSHKeyPath
	I0729 19:25:02.958797 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) Calling .GetSSHKeyPath
	I0729 19:25:02.958937 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) Calling .GetSSHUsername
	I0729 19:25:02.959135 1097679 main.go:141] libmachine: Using SSH client type: native
	I0729 19:25:02.959330 1097679 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.144 22 <nil> <nil>}
	I0729 19:25:02.959354 1097679 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0729 19:25:03.059944 1097679 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722281103.038190691
	
	I0729 19:25:03.059967 1097679 fix.go:216] guest clock: 1722281103.038190691
	I0729 19:25:03.059975 1097679 fix.go:229] Guest: 2024-07-29 19:25:03.038190691 +0000 UTC Remote: 2024-07-29 19:25:02.955814041 +0000 UTC m=+25.680178925 (delta=82.37665ms)
	I0729 19:25:03.059996 1097679 fix.go:200] guest clock delta is within tolerance: 82.37665ms
	I0729 19:25:03.060001 1097679 start.go:83] releasing machines lock for "kubernetes-upgrade-261955", held for 25.653889517s
	I0729 19:25:03.060027 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) Calling .DriverName
	I0729 19:25:03.060487 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) Calling .GetIP
	I0729 19:25:03.063465 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) DBG | domain kubernetes-upgrade-261955 has defined MAC address 52:54:00:00:4f:43 in network mk-kubernetes-upgrade-261955
	I0729 19:25:03.063864 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:4f:43", ip: ""} in network mk-kubernetes-upgrade-261955: {Iface:virbr1 ExpiryTime:2024-07-29 20:24:52 +0000 UTC Type:0 Mac:52:54:00:00:4f:43 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:kubernetes-upgrade-261955 Clientid:01:52:54:00:00:4f:43}
	I0729 19:25:03.063893 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) DBG | domain kubernetes-upgrade-261955 has defined IP address 192.168.39.144 and MAC address 52:54:00:00:4f:43 in network mk-kubernetes-upgrade-261955
	I0729 19:25:03.064045 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) Calling .DriverName
	I0729 19:25:03.064551 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) Calling .DriverName
	I0729 19:25:03.064720 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) Calling .DriverName
	I0729 19:25:03.064824 1097679 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 19:25:03.064862 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) Calling .GetSSHHostname
	I0729 19:25:03.064892 1097679 ssh_runner.go:195] Run: cat /version.json
	I0729 19:25:03.064911 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) Calling .GetSSHHostname
	I0729 19:25:03.067556 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) DBG | domain kubernetes-upgrade-261955 has defined MAC address 52:54:00:00:4f:43 in network mk-kubernetes-upgrade-261955
	I0729 19:25:03.067770 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) DBG | domain kubernetes-upgrade-261955 has defined MAC address 52:54:00:00:4f:43 in network mk-kubernetes-upgrade-261955
	I0729 19:25:03.067870 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:4f:43", ip: ""} in network mk-kubernetes-upgrade-261955: {Iface:virbr1 ExpiryTime:2024-07-29 20:24:52 +0000 UTC Type:0 Mac:52:54:00:00:4f:43 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:kubernetes-upgrade-261955 Clientid:01:52:54:00:00:4f:43}
	I0729 19:25:03.067909 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) DBG | domain kubernetes-upgrade-261955 has defined IP address 192.168.39.144 and MAC address 52:54:00:00:4f:43 in network mk-kubernetes-upgrade-261955
	I0729 19:25:03.068053 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) Calling .GetSSHPort
	I0729 19:25:03.068140 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:4f:43", ip: ""} in network mk-kubernetes-upgrade-261955: {Iface:virbr1 ExpiryTime:2024-07-29 20:24:52 +0000 UTC Type:0 Mac:52:54:00:00:4f:43 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:kubernetes-upgrade-261955 Clientid:01:52:54:00:00:4f:43}
	I0729 19:25:03.068166 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) DBG | domain kubernetes-upgrade-261955 has defined IP address 192.168.39.144 and MAC address 52:54:00:00:4f:43 in network mk-kubernetes-upgrade-261955
	I0729 19:25:03.068227 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) Calling .GetSSHKeyPath
	I0729 19:25:03.068383 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) Calling .GetSSHPort
	I0729 19:25:03.068415 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) Calling .GetSSHUsername
	I0729 19:25:03.068603 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) Calling .GetSSHKeyPath
	I0729 19:25:03.068598 1097679 sshutil.go:53] new ssh client: &{IP:192.168.39.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/kubernetes-upgrade-261955/id_rsa Username:docker}
	I0729 19:25:03.068774 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) Calling .GetSSHUsername
	I0729 19:25:03.068918 1097679 sshutil.go:53] new ssh client: &{IP:192.168.39.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/kubernetes-upgrade-261955/id_rsa Username:docker}
	I0729 19:25:03.177740 1097679 ssh_runner.go:195] Run: systemctl --version
	I0729 19:25:03.184559 1097679 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 19:25:03.353125 1097679 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 19:25:03.360284 1097679 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 19:25:03.360368 1097679 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 19:25:03.377548 1097679 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 19:25:03.377572 1097679 start.go:495] detecting cgroup driver to use...
	I0729 19:25:03.377646 1097679 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 19:25:03.396662 1097679 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 19:25:03.410876 1097679 docker.go:217] disabling cri-docker service (if available) ...
	I0729 19:25:03.410971 1097679 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 19:25:03.424968 1097679 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 19:25:03.439110 1097679 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 19:25:03.554468 1097679 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 19:25:03.702523 1097679 docker.go:233] disabling docker service ...
	I0729 19:25:03.702602 1097679 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 19:25:03.717727 1097679 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 19:25:03.731948 1097679 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 19:25:03.863073 1097679 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 19:25:03.996115 1097679 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 19:25:04.010348 1097679 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 19:25:04.031701 1097679 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0729 19:25:04.031775 1097679 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:25:04.044376 1097679 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 19:25:04.044447 1097679 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:25:04.055508 1097679 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:25:04.066370 1097679 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:25:04.077190 1097679 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 19:25:04.088180 1097679 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 19:25:04.098015 1097679 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 19:25:04.098069 1097679 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 19:25:04.111489 1097679 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 19:25:04.121586 1097679 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 19:25:04.260221 1097679 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 19:25:04.414137 1097679 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 19:25:04.414221 1097679 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 19:25:04.419278 1097679 start.go:563] Will wait 60s for crictl version
	I0729 19:25:04.419339 1097679 ssh_runner.go:195] Run: which crictl
	I0729 19:25:04.423231 1097679 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 19:25:04.472608 1097679 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 19:25:04.472685 1097679 ssh_runner.go:195] Run: crio --version
	I0729 19:25:04.507082 1097679 ssh_runner.go:195] Run: crio --version
	I0729 19:25:04.546966 1097679 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0729 19:25:04.548176 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) Calling .GetIP
	I0729 19:25:04.550987 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) DBG | domain kubernetes-upgrade-261955 has defined MAC address 52:54:00:00:4f:43 in network mk-kubernetes-upgrade-261955
	I0729 19:25:04.551352 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:4f:43", ip: ""} in network mk-kubernetes-upgrade-261955: {Iface:virbr1 ExpiryTime:2024-07-29 20:24:52 +0000 UTC Type:0 Mac:52:54:00:00:4f:43 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:kubernetes-upgrade-261955 Clientid:01:52:54:00:00:4f:43}
	I0729 19:25:04.551379 1097679 main.go:141] libmachine: (kubernetes-upgrade-261955) DBG | domain kubernetes-upgrade-261955 has defined IP address 192.168.39.144 and MAC address 52:54:00:00:4f:43 in network mk-kubernetes-upgrade-261955
	I0729 19:25:04.551614 1097679 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0729 19:25:04.556059 1097679 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 19:25:04.569435 1097679 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-261955 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-261955 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.144 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...

	I0729 19:25:04.569562 1097679 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0729 19:25:04.569609 1097679 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 19:25:04.602508 1097679 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0729 19:25:04.602587 1097679 ssh_runner.go:195] Run: which lz4
	I0729 19:25:04.606699 1097679 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0729 19:25:04.611046 1097679 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0729 19:25:04.611077 1097679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0729 19:25:06.328985 1097679 crio.go:462] duration metric: took 1.72233024s to copy over tarball
	I0729 19:25:06.329060 1097679 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0729 19:25:08.836146 1097679 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.507048569s)
	I0729 19:25:08.836184 1097679 crio.go:469] duration metric: took 2.50716836s to extract the tarball
	I0729 19:25:08.836194 1097679 ssh_runner.go:146] rm: /preloaded.tar.lz4
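The preload tarball is unpacked under /var so CRI-O's image store is populated without pulling; the crictl listing that follows is how minikube decides whether the expected images actually arrived (in this run it still reports kube-apiserver:v1.20.0 missing). A manual spot-check would look like (illustrative, not from this run):
    sudo crictl images | grep kube-apiserver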
	I0729 19:25:08.879144 1097679 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 19:25:08.930535 1097679 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0729 19:25:08.930567 1097679 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0729 19:25:08.930651 1097679 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 19:25:08.930664 1097679 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0729 19:25:08.930678 1097679 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 19:25:08.930692 1097679 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0729 19:25:08.930710 1097679 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0729 19:25:08.930734 1097679 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0729 19:25:08.930662 1097679 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0729 19:25:08.930788 1097679 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0729 19:25:08.932277 1097679 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0729 19:25:08.932335 1097679 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0729 19:25:08.932356 1097679 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 19:25:08.932278 1097679 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 19:25:08.932444 1097679 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0729 19:25:08.932284 1097679 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0729 19:25:08.932284 1097679 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0729 19:25:08.932605 1097679 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0729 19:25:09.133376 1097679 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0729 19:25:09.138447 1097679 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0729 19:25:09.151164 1097679 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0729 19:25:09.175573 1097679 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 19:25:09.198204 1097679 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0729 19:25:09.201455 1097679 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0729 19:25:09.201515 1097679 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0729 19:25:09.201558 1097679 ssh_runner.go:195] Run: which crictl
	I0729 19:25:09.205849 1097679 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0729 19:25:09.230648 1097679 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0729 19:25:09.234987 1097679 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 19:25:09.253833 1097679 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0729 19:25:09.253874 1097679 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0729 19:25:09.253914 1097679 ssh_runner.go:195] Run: which crictl
	I0729 19:25:09.275744 1097679 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0729 19:25:09.275797 1097679 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0729 19:25:09.275844 1097679 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 19:25:09.275895 1097679 ssh_runner.go:195] Run: which crictl
	I0729 19:25:09.275799 1097679 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0729 19:25:09.275963 1097679 ssh_runner.go:195] Run: which crictl
	I0729 19:25:09.341582 1097679 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0729 19:25:09.341645 1097679 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0729 19:25:09.341648 1097679 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0729 19:25:09.341688 1097679 ssh_runner.go:195] Run: which crictl
	I0729 19:25:09.346678 1097679 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0729 19:25:09.346723 1097679 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0729 19:25:09.346761 1097679 ssh_runner.go:195] Run: which crictl
	I0729 19:25:09.365239 1097679 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0729 19:25:09.365285 1097679 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0729 19:25:09.365345 1097679 ssh_runner.go:195] Run: which crictl
	I0729 19:25:09.441731 1097679 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0729 19:25:09.441812 1097679 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0729 19:25:09.441788 1097679 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 19:25:09.441876 1097679 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0729 19:25:09.441892 1097679 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0729 19:25:09.441938 1097679 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0729 19:25:09.442004 1097679 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0729 19:25:09.605119 1097679 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0729 19:25:09.605141 1097679 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0729 19:25:09.605192 1097679 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0729 19:25:09.605284 1097679 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0729 19:25:09.605320 1097679 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0729 19:25:09.605393 1097679 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 19:25:09.605395 1097679 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0729 19:25:09.736850 1097679 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0729 19:25:09.767102 1097679 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0729 19:25:09.767179 1097679 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0729 19:25:09.767227 1097679 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0729 19:25:09.767143 1097679 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0729 19:25:09.767241 1097679 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0729 19:25:09.767306 1097679 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 19:25:09.829446 1097679 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0729 19:25:09.877809 1097679 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0729 19:25:09.895836 1097679 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0729 19:25:09.901003 1097679 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0729 19:25:09.906779 1097679 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0729 19:25:09.906970 1097679 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0729 19:25:09.907028 1097679 cache_images.go:92] duration metric: took 976.445472ms to LoadCachedImages
	W0729 19:25:09.907119 1097679 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
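With neither the preload nor the on-disk image cache usable, the control-plane images are left for kubeadm to pull during preflight (seen later as "[preflight] Pulling images required for setting up a Kubernetes cluster"). A rough manual equivalent of that pull, assuming the same CRI socket (illustrative, not from this run):
    sudo kubeadm config images pull --kubernetes-version v1.20.0 --cri-socket /var/run/crio/crio.sock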
	I0729 19:25:09.907138 1097679 kubeadm.go:934] updating node { 192.168.39.144 8443 v1.20.0 crio true true} ...
	I0729 19:25:09.907275 1097679 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=kubernetes-upgrade-261955 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.144
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-261955 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 19:25:09.907358 1097679 ssh_runner.go:195] Run: crio config
	I0729 19:25:09.976504 1097679 cni.go:84] Creating CNI manager for ""
	I0729 19:25:09.976525 1097679 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 19:25:09.976537 1097679 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 19:25:09.976564 1097679 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.144 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-261955 NodeName:kubernetes-upgrade-261955 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.144"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.144 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0729 19:25:09.976730 1097679 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.144
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "kubernetes-upgrade-261955"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.144
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.144"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0729 19:25:09.976810 1097679 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0729 19:25:09.987605 1097679 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 19:25:09.987691 1097679 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 19:25:09.999354 1097679 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (433 bytes)
	I0729 19:25:10.017011 1097679 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 19:25:10.034251 1097679 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2126 bytes)
	I0729 19:25:10.051275 1097679 ssh_runner.go:195] Run: grep 192.168.39.144	control-plane.minikube.internal$ /etc/hosts
	I0729 19:25:10.055306 1097679 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.144	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 19:25:10.068284 1097679 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 19:25:10.218053 1097679 ssh_runner.go:195] Run: sudo systemctl start kubelet
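The three small files scp'd above are the kubelet systemd drop-in (10-kubeadm.conf, carrying the ExecStart line printed earlier), the kubelet.service unit, and the rendered kubeadm.yaml; daemon-reload plus start then hands the kubelet over to kubeadm. One way to inspect what actually landed on the node (illustrative, not from this run):
    sudo systemctl cat kubelet                  # unit file plus the 10-kubeadm.conf drop-in
    sudo cat /var/tmp/minikube/kubeadm.yaml.new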
	I0729 19:25:10.239582 1097679 certs.go:68] Setting up /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/kubernetes-upgrade-261955 for IP: 192.168.39.144
	I0729 19:25:10.239614 1097679 certs.go:194] generating shared ca certs ...
	I0729 19:25:10.239638 1097679 certs.go:226] acquiring lock for ca certs: {Name:mkd1f0b3d7e82ac23e713dd6b75409e103935b02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 19:25:10.239813 1097679 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.key
	I0729 19:25:10.239868 1097679 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/proxy-client-ca.key
	I0729 19:25:10.239881 1097679 certs.go:256] generating profile certs ...
	I0729 19:25:10.239952 1097679 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/kubernetes-upgrade-261955/client.key
	I0729 19:25:10.239970 1097679 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/kubernetes-upgrade-261955/client.crt with IP's: []
	I0729 19:25:10.436322 1097679 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/kubernetes-upgrade-261955/client.crt ...
	I0729 19:25:10.436361 1097679 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/kubernetes-upgrade-261955/client.crt: {Name:mk2a8ae9f06f0919259531671e142d83afea3e6a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 19:25:10.436571 1097679 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/kubernetes-upgrade-261955/client.key ...
	I0729 19:25:10.436592 1097679 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/kubernetes-upgrade-261955/client.key: {Name:mk6dec426172337368cb4d02fcc8e944e17f58a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 19:25:10.436741 1097679 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/kubernetes-upgrade-261955/apiserver.key.327e7a96
	I0729 19:25:10.436770 1097679 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/kubernetes-upgrade-261955/apiserver.crt.327e7a96 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.144]
	I0729 19:25:10.790468 1097679 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/kubernetes-upgrade-261955/apiserver.crt.327e7a96 ...
	I0729 19:25:10.790510 1097679 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/kubernetes-upgrade-261955/apiserver.crt.327e7a96: {Name:mk53b20984b943cdc7c4b60e730ecd7a437a44f4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 19:25:10.790703 1097679 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/kubernetes-upgrade-261955/apiserver.key.327e7a96 ...
	I0729 19:25:10.790721 1097679 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/kubernetes-upgrade-261955/apiserver.key.327e7a96: {Name:mk8fe8fda3fc84c568c3eb290c40846973486e3b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 19:25:10.790804 1097679 certs.go:381] copying /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/kubernetes-upgrade-261955/apiserver.crt.327e7a96 -> /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/kubernetes-upgrade-261955/apiserver.crt
	I0729 19:25:10.790917 1097679 certs.go:385] copying /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/kubernetes-upgrade-261955/apiserver.key.327e7a96 -> /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/kubernetes-upgrade-261955/apiserver.key
	I0729 19:25:10.790978 1097679 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/kubernetes-upgrade-261955/proxy-client.key
	I0729 19:25:10.790996 1097679 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/kubernetes-upgrade-261955/proxy-client.crt with IP's: []
	I0729 19:25:10.868211 1097679 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/kubernetes-upgrade-261955/proxy-client.crt ...
	I0729 19:25:10.868247 1097679 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/kubernetes-upgrade-261955/proxy-client.crt: {Name:mkfbaa87971ea9cdc16f1ed8211237371d776f54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 19:25:10.868426 1097679 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/kubernetes-upgrade-261955/proxy-client.key ...
	I0729 19:25:10.868441 1097679 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/kubernetes-upgrade-261955/proxy-client.key: {Name:mkfff776c18fbc5473483117853fdaa0dc9105bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 19:25:10.868655 1097679 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/1062272.pem (1338 bytes)
	W0729 19:25:10.868701 1097679 certs.go:480] ignoring /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/1062272_empty.pem, impossibly tiny 0 bytes
	I0729 19:25:10.868714 1097679 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 19:25:10.868744 1097679 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem (1082 bytes)
	I0729 19:25:10.868780 1097679 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/cert.pem (1123 bytes)
	I0729 19:25:10.868808 1097679 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/key.pem (1679 bytes)
	I0729 19:25:10.868870 1097679 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/files/etc/ssl/certs/10622722.pem (1708 bytes)
	I0729 19:25:10.870192 1097679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 19:25:10.898913 1097679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0729 19:25:10.925233 1097679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 19:25:10.949850 1097679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0729 19:25:10.974272 1097679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/kubernetes-upgrade-261955/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0729 19:25:10.999892 1097679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/kubernetes-upgrade-261955/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0729 19:25:11.023964 1097679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/kubernetes-upgrade-261955/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 19:25:11.050696 1097679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/kubernetes-upgrade-261955/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0729 19:25:11.077953 1097679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/files/etc/ssl/certs/10622722.pem --> /usr/share/ca-certificates/10622722.pem (1708 bytes)
	I0729 19:25:11.110889 1097679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 19:25:11.142735 1097679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/1062272.pem --> /usr/share/ca-certificates/1062272.pem (1338 bytes)
	I0729 19:25:11.166779 1097679 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
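Everything copied above lands under /var/lib/minikube/certs (profile and CA material) and /usr/share/ca-certificates (trust copies). A quick sanity check of the freshly generated apiserver serving cert (illustrative, not from this run):
    sudo openssl x509 -noout -subject -enddate -in /var/lib/minikube/certs/apiserver.crt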
	I0729 19:25:11.183602 1097679 ssh_runner.go:195] Run: openssl version
	I0729 19:25:11.189492 1097679 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 19:25:11.200657 1097679 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 19:25:11.205751 1097679 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 18:17 /usr/share/ca-certificates/minikubeCA.pem
	I0729 19:25:11.205816 1097679 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 19:25:11.211823 1097679 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 19:25:11.223434 1097679 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1062272.pem && ln -fs /usr/share/ca-certificates/1062272.pem /etc/ssl/certs/1062272.pem"
	I0729 19:25:11.234747 1097679 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1062272.pem
	I0729 19:25:11.239480 1097679 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 18:30 /usr/share/ca-certificates/1062272.pem
	I0729 19:25:11.239549 1097679 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1062272.pem
	I0729 19:25:11.245377 1097679 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1062272.pem /etc/ssl/certs/51391683.0"
	I0729 19:25:11.256626 1097679 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10622722.pem && ln -fs /usr/share/ca-certificates/10622722.pem /etc/ssl/certs/10622722.pem"
	I0729 19:25:11.267759 1097679 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10622722.pem
	I0729 19:25:11.272548 1097679 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 18:30 /usr/share/ca-certificates/10622722.pem
	I0729 19:25:11.272609 1097679 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10622722.pem
	I0729 19:25:11.278888 1097679 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10622722.pem /etc/ssl/certs/3ec20f2e.0"
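The hash-named links created above (b5213941.0, 51391683.0, 3ec20f2e.0) follow OpenSSL's subject-hash lookup convention: each link is named after the certificate's subject hash with a .0 suffix, which is how TLS clients on the node find and trust minikubeCA and the test certificates. The hash can be recomputed by hand (illustrative, not from this run):
    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941 for this CA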
	I0729 19:25:11.289658 1097679 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 19:25:11.293952 1097679 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0729 19:25:11.294018 1097679 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-261955 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-261955 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.144 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 19:25:11.294114 1097679 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 19:25:11.294174 1097679 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 19:25:11.330473 1097679 cri.go:89] found id: ""
	I0729 19:25:11.330561 1097679 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 19:25:11.341442 1097679 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 19:25:11.351805 1097679 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 19:25:11.361850 1097679 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 19:25:11.361876 1097679 kubeadm.go:157] found existing configuration files:
	
	I0729 19:25:11.361931 1097679 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 19:25:11.371352 1097679 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 19:25:11.371425 1097679 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 19:25:11.380984 1097679 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 19:25:11.390128 1097679 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 19:25:11.390207 1097679 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 19:25:11.400079 1097679 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 19:25:11.409311 1097679 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 19:25:11.409375 1097679 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 19:25:11.422391 1097679 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 19:25:11.431653 1097679 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 19:25:11.431719 1097679 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 19:25:11.441421 1097679 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 19:25:11.745726 1097679 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 19:27:09.929684 1097679 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0729 19:27:09.929826 1097679 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0729 19:27:09.931244 1097679 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0729 19:27:09.931302 1097679 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 19:27:09.931365 1097679 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 19:27:09.931448 1097679 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 19:27:09.931575 1097679 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0729 19:27:09.931659 1097679 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 19:27:09.933232 1097679 out.go:204]   - Generating certificates and keys ...
	I0729 19:27:09.933302 1097679 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 19:27:09.933361 1097679 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 19:27:09.933425 1097679 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0729 19:27:09.933472 1097679 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0729 19:27:09.933554 1097679 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0729 19:27:09.933619 1097679 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0729 19:27:09.933674 1097679 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0729 19:27:09.933831 1097679 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-261955 localhost] and IPs [192.168.39.144 127.0.0.1 ::1]
	I0729 19:27:09.933877 1097679 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0729 19:27:09.934018 1097679 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-261955 localhost] and IPs [192.168.39.144 127.0.0.1 ::1]
	I0729 19:27:09.934079 1097679 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0729 19:27:09.934159 1097679 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0729 19:27:09.934228 1097679 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0729 19:27:09.934307 1097679 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 19:27:09.934407 1097679 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 19:27:09.934501 1097679 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 19:27:09.934596 1097679 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 19:27:09.934656 1097679 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 19:27:09.934769 1097679 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 19:27:09.934868 1097679 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 19:27:09.934909 1097679 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 19:27:09.934981 1097679 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 19:27:09.936855 1097679 out.go:204]   - Booting up control plane ...
	I0729 19:27:09.936931 1097679 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 19:27:09.936994 1097679 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 19:27:09.937065 1097679 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 19:27:09.937146 1097679 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 19:27:09.937364 1097679 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0729 19:27:09.937424 1097679 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0729 19:27:09.937483 1097679 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 19:27:09.937682 1097679 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 19:27:09.937776 1097679 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 19:27:09.937968 1097679 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 19:27:09.938031 1097679 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 19:27:09.938274 1097679 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 19:27:09.938352 1097679 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 19:27:09.938509 1097679 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 19:27:09.938566 1097679 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 19:27:09.938756 1097679 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 19:27:09.938769 1097679 kubeadm.go:310] 
	I0729 19:27:09.938803 1097679 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0729 19:27:09.938839 1097679 kubeadm.go:310] 		timed out waiting for the condition
	I0729 19:27:09.938870 1097679 kubeadm.go:310] 
	I0729 19:27:09.938911 1097679 kubeadm.go:310] 	This error is likely caused by:
	I0729 19:27:09.938939 1097679 kubeadm.go:310] 		- The kubelet is not running
	I0729 19:27:09.939059 1097679 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0729 19:27:09.939075 1097679 kubeadm.go:310] 
	I0729 19:27:09.939186 1097679 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0729 19:27:09.939217 1097679 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0729 19:27:09.939252 1097679 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0729 19:27:09.939259 1097679 kubeadm.go:310] 
	I0729 19:27:09.939355 1097679 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0729 19:27:09.939442 1097679 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0729 19:27:09.939453 1097679 kubeadm.go:310] 
	I0729 19:27:09.939571 1097679 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0729 19:27:09.939660 1097679 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0729 19:27:09.939730 1097679 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0729 19:27:09.939789 1097679 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0729 19:27:09.939811 1097679 kubeadm.go:310] 
	W0729 19:27:09.939921 1097679 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-261955 localhost] and IPs [192.168.39.144 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-261955 localhost] and IPs [192.168.39.144 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-261955 localhost] and IPs [192.168.39.144 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-261955 localhost] and IPs [192.168.39.144 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0729 19:27:09.939972 1097679 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0729 19:27:11.158170 1097679 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.218157037s)
	I0729 19:27:11.158266 1097679 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 19:27:11.174718 1097679 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 19:27:11.184562 1097679 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 19:27:11.184582 1097679 kubeadm.go:157] found existing configuration files:
	
	I0729 19:27:11.184632 1097679 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 19:27:11.194132 1097679 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 19:27:11.194204 1097679 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 19:27:11.203821 1097679 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 19:27:11.213032 1097679 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 19:27:11.213084 1097679 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 19:27:11.222488 1097679 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 19:27:11.231301 1097679 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 19:27:11.231345 1097679 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 19:27:11.240481 1097679 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 19:27:11.249406 1097679 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 19:27:11.249464 1097679 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 19:27:11.261070 1097679 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 19:27:11.329237 1097679 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0729 19:27:11.329334 1097679 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 19:27:11.480088 1097679 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 19:27:11.480261 1097679 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 19:27:11.480394 1097679 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0729 19:27:11.665231 1097679 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 19:27:11.666778 1097679 out.go:204]   - Generating certificates and keys ...
	I0729 19:27:11.666897 1097679 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 19:27:11.666974 1097679 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 19:27:11.667076 1097679 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0729 19:27:11.667200 1097679 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0729 19:27:11.667321 1097679 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0729 19:27:11.667401 1097679 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0729 19:27:11.667495 1097679 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0729 19:27:11.667591 1097679 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0729 19:27:11.667696 1097679 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0729 19:27:11.667800 1097679 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0729 19:27:11.667856 1097679 kubeadm.go:310] [certs] Using the existing "sa" key
	I0729 19:27:11.667927 1097679 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 19:27:11.888929 1097679 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 19:27:12.224211 1097679 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 19:27:12.601567 1097679 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 19:27:12.844246 1097679 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 19:27:12.859658 1097679 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 19:27:12.859795 1097679 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 19:27:12.859855 1097679 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 19:27:13.005123 1097679 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 19:27:13.006973 1097679 out.go:204]   - Booting up control plane ...
	I0729 19:27:13.007095 1097679 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 19:27:13.007197 1097679 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 19:27:13.007948 1097679 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 19:27:13.016751 1097679 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 19:27:13.019975 1097679 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0729 19:27:53.022995 1097679 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0729 19:27:53.023321 1097679 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 19:27:53.023502 1097679 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 19:27:58.024215 1097679 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 19:27:58.024376 1097679 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 19:28:08.024969 1097679 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 19:28:08.025256 1097679 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 19:28:28.024672 1097679 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 19:28:28.024953 1097679 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 19:29:08.024632 1097679 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 19:29:08.024858 1097679 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 19:29:08.024871 1097679 kubeadm.go:310] 
	I0729 19:29:08.024925 1097679 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0729 19:29:08.024983 1097679 kubeadm.go:310] 		timed out waiting for the condition
	I0729 19:29:08.024994 1097679 kubeadm.go:310] 
	I0729 19:29:08.025042 1097679 kubeadm.go:310] 	This error is likely caused by:
	I0729 19:29:08.025092 1097679 kubeadm.go:310] 		- The kubelet is not running
	I0729 19:29:08.025239 1097679 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0729 19:29:08.025248 1097679 kubeadm.go:310] 
	I0729 19:29:08.025412 1097679 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0729 19:29:08.025480 1097679 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0729 19:29:08.025533 1097679 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0729 19:29:08.025546 1097679 kubeadm.go:310] 
	I0729 19:29:08.025702 1097679 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0729 19:29:08.025815 1097679 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0729 19:29:08.025825 1097679 kubeadm.go:310] 
	I0729 19:29:08.025983 1097679 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0729 19:29:08.026105 1097679 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0729 19:29:08.026229 1097679 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0729 19:29:08.026342 1097679 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0729 19:29:08.026355 1097679 kubeadm.go:310] 
	I0729 19:29:08.027332 1097679 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 19:29:08.027466 1097679 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0729 19:29:08.027563 1097679 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0729 19:29:08.027687 1097679 kubeadm.go:394] duration metric: took 3m56.733671953s to StartCluster
	I0729 19:29:08.027767 1097679 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:29:08.027839 1097679 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:29:08.073393 1097679 cri.go:89] found id: ""
	I0729 19:29:08.073418 1097679 logs.go:276] 0 containers: []
	W0729 19:29:08.073427 1097679 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:29:08.073433 1097679 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:29:08.073495 1097679 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:29:08.114605 1097679 cri.go:89] found id: ""
	I0729 19:29:08.114640 1097679 logs.go:276] 0 containers: []
	W0729 19:29:08.114651 1097679 logs.go:278] No container was found matching "etcd"
	I0729 19:29:08.114658 1097679 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:29:08.114810 1097679 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:29:08.150043 1097679 cri.go:89] found id: ""
	I0729 19:29:08.150075 1097679 logs.go:276] 0 containers: []
	W0729 19:29:08.150087 1097679 logs.go:278] No container was found matching "coredns"
	I0729 19:29:08.150094 1097679 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:29:08.150160 1097679 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:29:08.183662 1097679 cri.go:89] found id: ""
	I0729 19:29:08.183696 1097679 logs.go:276] 0 containers: []
	W0729 19:29:08.183707 1097679 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:29:08.183715 1097679 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:29:08.183794 1097679 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:29:08.218101 1097679 cri.go:89] found id: ""
	I0729 19:29:08.218130 1097679 logs.go:276] 0 containers: []
	W0729 19:29:08.218139 1097679 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:29:08.218145 1097679 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:29:08.218205 1097679 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:29:08.249736 1097679 cri.go:89] found id: ""
	I0729 19:29:08.249767 1097679 logs.go:276] 0 containers: []
	W0729 19:29:08.249778 1097679 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:29:08.249786 1097679 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:29:08.249847 1097679 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:29:08.283598 1097679 cri.go:89] found id: ""
	I0729 19:29:08.283633 1097679 logs.go:276] 0 containers: []
	W0729 19:29:08.283644 1097679 logs.go:278] No container was found matching "kindnet"
	I0729 19:29:08.283664 1097679 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:29:08.283680 1097679 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:29:08.398784 1097679 logs.go:123] Gathering logs for container status ...
	I0729 19:29:08.398822 1097679 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:29:08.440700 1097679 logs.go:123] Gathering logs for kubelet ...
	I0729 19:29:08.440738 1097679 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:29:08.496280 1097679 logs.go:123] Gathering logs for dmesg ...
	I0729 19:29:08.496320 1097679 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:29:08.510885 1097679 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:29:08.510915 1097679 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:29:08.629529 1097679 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W0729 19:29:08.629592 1097679 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0729 19:29:08.629640 1097679 out.go:239] * 
	* 
	W0729 19:29:08.629746 1097679 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0729 19:29:08.629782 1097679 out.go:239] * 
	* 
	W0729 19:29:08.630616 1097679 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 19:29:08.633964 1097679 out.go:177] 
	W0729 19:29:08.635141 1097679 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0729 19:29:08.635201 1097679 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0729 19:29:08.635227 1097679 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0729 19:29:08.636508 1097679 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-linux-amd64 start -p kubernetes-upgrade-261955 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109
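The suggestion captured in the stderr above points at a kubelet/CRI-O cgroup-driver mismatch as the usual cause of this K8S_KUBELET_NOT_RUNNING failure on the v1.20.0 start. A minimal manual follow-up, assuming the profile name from this run and the default kubelet/CRI-O config locations (the exact crio.conf layout inside the minikube VM may differ), might look like:

	# hedged sketch, not part of the test run: inspect kubelet health and compare cgroup drivers
	minikube -p kubernetes-upgrade-261955 ssh -- sudo journalctl -xeu kubelet | tail -n 50
	minikube -p kubernetes-upgrade-261955 ssh -- sudo grep -i cgroupDriver /var/lib/kubelet/config.yaml
	minikube -p kubernetes-upgrade-261955 ssh -- sudo grep -ri cgroup_manager /etc/crio
	# if the two drivers disagree, the flag already suggested in the log aligns the kubelet with CRI-O:
	minikube start -p kubernetes-upgrade-261955 --kubernetes-version=v1.20.0 --driver=kvm2 --container-runtime=crio --extra-config=kubelet.cgroup-driver=systemd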
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-261955
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-261955: (2.293741549s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-261955 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-261955 status --format={{.Host}}: exit status 7 (64.443198ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-261955 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-261955 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (39.266907267s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-261955 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-261955 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-261955 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio: exit status 106 (104.277048ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-261955] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19312
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19312-1055011/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19312-1055011/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.0-beta.0 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-261955
	    minikube start -p kubernetes-upgrade-261955 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-2619552 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.0-beta.0, by running:
	    
	    minikube start -p kubernetes-upgrade-261955 --kubernetes-version=v1.31.0-beta.0
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-261955 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-261955 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (59.270632589s)
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2024-07-29 19:30:49.77792289 +0000 UTC m=+4422.012766277
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-261955 -n kubernetes-upgrade-261955
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-261955 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p kubernetes-upgrade-261955 logs -n 25: (1.673552161s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-184620 sudo                 | cilium-184620             | jenkins | v1.33.1 | 29 Jul 24 19:28 UTC |                     |
	|         | containerd config dump                |                           |         |         |                     |                     |
	| ssh     | -p cilium-184620 sudo                 | cilium-184620             | jenkins | v1.33.1 | 29 Jul 24 19:28 UTC |                     |
	|         | systemctl status crio --all           |                           |         |         |                     |                     |
	|         | --full --no-pager                     |                           |         |         |                     |                     |
	| ssh     | -p cilium-184620 sudo                 | cilium-184620             | jenkins | v1.33.1 | 29 Jul 24 19:28 UTC |                     |
	|         | systemctl cat crio --no-pager         |                           |         |         |                     |                     |
	| ssh     | -p cilium-184620 sudo find            | cilium-184620             | jenkins | v1.33.1 | 29 Jul 24 19:28 UTC |                     |
	|         | /etc/crio -type f -exec sh -c         |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                  |                           |         |         |                     |                     |
	| ssh     | -p cilium-184620 sudo crio            | cilium-184620             | jenkins | v1.33.1 | 29 Jul 24 19:28 UTC |                     |
	|         | config                                |                           |         |         |                     |                     |
	| delete  | -p cilium-184620                      | cilium-184620             | jenkins | v1.33.1 | 29 Jul 24 19:28 UTC | 29 Jul 24 19:28 UTC |
	| start   | -p running-upgrade-933580             | minikube                  | jenkins | v1.26.0 | 29 Jul 24 19:28 UTC | 29 Jul 24 19:29 UTC |
	|         | --memory=2200 --vm-driver=kvm2        |                           |         |         |                     |                     |
	|         |  --container-runtime=crio             |                           |         |         |                     |                     |
	| ssh     | cert-options-460863 ssh               | cert-options-460863       | jenkins | v1.33.1 | 29 Jul 24 19:28 UTC | 29 Jul 24 19:28 UTC |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-460863 -- sudo        | cert-options-460863       | jenkins | v1.33.1 | 29 Jul 24 19:28 UTC | 29 Jul 24 19:28 UTC |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-460863                | cert-options-460863       | jenkins | v1.33.1 | 29 Jul 24 19:28 UTC | 29 Jul 24 19:28 UTC |
	| start   | -p stopped-upgrade-336676             | minikube                  | jenkins | v1.26.0 | 29 Jul 24 19:28 UTC | 29 Jul 24 19:29 UTC |
	|         | --memory=2200 --vm-driver=kvm2        |                           |         |         |                     |                     |
	|         |  --container-runtime=crio             |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-261955          | kubernetes-upgrade-261955 | jenkins | v1.33.1 | 29 Jul 24 19:29 UTC | 29 Jul 24 19:29 UTC |
	| start   | -p kubernetes-upgrade-261955          | kubernetes-upgrade-261955 | jenkins | v1.33.1 | 29 Jul 24 19:29 UTC | 29 Jul 24 19:29 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0   |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p running-upgrade-933580             | running-upgrade-933580    | jenkins | v1.33.1 | 29 Jul 24 19:29 UTC | 29 Jul 24 19:30 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-336676 stop           | minikube                  | jenkins | v1.26.0 | 29 Jul 24 19:29 UTC | 29 Jul 24 19:29 UTC |
	| start   | -p stopped-upgrade-336676             | stopped-upgrade-336676    | jenkins | v1.33.1 | 29 Jul 24 19:29 UTC | 29 Jul 24 19:30 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-261955          | kubernetes-upgrade-261955 | jenkins | v1.33.1 | 29 Jul 24 19:29 UTC |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-261955          | kubernetes-upgrade-261955 | jenkins | v1.33.1 | 29 Jul 24 19:29 UTC | 29 Jul 24 19:30 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0   |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p cert-expiration-183319             | cert-expiration-183319    | jenkins | v1.33.1 | 29 Jul 24 19:30 UTC | 29 Jul 24 19:30 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h               |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p stopped-upgrade-336676             | stopped-upgrade-336676    | jenkins | v1.33.1 | 29 Jul 24 19:30 UTC | 29 Jul 24 19:30 UTC |
	| start   | -p pause-464015 --memory=2048         | pause-464015              | jenkins | v1.33.1 | 29 Jul 24 19:30 UTC |                     |
	|         | --install-addons=false                |                           |         |         |                     |                     |
	|         | --wait=all --driver=kvm2              |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p running-upgrade-933580             | running-upgrade-933580    | jenkins | v1.33.1 | 29 Jul 24 19:30 UTC | 29 Jul 24 19:30 UTC |
	| start   | -p auto-184620 --memory=3072          | auto-184620               | jenkins | v1.33.1 | 29 Jul 24 19:30 UTC |                     |
	|         | --alsologtostderr --wait=true         |                           |         |         |                     |                     |
	|         | --wait-timeout=15m                    |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p cert-expiration-183319             | cert-expiration-183319    | jenkins | v1.33.1 | 29 Jul 24 19:30 UTC | 29 Jul 24 19:30 UTC |
	| start   | -p kindnet-184620                     | kindnet-184620            | jenkins | v1.33.1 | 29 Jul 24 19:30 UTC |                     |
	|         | --memory=3072                         |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true         |                           |         |         |                     |                     |
	|         | --wait-timeout=15m                    |                           |         |         |                     |                     |
	|         | --cni=kindnet --driver=kvm2           |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 19:30:45
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 19:30:45.195961 1105342 out.go:291] Setting OutFile to fd 1 ...
	I0729 19:30:45.196238 1105342 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 19:30:45.196248 1105342 out.go:304] Setting ErrFile to fd 2...
	I0729 19:30:45.196252 1105342 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 19:30:45.196493 1105342 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19312-1055011/.minikube/bin
	I0729 19:30:45.197102 1105342 out.go:298] Setting JSON to false
	I0729 19:30:45.198183 1105342 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":11597,"bootTime":1722269848,"procs":211,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 19:30:45.198246 1105342 start.go:139] virtualization: kvm guest
	I0729 19:30:45.200503 1105342 out.go:177] * [kindnet-184620] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 19:30:45.201739 1105342 out.go:177]   - MINIKUBE_LOCATION=19312
	I0729 19:30:45.201760 1105342 notify.go:220] Checking for updates...
	I0729 19:30:45.204169 1105342 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 19:30:45.205273 1105342 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19312-1055011/kubeconfig
	I0729 19:30:45.206327 1105342 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19312-1055011/.minikube
	I0729 19:30:45.207310 1105342 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 19:30:45.208441 1105342 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 19:30:45.210232 1105342 config.go:182] Loaded profile config "auto-184620": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 19:30:45.210367 1105342 config.go:182] Loaded profile config "kubernetes-upgrade-261955": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0729 19:30:45.210471 1105342 config.go:182] Loaded profile config "pause-464015": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 19:30:45.210566 1105342 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 19:30:45.246106 1105342 out.go:177] * Using the kvm2 driver based on user configuration
	I0729 19:30:45.247312 1105342 start.go:297] selected driver: kvm2
	I0729 19:30:45.247329 1105342 start.go:901] validating driver "kvm2" against <nil>
	I0729 19:30:45.247342 1105342 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 19:30:45.248316 1105342 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 19:30:45.248405 1105342 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19312-1055011/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 19:30:45.263450 1105342 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0729 19:30:45.263503 1105342 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 19:30:45.263722 1105342 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 19:30:45.263747 1105342 cni.go:84] Creating CNI manager for "kindnet"
	I0729 19:30:45.263752 1105342 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0729 19:30:45.263805 1105342 start.go:340] cluster config:
	{Name:kindnet-184620 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:kindnet-184620 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthS
ock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
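The block above dumps the generated cluster config for the new kindnet-184620 profile before it is persisted under the profile directory (the same pattern as the auto-184620 config.json save a little further down). As a rough illustration of reading such a profile file back, here is a minimal Go sketch that unmarshals only a handful of the fields printed above; the struct is a hand-picked illustrative subset rather than minikube's actual config type, the JSON key names are assumed to match the field names shown, and the default ~/.minikube location is assumed instead of the jenkins integration path used in this run.

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os"
    )

    // Illustrative subset of the cluster config fields printed in the log above.
    type kubernetesConfig struct {
    	KubernetesVersion string
    	ClusterName       string
    	ContainerRuntime  string
    	NetworkPlugin     string
    	ServiceCIDR       string
    }

    type clusterConfig struct {
    	Name             string
    	Driver           string
    	Memory           int
    	CPUs             int
    	DiskSize         int
    	KubernetesConfig kubernetesConfig
    }

    func main() {
    	// Default profile location assumed; adjust for the profile of interest.
    	raw, err := os.ReadFile(os.ExpandEnv("$HOME/.minikube/profiles/kindnet-184620/config.json"))
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	var cfg clusterConfig
    	if err := json.Unmarshal(raw, &cfg); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	fmt.Printf("%s: driver=%s mem=%dMB cpus=%d k8s=%s runtime=%s\n",
    		cfg.Name, cfg.Driver, cfg.Memory, cfg.CPUs,
    		cfg.KubernetesConfig.KubernetesVersion, cfg.KubernetesConfig.ContainerRuntime)
    }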
	I0729 19:30:45.263904 1105342 iso.go:125] acquiring lock: {Name:mk0af61c0fec1fd47930e548d03010a532c687b7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 19:30:45.265878 1105342 out.go:177] * Starting "kindnet-184620" primary control-plane node in "kindnet-184620" cluster
	I0729 19:30:41.257263 1104540 ssh_runner.go:235] Completed: sudo /usr/bin/crictl stop --timeout=10 a00fa01afdadb3604052e3dc1d5449fa858d6d3297945c561c408eb98a207240 d5d009697c8a67c0f1d05f0c81f8e8feee8c3dec344cbc15a0d85e02e2db9ae9 6a1f331c2a084aad84a7d4ab3bfaf58d5788bfd3dbc437057a7bcc29f52ef3a3 24ba621fa423e69b1a9426748eed270037ed3f09af4f0a906e3fb20da321f1a2 9c7c1af8f27fa88061afed3ef14704472a0005e540c884668375cc1761f01b70 f067d3f10c5fc23704253cb0f9a38b45d3336c493097fe4b0d0d05f510fbf916 64108efd83e9d0fbb1146b280f0bb34109dce11ecd1a69ded6a05f82a1f6190b e8dd3af336e3466722d4ce752f435c3cc37758e07d260a8cc71e172b872dcabc c27fedd1c101196a7896e6b128d8e794781ddcb4781c493eeecb061bf89a3e93 ffdf933463225ce8f572adc07361432d655f0419c580c27942c268967dd59cc3 73edd49da2e89228a2df0c3f9ea29773838aa939c171609beed04a3240e1da13 22c2a521e3501e9894de6c6abd25fa0a4e820ffa0d653bb72a1842949248d951 b8066a8d42cd2c317be9c5cfb5ff41e6f8c223d4360d3978699a4b543f355bb9 a53b6cde54af9f46243d4e586b02b6bfc7c8665eae6caf8a59a3d86c42f0ac2e 97e35f
af441e63ce273c2f9bc06faaed17fdcec21eb2b5a16c52580b98788233 a07f00047816f870edf35023a7c244c297d5fa27a153d229cb4e9f63d75ef186: (25.458856945s)
	W0729 19:30:41.257379 1104540 kubeadm.go:644] Failed to stop kube-system containers, port conflicts may arise: stop: crictl: sudo /usr/bin/crictl stop --timeout=10 a00fa01afdadb3604052e3dc1d5449fa858d6d3297945c561c408eb98a207240 d5d009697c8a67c0f1d05f0c81f8e8feee8c3dec344cbc15a0d85e02e2db9ae9 6a1f331c2a084aad84a7d4ab3bfaf58d5788bfd3dbc437057a7bcc29f52ef3a3 24ba621fa423e69b1a9426748eed270037ed3f09af4f0a906e3fb20da321f1a2 9c7c1af8f27fa88061afed3ef14704472a0005e540c884668375cc1761f01b70 f067d3f10c5fc23704253cb0f9a38b45d3336c493097fe4b0d0d05f510fbf916 64108efd83e9d0fbb1146b280f0bb34109dce11ecd1a69ded6a05f82a1f6190b e8dd3af336e3466722d4ce752f435c3cc37758e07d260a8cc71e172b872dcabc c27fedd1c101196a7896e6b128d8e794781ddcb4781c493eeecb061bf89a3e93 ffdf933463225ce8f572adc07361432d655f0419c580c27942c268967dd59cc3 73edd49da2e89228a2df0c3f9ea29773838aa939c171609beed04a3240e1da13 22c2a521e3501e9894de6c6abd25fa0a4e820ffa0d653bb72a1842949248d951 b8066a8d42cd2c317be9c5cfb5ff41e6f8c223d4360d3978699a4b543f355bb9 a53b6c
de54af9f46243d4e586b02b6bfc7c8665eae6caf8a59a3d86c42f0ac2e 97e35faf441e63ce273c2f9bc06faaed17fdcec21eb2b5a16c52580b98788233 a07f00047816f870edf35023a7c244c297d5fa27a153d229cb4e9f63d75ef186: Process exited with status 1
	stdout:
	a00fa01afdadb3604052e3dc1d5449fa858d6d3297945c561c408eb98a207240
	d5d009697c8a67c0f1d05f0c81f8e8feee8c3dec344cbc15a0d85e02e2db9ae9
	6a1f331c2a084aad84a7d4ab3bfaf58d5788bfd3dbc437057a7bcc29f52ef3a3
	24ba621fa423e69b1a9426748eed270037ed3f09af4f0a906e3fb20da321f1a2
	9c7c1af8f27fa88061afed3ef14704472a0005e540c884668375cc1761f01b70
	f067d3f10c5fc23704253cb0f9a38b45d3336c493097fe4b0d0d05f510fbf916
	64108efd83e9d0fbb1146b280f0bb34109dce11ecd1a69ded6a05f82a1f6190b
	e8dd3af336e3466722d4ce752f435c3cc37758e07d260a8cc71e172b872dcabc
	
	stderr:
	E0729 19:30:41.213301    3256 remote_runtime.go:366] "StopContainer from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c27fedd1c101196a7896e6b128d8e794781ddcb4781c493eeecb061bf89a3e93\": container with ID starting with c27fedd1c101196a7896e6b128d8e794781ddcb4781c493eeecb061bf89a3e93 not found: ID does not exist" containerID="c27fedd1c101196a7896e6b128d8e794781ddcb4781c493eeecb061bf89a3e93"
	time="2024-07-29T19:30:41Z" level=fatal msg="stopping the container \"c27fedd1c101196a7896e6b128d8e794781ddcb4781c493eeecb061bf89a3e93\": rpc error: code = NotFound desc = could not find container \"c27fedd1c101196a7896e6b128d8e794781ddcb4781c493eeecb061bf89a3e93\": container with ID starting with c27fedd1c101196a7896e6b128d8e794781ddcb4781c493eeecb061bf89a3e93 not found: ID does not exist"
	I0729 19:30:41.257462 1104540 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0729 19:30:41.297699 1104540 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 19:30:41.309193 1104540 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5647 Jul 29 19:29 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5658 Jul 29 19:29 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5755 Jul 29 19:29 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5606 Jul 29 19:29 /etc/kubernetes/scheduler.conf
	
	I0729 19:30:41.309274 1104540 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 19:30:41.318820 1104540 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 19:30:41.328188 1104540 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 19:30:41.338212 1104540 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0729 19:30:41.338286 1104540 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 19:30:41.348245 1104540 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 19:30:41.358141 1104540 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0729 19:30:41.358210 1104540 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 19:30:41.367641 1104540 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 19:30:41.377191 1104540 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 19:30:41.430976 1104540 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 19:30:42.394743 1104540 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0729 19:30:42.651859 1104540 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 19:30:42.721325 1104540 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
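The five Run lines above replay individual kubeadm init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the regenerated /var/tmp/minikube/kubeadm.yaml instead of performing a full kubeadm init. A rough Go sketch of that sequence, using only the command shape visible in the log; the PATH prefix is copied from the logged commands, but composing it with the local PATH is an assumption of the sketch:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    func main() {
    	// Phase order copied from the log lines above; each one is run as
    	// `kubeadm init phase <phase...> --config /var/tmp/minikube/kubeadm.yaml`.
    	phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
    	kubeadmPath := "PATH=/var/lib/minikube/binaries/v1.31.0-beta.0:" + os.Getenv("PATH")
    	for _, phase := range phases {
    		args := append([]string{"env", kubeadmPath, "kubeadm", "init", "phase"}, strings.Fields(phase)...)
    		args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
    		if out, err := exec.Command("sudo", args...).CombinedOutput(); err != nil {
    			fmt.Fprintf(os.Stderr, "phase %q failed: %v\n%s", phase, err, out)
    			os.Exit(1)
    		}
    		fmt.Println("completed phase:", phase)
    	}
    }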
	I0729 19:30:42.791756 1104540 api_server.go:52] waiting for apiserver process to appear ...
	I0729 19:30:42.791874 1104540 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:30:43.292729 1104540 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:30:43.792313 1104540 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:30:43.815697 1104540 api_server.go:72] duration metric: took 1.023936965s to wait for apiserver process to appear ...
	I0729 19:30:43.815736 1104540 api_server.go:88] waiting for apiserver healthz status ...
	I0729 19:30:43.815763 1104540 api_server.go:253] Checking apiserver healthz at https://192.168.39.144:8443/healthz ...
	I0729 19:30:43.816436 1104540 api_server.go:269] stopped: https://192.168.39.144:8443/healthz: Get "https://192.168.39.144:8443/healthz": dial tcp 192.168.39.144:8443: connect: connection refused
	I0729 19:30:44.316538 1104540 api_server.go:253] Checking apiserver healthz at https://192.168.39.144:8443/healthz ...
	I0729 19:30:40.907730 1104946 main.go:141] libmachine: (pause-464015) DBG | domain pause-464015 has defined MAC address 52:54:00:bf:64:0f in network mk-pause-464015
	I0729 19:30:40.909237 1104946 main.go:141] libmachine: (pause-464015) DBG | unable to find current IP address of domain pause-464015 in network mk-pause-464015
	I0729 19:30:40.909261 1104946 main.go:141] libmachine: (pause-464015) DBG | I0729 19:30:40.908239 1104968 retry.go:31] will retry after 2.812425248s: waiting for machine to come up
	I0729 19:30:43.724129 1104946 main.go:141] libmachine: (pause-464015) DBG | domain pause-464015 has defined MAC address 52:54:00:bf:64:0f in network mk-pause-464015
	I0729 19:30:43.724648 1104946 main.go:141] libmachine: (pause-464015) DBG | unable to find current IP address of domain pause-464015 in network mk-pause-464015
	I0729 19:30:43.724671 1104946 main.go:141] libmachine: (pause-464015) DBG | I0729 19:30:43.724591 1104968 retry.go:31] will retry after 2.719284237s: waiting for machine to come up
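The retry.go lines above poll libvirt for the pause-464015 VM's DHCP lease and sleep a slightly randomized interval between attempts ("will retry after 2.8s", then 2.7s). A minimal sketch of that kind of jittered retry loop; the 50% jitter factor and the attempt cap are invented for illustration and are not taken from the log:

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // retry keeps calling fn until it succeeds or attempts run out, sleeping a
    // jittered backoff in between, similar to the "will retry after Ns" lines above.
    func retry(attempts int, base time.Duration, fn func() error) error {
    	var err error
    	for i := 0; i < attempts; i++ {
    		if err = fn(); err == nil {
    			return nil
    		}
    		// Up to 50% jitter so concurrent waiters do not poll in lockstep (illustrative choice).
    		sleep := base + time.Duration(rand.Int63n(int64(base/2)+1))
    		fmt.Printf("will retry after %v: %v\n", sleep, err)
    		time.Sleep(sleep)
    	}
    	return err
    }

    func main() {
    	start := time.Now()
    	err := retry(5, 2*time.Second, func() error {
    		if time.Since(start) < 5*time.Second {
    			return errors.New("waiting for machine to come up")
    		}
    		return nil
    	})
    	fmt.Println("done:", err)
    }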
	I0729 19:30:40.946402 1105206 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 19:30:40.946456 1105206 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0729 19:30:40.946472 1105206 cache.go:56] Caching tarball of preloaded images
	I0729 19:30:40.946581 1105206 preload.go:172] Found /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 19:30:40.946595 1105206 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0729 19:30:40.946733 1105206 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/auto-184620/config.json ...
	I0729 19:30:40.946764 1105206 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/auto-184620/config.json: {Name:mk7b11748a199de5866552bdce92aaea9701e7df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 19:30:40.946974 1105206 start.go:360] acquireMachinesLock for auto-184620: {Name:mk0d8d947666df844b5fc2c0e0eebbfed69b4140 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 19:30:46.732623 1104540 api_server.go:279] https://192.168.39.144:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 19:30:46.732652 1104540 api_server.go:103] status: https://192.168.39.144:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 19:30:46.732666 1104540 api_server.go:253] Checking apiserver healthz at https://192.168.39.144:8443/healthz ...
	I0729 19:30:46.765395 1104540 api_server.go:279] https://192.168.39.144:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 19:30:46.765435 1104540 api_server.go:103] status: https://192.168.39.144:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 19:30:46.816560 1104540 api_server.go:253] Checking apiserver healthz at https://192.168.39.144:8443/healthz ...
	I0729 19:30:46.855761 1104540 api_server.go:279] https://192.168.39.144:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 19:30:46.855792 1104540 api_server.go:103] status: https://192.168.39.144:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 19:30:47.316038 1104540 api_server.go:253] Checking apiserver healthz at https://192.168.39.144:8443/healthz ...
	I0729 19:30:47.324009 1104540 api_server.go:279] https://192.168.39.144:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 19:30:47.324040 1104540 api_server.go:103] status: https://192.168.39.144:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 19:30:47.816669 1104540 api_server.go:253] Checking apiserver healthz at https://192.168.39.144:8443/healthz ...
	I0729 19:30:47.820756 1104540 api_server.go:279] https://192.168.39.144:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 19:30:47.820779 1104540 api_server.go:103] status: https://192.168.39.144:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 19:30:48.316322 1104540 api_server.go:253] Checking apiserver healthz at https://192.168.39.144:8443/healthz ...
	I0729 19:30:48.320677 1104540 api_server.go:279] https://192.168.39.144:8443/healthz returned 200:
	ok
	I0729 19:30:48.327296 1104540 api_server.go:141] control plane version: v1.31.0-beta.0
	I0729 19:30:48.327340 1104540 api_server.go:131] duration metric: took 4.511595202s to wait for apiserver health ...
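The healthz wait above probes the apiserver anonymously over HTTPS, treats the 403 and 500 responses as "not ready yet", and stops as soon as /healthz returns 200 ok. A small sketch of that polling loop using only the standard library; skipping certificate verification is an assumption of the sketch (a real client would trust the cluster CA), and the endpoint is copied from the log:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		// Skip verification purely for the sketch; a real client would trust the cluster CA.
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	url := "https://192.168.39.144:8443/healthz" // endpoint taken from the log above
    	for {
    		resp, err := client.Get(url)
    		if err != nil {
    			fmt.Println("stopped:", err) // e.g. connection refused while the apiserver restarts
    		} else {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				fmt.Println("healthz:", string(body)) // "ok"
    				return
    			}
    			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    }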
	I0729 19:30:48.327350 1104540 cni.go:84] Creating CNI manager for ""
	I0729 19:30:48.327356 1104540 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 19:30:48.329378 1104540 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 19:30:48.330702 1104540 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 19:30:48.344168 1104540 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0729 19:30:48.363218 1104540 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 19:30:48.373071 1104540 system_pods.go:59] 8 kube-system pods found
	I0729 19:30:48.373114 1104540 system_pods.go:61] "coredns-5cfdc65f69-2nz9d" [b9699e3f-8f2a-42bb-9a01-b1b7b384ac1c] Running
	I0729 19:30:48.373128 1104540 system_pods.go:61] "coredns-5cfdc65f69-bfchc" [384931ec-91fb-4c6c-9699-6de33c1e93df] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0729 19:30:48.373138 1104540 system_pods.go:61] "etcd-kubernetes-upgrade-261955" [a4a04ed1-e56f-4755-937b-ad59bf039073] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0729 19:30:48.373151 1104540 system_pods.go:61] "kube-apiserver-kubernetes-upgrade-261955" [cf39c9e3-41a2-489c-bdbe-43d5d68be747] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0729 19:30:48.373159 1104540 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-261955" [b83341af-5f1f-4ecb-b319-734fe0050738] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0729 19:30:48.373166 1104540 system_pods.go:61] "kube-proxy-4ql2b" [960a2030-354a-412c-ab1f-83d63042c16c] Running
	I0729 19:30:48.373176 1104540 system_pods.go:61] "kube-scheduler-kubernetes-upgrade-261955" [bc00a3fb-16d5-43f1-bbcc-02de6691cc1f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0729 19:30:48.373181 1104540 system_pods.go:61] "storage-provisioner" [edc791b1-4081-48da-b92e-5dbd89f75e8c] Running
	I0729 19:30:48.373190 1104540 system_pods.go:74] duration metric: took 9.952003ms to wait for pod list to return data ...
	I0729 19:30:48.373200 1104540 node_conditions.go:102] verifying NodePressure condition ...
	I0729 19:30:48.376837 1104540 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 19:30:48.376860 1104540 node_conditions.go:123] node cpu capacity is 2
	I0729 19:30:48.376869 1104540 node_conditions.go:105] duration metric: took 3.661196ms to run NodePressure ...
	I0729 19:30:48.376889 1104540 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 19:30:48.684231 1104540 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0729 19:30:48.696022 1104540 ops.go:34] apiserver oom_adj: -16
	I0729 19:30:48.696048 1104540 kubeadm.go:597] duration metric: took 33.003521934s to restartPrimaryControlPlane
	I0729 19:30:48.696058 1104540 kubeadm.go:394] duration metric: took 33.167815667s to StartCluster
	I0729 19:30:48.696081 1104540 settings.go:142] acquiring lock: {Name:mk8657322241b3b1f65443d6cee1b2ccb99f315e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 19:30:48.696151 1104540 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19312-1055011/kubeconfig
	I0729 19:30:48.696997 1104540 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-1055011/kubeconfig: {Name:mkf834b33d9b214f3561db5b8f8958d26700afbb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 19:30:48.697276 1104540 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.144 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 19:30:48.697368 1104540 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0729 19:30:48.697439 1104540 config.go:182] Loaded profile config "kubernetes-upgrade-261955": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0729 19:30:48.697441 1104540 addons.go:69] Setting storage-provisioner=true in profile "kubernetes-upgrade-261955"
	I0729 19:30:48.697517 1104540 addons.go:234] Setting addon storage-provisioner=true in "kubernetes-upgrade-261955"
	W0729 19:30:48.697525 1104540 addons.go:243] addon storage-provisioner should already be in state true
	I0729 19:30:48.697555 1104540 host.go:66] Checking if "kubernetes-upgrade-261955" exists ...
	I0729 19:30:48.697450 1104540 addons.go:69] Setting default-storageclass=true in profile "kubernetes-upgrade-261955"
	I0729 19:30:48.697712 1104540 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kubernetes-upgrade-261955"
	I0729 19:30:48.697908 1104540 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:30:48.697943 1104540 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:30:48.698051 1104540 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:30:48.698089 1104540 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:30:48.698891 1104540 out.go:177] * Verifying Kubernetes components...
	I0729 19:30:48.700034 1104540 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 19:30:48.713355 1104540 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42901
	I0729 19:30:48.713778 1104540 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:30:48.714341 1104540 main.go:141] libmachine: Using API Version  1
	I0729 19:30:48.714376 1104540 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:30:48.714747 1104540 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:30:48.715382 1104540 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:30:48.715413 1104540 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:30:48.715924 1104540 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45899
	I0729 19:30:48.716366 1104540 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:30:48.716816 1104540 main.go:141] libmachine: Using API Version  1
	I0729 19:30:48.716847 1104540 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:30:48.717210 1104540 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:30:48.717422 1104540 main.go:141] libmachine: (kubernetes-upgrade-261955) Calling .GetState
	I0729 19:30:48.719947 1104540 kapi.go:59] client config for kubernetes-upgrade-261955: &rest.Config{Host:"https://192.168.39.144:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/kubernetes-upgrade-261955/client.crt", KeyFile:"/home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/kubernetes-upgrade-261955/client.key", CAFile:"/home/jenkins/minikube-integration/19312-1055011/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uin
t8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d03460), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0729 19:30:48.720300 1104540 addons.go:234] Setting addon default-storageclass=true in "kubernetes-upgrade-261955"
	W0729 19:30:48.720320 1104540 addons.go:243] addon default-storageclass should already be in state true
	I0729 19:30:48.720357 1104540 host.go:66] Checking if "kubernetes-upgrade-261955" exists ...
	I0729 19:30:48.720739 1104540 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:30:48.720774 1104540 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:30:48.731021 1104540 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43967
	I0729 19:30:48.731544 1104540 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:30:48.732057 1104540 main.go:141] libmachine: Using API Version  1
	I0729 19:30:48.732076 1104540 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:30:48.732493 1104540 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:30:48.732682 1104540 main.go:141] libmachine: (kubernetes-upgrade-261955) Calling .GetState
	I0729 19:30:48.734732 1104540 main.go:141] libmachine: (kubernetes-upgrade-261955) Calling .DriverName
	I0729 19:30:48.736835 1104540 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 19:30:48.737634 1104540 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44933
	I0729 19:30:48.738010 1104540 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:30:48.738315 1104540 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 19:30:48.738334 1104540 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0729 19:30:48.738354 1104540 main.go:141] libmachine: (kubernetes-upgrade-261955) Calling .GetSSHHostname
	I0729 19:30:48.738532 1104540 main.go:141] libmachine: Using API Version  1
	I0729 19:30:48.738543 1104540 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:30:48.738898 1104540 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:30:48.739686 1104540 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:30:48.739722 1104540 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:30:48.741623 1104540 main.go:141] libmachine: (kubernetes-upgrade-261955) DBG | domain kubernetes-upgrade-261955 has defined MAC address 52:54:00:00:4f:43 in network mk-kubernetes-upgrade-261955
	I0729 19:30:48.742015 1104540 main.go:141] libmachine: (kubernetes-upgrade-261955) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:4f:43", ip: ""} in network mk-kubernetes-upgrade-261955: {Iface:virbr1 ExpiryTime:2024-07-29 20:24:52 +0000 UTC Type:0 Mac:52:54:00:00:4f:43 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:kubernetes-upgrade-261955 Clientid:01:52:54:00:00:4f:43}
	I0729 19:30:48.742037 1104540 main.go:141] libmachine: (kubernetes-upgrade-261955) DBG | domain kubernetes-upgrade-261955 has defined IP address 192.168.39.144 and MAC address 52:54:00:00:4f:43 in network mk-kubernetes-upgrade-261955
	I0729 19:30:48.742127 1104540 main.go:141] libmachine: (kubernetes-upgrade-261955) Calling .GetSSHPort
	I0729 19:30:48.742304 1104540 main.go:141] libmachine: (kubernetes-upgrade-261955) Calling .GetSSHKeyPath
	I0729 19:30:48.742492 1104540 main.go:141] libmachine: (kubernetes-upgrade-261955) Calling .GetSSHUsername
	I0729 19:30:48.742672 1104540 sshutil.go:53] new ssh client: &{IP:192.168.39.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/kubernetes-upgrade-261955/id_rsa Username:docker}
	I0729 19:30:48.755336 1104540 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37419
	I0729 19:30:48.755778 1104540 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:30:48.756212 1104540 main.go:141] libmachine: Using API Version  1
	I0729 19:30:48.756224 1104540 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:30:48.756664 1104540 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:30:48.756830 1104540 main.go:141] libmachine: (kubernetes-upgrade-261955) Calling .GetState
	I0729 19:30:48.758225 1104540 main.go:141] libmachine: (kubernetes-upgrade-261955) Calling .DriverName
	I0729 19:30:48.758446 1104540 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0729 19:30:48.758458 1104540 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0729 19:30:48.758472 1104540 main.go:141] libmachine: (kubernetes-upgrade-261955) Calling .GetSSHHostname
	I0729 19:30:48.761592 1104540 main.go:141] libmachine: (kubernetes-upgrade-261955) DBG | domain kubernetes-upgrade-261955 has defined MAC address 52:54:00:00:4f:43 in network mk-kubernetes-upgrade-261955
	I0729 19:30:48.762007 1104540 main.go:141] libmachine: (kubernetes-upgrade-261955) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:4f:43", ip: ""} in network mk-kubernetes-upgrade-261955: {Iface:virbr1 ExpiryTime:2024-07-29 20:24:52 +0000 UTC Type:0 Mac:52:54:00:00:4f:43 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:kubernetes-upgrade-261955 Clientid:01:52:54:00:00:4f:43}
	I0729 19:30:48.762035 1104540 main.go:141] libmachine: (kubernetes-upgrade-261955) DBG | domain kubernetes-upgrade-261955 has defined IP address 192.168.39.144 and MAC address 52:54:00:00:4f:43 in network mk-kubernetes-upgrade-261955
	I0729 19:30:48.762236 1104540 main.go:141] libmachine: (kubernetes-upgrade-261955) Calling .GetSSHPort
	I0729 19:30:48.762421 1104540 main.go:141] libmachine: (kubernetes-upgrade-261955) Calling .GetSSHKeyPath
	I0729 19:30:48.762573 1104540 main.go:141] libmachine: (kubernetes-upgrade-261955) Calling .GetSSHUsername
	I0729 19:30:48.762725 1104540 sshutil.go:53] new ssh client: &{IP:192.168.39.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/kubernetes-upgrade-261955/id_rsa Username:docker}
	I0729 19:30:48.925489 1104540 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 19:30:48.947974 1104540 api_server.go:52] waiting for apiserver process to appear ...
	I0729 19:30:48.948067 1104540 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:30:48.962751 1104540 api_server.go:72] duration metric: took 265.424706ms to wait for apiserver process to appear ...
	I0729 19:30:48.962783 1104540 api_server.go:88] waiting for apiserver healthz status ...
	I0729 19:30:48.962822 1104540 api_server.go:253] Checking apiserver healthz at https://192.168.39.144:8443/healthz ...
	I0729 19:30:48.971534 1104540 api_server.go:279] https://192.168.39.144:8443/healthz returned 200:
	ok
	I0729 19:30:48.972478 1104540 api_server.go:141] control plane version: v1.31.0-beta.0
	I0729 19:30:48.972498 1104540 api_server.go:131] duration metric: took 9.709648ms to wait for apiserver health ...
	I0729 19:30:48.972505 1104540 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 19:30:48.979769 1104540 system_pods.go:59] 8 kube-system pods found
	I0729 19:30:48.979795 1104540 system_pods.go:61] "coredns-5cfdc65f69-2nz9d" [b9699e3f-8f2a-42bb-9a01-b1b7b384ac1c] Running
	I0729 19:30:48.979805 1104540 system_pods.go:61] "coredns-5cfdc65f69-bfchc" [384931ec-91fb-4c6c-9699-6de33c1e93df] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0729 19:30:48.979815 1104540 system_pods.go:61] "etcd-kubernetes-upgrade-261955" [a4a04ed1-e56f-4755-937b-ad59bf039073] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0729 19:30:48.979827 1104540 system_pods.go:61] "kube-apiserver-kubernetes-upgrade-261955" [cf39c9e3-41a2-489c-bdbe-43d5d68be747] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0729 19:30:48.979844 1104540 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-261955" [b83341af-5f1f-4ecb-b319-734fe0050738] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0729 19:30:48.979852 1104540 system_pods.go:61] "kube-proxy-4ql2b" [960a2030-354a-412c-ab1f-83d63042c16c] Running
	I0729 19:30:48.979859 1104540 system_pods.go:61] "kube-scheduler-kubernetes-upgrade-261955" [bc00a3fb-16d5-43f1-bbcc-02de6691cc1f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0729 19:30:48.979867 1104540 system_pods.go:61] "storage-provisioner" [edc791b1-4081-48da-b92e-5dbd89f75e8c] Running
	I0729 19:30:48.979878 1104540 system_pods.go:74] duration metric: took 7.367058ms to wait for pod list to return data ...
	I0729 19:30:48.979893 1104540 kubeadm.go:582] duration metric: took 282.583622ms to wait for: map[apiserver:true system_pods:true]
	I0729 19:30:48.979908 1104540 node_conditions.go:102] verifying NodePressure condition ...
	I0729 19:30:48.982675 1104540 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 19:30:48.982695 1104540 node_conditions.go:123] node cpu capacity is 2
	I0729 19:30:48.982705 1104540 node_conditions.go:105] duration metric: took 2.792168ms to run NodePressure ...
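The NodePressure verification above reads each node's reported capacity (ephemeral storage 17734596Ki, 2 CPUs) from the API. A hedged client-go sketch of the same lookup; the k8s.io/client-go packages are the standard ones, but the kubeconfig path is an assumption and this is not the helper minikube itself uses:

    package main

    import (
    	"context"
    	"fmt"
    	"os"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Kubeconfig path assumed; the test run uses the jenkins integration kubeconfig.
    	cfg, err := clientcmd.BuildConfigFromFlags("", os.ExpandEnv("$HOME/.kube/config"))
    	if err != nil {
    		panic(err)
    	}
    	clientset, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	nodes, err := clientset.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
    	if err != nil {
    		panic(err)
    	}
    	for _, n := range nodes.Items {
    		// Same fields the log reports: ephemeral storage and CPU capacity.
    		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
    		cpu := n.Status.Capacity[corev1.ResourceCPU]
    		fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name, storage.String(), cpu.String())
    	}
    }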
	I0729 19:30:48.982719 1104540 start.go:241] waiting for startup goroutines ...
	I0729 19:30:49.020234 1104540 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 19:30:49.042864 1104540 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0729 19:30:49.696906 1104540 main.go:141] libmachine: Making call to close driver server
	I0729 19:30:49.696935 1104540 main.go:141] libmachine: (kubernetes-upgrade-261955) Calling .Close
	I0729 19:30:49.696969 1104540 main.go:141] libmachine: Making call to close driver server
	I0729 19:30:49.696980 1104540 main.go:141] libmachine: (kubernetes-upgrade-261955) Calling .Close
	I0729 19:30:49.697278 1104540 main.go:141] libmachine: Successfully made call to close driver server
	I0729 19:30:49.697302 1104540 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 19:30:49.697314 1104540 main.go:141] libmachine: Making call to close driver server
	I0729 19:30:49.697322 1104540 main.go:141] libmachine: (kubernetes-upgrade-261955) Calling .Close
	I0729 19:30:49.697345 1104540 main.go:141] libmachine: (kubernetes-upgrade-261955) DBG | Closing plugin on server side
	I0729 19:30:49.697355 1104540 main.go:141] libmachine: Successfully made call to close driver server
	I0729 19:30:49.697372 1104540 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 19:30:49.697351 1104540 main.go:141] libmachine: (kubernetes-upgrade-261955) DBG | Closing plugin on server side
	I0729 19:30:49.697380 1104540 main.go:141] libmachine: Making call to close driver server
	I0729 19:30:49.697497 1104540 main.go:141] libmachine: (kubernetes-upgrade-261955) Calling .Close
	I0729 19:30:49.697517 1104540 main.go:141] libmachine: Successfully made call to close driver server
	I0729 19:30:49.697537 1104540 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 19:30:49.697701 1104540 main.go:141] libmachine: Successfully made call to close driver server
	I0729 19:30:49.697719 1104540 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 19:30:49.697736 1104540 main.go:141] libmachine: (kubernetes-upgrade-261955) DBG | Closing plugin on server side
	I0729 19:30:49.704759 1104540 main.go:141] libmachine: Making call to close driver server
	I0729 19:30:49.704791 1104540 main.go:141] libmachine: (kubernetes-upgrade-261955) Calling .Close
	I0729 19:30:49.705031 1104540 main.go:141] libmachine: Successfully made call to close driver server
	I0729 19:30:49.705050 1104540 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 19:30:49.705052 1104540 main.go:141] libmachine: (kubernetes-upgrade-261955) DBG | Closing plugin on server side
	I0729 19:30:49.707612 1104540 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0729 19:30:49.708663 1104540 addons.go:510] duration metric: took 1.011294434s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0729 19:30:49.708703 1104540 start.go:246] waiting for cluster config update ...
	I0729 19:30:49.708718 1104540 start.go:255] writing updated cluster config ...
	I0729 19:30:49.709013 1104540 ssh_runner.go:195] Run: rm -f paused
	I0729 19:30:49.760974 1104540 start.go:600] kubectl: 1.30.3, cluster: 1.31.0-beta.0 (minor skew: 1)
	I0729 19:30:49.762567 1104540 out.go:177] * Done! kubectl is now configured to use "kubernetes-upgrade-261955" cluster and "default" namespace by default
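	Editor's note: the "minor skew: 1" figure two lines up comes from comparing the host kubectl version (1.30.3) with the cluster version (1.31.0-beta.0). Below is a minimal, hypothetical Go sketch of that comparison; the helper names are illustrative and are not taken from minikube's source. The check reduces to parsing the minor component of each version string and flagging a difference greater than one, since kubectl is only supported within one minor version of the apiserver.

	package main

	import (
		"fmt"
		"strconv"
		"strings"
	)

	// minorOf extracts the minor version from a string such as "1.30.3" or
	// "1.31.0-beta.0". Illustrative helper, not minikube's actual parser.
	func minorOf(v string) (int, error) {
		parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
		if len(parts) < 2 {
			return 0, fmt.Errorf("unexpected version %q", v)
		}
		return strconv.Atoi(parts[1])
	}

	func main() {
		kubectlVersion := "1.30.3"        // host kubectl, from the log above
		clusterVersion := "1.31.0-beta.0" // cluster apiserver, from the log above

		km, err1 := minorOf(kubectlVersion)
		cm, err2 := minorOf(clusterVersion)
		if err1 != nil || err2 != nil {
			fmt.Println("could not parse versions:", err1, err2)
			return
		}

		skew := cm - km
		if skew < 0 {
			skew = -skew
		}
		fmt.Printf("kubectl: %s, cluster: %s (minor skew: %d)\n", kubectlVersion, clusterVersion, skew)
		if skew > 1 {
			// kubectl is only supported within +/-1 minor version of kube-apiserver.
			fmt.Println("warning: kubectl is more than one minor version away from the cluster")
		}
	}

	For the two versions in this log the skew evaluates to 1, which is why the run only prints the informational line rather than a version-skew warning.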
	
	
	==> CRI-O <==
	Jul 29 19:30:50 kubernetes-upgrade-261955 crio[2287]: time="2024-07-29 19:30:50.546283474Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722281450546258244,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125257,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8463732c-8667-4f27-92f1-591bab839f00 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 19:30:50 kubernetes-upgrade-261955 crio[2287]: time="2024-07-29 19:30:50.546681448Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9ce4eed7-1092-463c-844a-f5aff20b23c8 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 19:30:50 kubernetes-upgrade-261955 crio[2287]: time="2024-07-29 19:30:50.546733834Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9ce4eed7-1092-463c-844a-f5aff20b23c8 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 19:30:50 kubernetes-upgrade-261955 crio[2287]: time="2024-07-29 19:30:50.547128192Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4291806caef96bd88b59395c8354db7b6acd0d67f25bf5c036bd02936a2496d4,PodSandboxId:0bccad3e03693fad6305b5a915c48219be7ff1164f551e1383691960e6b76f1e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:1722281447081587947,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4ql2b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 960a2030-354a-412c-ab1f-83d63042c16c,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.termin
ationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11e8b46860a342cb12168f4e0cc2b43e98986f43a8e705a323008dfa4ccb93ba,PodSandboxId:6616b24d57519b2db148d5da0a4b3c8a4aa0d399ab03693a33a2fd4b263a1f1f,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722281447106277771,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-bfchc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 384931ec-91fb-4c6c-9699-6de33c1e93df,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\
":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:342c943d2b39e35bd4e6fe1ce6ef79ea19278afb2ae5d5b0925462acf8fe108a,PodSandboxId:c397c559097111521d44359f24789c4aab02628c6a27811a614b0e031739b1b3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722281447094866994,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: edc791b1-4081-48da-b92e-5dbd89f75e8c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8defda06b83537d6f486d23c2ba2bcbf4b4a91b8d646b8c43594e267e545491,PodSandboxId:0984307be3b8b2e4668fa1e4de476fed552b82bfda8dc565bd4833ba26e24677,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1722281443431612600,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-261955,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dbac91d755eec507cebe5ff6b2
5217b7,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f02995bdeebfaf92fadc0a87af9aa91a74a88eb4e2495f881e86c7a5f7fcb079,PodSandboxId:03844bda210b913ca4205d8d3ca7f08a262f6508e66fae0c7dbb71f9c74e0513,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1722281443454355669,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-261955,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 001d531bb66951d5c2d45bc8ea18a544
,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5573f7190a77aa31efd03c77ebe67df60037ca923085bbc7566df0f34ccb3c0,PodSandboxId:612cf6f7f7a456855dc9b7e18df95db8bb3261224c39ab2a3e4dd9bd4911ff71,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1722281443456306681,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-261955,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ebea8cc19b9f2e4f6e750007f95d91f,},An
notations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:935586b3c0b77a4d547aeae43259230f6fcc56e3dca9fe677c2c722fa46e037c,PodSandboxId:396aa0e20fad06fd8f7cab87ff664a6e9c6251cb5200fe834933e5c6756c1690,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1722281443423188075,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-261955,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed46068d7092c44
e403c51173b0299d3,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3c57c579305e9dd16655068e0864a84fb4f09149a673dbe4741726e1fe9508f,PodSandboxId:39a92ddb6d76c8a634acc47eb50f4bed88f2d9be77d17dbd54d03eaa75ec7acb,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722281435956743927,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-2nz9d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b9699e3f-8f2a-42bb-9a01-b1b7b384ac1c,},Annotations
:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a1f331c2a084aad84a7d4ab3bfaf58d5788bfd3dbc437057a7bcc29f52ef3a3,PodSandboxId:0bccad3e03693fad6305b5a915c48219be7ff1164f551e1383691960e6b76f1e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_EXITED,CreatedAt:1722281413684472785,Labels:map[string]string{io.k
ubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4ql2b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 960a2030-354a-412c-ab1f-83d63042c16c,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a00fa01afdadb3604052e3dc1d5449fa858d6d3297945c561c408eb98a207240,PodSandboxId:39a92ddb6d76c8a634acc47eb50f4bed88f2d9be77d17dbd54d03eaa75ec7acb,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722281414701247400,Labels:map[string]string{io.kubernetes.container.name: coredns,io.
kubernetes.pod.name: coredns-5cfdc65f69-2nz9d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b9699e3f-8f2a-42bb-9a01-b1b7b384ac1c,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5d009697c8a67c0f1d05f0c81f8e8feee8c3dec344cbc15a0d85e02e2db9ae9,PodSandboxId:6616b24d57519b2db148d5da0a4b3c8a4aa0d399ab03693a33a2fd4b263a1f1f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,}
,ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722281414648936793,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-bfchc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 384931ec-91fb-4c6c-9699-6de33c1e93df,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24ba621fa423e69b1a9426748eed270037ed3f09af4f0a906e3fb20da321f1a2,PodSandboxId:0984307be3b8b2e4668fa1e4de476fed552b82bfda8dc565bd4833ba26e24677,Metadata:&Contai
nerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_EXITED,CreatedAt:1722281413616200851,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-261955,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dbac91d755eec507cebe5ff6b25217b7,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c7c1af8f27fa88061afed3ef14704472a0005e540c884668375cc1761f01b70,PodSandboxId:03844bda210b913ca4205d8d3ca7f08a262f6508e66fae0c7dbb71f9c74e0513,Metadata:&ContainerMetadata{Name:kube-apiserver,Attem
pt:1,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_EXITED,CreatedAt:1722281413579436579,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-261955,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 001d531bb66951d5c2d45bc8ea18a544,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:64108efd83e9d0fbb1146b280f0bb34109dce11ecd1a69ded6a05f82a1f6190b,PodSandboxId:612cf6f7f7a456855dc9b7e18df95db8bb3261224c39ab2a3e4dd9bd4911ff71,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,}
,Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_EXITED,CreatedAt:1722281413514775175,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-261955,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ebea8cc19b9f2e4f6e750007f95d91f,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f067d3f10c5fc23704253cb0f9a38b45d3336c493097fe4b0d0d05f510fbf916,PodSandboxId:396aa0e20fad06fd8f7cab87ff664a6e9c6251cb5200fe834933e5c6756c1690,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:
1,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_EXITED,CreatedAt:1722281413543733005,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-261955,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed46068d7092c44e403c51173b0299d3,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8dd3af336e3466722d4ce752f435c3cc37758e07d260a8cc71e172b872dcabc,PodSandboxId:c397c559097111521d44359f24789c4aab02628c6a27811a614b0e031739b1b3,Metadata:&ContainerMetadata{Name:storage-pro
visioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722281413446409387,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: edc791b1-4081-48da-b92e-5dbd89f75e8c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9ce4eed7-1092-463c-844a-f5aff20b23c8 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 19:30:50 kubernetes-upgrade-261955 crio[2287]: time="2024-07-29 19:30:50.589463793Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0072d48c-14cc-4d9b-8344-c5e726554004 name=/runtime.v1.RuntimeService/Version
	Jul 29 19:30:50 kubernetes-upgrade-261955 crio[2287]: time="2024-07-29 19:30:50.589552036Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0072d48c-14cc-4d9b-8344-c5e726554004 name=/runtime.v1.RuntimeService/Version
	Jul 29 19:30:50 kubernetes-upgrade-261955 crio[2287]: time="2024-07-29 19:30:50.590760133Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8ac36bfa-1267-4667-9986-5eb824b96b9f name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 19:30:50 kubernetes-upgrade-261955 crio[2287]: time="2024-07-29 19:30:50.591429746Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722281450591345164,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125257,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8ac36bfa-1267-4667-9986-5eb824b96b9f name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 19:30:50 kubernetes-upgrade-261955 crio[2287]: time="2024-07-29 19:30:50.592685284Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=dfdea2b4-8272-4bd1-b769-f72713feea43 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 19:30:50 kubernetes-upgrade-261955 crio[2287]: time="2024-07-29 19:30:50.592758768Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=dfdea2b4-8272-4bd1-b769-f72713feea43 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 19:30:50 kubernetes-upgrade-261955 crio[2287]: time="2024-07-29 19:30:50.593172056Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4291806caef96bd88b59395c8354db7b6acd0d67f25bf5c036bd02936a2496d4,PodSandboxId:0bccad3e03693fad6305b5a915c48219be7ff1164f551e1383691960e6b76f1e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:1722281447081587947,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4ql2b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 960a2030-354a-412c-ab1f-83d63042c16c,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.termin
ationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11e8b46860a342cb12168f4e0cc2b43e98986f43a8e705a323008dfa4ccb93ba,PodSandboxId:6616b24d57519b2db148d5da0a4b3c8a4aa0d399ab03693a33a2fd4b263a1f1f,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722281447106277771,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-bfchc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 384931ec-91fb-4c6c-9699-6de33c1e93df,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\
":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:342c943d2b39e35bd4e6fe1ce6ef79ea19278afb2ae5d5b0925462acf8fe108a,PodSandboxId:c397c559097111521d44359f24789c4aab02628c6a27811a614b0e031739b1b3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722281447094866994,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: edc791b1-4081-48da-b92e-5dbd89f75e8c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8defda06b83537d6f486d23c2ba2bcbf4b4a91b8d646b8c43594e267e545491,PodSandboxId:0984307be3b8b2e4668fa1e4de476fed552b82bfda8dc565bd4833ba26e24677,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1722281443431612600,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-261955,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dbac91d755eec507cebe5ff6b2
5217b7,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f02995bdeebfaf92fadc0a87af9aa91a74a88eb4e2495f881e86c7a5f7fcb079,PodSandboxId:03844bda210b913ca4205d8d3ca7f08a262f6508e66fae0c7dbb71f9c74e0513,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1722281443454355669,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-261955,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 001d531bb66951d5c2d45bc8ea18a544
,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5573f7190a77aa31efd03c77ebe67df60037ca923085bbc7566df0f34ccb3c0,PodSandboxId:612cf6f7f7a456855dc9b7e18df95db8bb3261224c39ab2a3e4dd9bd4911ff71,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1722281443456306681,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-261955,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ebea8cc19b9f2e4f6e750007f95d91f,},An
notations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:935586b3c0b77a4d547aeae43259230f6fcc56e3dca9fe677c2c722fa46e037c,PodSandboxId:396aa0e20fad06fd8f7cab87ff664a6e9c6251cb5200fe834933e5c6756c1690,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1722281443423188075,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-261955,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed46068d7092c44
e403c51173b0299d3,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3c57c579305e9dd16655068e0864a84fb4f09149a673dbe4741726e1fe9508f,PodSandboxId:39a92ddb6d76c8a634acc47eb50f4bed88f2d9be77d17dbd54d03eaa75ec7acb,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722281435956743927,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-2nz9d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b9699e3f-8f2a-42bb-9a01-b1b7b384ac1c,},Annotations
:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a1f331c2a084aad84a7d4ab3bfaf58d5788bfd3dbc437057a7bcc29f52ef3a3,PodSandboxId:0bccad3e03693fad6305b5a915c48219be7ff1164f551e1383691960e6b76f1e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_EXITED,CreatedAt:1722281413684472785,Labels:map[string]string{io.k
ubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4ql2b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 960a2030-354a-412c-ab1f-83d63042c16c,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a00fa01afdadb3604052e3dc1d5449fa858d6d3297945c561c408eb98a207240,PodSandboxId:39a92ddb6d76c8a634acc47eb50f4bed88f2d9be77d17dbd54d03eaa75ec7acb,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722281414701247400,Labels:map[string]string{io.kubernetes.container.name: coredns,io.
kubernetes.pod.name: coredns-5cfdc65f69-2nz9d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b9699e3f-8f2a-42bb-9a01-b1b7b384ac1c,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5d009697c8a67c0f1d05f0c81f8e8feee8c3dec344cbc15a0d85e02e2db9ae9,PodSandboxId:6616b24d57519b2db148d5da0a4b3c8a4aa0d399ab03693a33a2fd4b263a1f1f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,}
,ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722281414648936793,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-bfchc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 384931ec-91fb-4c6c-9699-6de33c1e93df,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24ba621fa423e69b1a9426748eed270037ed3f09af4f0a906e3fb20da321f1a2,PodSandboxId:0984307be3b8b2e4668fa1e4de476fed552b82bfda8dc565bd4833ba26e24677,Metadata:&Contai
nerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_EXITED,CreatedAt:1722281413616200851,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-261955,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dbac91d755eec507cebe5ff6b25217b7,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c7c1af8f27fa88061afed3ef14704472a0005e540c884668375cc1761f01b70,PodSandboxId:03844bda210b913ca4205d8d3ca7f08a262f6508e66fae0c7dbb71f9c74e0513,Metadata:&ContainerMetadata{Name:kube-apiserver,Attem
pt:1,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_EXITED,CreatedAt:1722281413579436579,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-261955,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 001d531bb66951d5c2d45bc8ea18a544,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:64108efd83e9d0fbb1146b280f0bb34109dce11ecd1a69ded6a05f82a1f6190b,PodSandboxId:612cf6f7f7a456855dc9b7e18df95db8bb3261224c39ab2a3e4dd9bd4911ff71,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,}
,Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_EXITED,CreatedAt:1722281413514775175,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-261955,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ebea8cc19b9f2e4f6e750007f95d91f,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f067d3f10c5fc23704253cb0f9a38b45d3336c493097fe4b0d0d05f510fbf916,PodSandboxId:396aa0e20fad06fd8f7cab87ff664a6e9c6251cb5200fe834933e5c6756c1690,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:
1,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_EXITED,CreatedAt:1722281413543733005,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-261955,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed46068d7092c44e403c51173b0299d3,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8dd3af336e3466722d4ce752f435c3cc37758e07d260a8cc71e172b872dcabc,PodSandboxId:c397c559097111521d44359f24789c4aab02628c6a27811a614b0e031739b1b3,Metadata:&ContainerMetadata{Name:storage-pro
visioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722281413446409387,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: edc791b1-4081-48da-b92e-5dbd89f75e8c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=dfdea2b4-8272-4bd1-b769-f72713feea43 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 19:30:50 kubernetes-upgrade-261955 crio[2287]: time="2024-07-29 19:30:50.639086671Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=45c61a4c-54c1-44d3-ba2f-b069182fb009 name=/runtime.v1.RuntimeService/Version
	Jul 29 19:30:50 kubernetes-upgrade-261955 crio[2287]: time="2024-07-29 19:30:50.639158607Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=45c61a4c-54c1-44d3-ba2f-b069182fb009 name=/runtime.v1.RuntimeService/Version
	Jul 29 19:30:50 kubernetes-upgrade-261955 crio[2287]: time="2024-07-29 19:30:50.640226727Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9edffe44-a534-41de-a287-1f11c4d39e90 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 19:30:50 kubernetes-upgrade-261955 crio[2287]: time="2024-07-29 19:30:50.640653955Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722281450640628408,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125257,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9edffe44-a534-41de-a287-1f11c4d39e90 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 19:30:50 kubernetes-upgrade-261955 crio[2287]: time="2024-07-29 19:30:50.641377107Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e6963a9b-2f11-4850-8df1-e9de55dd711b name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 19:30:50 kubernetes-upgrade-261955 crio[2287]: time="2024-07-29 19:30:50.641452204Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e6963a9b-2f11-4850-8df1-e9de55dd711b name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 19:30:50 kubernetes-upgrade-261955 crio[2287]: time="2024-07-29 19:30:50.641937015Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4291806caef96bd88b59395c8354db7b6acd0d67f25bf5c036bd02936a2496d4,PodSandboxId:0bccad3e03693fad6305b5a915c48219be7ff1164f551e1383691960e6b76f1e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:1722281447081587947,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4ql2b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 960a2030-354a-412c-ab1f-83d63042c16c,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.termin
ationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11e8b46860a342cb12168f4e0cc2b43e98986f43a8e705a323008dfa4ccb93ba,PodSandboxId:6616b24d57519b2db148d5da0a4b3c8a4aa0d399ab03693a33a2fd4b263a1f1f,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722281447106277771,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-bfchc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 384931ec-91fb-4c6c-9699-6de33c1e93df,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\
":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:342c943d2b39e35bd4e6fe1ce6ef79ea19278afb2ae5d5b0925462acf8fe108a,PodSandboxId:c397c559097111521d44359f24789c4aab02628c6a27811a614b0e031739b1b3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722281447094866994,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: edc791b1-4081-48da-b92e-5dbd89f75e8c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8defda06b83537d6f486d23c2ba2bcbf4b4a91b8d646b8c43594e267e545491,PodSandboxId:0984307be3b8b2e4668fa1e4de476fed552b82bfda8dc565bd4833ba26e24677,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1722281443431612600,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-261955,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dbac91d755eec507cebe5ff6b2
5217b7,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f02995bdeebfaf92fadc0a87af9aa91a74a88eb4e2495f881e86c7a5f7fcb079,PodSandboxId:03844bda210b913ca4205d8d3ca7f08a262f6508e66fae0c7dbb71f9c74e0513,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1722281443454355669,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-261955,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 001d531bb66951d5c2d45bc8ea18a544
,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5573f7190a77aa31efd03c77ebe67df60037ca923085bbc7566df0f34ccb3c0,PodSandboxId:612cf6f7f7a456855dc9b7e18df95db8bb3261224c39ab2a3e4dd9bd4911ff71,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1722281443456306681,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-261955,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ebea8cc19b9f2e4f6e750007f95d91f,},An
notations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:935586b3c0b77a4d547aeae43259230f6fcc56e3dca9fe677c2c722fa46e037c,PodSandboxId:396aa0e20fad06fd8f7cab87ff664a6e9c6251cb5200fe834933e5c6756c1690,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1722281443423188075,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-261955,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed46068d7092c44
e403c51173b0299d3,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3c57c579305e9dd16655068e0864a84fb4f09149a673dbe4741726e1fe9508f,PodSandboxId:39a92ddb6d76c8a634acc47eb50f4bed88f2d9be77d17dbd54d03eaa75ec7acb,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722281435956743927,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-2nz9d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b9699e3f-8f2a-42bb-9a01-b1b7b384ac1c,},Annotations
:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a1f331c2a084aad84a7d4ab3bfaf58d5788bfd3dbc437057a7bcc29f52ef3a3,PodSandboxId:0bccad3e03693fad6305b5a915c48219be7ff1164f551e1383691960e6b76f1e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_EXITED,CreatedAt:1722281413684472785,Labels:map[string]string{io.k
ubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4ql2b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 960a2030-354a-412c-ab1f-83d63042c16c,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a00fa01afdadb3604052e3dc1d5449fa858d6d3297945c561c408eb98a207240,PodSandboxId:39a92ddb6d76c8a634acc47eb50f4bed88f2d9be77d17dbd54d03eaa75ec7acb,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722281414701247400,Labels:map[string]string{io.kubernetes.container.name: coredns,io.
kubernetes.pod.name: coredns-5cfdc65f69-2nz9d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b9699e3f-8f2a-42bb-9a01-b1b7b384ac1c,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5d009697c8a67c0f1d05f0c81f8e8feee8c3dec344cbc15a0d85e02e2db9ae9,PodSandboxId:6616b24d57519b2db148d5da0a4b3c8a4aa0d399ab03693a33a2fd4b263a1f1f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,}
,ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722281414648936793,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-bfchc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 384931ec-91fb-4c6c-9699-6de33c1e93df,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24ba621fa423e69b1a9426748eed270037ed3f09af4f0a906e3fb20da321f1a2,PodSandboxId:0984307be3b8b2e4668fa1e4de476fed552b82bfda8dc565bd4833ba26e24677,Metadata:&Contai
nerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_EXITED,CreatedAt:1722281413616200851,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-261955,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dbac91d755eec507cebe5ff6b25217b7,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c7c1af8f27fa88061afed3ef14704472a0005e540c884668375cc1761f01b70,PodSandboxId:03844bda210b913ca4205d8d3ca7f08a262f6508e66fae0c7dbb71f9c74e0513,Metadata:&ContainerMetadata{Name:kube-apiserver,Attem
pt:1,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_EXITED,CreatedAt:1722281413579436579,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-261955,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 001d531bb66951d5c2d45bc8ea18a544,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:64108efd83e9d0fbb1146b280f0bb34109dce11ecd1a69ded6a05f82a1f6190b,PodSandboxId:612cf6f7f7a456855dc9b7e18df95db8bb3261224c39ab2a3e4dd9bd4911ff71,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,}
,Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_EXITED,CreatedAt:1722281413514775175,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-261955,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ebea8cc19b9f2e4f6e750007f95d91f,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f067d3f10c5fc23704253cb0f9a38b45d3336c493097fe4b0d0d05f510fbf916,PodSandboxId:396aa0e20fad06fd8f7cab87ff664a6e9c6251cb5200fe834933e5c6756c1690,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:
1,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_EXITED,CreatedAt:1722281413543733005,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-261955,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed46068d7092c44e403c51173b0299d3,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8dd3af336e3466722d4ce752f435c3cc37758e07d260a8cc71e172b872dcabc,PodSandboxId:c397c559097111521d44359f24789c4aab02628c6a27811a614b0e031739b1b3,Metadata:&ContainerMetadata{Name:storage-pro
visioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722281413446409387,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: edc791b1-4081-48da-b92e-5dbd89f75e8c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e6963a9b-2f11-4850-8df1-e9de55dd711b name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 19:30:50 kubernetes-upgrade-261955 crio[2287]: time="2024-07-29 19:30:50.700080475Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c5f148fb-836e-47ad-a7e6-29bc5c5bf9c0 name=/runtime.v1.RuntimeService/Version
	Jul 29 19:30:50 kubernetes-upgrade-261955 crio[2287]: time="2024-07-29 19:30:50.700157375Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c5f148fb-836e-47ad-a7e6-29bc5c5bf9c0 name=/runtime.v1.RuntimeService/Version
	Jul 29 19:30:50 kubernetes-upgrade-261955 crio[2287]: time="2024-07-29 19:30:50.702275253Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=86803d20-a75e-4625-a19f-22fe8a3b12b3 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 19:30:50 kubernetes-upgrade-261955 crio[2287]: time="2024-07-29 19:30:50.702649321Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722281450702622560,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125257,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=86803d20-a75e-4625-a19f-22fe8a3b12b3 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 19:30:50 kubernetes-upgrade-261955 crio[2287]: time="2024-07-29 19:30:50.703655415Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=73da9c3c-63b5-4317-bf96-af80d1170e74 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 19:30:50 kubernetes-upgrade-261955 crio[2287]: time="2024-07-29 19:30:50.703938088Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=73da9c3c-63b5-4317-bf96-af80d1170e74 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 19:30:50 kubernetes-upgrade-261955 crio[2287]: time="2024-07-29 19:30:50.707110648Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4291806caef96bd88b59395c8354db7b6acd0d67f25bf5c036bd02936a2496d4,PodSandboxId:0bccad3e03693fad6305b5a915c48219be7ff1164f551e1383691960e6b76f1e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:1722281447081587947,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4ql2b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 960a2030-354a-412c-ab1f-83d63042c16c,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.termin
ationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11e8b46860a342cb12168f4e0cc2b43e98986f43a8e705a323008dfa4ccb93ba,PodSandboxId:6616b24d57519b2db148d5da0a4b3c8a4aa0d399ab03693a33a2fd4b263a1f1f,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722281447106277771,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-bfchc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 384931ec-91fb-4c6c-9699-6de33c1e93df,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\
":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:342c943d2b39e35bd4e6fe1ce6ef79ea19278afb2ae5d5b0925462acf8fe108a,PodSandboxId:c397c559097111521d44359f24789c4aab02628c6a27811a614b0e031739b1b3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722281447094866994,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: edc791b1-4081-48da-b92e-5dbd89f75e8c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8defda06b83537d6f486d23c2ba2bcbf4b4a91b8d646b8c43594e267e545491,PodSandboxId:0984307be3b8b2e4668fa1e4de476fed552b82bfda8dc565bd4833ba26e24677,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1722281443431612600,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-261955,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dbac91d755eec507cebe5ff6b2
5217b7,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f02995bdeebfaf92fadc0a87af9aa91a74a88eb4e2495f881e86c7a5f7fcb079,PodSandboxId:03844bda210b913ca4205d8d3ca7f08a262f6508e66fae0c7dbb71f9c74e0513,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1722281443454355669,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-261955,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 001d531bb66951d5c2d45bc8ea18a544
,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5573f7190a77aa31efd03c77ebe67df60037ca923085bbc7566df0f34ccb3c0,PodSandboxId:612cf6f7f7a456855dc9b7e18df95db8bb3261224c39ab2a3e4dd9bd4911ff71,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1722281443456306681,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-261955,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ebea8cc19b9f2e4f6e750007f95d91f,},An
notations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:935586b3c0b77a4d547aeae43259230f6fcc56e3dca9fe677c2c722fa46e037c,PodSandboxId:396aa0e20fad06fd8f7cab87ff664a6e9c6251cb5200fe834933e5c6756c1690,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1722281443423188075,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-261955,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed46068d7092c44
e403c51173b0299d3,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3c57c579305e9dd16655068e0864a84fb4f09149a673dbe4741726e1fe9508f,PodSandboxId:39a92ddb6d76c8a634acc47eb50f4bed88f2d9be77d17dbd54d03eaa75ec7acb,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722281435956743927,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-2nz9d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b9699e3f-8f2a-42bb-9a01-b1b7b384ac1c,},Annotations
:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a1f331c2a084aad84a7d4ab3bfaf58d5788bfd3dbc437057a7bcc29f52ef3a3,PodSandboxId:0bccad3e03693fad6305b5a915c48219be7ff1164f551e1383691960e6b76f1e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_EXITED,CreatedAt:1722281413684472785,Labels:map[string]string{io.k
ubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4ql2b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 960a2030-354a-412c-ab1f-83d63042c16c,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a00fa01afdadb3604052e3dc1d5449fa858d6d3297945c561c408eb98a207240,PodSandboxId:39a92ddb6d76c8a634acc47eb50f4bed88f2d9be77d17dbd54d03eaa75ec7acb,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722281414701247400,Labels:map[string]string{io.kubernetes.container.name: coredns,io.
kubernetes.pod.name: coredns-5cfdc65f69-2nz9d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b9699e3f-8f2a-42bb-9a01-b1b7b384ac1c,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5d009697c8a67c0f1d05f0c81f8e8feee8c3dec344cbc15a0d85e02e2db9ae9,PodSandboxId:6616b24d57519b2db148d5da0a4b3c8a4aa0d399ab03693a33a2fd4b263a1f1f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,}
,ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722281414648936793,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-bfchc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 384931ec-91fb-4c6c-9699-6de33c1e93df,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24ba621fa423e69b1a9426748eed270037ed3f09af4f0a906e3fb20da321f1a2,PodSandboxId:0984307be3b8b2e4668fa1e4de476fed552b82bfda8dc565bd4833ba26e24677,Metadata:&Contai
nerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_EXITED,CreatedAt:1722281413616200851,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-261955,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dbac91d755eec507cebe5ff6b25217b7,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c7c1af8f27fa88061afed3ef14704472a0005e540c884668375cc1761f01b70,PodSandboxId:03844bda210b913ca4205d8d3ca7f08a262f6508e66fae0c7dbb71f9c74e0513,Metadata:&ContainerMetadata{Name:kube-apiserver,Attem
pt:1,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_EXITED,CreatedAt:1722281413579436579,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-261955,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 001d531bb66951d5c2d45bc8ea18a544,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:64108efd83e9d0fbb1146b280f0bb34109dce11ecd1a69ded6a05f82a1f6190b,PodSandboxId:612cf6f7f7a456855dc9b7e18df95db8bb3261224c39ab2a3e4dd9bd4911ff71,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,}
,Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_EXITED,CreatedAt:1722281413514775175,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-261955,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ebea8cc19b9f2e4f6e750007f95d91f,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f067d3f10c5fc23704253cb0f9a38b45d3336c493097fe4b0d0d05f510fbf916,PodSandboxId:396aa0e20fad06fd8f7cab87ff664a6e9c6251cb5200fe834933e5c6756c1690,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:
1,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_EXITED,CreatedAt:1722281413543733005,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-261955,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed46068d7092c44e403c51173b0299d3,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e8dd3af336e3466722d4ce752f435c3cc37758e07d260a8cc71e172b872dcabc,PodSandboxId:c397c559097111521d44359f24789c4aab02628c6a27811a614b0e031739b1b3,Metadata:&ContainerMetadata{Name:storage-pro
visioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722281413446409387,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: edc791b1-4081-48da-b92e-5dbd89f75e8c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=73da9c3c-63b5-4317-bf96-af80d1170e74 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	11e8b46860a34       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   3 seconds ago       Running             coredns                   2                   6616b24d57519       coredns-5cfdc65f69-bfchc
	342c943d2b39e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   3 seconds ago       Running             storage-provisioner       2                   c397c55909711       storage-provisioner
	4291806caef96       c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899   3 seconds ago       Running             kube-proxy                2                   0bccad3e03693       kube-proxy-4ql2b
	f5573f7190a77       d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b   7 seconds ago       Running             kube-scheduler            2                   612cf6f7f7a45       kube-scheduler-kubernetes-upgrade-261955
	f02995bdeebfa       f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938   7 seconds ago       Running             kube-apiserver            2                   03844bda210b9       kube-apiserver-kubernetes-upgrade-261955
	c8defda06b835       cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa   7 seconds ago       Running             etcd                      2                   0984307be3b8b       etcd-kubernetes-upgrade-261955
	935586b3c0b77       63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5   7 seconds ago       Running             kube-controller-manager   2                   396aa0e20fad0       kube-controller-manager-kubernetes-upgrade-261955
	a3c57c579305e       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   14 seconds ago      Running             coredns                   2                   39a92ddb6d76c       coredns-5cfdc65f69-2nz9d
	a00fa01afdadb       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   36 seconds ago      Exited              coredns                   1                   39a92ddb6d76c       coredns-5cfdc65f69-2nz9d
	d5d009697c8a6       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   36 seconds ago      Exited              coredns                   1                   6616b24d57519       coredns-5cfdc65f69-bfchc
	6a1f331c2a084       c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899   37 seconds ago      Exited              kube-proxy                1                   0bccad3e03693       kube-proxy-4ql2b
	24ba621fa423e       cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa   37 seconds ago      Exited              etcd                      1                   0984307be3b8b       etcd-kubernetes-upgrade-261955
	9c7c1af8f27fa       f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938   37 seconds ago      Exited              kube-apiserver            1                   03844bda210b9       kube-apiserver-kubernetes-upgrade-261955
	f067d3f10c5fc       63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5   37 seconds ago      Exited              kube-controller-manager   1                   396aa0e20fad0       kube-controller-manager-kubernetes-upgrade-261955
	64108efd83e9d       d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b   37 seconds ago      Exited              kube-scheduler            1                   612cf6f7f7a45       kube-scheduler-kubernetes-upgrade-261955
	e8dd3af336e34       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   37 seconds ago      Exited              storage-provisioner       1                   c397c55909711       storage-provisioner
	
	
	==> coredns [11e8b46860a342cb12168f4e0cc2b43e98986f43a8e705a323008dfa4ccb93ba] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [a00fa01afdadb3604052e3dc1d5449fa858d6d3297945c561c408eb98a207240] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [a3c57c579305e9dd16655068e0864a84fb4f09149a673dbe4741726e1fe9508f] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: unknown (get endpointslices.discovery.k8s.io)
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: unknown (get services)
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: unknown (get namespaces)
	
	
	==> coredns [d5d009697c8a67c0f1d05f0c81f8e8feee8c3dec344cbc15a0d85e02e2db9ae9] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               kubernetes-upgrade-261955
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=kubernetes-upgrade-261955
	                    kubernetes.io/os=linux
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 19:29:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  kubernetes-upgrade-261955
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 19:30:46 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 19:30:46 +0000   Mon, 29 Jul 2024 19:29:42 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 19:30:46 +0000   Mon, 29 Jul 2024 19:29:42 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 19:30:46 +0000   Mon, 29 Jul 2024 19:29:42 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 19:30:46 +0000   Mon, 29 Jul 2024 19:29:47 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.144
	  Hostname:    kubernetes-upgrade-261955
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 eb59dbf04ccd4fc5a864b39c8f0e1be9
	  System UUID:                eb59dbf0-4ccd-4fc5-a864-b39c8f0e1be9
	  Boot ID:                    e9726ec7-ffc8-4e81-903b-220ebbfdf79e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5cfdc65f69-2nz9d                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     59s
	  kube-system                 coredns-5cfdc65f69-bfchc                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     59s
	  kube-system                 etcd-kubernetes-upgrade-261955                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         66s
	  kube-system                 kube-apiserver-kubernetes-upgrade-261955             250m (12%)    0 (0%)      0 (0%)           0 (0%)         59s
	  kube-system                 kube-controller-manager-kubernetes-upgrade-261955    200m (10%)    0 (0%)      0 (0%)           0 (0%)         66s
	  kube-system                 kube-proxy-4ql2b                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         59s
	  kube-system                 kube-scheduler-kubernetes-upgrade-261955             100m (5%)     0 (0%)      0 (0%)           0 (0%)         56s
	  kube-system                 storage-provisioner                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         61s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 3s                 kube-proxy       
	  Normal  Starting                 33s                kube-proxy       
	  Normal  Starting                 56s                kube-proxy       
	  Normal  NodeAllocatableEnforced  71s                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 71s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  70s (x8 over 71s)  kubelet          Node kubernetes-upgrade-261955 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     70s (x7 over 71s)  kubelet          Node kubernetes-upgrade-261955 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    70s (x8 over 71s)  kubelet          Node kubernetes-upgrade-261955 status is now: NodeHasNoDiskPressure
	  Normal  RegisteredNode           59s                node-controller  Node kubernetes-upgrade-261955 event: Registered Node kubernetes-upgrade-261955 in Controller
	  Normal  RegisteredNode           30s                node-controller  Node kubernetes-upgrade-261955 event: Registered Node kubernetes-upgrade-261955 in Controller
	  Normal  Starting                 9s                 kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  9s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  8s (x8 over 9s)    kubelet          Node kubernetes-upgrade-261955 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8s (x8 over 9s)    kubelet          Node kubernetes-upgrade-261955 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8s (x7 over 9s)    kubelet          Node kubernetes-upgrade-261955 status is now: NodeHasSufficientPID
	
	
	==> dmesg <==
	[  +0.000009] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.339270] systemd-fstab-generator[573]: Ignoring "noauto" option for root device
	[  +0.062285] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.065223] systemd-fstab-generator[585]: Ignoring "noauto" option for root device
	[  +0.199488] systemd-fstab-generator[599]: Ignoring "noauto" option for root device
	[  +0.139078] systemd-fstab-generator[611]: Ignoring "noauto" option for root device
	[  +0.344029] systemd-fstab-generator[640]: Ignoring "noauto" option for root device
	[  +4.452554] systemd-fstab-generator[738]: Ignoring "noauto" option for root device
	[  +0.074057] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.274742] systemd-fstab-generator[859]: Ignoring "noauto" option for root device
	[  +7.255130] systemd-fstab-generator[1251]: Ignoring "noauto" option for root device
	[  +0.143703] kauditd_printk_skb: 97 callbacks suppressed
	[  +6.058322] kauditd_printk_skb: 18 callbacks suppressed
	[Jul29 19:30] systemd-fstab-generator[2207]: Ignoring "noauto" option for root device
	[  +0.090014] kauditd_printk_skb: 81 callbacks suppressed
	[  +0.055122] systemd-fstab-generator[2219]: Ignoring "noauto" option for root device
	[  +0.177943] systemd-fstab-generator[2233]: Ignoring "noauto" option for root device
	[  +0.150610] systemd-fstab-generator[2245]: Ignoring "noauto" option for root device
	[  +0.288078] systemd-fstab-generator[2273]: Ignoring "noauto" option for root device
	[  +2.631989] systemd-fstab-generator[3032]: Ignoring "noauto" option for root device
	[  +3.624762] kauditd_printk_skb: 228 callbacks suppressed
	[ +24.468450] systemd-fstab-generator[3645]: Ignoring "noauto" option for root device
	[  +4.675698] kauditd_printk_skb: 44 callbacks suppressed
	[  +1.582942] systemd-fstab-generator[4150]: Ignoring "noauto" option for root device
	
	
	==> etcd [24ba621fa423e69b1a9426748eed270037ed3f09af4f0a906e3fb20da321f1a2] <==
	{"level":"info","ts":"2024-07-29T19:30:15.603154Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"42163c43c38ae515 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-29T19:30:15.603176Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"42163c43c38ae515 received MsgPreVoteResp from 42163c43c38ae515 at term 2"}
	{"level":"info","ts":"2024-07-29T19:30:15.603313Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"42163c43c38ae515 became candidate at term 3"}
	{"level":"info","ts":"2024-07-29T19:30:15.603418Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"42163c43c38ae515 received MsgVoteResp from 42163c43c38ae515 at term 3"}
	{"level":"info","ts":"2024-07-29T19:30:15.603431Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"42163c43c38ae515 became leader at term 3"}
	{"level":"info","ts":"2024-07-29T19:30:15.60344Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 42163c43c38ae515 elected leader 42163c43c38ae515 at term 3"}
	{"level":"info","ts":"2024-07-29T19:30:15.605604Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"42163c43c38ae515","local-member-attributes":"{Name:kubernetes-upgrade-261955 ClientURLs:[https://192.168.39.144:2379]}","request-path":"/0/members/42163c43c38ae515/attributes","cluster-id":"b6240fb2000e40e9","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-29T19:30:15.607128Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T19:30:15.609264Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T19:30:15.611626Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-07-29T19:30:15.617662Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.144:2379"}
	{"level":"info","ts":"2024-07-29T19:30:15.628647Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-07-29T19:30:15.642232Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-29T19:30:15.640761Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-29T19:30:15.64244Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-29T19:30:30.60654Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-07-29T19:30:30.606638Z","caller":"embed/etcd.go:376","msg":"closing etcd server","name":"kubernetes-upgrade-261955","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.144:2380"],"advertise-client-urls":["https://192.168.39.144:2379"]}
	{"level":"warn","ts":"2024-07-29T19:30:30.60673Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-29T19:30:30.606893Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-29T19:30:30.676938Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.144:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-29T19:30:30.676994Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.144:2379: use of closed network connection"}
	{"level":"info","ts":"2024-07-29T19:30:30.677045Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"42163c43c38ae515","current-leader-member-id":"42163c43c38ae515"}
	{"level":"info","ts":"2024-07-29T19:30:30.68292Z","caller":"embed/etcd.go:580","msg":"stopping serving peer traffic","address":"192.168.39.144:2380"}
	{"level":"info","ts":"2024-07-29T19:30:30.683374Z","caller":"embed/etcd.go:585","msg":"stopped serving peer traffic","address":"192.168.39.144:2380"}
	{"level":"info","ts":"2024-07-29T19:30:30.68347Z","caller":"embed/etcd.go:378","msg":"closed etcd server","name":"kubernetes-upgrade-261955","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.144:2380"],"advertise-client-urls":["https://192.168.39.144:2379"]}
	
	
	==> etcd [c8defda06b83537d6f486d23c2ba2bcbf4b4a91b8d646b8c43594e267e545491] <==
	{"level":"info","ts":"2024-07-29T19:30:44.042256Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"b6240fb2000e40e9","local-member-id":"42163c43c38ae515","added-peer-id":"42163c43c38ae515","added-peer-peer-urls":["https://192.168.39.144:2380"]}
	{"level":"info","ts":"2024-07-29T19:30:44.042359Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"b6240fb2000e40e9","local-member-id":"42163c43c38ae515","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T19:30:44.042403Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T19:30:44.04967Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-07-29T19:30:44.059394Z","caller":"embed/etcd.go:598","msg":"serving peer traffic","address":"192.168.39.144:2380"}
	{"level":"info","ts":"2024-07-29T19:30:44.061839Z","caller":"embed/etcd.go:570","msg":"cmux::serve","address":"192.168.39.144:2380"}
	{"level":"info","ts":"2024-07-29T19:30:44.059311Z","caller":"embed/etcd.go:727","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-29T19:30:44.067006Z","caller":"embed/etcd.go:858","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-29T19:30:44.066942Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"42163c43c38ae515","initial-advertise-peer-urls":["https://192.168.39.144:2380"],"listen-peer-urls":["https://192.168.39.144:2380"],"advertise-client-urls":["https://192.168.39.144:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.144:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-29T19:30:45.397031Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"42163c43c38ae515 is starting a new election at term 3"}
	{"level":"info","ts":"2024-07-29T19:30:45.397101Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"42163c43c38ae515 became pre-candidate at term 3"}
	{"level":"info","ts":"2024-07-29T19:30:45.397119Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"42163c43c38ae515 received MsgPreVoteResp from 42163c43c38ae515 at term 3"}
	{"level":"info","ts":"2024-07-29T19:30:45.397131Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"42163c43c38ae515 became candidate at term 4"}
	{"level":"info","ts":"2024-07-29T19:30:45.397145Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"42163c43c38ae515 received MsgVoteResp from 42163c43c38ae515 at term 4"}
	{"level":"info","ts":"2024-07-29T19:30:45.397155Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"42163c43c38ae515 became leader at term 4"}
	{"level":"info","ts":"2024-07-29T19:30:45.397164Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 42163c43c38ae515 elected leader 42163c43c38ae515 at term 4"}
	{"level":"info","ts":"2024-07-29T19:30:45.402913Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"42163c43c38ae515","local-member-attributes":"{Name:kubernetes-upgrade-261955 ClientURLs:[https://192.168.39.144:2379]}","request-path":"/0/members/42163c43c38ae515/attributes","cluster-id":"b6240fb2000e40e9","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-29T19:30:45.402963Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T19:30:45.403171Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-29T19:30:45.4032Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T19:30:45.403246Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-29T19:30:45.403937Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-07-29T19:30:45.403938Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-07-29T19:30:45.404713Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.144:2379"}
	{"level":"info","ts":"2024-07-29T19:30:45.405286Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 19:30:51 up 1 min,  0 users,  load average: 2.73, 0.79, 0.28
	Linux kubernetes-upgrade-261955 5.10.207 #1 SMP Tue Jul 23 04:25:44 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [9c7c1af8f27fa88061afed3ef14704472a0005e540c884668375cc1761f01b70] <==
	W0729 19:30:40.003101       1 logging.go:55] [core] [Channel #181 SubChannel #182]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:30:40.041002       1 logging.go:55] [core] [Channel #106 SubChannel #107]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:30:40.084887       1 logging.go:55] [core] [Channel #124 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:30:40.092331       1 logging.go:55] [core] [Channel #115 SubChannel #116]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:30:40.115710       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:30:40.168378       1 logging.go:55] [core] [Channel #85 SubChannel #86]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:30:40.191527       1 logging.go:55] [core] [Channel #163 SubChannel #164]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:30:40.291477       1 logging.go:55] [core] [Channel #127 SubChannel #128]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:30:40.299393       1 logging.go:55] [core] [Channel #37 SubChannel #38]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:30:40.317256       1 logging.go:55] [core] [Channel #175 SubChannel #176]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:30:40.340333       1 logging.go:55] [core] [Channel #17 SubChannel #18]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:30:40.342686       1 logging.go:55] [core] [Channel #46 SubChannel #47]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:30:40.403226       1 logging.go:55] [core] [Channel #58 SubChannel #59]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:30:40.409083       1 logging.go:55] [core] [Channel #79 SubChannel #80]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:30:40.433519       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:30:40.507937       1 logging.go:55] [core] [Channel #94 SubChannel #95]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:30:40.532877       1 logging.go:55] [core] [Channel #139 SubChannel #140]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:30:40.551924       1 logging.go:55] [core] [Channel #70 SubChannel #71]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:30:40.570294       1 logging.go:55] [core] [Channel #166 SubChannel #167]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:30:40.595955       1 logging.go:55] [core] [Channel #100 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:30:40.623453       1 logging.go:55] [core] [Channel #82 SubChannel #83]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:30:40.687192       1 logging.go:55] [core] [Channel #130 SubChannel #131]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:30:40.727220       1 logging.go:55] [core] [Channel #112 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:30:40.737176       1 logging.go:55] [core] [Channel #34 SubChannel #35]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:30:40.767866       1 logging.go:55] [core] [Channel #73 SubChannel #74]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [f02995bdeebfaf92fadc0a87af9aa91a74a88eb4e2495f881e86c7a5f7fcb079] <==
	I0729 19:30:46.824368       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0729 19:30:46.825774       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0729 19:30:46.825914       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0729 19:30:46.825942       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0729 19:30:46.826337       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0729 19:30:46.831210       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0729 19:30:46.831297       1 policy_source.go:224] refreshing policies
	I0729 19:30:46.832712       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0729 19:30:46.835467       1 shared_informer.go:320] Caches are synced for configmaps
	I0729 19:30:46.835887       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0729 19:30:46.836033       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0729 19:30:46.837447       1 aggregator.go:171] initial CRD sync complete...
	I0729 19:30:46.837477       1 autoregister_controller.go:144] Starting autoregister controller
	I0729 19:30:46.837500       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0729 19:30:46.837523       1 cache.go:39] Caches are synced for autoregister controller
	E0729 19:30:46.847635       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0729 19:30:46.927195       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0729 19:30:47.632425       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0729 19:30:48.440966       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0729 19:30:48.451615       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0729 19:30:48.491430       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0729 19:30:48.624623       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0729 19:30:48.634910       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0729 19:30:49.552484       1 controller.go:615] quota admission added evaluator for: endpoints
	I0729 19:30:51.183308       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [935586b3c0b77a4d547aeae43259230f6fcc56e3dca9fe677c2c722fa46e037c] <==
	I0729 19:30:50.368258       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0729 19:30:50.448565       1 shared_informer.go:320] Caches are synced for attach detach
	I0729 19:30:50.458637       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-5cfdc65f69" duration="18.917728ms"
	I0729 19:30:50.458742       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-5cfdc65f69" duration="54.685µs"
	I0729 19:30:50.518854       1 shared_informer.go:320] Caches are synced for daemon sets
	I0729 19:30:50.535748       1 shared_informer.go:320] Caches are synced for stateful set
	I0729 19:30:50.618533       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0729 19:30:50.625168       1 shared_informer.go:320] Caches are synced for disruption
	I0729 19:30:50.669707       1 shared_informer.go:320] Caches are synced for deployment
	I0729 19:30:51.052965       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0729 19:30:51.120286       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0729 19:30:51.120391       1 shared_informer.go:320] Caches are synced for taint
	I0729 19:30:51.120545       1 node_lifecycle_controller.go:1232] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0729 19:30:51.120642       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="kubernetes-upgrade-261955"
	I0729 19:30:51.120694       1 node_lifecycle_controller.go:1078] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0729 19:30:51.155674       1 shared_informer.go:320] Caches are synced for crt configmap
	I0729 19:30:51.164001       1 shared_informer.go:320] Caches are synced for garbage collector
	I0729 19:30:51.164019       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0729 19:30:51.174460       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0729 19:30:51.174525       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="kubernetes-upgrade-261955"
	I0729 19:30:51.177258       1 shared_informer.go:320] Caches are synced for resource quota
	I0729 19:30:51.199535       1 shared_informer.go:320] Caches are synced for resource quota
	I0729 19:30:51.219183       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0729 19:30:51.223921       1 shared_informer.go:320] Caches are synced for endpoint
	I0729 19:30:51.229898       1 shared_informer.go:320] Caches are synced for garbage collector
	
	
	==> kube-controller-manager [f067d3f10c5fc23704253cb0f9a38b45d3336c493097fe4b0d0d05f510fbf916] <==
	I0729 19:30:21.700469       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="kubernetes-upgrade-261955"
	I0729 19:30:21.730923       1 shared_informer.go:320] Caches are synced for persistent volume
	I0729 19:30:21.733963       1 shared_informer.go:320] Caches are synced for expand
	I0729 19:30:21.744064       1 shared_informer.go:320] Caches are synced for PV protection
	I0729 19:30:21.745914       1 shared_informer.go:320] Caches are synced for ephemeral
	I0729 19:30:21.747025       1 shared_informer.go:320] Caches are synced for stateful set
	I0729 19:30:21.747293       1 shared_informer.go:320] Caches are synced for PVC protection
	I0729 19:30:21.750600       1 shared_informer.go:320] Caches are synced for endpoint
	I0729 19:30:21.775738       1 shared_informer.go:320] Caches are synced for attach detach
	I0729 19:30:21.897346       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0729 19:30:21.897462       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0729 19:30:21.897481       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0729 19:30:21.897647       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0729 19:30:21.897493       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0729 19:30:21.940241       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0729 19:30:21.947621       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0729 19:30:21.948881       1 shared_informer.go:320] Caches are synced for job
	I0729 19:30:21.962270       1 shared_informer.go:320] Caches are synced for garbage collector
	I0729 19:30:21.962370       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0729 19:30:21.966676       1 shared_informer.go:320] Caches are synced for resource quota
	I0729 19:30:21.972293       1 shared_informer.go:320] Caches are synced for cronjob
	I0729 19:30:21.998669       1 shared_informer.go:320] Caches are synced for crt configmap
	I0729 19:30:22.005511       1 shared_informer.go:320] Caches are synced for resource quota
	I0729 19:30:22.014503       1 shared_informer.go:320] Caches are synced for garbage collector
	I0729 19:30:26.315502       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-5cfdc65f69" duration="64.206µs"
	
	
	==> kube-proxy [4291806caef96bd88b59395c8354db7b6acd0d67f25bf5c036bd02936a2496d4] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0729 19:30:47.344271       1 proxier.go:705] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0729 19:30:47.352388       1 server.go:682] "Successfully retrieved node IP(s)" IPs=["192.168.39.144"]
	E0729 19:30:47.352432       1 server.go:235] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0729 19:30:47.385312       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0729 19:30:47.385357       1 server.go:246] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0729 19:30:47.385379       1 server_linux.go:170] "Using iptables Proxier"
	I0729 19:30:47.387868       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0729 19:30:47.388218       1 server.go:488] "Version info" version="v1.31.0-beta.0"
	I0729 19:30:47.388245       1 server.go:490] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 19:30:47.391158       1 config.go:197] "Starting service config controller"
	I0729 19:30:47.391213       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0729 19:30:47.391231       1 config.go:104] "Starting endpoint slice config controller"
	I0729 19:30:47.391263       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0729 19:30:47.389899       1 config.go:326] "Starting node config controller"
	I0729 19:30:47.391349       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0729 19:30:47.492411       1 shared_informer.go:320] Caches are synced for service config
	I0729 19:30:47.492459       1 shared_informer.go:320] Caches are synced for node config
	I0729 19:30:47.492650       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [6a1f331c2a084aad84a7d4ab3bfaf58d5788bfd3dbc437057a7bcc29f52ef3a3] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0729 19:30:16.289975       1 proxier.go:705] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0729 19:30:17.583255       1 server.go:682] "Successfully retrieved node IP(s)" IPs=["192.168.39.144"]
	E0729 19:30:17.583423       1 server.go:235] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0729 19:30:17.776896       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0729 19:30:17.777214       1 server.go:246] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0729 19:30:17.777982       1 server_linux.go:170] "Using iptables Proxier"
	I0729 19:30:17.786731       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0729 19:30:17.787170       1 server.go:488] "Version info" version="v1.31.0-beta.0"
	I0729 19:30:17.787213       1 server.go:490] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 19:30:17.788750       1 config.go:197] "Starting service config controller"
	I0729 19:30:17.788849       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0729 19:30:17.788875       1 config.go:104] "Starting endpoint slice config controller"
	I0729 19:30:17.788919       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0729 19:30:17.792214       1 config.go:326] "Starting node config controller"
	I0729 19:30:17.792308       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0729 19:30:17.890958       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0729 19:30:17.891012       1 shared_informer.go:320] Caches are synced for service config
	I0729 19:30:17.892770       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [64108efd83e9d0fbb1146b280f0bb34109dce11ecd1a69ded6a05f82a1f6190b] <==
	I0729 19:30:15.831428       1 serving.go:386] Generated self-signed cert in-memory
	I0729 19:30:17.630382       1 server.go:164] "Starting Kubernetes Scheduler" version="v1.31.0-beta.0"
	I0729 19:30:17.630425       1 server.go:166] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 19:30:17.650291       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0729 19:30:17.650394       1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController
	I0729 19:30:17.650450       1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController
	I0729 19:30:17.650472       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0729 19:30:17.654241       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0729 19:30:17.654299       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0729 19:30:17.654331       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0729 19:30:17.654340       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0729 19:30:17.750537       1 shared_informer.go:320] Caches are synced for RequestHeaderAuthRequestController
	I0729 19:30:17.756133       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0729 19:30:17.760641       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0729 19:30:41.051101       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0729 19:30:41.051212       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	E0729 19:30:41.051349       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [f5573f7190a77aa31efd03c77ebe67df60037ca923085bbc7566df0f34ccb3c0] <==
	I0729 19:30:44.562426       1 serving.go:386] Generated self-signed cert in-memory
	W0729 19:30:46.704771       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0729 19:30:46.705013       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0729 19:30:46.705101       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0729 19:30:46.705126       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0729 19:30:46.742461       1 server.go:164] "Starting Kubernetes Scheduler" version="v1.31.0-beta.0"
	I0729 19:30:46.742500       1 server.go:166] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 19:30:46.749188       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0729 19:30:46.749283       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0729 19:30:46.749416       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0729 19:30:46.749508       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0729 19:30:46.849650       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 29 19:30:43 kubernetes-upgrade-261955 kubelet[3652]: E0729 19:30:43.383749    3652 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-upgrade-261955?timeout=10s\": dial tcp 192.168.39.144:8443: connect: connection refused" interval="800ms"
	Jul 29 19:30:43 kubernetes-upgrade-261955 kubelet[3652]: I0729 19:30:43.404108    3652 scope.go:117] "RemoveContainer" containerID="f067d3f10c5fc23704253cb0f9a38b45d3336c493097fe4b0d0d05f510fbf916"
	Jul 29 19:30:43 kubernetes-upgrade-261955 kubelet[3652]: I0729 19:30:43.409142    3652 scope.go:117] "RemoveContainer" containerID="24ba621fa423e69b1a9426748eed270037ed3f09af4f0a906e3fb20da321f1a2"
	Jul 29 19:30:43 kubernetes-upgrade-261955 kubelet[3652]: I0729 19:30:43.410681    3652 scope.go:117] "RemoveContainer" containerID="9c7c1af8f27fa88061afed3ef14704472a0005e540c884668375cc1761f01b70"
	Jul 29 19:30:43 kubernetes-upgrade-261955 kubelet[3652]: I0729 19:30:43.411311    3652 scope.go:117] "RemoveContainer" containerID="64108efd83e9d0fbb1146b280f0bb34109dce11ecd1a69ded6a05f82a1f6190b"
	Jul 29 19:30:43 kubernetes-upgrade-261955 kubelet[3652]: I0729 19:30:43.476612    3652 kubelet_node_status.go:72] "Attempting to register node" node="kubernetes-upgrade-261955"
	Jul 29 19:30:43 kubernetes-upgrade-261955 kubelet[3652]: E0729 19:30:43.478035    3652 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.39.144:8443: connect: connection refused" node="kubernetes-upgrade-261955"
	Jul 29 19:30:43 kubernetes-upgrade-261955 kubelet[3652]: W0729 19:30:43.804112    3652 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.144:8443: connect: connection refused
	Jul 29 19:30:43 kubernetes-upgrade-261955 kubelet[3652]: E0729 19:30:43.804204    3652 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.39.144:8443: connect: connection refused" logger="UnhandledError"
	Jul 29 19:30:43 kubernetes-upgrade-261955 kubelet[3652]: W0729 19:30:43.804112    3652 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.144:8443: connect: connection refused
	Jul 29 19:30:43 kubernetes-upgrade-261955 kubelet[3652]: E0729 19:30:43.804251    3652 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.39.144:8443: connect: connection refused" logger="UnhandledError"
	Jul 29 19:30:44 kubernetes-upgrade-261955 kubelet[3652]: I0729 19:30:44.280049    3652 kubelet_node_status.go:72] "Attempting to register node" node="kubernetes-upgrade-261955"
	Jul 29 19:30:46 kubernetes-upgrade-261955 kubelet[3652]: I0729 19:30:46.744876    3652 apiserver.go:52] "Watching apiserver"
	Jul 29 19:30:46 kubernetes-upgrade-261955 kubelet[3652]: I0729 19:30:46.770300    3652 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Jul 29 19:30:46 kubernetes-upgrade-261955 kubelet[3652]: I0729 19:30:46.823586    3652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/edc791b1-4081-48da-b92e-5dbd89f75e8c-tmp\") pod \"storage-provisioner\" (UID: \"edc791b1-4081-48da-b92e-5dbd89f75e8c\") " pod="kube-system/storage-provisioner"
	Jul 29 19:30:46 kubernetes-upgrade-261955 kubelet[3652]: I0729 19:30:46.823669    3652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/960a2030-354a-412c-ab1f-83d63042c16c-lib-modules\") pod \"kube-proxy-4ql2b\" (UID: \"960a2030-354a-412c-ab1f-83d63042c16c\") " pod="kube-system/kube-proxy-4ql2b"
	Jul 29 19:30:46 kubernetes-upgrade-261955 kubelet[3652]: I0729 19:30:46.823871    3652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/960a2030-354a-412c-ab1f-83d63042c16c-xtables-lock\") pod \"kube-proxy-4ql2b\" (UID: \"960a2030-354a-412c-ab1f-83d63042c16c\") " pod="kube-system/kube-proxy-4ql2b"
	Jul 29 19:30:46 kubernetes-upgrade-261955 kubelet[3652]: I0729 19:30:46.859646    3652 kubelet_node_status.go:111] "Node was previously registered" node="kubernetes-upgrade-261955"
	Jul 29 19:30:46 kubernetes-upgrade-261955 kubelet[3652]: I0729 19:30:46.859882    3652 kubelet_node_status.go:75] "Successfully registered node" node="kubernetes-upgrade-261955"
	Jul 29 19:30:46 kubernetes-upgrade-261955 kubelet[3652]: I0729 19:30:46.860051    3652 kuberuntime_manager.go:1524] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Jul 29 19:30:46 kubernetes-upgrade-261955 kubelet[3652]: I0729 19:30:46.861144    3652 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Jul 29 19:30:47 kubernetes-upgrade-261955 kubelet[3652]: I0729 19:30:47.057007    3652 scope.go:117] "RemoveContainer" containerID="6a1f331c2a084aad84a7d4ab3bfaf58d5788bfd3dbc437057a7bcc29f52ef3a3"
	Jul 29 19:30:47 kubernetes-upgrade-261955 kubelet[3652]: I0729 19:30:47.057482    3652 scope.go:117] "RemoveContainer" containerID="e8dd3af336e3466722d4ce752f435c3cc37758e07d260a8cc71e172b872dcabc"
	Jul 29 19:30:47 kubernetes-upgrade-261955 kubelet[3652]: I0729 19:30:47.058948    3652 scope.go:117] "RemoveContainer" containerID="d5d009697c8a67c0f1d05f0c81f8e8feee8c3dec344cbc15a0d85e02e2db9ae9"
	Jul 29 19:30:50 kubernetes-upgrade-261955 kubelet[3652]: I0729 19:30:50.410178    3652 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	
	
	==> storage-provisioner [342c943d2b39e35bd4e6fe1ce6ef79ea19278afb2ae5d5b0925462acf8fe108a] <==
	I0729 19:30:47.240653       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0729 19:30:47.258157       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0729 19:30:47.258456       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	
	
	==> storage-provisioner [e8dd3af336e3466722d4ce752f435c3cc37758e07d260a8cc71e172b872dcabc] <==
	I0729 19:30:14.579074       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0729 19:30:17.604172       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0729 19:30:17.604280       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0729 19:30:17.648843       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0729 19:30:17.659847       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_kubernetes-upgrade-261955_74902d92-8c38-4f61-b853-3fe2ea97ed53!
	I0729 19:30:17.661763       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a8fefa4a-2175-437a-abf2-faa0d8c55461", APIVersion:"v1", ResourceVersion:"394", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' kubernetes-upgrade-261955_74902d92-8c38-4f61-b853-3fe2ea97ed53 became leader
	I0729 19:30:17.760938       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_kubernetes-upgrade-261955_74902d92-8c38-4f61-b853-3fe2ea97ed53!
	W0729 19:30:40.858903       1 reflector.go:436] pkg/mod/k8s.io/client-go@v0.20.5/tools/cache/reflector.go:167: watch of *v1.PersistentVolumeClaim ended with: very short watch: pkg/mod/k8s.io/client-go@v0.20.5/tools/cache/reflector.go:167: Unexpected watch close - watch lasted less than a second and no items received
	W0729 19:30:40.858993       1 reflector.go:436] pkg/mod/k8s.io/client-go@v0.20.5/tools/cache/reflector.go:167: watch of *v1.StorageClass ended with: very short watch: pkg/mod/k8s.io/client-go@v0.20.5/tools/cache/reflector.go:167: Unexpected watch close - watch lasted less than a second and no items received
	W0729 19:30:40.859056       1 reflector.go:436] pkg/mod/k8s.io/client-go@v0.20.5/tools/cache/reflector.go:167: watch of *v1.PersistentVolume ended with: very short watch: pkg/mod/k8s.io/client-go@v0.20.5/tools/cache/reflector.go:167: Unexpected watch close - watch lasted less than a second and no items received
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-261955 -n kubernetes-upgrade-261955
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-261955 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestKubernetesUpgrade FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "kubernetes-upgrade-261955" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-261955
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-261955: (1.103770736s)
--- FAIL: TestKubernetesUpgrade (375.97s)
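
To dig into this failure by hand, the post-mortem commands the harness ran above can be repeated against a fresh run. This is only a sketch: the test-package path and `go test` invocation below are assumptions about the minikube repo layout, and the profile name kubernetes-upgrade-261955 is specific to this run (the profile is deleted during cleanup, so a new run has to recreate it before the status checks make sense).

	# Assumed entry point for the integration suite; exact flags for driver/runtime selection may differ.
	go test ./test/integration -run TestKubernetesUpgrade -timeout 60m -v

	# Same post-mortem checks as the helpers above: apiserver state, then any pods that are not Running.
	out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-261955 -n kubernetes-upgrade-261955
	kubectl --context kubernetes-upgrade-261955 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running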

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (92.79s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-464015 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-464015 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m25.799608107s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-464015] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19312
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19312-1055011/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19312-1055011/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "pause-464015" primary control-plane node in "pause-464015" cluster
	* Updating the running kvm2 "pause-464015" VM ...
	* Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Enabled addons: 
	* Verifying Kubernetes components...
	* Done! kubectl is now configured to use "pause-464015" cluster and "default" namespace by default

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 19:31:32.617541 1106090 out.go:291] Setting OutFile to fd 1 ...
	I0729 19:31:32.618004 1106090 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 19:31:32.618024 1106090 out.go:304] Setting ErrFile to fd 2...
	I0729 19:31:32.618031 1106090 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 19:31:32.618490 1106090 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19312-1055011/.minikube/bin
	I0729 19:31:32.619274 1106090 out.go:298] Setting JSON to false
	I0729 19:31:32.620335 1106090 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":11645,"bootTime":1722269848,"procs":209,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 19:31:32.620394 1106090 start.go:139] virtualization: kvm guest
	I0729 19:31:32.622120 1106090 out.go:177] * [pause-464015] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 19:31:32.623878 1106090 out.go:177]   - MINIKUBE_LOCATION=19312
	I0729 19:31:32.623880 1106090 notify.go:220] Checking for updates...
	I0729 19:31:32.626338 1106090 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 19:31:32.627598 1106090 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19312-1055011/kubeconfig
	I0729 19:31:32.628991 1106090 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19312-1055011/.minikube
	I0729 19:31:32.630109 1106090 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 19:31:32.631316 1106090 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 19:31:32.633195 1106090 config.go:182] Loaded profile config "pause-464015": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 19:31:32.633799 1106090 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:31:32.633861 1106090 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:31:32.649271 1106090 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36545
	I0729 19:31:32.649682 1106090 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:31:32.650188 1106090 main.go:141] libmachine: Using API Version  1
	I0729 19:31:32.650208 1106090 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:31:32.650561 1106090 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:31:32.650806 1106090 main.go:141] libmachine: (pause-464015) Calling .DriverName
	I0729 19:31:32.651053 1106090 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 19:31:32.651368 1106090 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:31:32.651407 1106090 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:31:32.669059 1106090 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41431
	I0729 19:31:32.669502 1106090 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:31:32.670026 1106090 main.go:141] libmachine: Using API Version  1
	I0729 19:31:32.670053 1106090 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:31:32.670353 1106090 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:31:32.670556 1106090 main.go:141] libmachine: (pause-464015) Calling .DriverName
	I0729 19:31:32.707248 1106090 out.go:177] * Using the kvm2 driver based on existing profile
	I0729 19:31:32.708535 1106090 start.go:297] selected driver: kvm2
	I0729 19:31:32.708552 1106090 start.go:901] validating driver "kvm2" against &{Name:pause-464015 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernetes
Version:v1.30.3 ClusterName:pause-464015 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.50 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-devi
ce-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 19:31:32.708684 1106090 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 19:31:32.708994 1106090 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 19:31:32.709083 1106090 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19312-1055011/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 19:31:32.726571 1106090 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0729 19:31:32.727613 1106090 cni.go:84] Creating CNI manager for ""
	I0729 19:31:32.727636 1106090 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 19:31:32.727720 1106090 start.go:340] cluster config:
	{Name:pause-464015 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:pause-464015 Namespace:default APIServerHAVIP: API
ServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.50 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:f
alse registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 19:31:32.727911 1106090 iso.go:125] acquiring lock: {Name:mk0af61c0fec1fd47930e548d03010a532c687b7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 19:31:32.730565 1106090 out.go:177] * Starting "pause-464015" primary control-plane node in "pause-464015" cluster
	I0729 19:31:32.731737 1106090 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 19:31:32.731777 1106090 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0729 19:31:32.731788 1106090 cache.go:56] Caching tarball of preloaded images
	I0729 19:31:32.731860 1106090 preload.go:172] Found /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 19:31:32.731873 1106090 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0729 19:31:32.732008 1106090 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/pause-464015/config.json ...
	I0729 19:31:32.732224 1106090 start.go:360] acquireMachinesLock for pause-464015: {Name:mk0d8d947666df844b5fc2c0e0eebbfed69b4140 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 19:32:10.409658 1106090 start.go:364] duration metric: took 37.677404612s to acquireMachinesLock for "pause-464015"
	I0729 19:32:10.409734 1106090 start.go:96] Skipping create...Using existing machine configuration
	I0729 19:32:10.409747 1106090 fix.go:54] fixHost starting: 
	I0729 19:32:10.410260 1106090 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:32:10.410319 1106090 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:32:10.431708 1106090 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46771
	I0729 19:32:10.432203 1106090 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:32:10.432879 1106090 main.go:141] libmachine: Using API Version  1
	I0729 19:32:10.432915 1106090 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:32:10.434132 1106090 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:32:10.434403 1106090 main.go:141] libmachine: (pause-464015) Calling .DriverName
	I0729 19:32:10.434575 1106090 main.go:141] libmachine: (pause-464015) Calling .GetState
	I0729 19:32:10.436560 1106090 fix.go:112] recreateIfNeeded on pause-464015: state=Running err=<nil>
	W0729 19:32:10.436595 1106090 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 19:32:10.438296 1106090 out.go:177] * Updating the running kvm2 "pause-464015" VM ...
	I0729 19:32:10.439682 1106090 machine.go:94] provisionDockerMachine start ...
	I0729 19:32:10.439713 1106090 main.go:141] libmachine: (pause-464015) Calling .DriverName
	I0729 19:32:10.440047 1106090 main.go:141] libmachine: (pause-464015) Calling .GetSSHHostname
	I0729 19:32:10.443197 1106090 main.go:141] libmachine: (pause-464015) DBG | domain pause-464015 has defined MAC address 52:54:00:bf:64:0f in network mk-pause-464015
	I0729 19:32:10.443642 1106090 main.go:141] libmachine: (pause-464015) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:64:0f", ip: ""} in network mk-pause-464015: {Iface:virbr2 ExpiryTime:2024-07-29 20:30:46 +0000 UTC Type:0 Mac:52:54:00:bf:64:0f Iaid: IPaddr:192.168.50.50 Prefix:24 Hostname:pause-464015 Clientid:01:52:54:00:bf:64:0f}
	I0729 19:32:10.443663 1106090 main.go:141] libmachine: (pause-464015) DBG | domain pause-464015 has defined IP address 192.168.50.50 and MAC address 52:54:00:bf:64:0f in network mk-pause-464015
	I0729 19:32:10.443861 1106090 main.go:141] libmachine: (pause-464015) Calling .GetSSHPort
	I0729 19:32:10.444057 1106090 main.go:141] libmachine: (pause-464015) Calling .GetSSHKeyPath
	I0729 19:32:10.444228 1106090 main.go:141] libmachine: (pause-464015) Calling .GetSSHKeyPath
	I0729 19:32:10.444396 1106090 main.go:141] libmachine: (pause-464015) Calling .GetSSHUsername
	I0729 19:32:10.444603 1106090 main.go:141] libmachine: Using SSH client type: native
	I0729 19:32:10.444814 1106090 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.50 22 <nil> <nil>}
	I0729 19:32:10.444826 1106090 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 19:32:10.561157 1106090 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-464015
	
	I0729 19:32:10.561194 1106090 main.go:141] libmachine: (pause-464015) Calling .GetMachineName
	I0729 19:32:10.561521 1106090 buildroot.go:166] provisioning hostname "pause-464015"
	I0729 19:32:10.561557 1106090 main.go:141] libmachine: (pause-464015) Calling .GetMachineName
	I0729 19:32:10.561788 1106090 main.go:141] libmachine: (pause-464015) Calling .GetSSHHostname
	I0729 19:32:10.565161 1106090 main.go:141] libmachine: (pause-464015) DBG | domain pause-464015 has defined MAC address 52:54:00:bf:64:0f in network mk-pause-464015
	I0729 19:32:10.565582 1106090 main.go:141] libmachine: (pause-464015) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:64:0f", ip: ""} in network mk-pause-464015: {Iface:virbr2 ExpiryTime:2024-07-29 20:30:46 +0000 UTC Type:0 Mac:52:54:00:bf:64:0f Iaid: IPaddr:192.168.50.50 Prefix:24 Hostname:pause-464015 Clientid:01:52:54:00:bf:64:0f}
	I0729 19:32:10.565614 1106090 main.go:141] libmachine: (pause-464015) DBG | domain pause-464015 has defined IP address 192.168.50.50 and MAC address 52:54:00:bf:64:0f in network mk-pause-464015
	I0729 19:32:10.565817 1106090 main.go:141] libmachine: (pause-464015) Calling .GetSSHPort
	I0729 19:32:10.566009 1106090 main.go:141] libmachine: (pause-464015) Calling .GetSSHKeyPath
	I0729 19:32:10.566231 1106090 main.go:141] libmachine: (pause-464015) Calling .GetSSHKeyPath
	I0729 19:32:10.566420 1106090 main.go:141] libmachine: (pause-464015) Calling .GetSSHUsername
	I0729 19:32:10.566632 1106090 main.go:141] libmachine: Using SSH client type: native
	I0729 19:32:10.566929 1106090 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.50 22 <nil> <nil>}
	I0729 19:32:10.566950 1106090 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-464015 && echo "pause-464015" | sudo tee /etc/hostname
	I0729 19:32:10.697691 1106090 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-464015
	
	I0729 19:32:10.697742 1106090 main.go:141] libmachine: (pause-464015) Calling .GetSSHHostname
	I0729 19:32:10.701154 1106090 main.go:141] libmachine: (pause-464015) DBG | domain pause-464015 has defined MAC address 52:54:00:bf:64:0f in network mk-pause-464015
	I0729 19:32:10.701710 1106090 main.go:141] libmachine: (pause-464015) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:64:0f", ip: ""} in network mk-pause-464015: {Iface:virbr2 ExpiryTime:2024-07-29 20:30:46 +0000 UTC Type:0 Mac:52:54:00:bf:64:0f Iaid: IPaddr:192.168.50.50 Prefix:24 Hostname:pause-464015 Clientid:01:52:54:00:bf:64:0f}
	I0729 19:32:10.701752 1106090 main.go:141] libmachine: (pause-464015) DBG | domain pause-464015 has defined IP address 192.168.50.50 and MAC address 52:54:00:bf:64:0f in network mk-pause-464015
	I0729 19:32:10.701970 1106090 main.go:141] libmachine: (pause-464015) Calling .GetSSHPort
	I0729 19:32:10.702223 1106090 main.go:141] libmachine: (pause-464015) Calling .GetSSHKeyPath
	I0729 19:32:10.702438 1106090 main.go:141] libmachine: (pause-464015) Calling .GetSSHKeyPath
	I0729 19:32:10.702603 1106090 main.go:141] libmachine: (pause-464015) Calling .GetSSHUsername
	I0729 19:32:10.702802 1106090 main.go:141] libmachine: Using SSH client type: native
	I0729 19:32:10.703070 1106090 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.50 22 <nil> <nil>}
	I0729 19:32:10.703096 1106090 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-464015' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-464015/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-464015' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 19:32:10.824143 1106090 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 19:32:10.824180 1106090 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19312-1055011/.minikube CaCertPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19312-1055011/.minikube}
	I0729 19:32:10.824217 1106090 buildroot.go:174] setting up certificates
	I0729 19:32:10.824236 1106090 provision.go:84] configureAuth start
	I0729 19:32:10.824254 1106090 main.go:141] libmachine: (pause-464015) Calling .GetMachineName
	I0729 19:32:10.824591 1106090 main.go:141] libmachine: (pause-464015) Calling .GetIP
	I0729 19:32:10.827535 1106090 main.go:141] libmachine: (pause-464015) DBG | domain pause-464015 has defined MAC address 52:54:00:bf:64:0f in network mk-pause-464015
	I0729 19:32:10.827878 1106090 main.go:141] libmachine: (pause-464015) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:64:0f", ip: ""} in network mk-pause-464015: {Iface:virbr2 ExpiryTime:2024-07-29 20:30:46 +0000 UTC Type:0 Mac:52:54:00:bf:64:0f Iaid: IPaddr:192.168.50.50 Prefix:24 Hostname:pause-464015 Clientid:01:52:54:00:bf:64:0f}
	I0729 19:32:10.827900 1106090 main.go:141] libmachine: (pause-464015) DBG | domain pause-464015 has defined IP address 192.168.50.50 and MAC address 52:54:00:bf:64:0f in network mk-pause-464015
	I0729 19:32:10.828105 1106090 main.go:141] libmachine: (pause-464015) Calling .GetSSHHostname
	I0729 19:32:10.830525 1106090 main.go:141] libmachine: (pause-464015) DBG | domain pause-464015 has defined MAC address 52:54:00:bf:64:0f in network mk-pause-464015
	I0729 19:32:10.830966 1106090 main.go:141] libmachine: (pause-464015) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:64:0f", ip: ""} in network mk-pause-464015: {Iface:virbr2 ExpiryTime:2024-07-29 20:30:46 +0000 UTC Type:0 Mac:52:54:00:bf:64:0f Iaid: IPaddr:192.168.50.50 Prefix:24 Hostname:pause-464015 Clientid:01:52:54:00:bf:64:0f}
	I0729 19:32:10.830995 1106090 main.go:141] libmachine: (pause-464015) DBG | domain pause-464015 has defined IP address 192.168.50.50 and MAC address 52:54:00:bf:64:0f in network mk-pause-464015
	I0729 19:32:10.831145 1106090 provision.go:143] copyHostCerts
	I0729 19:32:10.831211 1106090 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.pem, removing ...
	I0729 19:32:10.831227 1106090 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.pem
	I0729 19:32:10.831290 1106090 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.pem (1082 bytes)
	I0729 19:32:10.831410 1106090 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-1055011/.minikube/cert.pem, removing ...
	I0729 19:32:10.831421 1106090 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-1055011/.minikube/cert.pem
	I0729 19:32:10.831454 1106090 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19312-1055011/.minikube/cert.pem (1123 bytes)
	I0729 19:32:10.831541 1106090 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-1055011/.minikube/key.pem, removing ...
	I0729 19:32:10.831552 1106090 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-1055011/.minikube/key.pem
	I0729 19:32:10.831579 1106090 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19312-1055011/.minikube/key.pem (1679 bytes)
	I0729 19:32:10.831662 1106090 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca-key.pem org=jenkins.pause-464015 san=[127.0.0.1 192.168.50.50 localhost minikube pause-464015]
	I0729 19:32:10.948875 1106090 provision.go:177] copyRemoteCerts
	I0729 19:32:10.948947 1106090 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 19:32:10.948983 1106090 main.go:141] libmachine: (pause-464015) Calling .GetSSHHostname
	I0729 19:32:10.952282 1106090 main.go:141] libmachine: (pause-464015) DBG | domain pause-464015 has defined MAC address 52:54:00:bf:64:0f in network mk-pause-464015
	I0729 19:32:10.952765 1106090 main.go:141] libmachine: (pause-464015) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:64:0f", ip: ""} in network mk-pause-464015: {Iface:virbr2 ExpiryTime:2024-07-29 20:30:46 +0000 UTC Type:0 Mac:52:54:00:bf:64:0f Iaid: IPaddr:192.168.50.50 Prefix:24 Hostname:pause-464015 Clientid:01:52:54:00:bf:64:0f}
	I0729 19:32:10.952801 1106090 main.go:141] libmachine: (pause-464015) DBG | domain pause-464015 has defined IP address 192.168.50.50 and MAC address 52:54:00:bf:64:0f in network mk-pause-464015
	I0729 19:32:10.953045 1106090 main.go:141] libmachine: (pause-464015) Calling .GetSSHPort
	I0729 19:32:10.953279 1106090 main.go:141] libmachine: (pause-464015) Calling .GetSSHKeyPath
	I0729 19:32:10.953487 1106090 main.go:141] libmachine: (pause-464015) Calling .GetSSHUsername
	I0729 19:32:10.953664 1106090 sshutil.go:53] new ssh client: &{IP:192.168.50.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/pause-464015/id_rsa Username:docker}
	I0729 19:32:11.046461 1106090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0729 19:32:11.075672 1106090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I0729 19:32:11.106889 1106090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0729 19:32:11.136424 1106090 provision.go:87] duration metric: took 312.169084ms to configureAuth
	I0729 19:32:11.136461 1106090 buildroot.go:189] setting minikube options for container-runtime
	I0729 19:32:11.136759 1106090 config.go:182] Loaded profile config "pause-464015": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 19:32:11.136854 1106090 main.go:141] libmachine: (pause-464015) Calling .GetSSHHostname
	I0729 19:32:11.139945 1106090 main.go:141] libmachine: (pause-464015) DBG | domain pause-464015 has defined MAC address 52:54:00:bf:64:0f in network mk-pause-464015
	I0729 19:32:11.140268 1106090 main.go:141] libmachine: (pause-464015) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:64:0f", ip: ""} in network mk-pause-464015: {Iface:virbr2 ExpiryTime:2024-07-29 20:30:46 +0000 UTC Type:0 Mac:52:54:00:bf:64:0f Iaid: IPaddr:192.168.50.50 Prefix:24 Hostname:pause-464015 Clientid:01:52:54:00:bf:64:0f}
	I0729 19:32:11.140295 1106090 main.go:141] libmachine: (pause-464015) DBG | domain pause-464015 has defined IP address 192.168.50.50 and MAC address 52:54:00:bf:64:0f in network mk-pause-464015
	I0729 19:32:11.140561 1106090 main.go:141] libmachine: (pause-464015) Calling .GetSSHPort
	I0729 19:32:11.140782 1106090 main.go:141] libmachine: (pause-464015) Calling .GetSSHKeyPath
	I0729 19:32:11.140943 1106090 main.go:141] libmachine: (pause-464015) Calling .GetSSHKeyPath
	I0729 19:32:11.141133 1106090 main.go:141] libmachine: (pause-464015) Calling .GetSSHUsername
	I0729 19:32:11.141383 1106090 main.go:141] libmachine: Using SSH client type: native
	I0729 19:32:11.141605 1106090 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.50 22 <nil> <nil>}
	I0729 19:32:11.141625 1106090 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 19:32:18.056717 1106090 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 19:32:18.056746 1106090 machine.go:97] duration metric: took 7.617044613s to provisionDockerMachine
	I0729 19:32:18.056761 1106090 start.go:293] postStartSetup for "pause-464015" (driver="kvm2")
	I0729 19:32:18.056773 1106090 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 19:32:18.056802 1106090 main.go:141] libmachine: (pause-464015) Calling .DriverName
	I0729 19:32:18.057292 1106090 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 19:32:18.057323 1106090 main.go:141] libmachine: (pause-464015) Calling .GetSSHHostname
	I0729 19:32:18.062787 1106090 main.go:141] libmachine: (pause-464015) DBG | domain pause-464015 has defined MAC address 52:54:00:bf:64:0f in network mk-pause-464015
	I0729 19:32:18.063381 1106090 main.go:141] libmachine: (pause-464015) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:64:0f", ip: ""} in network mk-pause-464015: {Iface:virbr2 ExpiryTime:2024-07-29 20:30:46 +0000 UTC Type:0 Mac:52:54:00:bf:64:0f Iaid: IPaddr:192.168.50.50 Prefix:24 Hostname:pause-464015 Clientid:01:52:54:00:bf:64:0f}
	I0729 19:32:18.063407 1106090 main.go:141] libmachine: (pause-464015) DBG | domain pause-464015 has defined IP address 192.168.50.50 and MAC address 52:54:00:bf:64:0f in network mk-pause-464015
	I0729 19:32:18.063709 1106090 main.go:141] libmachine: (pause-464015) Calling .GetSSHPort
	I0729 19:32:18.063948 1106090 main.go:141] libmachine: (pause-464015) Calling .GetSSHKeyPath
	I0729 19:32:18.064105 1106090 main.go:141] libmachine: (pause-464015) Calling .GetSSHUsername
	I0729 19:32:18.064351 1106090 sshutil.go:53] new ssh client: &{IP:192.168.50.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/pause-464015/id_rsa Username:docker}
	I0729 19:32:18.158439 1106090 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 19:32:18.171511 1106090 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 19:32:18.171552 1106090 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-1055011/.minikube/addons for local assets ...
	I0729 19:32:18.171643 1106090 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-1055011/.minikube/files for local assets ...
	I0729 19:32:18.171743 1106090 filesync.go:149] local asset: /home/jenkins/minikube-integration/19312-1055011/.minikube/files/etc/ssl/certs/10622722.pem -> 10622722.pem in /etc/ssl/certs
	I0729 19:32:18.171860 1106090 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 19:32:18.182821 1106090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/files/etc/ssl/certs/10622722.pem --> /etc/ssl/certs/10622722.pem (1708 bytes)
	I0729 19:32:18.218618 1106090 start.go:296] duration metric: took 161.843124ms for postStartSetup
	I0729 19:32:18.218664 1106090 fix.go:56] duration metric: took 7.80891688s for fixHost
	I0729 19:32:18.218693 1106090 main.go:141] libmachine: (pause-464015) Calling .GetSSHHostname
	I0729 19:32:18.224452 1106090 main.go:141] libmachine: (pause-464015) DBG | domain pause-464015 has defined MAC address 52:54:00:bf:64:0f in network mk-pause-464015
	I0729 19:32:18.224491 1106090 main.go:141] libmachine: (pause-464015) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:64:0f", ip: ""} in network mk-pause-464015: {Iface:virbr2 ExpiryTime:2024-07-29 20:30:46 +0000 UTC Type:0 Mac:52:54:00:bf:64:0f Iaid: IPaddr:192.168.50.50 Prefix:24 Hostname:pause-464015 Clientid:01:52:54:00:bf:64:0f}
	I0729 19:32:18.224514 1106090 main.go:141] libmachine: (pause-464015) DBG | domain pause-464015 has defined IP address 192.168.50.50 and MAC address 52:54:00:bf:64:0f in network mk-pause-464015
	I0729 19:32:18.224797 1106090 main.go:141] libmachine: (pause-464015) Calling .GetSSHPort
	I0729 19:32:18.225006 1106090 main.go:141] libmachine: (pause-464015) Calling .GetSSHKeyPath
	I0729 19:32:18.225116 1106090 main.go:141] libmachine: (pause-464015) Calling .GetSSHKeyPath
	I0729 19:32:18.225210 1106090 main.go:141] libmachine: (pause-464015) Calling .GetSSHUsername
	I0729 19:32:18.225324 1106090 main.go:141] libmachine: Using SSH client type: native
	I0729 19:32:18.225489 1106090 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.50 22 <nil> <nil>}
	I0729 19:32:18.225495 1106090 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0729 19:32:18.359031 1106090 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722281538.349313842
	
	I0729 19:32:18.359053 1106090 fix.go:216] guest clock: 1722281538.349313842
	I0729 19:32:18.359060 1106090 fix.go:229] Guest: 2024-07-29 19:32:18.349313842 +0000 UTC Remote: 2024-07-29 19:32:18.218669461 +0000 UTC m=+45.643310401 (delta=130.644381ms)
	I0729 19:32:18.359078 1106090 fix.go:200] guest clock delta is within tolerance: 130.644381ms
	I0729 19:32:18.359083 1106090 start.go:83] releasing machines lock for "pause-464015", held for 7.949378915s
	I0729 19:32:18.359102 1106090 main.go:141] libmachine: (pause-464015) Calling .DriverName
	I0729 19:32:18.359381 1106090 main.go:141] libmachine: (pause-464015) Calling .GetIP
	I0729 19:32:18.363337 1106090 main.go:141] libmachine: (pause-464015) DBG | domain pause-464015 has defined MAC address 52:54:00:bf:64:0f in network mk-pause-464015
	I0729 19:32:18.363787 1106090 main.go:141] libmachine: (pause-464015) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:64:0f", ip: ""} in network mk-pause-464015: {Iface:virbr2 ExpiryTime:2024-07-29 20:30:46 +0000 UTC Type:0 Mac:52:54:00:bf:64:0f Iaid: IPaddr:192.168.50.50 Prefix:24 Hostname:pause-464015 Clientid:01:52:54:00:bf:64:0f}
	I0729 19:32:18.363807 1106090 main.go:141] libmachine: (pause-464015) DBG | domain pause-464015 has defined IP address 192.168.50.50 and MAC address 52:54:00:bf:64:0f in network mk-pause-464015
	I0729 19:32:18.363949 1106090 main.go:141] libmachine: (pause-464015) Calling .DriverName
	I0729 19:32:18.364440 1106090 main.go:141] libmachine: (pause-464015) Calling .DriverName
	I0729 19:32:18.364601 1106090 main.go:141] libmachine: (pause-464015) Calling .DriverName
	I0729 19:32:18.364701 1106090 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 19:32:18.364733 1106090 main.go:141] libmachine: (pause-464015) Calling .GetSSHHostname
	I0729 19:32:18.364761 1106090 ssh_runner.go:195] Run: cat /version.json
	I0729 19:32:18.364782 1106090 main.go:141] libmachine: (pause-464015) Calling .GetSSHHostname
	I0729 19:32:18.368003 1106090 main.go:141] libmachine: (pause-464015) DBG | domain pause-464015 has defined MAC address 52:54:00:bf:64:0f in network mk-pause-464015
	I0729 19:32:18.374352 1106090 main.go:141] libmachine: (pause-464015) Calling .GetSSHPort
	I0729 19:32:18.374355 1106090 main.go:141] libmachine: (pause-464015) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:64:0f", ip: ""} in network mk-pause-464015: {Iface:virbr2 ExpiryTime:2024-07-29 20:30:46 +0000 UTC Type:0 Mac:52:54:00:bf:64:0f Iaid: IPaddr:192.168.50.50 Prefix:24 Hostname:pause-464015 Clientid:01:52:54:00:bf:64:0f}
	I0729 19:32:18.374394 1106090 main.go:141] libmachine: (pause-464015) DBG | domain pause-464015 has defined IP address 192.168.50.50 and MAC address 52:54:00:bf:64:0f in network mk-pause-464015
	I0729 19:32:18.374405 1106090 main.go:141] libmachine: (pause-464015) Calling .GetSSHPort
	I0729 19:32:18.374419 1106090 main.go:141] libmachine: (pause-464015) DBG | domain pause-464015 has defined MAC address 52:54:00:bf:64:0f in network mk-pause-464015
	I0729 19:32:18.374437 1106090 main.go:141] libmachine: (pause-464015) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:64:0f", ip: ""} in network mk-pause-464015: {Iface:virbr2 ExpiryTime:2024-07-29 20:30:46 +0000 UTC Type:0 Mac:52:54:00:bf:64:0f Iaid: IPaddr:192.168.50.50 Prefix:24 Hostname:pause-464015 Clientid:01:52:54:00:bf:64:0f}
	I0729 19:32:18.374453 1106090 main.go:141] libmachine: (pause-464015) DBG | domain pause-464015 has defined IP address 192.168.50.50 and MAC address 52:54:00:bf:64:0f in network mk-pause-464015
	I0729 19:32:18.374654 1106090 main.go:141] libmachine: (pause-464015) Calling .GetSSHKeyPath
	I0729 19:32:18.374696 1106090 main.go:141] libmachine: (pause-464015) Calling .GetSSHKeyPath
	I0729 19:32:18.374818 1106090 main.go:141] libmachine: (pause-464015) Calling .GetSSHUsername
	I0729 19:32:18.374843 1106090 main.go:141] libmachine: (pause-464015) Calling .GetSSHUsername
	I0729 19:32:18.374986 1106090 sshutil.go:53] new ssh client: &{IP:192.168.50.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/pause-464015/id_rsa Username:docker}
	I0729 19:32:18.375333 1106090 sshutil.go:53] new ssh client: &{IP:192.168.50.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/pause-464015/id_rsa Username:docker}
	I0729 19:32:18.484479 1106090 ssh_runner.go:195] Run: systemctl --version
	I0729 19:32:18.493722 1106090 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 19:32:18.666903 1106090 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 19:32:18.675758 1106090 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 19:32:18.675832 1106090 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 19:32:18.687495 1106090 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0729 19:32:18.687531 1106090 start.go:495] detecting cgroup driver to use...
	I0729 19:32:18.687597 1106090 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 19:32:18.710094 1106090 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 19:32:18.730007 1106090 docker.go:217] disabling cri-docker service (if available) ...
	I0729 19:32:18.730066 1106090 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 19:32:18.746621 1106090 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 19:32:18.766044 1106090 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 19:32:18.938901 1106090 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 19:32:19.110201 1106090 docker.go:233] disabling docker service ...
	I0729 19:32:19.110266 1106090 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 19:32:19.139325 1106090 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 19:32:19.155798 1106090 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 19:32:19.314174 1106090 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 19:32:19.454047 1106090 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 19:32:19.478220 1106090 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 19:32:19.509063 1106090 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0729 19:32:19.509140 1106090 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:32:19.523599 1106090 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 19:32:19.523675 1106090 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:32:19.539326 1106090 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:32:19.557555 1106090 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:32:19.569178 1106090 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 19:32:19.583669 1106090 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:32:19.599438 1106090 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:32:19.613475 1106090 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:32:19.625316 1106090 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 19:32:19.636527 1106090 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 19:32:19.647392 1106090 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 19:32:19.803114 1106090 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 19:32:23.060515 1106090 ssh_runner.go:235] Completed: sudo systemctl restart crio: (3.257365137s)
	I0729 19:32:23.060544 1106090 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 19:32:23.060598 1106090 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 19:32:23.066605 1106090 start.go:563] Will wait 60s for crictl version
	I0729 19:32:23.066693 1106090 ssh_runner.go:195] Run: which crictl
	I0729 19:32:23.072094 1106090 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 19:32:23.113853 1106090 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 19:32:23.113961 1106090 ssh_runner.go:195] Run: crio --version
	I0729 19:32:23.149193 1106090 ssh_runner.go:195] Run: crio --version
	I0729 19:32:23.196186 1106090 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0729 19:32:23.197368 1106090 main.go:141] libmachine: (pause-464015) Calling .GetIP
	I0729 19:32:23.200705 1106090 main.go:141] libmachine: (pause-464015) DBG | domain pause-464015 has defined MAC address 52:54:00:bf:64:0f in network mk-pause-464015
	I0729 19:32:23.201260 1106090 main.go:141] libmachine: (pause-464015) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:64:0f", ip: ""} in network mk-pause-464015: {Iface:virbr2 ExpiryTime:2024-07-29 20:30:46 +0000 UTC Type:0 Mac:52:54:00:bf:64:0f Iaid: IPaddr:192.168.50.50 Prefix:24 Hostname:pause-464015 Clientid:01:52:54:00:bf:64:0f}
	I0729 19:32:23.201289 1106090 main.go:141] libmachine: (pause-464015) DBG | domain pause-464015 has defined IP address 192.168.50.50 and MAC address 52:54:00:bf:64:0f in network mk-pause-464015
	I0729 19:32:23.201568 1106090 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0729 19:32:23.210137 1106090 kubeadm.go:883] updating cluster {Name:pause-464015 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3
ClusterName:pause-464015 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.50 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false
olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 19:32:23.210386 1106090 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 19:32:23.210472 1106090 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 19:32:23.274986 1106090 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 19:32:23.275010 1106090 crio.go:433] Images already preloaded, skipping extraction
	I0729 19:32:23.275067 1106090 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 19:32:23.311365 1106090 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 19:32:23.311401 1106090 cache_images.go:84] Images are preloaded, skipping loading
	I0729 19:32:23.311412 1106090 kubeadm.go:934] updating node { 192.168.50.50 8443 v1.30.3 crio true true} ...
	I0729 19:32:23.311557 1106090 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-464015 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.50
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:pause-464015 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 19:32:23.311648 1106090 ssh_runner.go:195] Run: crio config
	I0729 19:32:23.361857 1106090 cni.go:84] Creating CNI manager for ""
	I0729 19:32:23.361890 1106090 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 19:32:23.361907 1106090 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 19:32:23.361939 1106090 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.50 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-464015 NodeName:pause-464015 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.50"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.50 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubern
etes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 19:32:23.362173 1106090 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.50
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-464015"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.50
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.50"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0729 19:32:23.362298 1106090 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0729 19:32:23.377145 1106090 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 19:32:23.377228 1106090 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 19:32:23.390203 1106090 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0729 19:32:23.415068 1106090 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 19:32:23.446202 1106090 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0729 19:32:23.471917 1106090 ssh_runner.go:195] Run: grep 192.168.50.50	control-plane.minikube.internal$ /etc/hosts
	I0729 19:32:23.477475 1106090 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 19:32:23.644724 1106090 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 19:32:23.663898 1106090 certs.go:68] Setting up /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/pause-464015 for IP: 192.168.50.50
	I0729 19:32:23.663923 1106090 certs.go:194] generating shared ca certs ...
	I0729 19:32:23.663946 1106090 certs.go:226] acquiring lock for ca certs: {Name:mkd1f0b3d7e82ac23e713dd6b75409e103935b02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 19:32:23.664147 1106090 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.key
	I0729 19:32:23.664212 1106090 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/proxy-client-ca.key
	I0729 19:32:23.664238 1106090 certs.go:256] generating profile certs ...
	I0729 19:32:23.664364 1106090 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/pause-464015/client.key
	I0729 19:32:23.664476 1106090 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/pause-464015/apiserver.key.228b1ddc
	I0729 19:32:23.664548 1106090 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/pause-464015/proxy-client.key
	I0729 19:32:23.664732 1106090 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/1062272.pem (1338 bytes)
	W0729 19:32:23.664775 1106090 certs.go:480] ignoring /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/1062272_empty.pem, impossibly tiny 0 bytes
	I0729 19:32:23.664787 1106090 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 19:32:23.664829 1106090 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem (1082 bytes)
	I0729 19:32:23.664865 1106090 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/cert.pem (1123 bytes)
	I0729 19:32:23.664907 1106090 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/key.pem (1679 bytes)
	I0729 19:32:23.664973 1106090 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/files/etc/ssl/certs/10622722.pem (1708 bytes)
	I0729 19:32:23.665911 1106090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 19:32:23.707738 1106090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0729 19:32:23.758865 1106090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 19:32:23.795078 1106090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0729 19:32:23.830906 1106090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/pause-464015/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0729 19:32:23.867396 1106090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/pause-464015/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0729 19:32:23.897777 1106090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/pause-464015/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 19:32:23.929847 1106090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/pause-464015/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0729 19:32:23.959905 1106090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/files/etc/ssl/certs/10622722.pem --> /usr/share/ca-certificates/10622722.pem (1708 bytes)
	I0729 19:32:23.992631 1106090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 19:32:24.106199 1106090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/1062272.pem --> /usr/share/ca-certificates/1062272.pem (1338 bytes)
	I0729 19:32:24.287939 1106090 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 19:32:24.404108 1106090 ssh_runner.go:195] Run: openssl version
	I0729 19:32:24.444334 1106090 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 19:32:24.495532 1106090 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 19:32:24.518944 1106090 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 18:17 /usr/share/ca-certificates/minikubeCA.pem
	I0729 19:32:24.519016 1106090 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 19:32:24.547074 1106090 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 19:32:24.631763 1106090 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1062272.pem && ln -fs /usr/share/ca-certificates/1062272.pem /etc/ssl/certs/1062272.pem"
	I0729 19:32:24.780703 1106090 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1062272.pem
	I0729 19:32:24.827898 1106090 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 18:30 /usr/share/ca-certificates/1062272.pem
	I0729 19:32:24.827985 1106090 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1062272.pem
	I0729 19:32:24.865716 1106090 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1062272.pem /etc/ssl/certs/51391683.0"
	I0729 19:32:24.915150 1106090 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10622722.pem && ln -fs /usr/share/ca-certificates/10622722.pem /etc/ssl/certs/10622722.pem"
	I0729 19:32:24.979807 1106090 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10622722.pem
	I0729 19:32:25.007922 1106090 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 18:30 /usr/share/ca-certificates/10622722.pem
	I0729 19:32:25.007991 1106090 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10622722.pem
	I0729 19:32:25.058343 1106090 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10622722.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 19:32:25.153845 1106090 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 19:32:25.174380 1106090 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 19:32:25.181338 1106090 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 19:32:25.191851 1106090 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 19:32:25.204027 1106090 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 19:32:25.211232 1106090 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 19:32:25.224739 1106090 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0729 19:32:25.234699 1106090 kubeadm.go:392] StartCluster: {Name:pause-464015 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Cl
usterName:pause-464015 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.50 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false ol
m:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 19:32:25.234882 1106090 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 19:32:25.234966 1106090 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 19:32:25.315658 1106090 cri.go:89] found id: "841e7e305bb95c21975773a4d4d4e755792303dae99a116af2db97f4fe3f081e"
	I0729 19:32:25.315688 1106090 cri.go:89] found id: "4b7b16b0bd9c0ed1a12aa9d3dbb0492cf3ecdd58f8cfdab74195a862cf719121"
	I0729 19:32:25.315694 1106090 cri.go:89] found id: "6edbae7cc2a33063c7cceb1f30b8a07f80cc6b66241ce129b4a119cba77d5ee4"
	I0729 19:32:25.315700 1106090 cri.go:89] found id: "ff1daf93690fccd1dc759001f6697612af0553c0f96dcc83e9aa68ea5197ddbe"
	I0729 19:32:25.315705 1106090 cri.go:89] found id: "e093776f38c9029647c3f7c44eb0803af0bd1cb67b37112e7ee9594e860db6c1"
	I0729 19:32:25.315711 1106090 cri.go:89] found id: "f9fb15b1131c5ede4eddfa3701ce04f731640d804947e136fea265259ce58da5"
	I0729 19:32:25.315716 1106090 cri.go:89] found id: "f7e2f6542993c27494df40acb566a59b0ec9380d92415bf86ba3cf30d637c09e"
	I0729 19:32:25.315721 1106090 cri.go:89] found id: "c93fd01a84a8e0ffc9cf0c4ce1fa2c3e29c507c0cdde637fa39766ccefeb76b0"
	I0729 19:32:25.315727 1106090 cri.go:89] found id: "f2aefe8cc3d7017580e9cd35ff69e152c10b7823a0bc7b7643df5dab76bb4239"
	I0729 19:32:25.315737 1106090 cri.go:89] found id: "f2c9e7b86f1009db6c84e377ee6fdcae0bfafc0957af91280bf142f86dadd4b0"
	I0729 19:32:25.315743 1106090 cri.go:89] found id: ""
	I0729 19:32:25.315798 1106090 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-464015 -n pause-464015
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-464015 logs -n 25
E0729 19:33:00.968604 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/functional-728029/client.crt: no such file or directory
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-464015 logs -n 25: (3.684133242s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|-----------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |        Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|-----------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p auto-184620 sudo cat                              | auto-184620           | jenkins | v1.33.1 | 29 Jul 24 19:32 UTC | 29 Jul 24 19:32 UTC |
	|         | /var/lib/kubelet/config.yaml                         |                       |         |         |                     |                     |
	| ssh     | -p auto-184620 sudo systemctl                        | auto-184620           | jenkins | v1.33.1 | 29 Jul 24 19:32 UTC |                     |
	|         | status docker --all --full                           |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p auto-184620 sudo systemctl                        | auto-184620           | jenkins | v1.33.1 | 29 Jul 24 19:32 UTC | 29 Jul 24 19:32 UTC |
	|         | cat docker --no-pager                                |                       |         |         |                     |                     |
	| ssh     | -p auto-184620 sudo cat                              | auto-184620           | jenkins | v1.33.1 | 29 Jul 24 19:32 UTC | 29 Jul 24 19:32 UTC |
	|         | /etc/docker/daemon.json                              |                       |         |         |                     |                     |
	| ssh     | -p auto-184620 sudo docker                           | auto-184620           | jenkins | v1.33.1 | 29 Jul 24 19:32 UTC |                     |
	|         | system info                                          |                       |         |         |                     |                     |
	| ssh     | -p auto-184620 sudo systemctl                        | auto-184620           | jenkins | v1.33.1 | 29 Jul 24 19:32 UTC |                     |
	|         | status cri-docker --all --full                       |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p auto-184620 sudo systemctl                        | auto-184620           | jenkins | v1.33.1 | 29 Jul 24 19:32 UTC | 29 Jul 24 19:32 UTC |
	|         | cat cri-docker --no-pager                            |                       |         |         |                     |                     |
	| ssh     | -p auto-184620 sudo cat                              | auto-184620           | jenkins | v1.33.1 | 29 Jul 24 19:32 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                       |         |         |                     |                     |
	| ssh     | -p auto-184620 sudo cat                              | auto-184620           | jenkins | v1.33.1 | 29 Jul 24 19:32 UTC | 29 Jul 24 19:32 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |                       |         |         |                     |                     |
	| ssh     | -p auto-184620 sudo                                  | auto-184620           | jenkins | v1.33.1 | 29 Jul 24 19:32 UTC | 29 Jul 24 19:32 UTC |
	|         | cri-dockerd --version                                |                       |         |         |                     |                     |
	| ssh     | -p auto-184620 sudo systemctl                        | auto-184620           | jenkins | v1.33.1 | 29 Jul 24 19:32 UTC |                     |
	|         | status containerd --all --full                       |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p auto-184620 sudo systemctl                        | auto-184620           | jenkins | v1.33.1 | 29 Jul 24 19:32 UTC | 29 Jul 24 19:32 UTC |
	|         | cat containerd --no-pager                            |                       |         |         |                     |                     |
	| ssh     | -p auto-184620 sudo cat                              | auto-184620           | jenkins | v1.33.1 | 29 Jul 24 19:32 UTC | 29 Jul 24 19:32 UTC |
	|         | /lib/systemd/system/containerd.service               |                       |         |         |                     |                     |
	| ssh     | -p auto-184620 sudo cat                              | auto-184620           | jenkins | v1.33.1 | 29 Jul 24 19:32 UTC | 29 Jul 24 19:32 UTC |
	|         | /etc/containerd/config.toml                          |                       |         |         |                     |                     |
	| ssh     | -p auto-184620 sudo containerd                       | auto-184620           | jenkins | v1.33.1 | 29 Jul 24 19:32 UTC | 29 Jul 24 19:32 UTC |
	|         | config dump                                          |                       |         |         |                     |                     |
	| ssh     | -p auto-184620 sudo systemctl                        | auto-184620           | jenkins | v1.33.1 | 29 Jul 24 19:32 UTC | 29 Jul 24 19:32 UTC |
	|         | status crio --all --full                             |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p auto-184620 sudo systemctl                        | auto-184620           | jenkins | v1.33.1 | 29 Jul 24 19:32 UTC | 29 Jul 24 19:32 UTC |
	|         | cat crio --no-pager                                  |                       |         |         |                     |                     |
	| ssh     | -p auto-184620 sudo find                             | auto-184620           | jenkins | v1.33.1 | 29 Jul 24 19:32 UTC | 29 Jul 24 19:32 UTC |
	|         | /etc/crio -type f -exec sh -c                        |                       |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                       |         |         |                     |                     |
	| ssh     | -p auto-184620 sudo crio                             | auto-184620           | jenkins | v1.33.1 | 29 Jul 24 19:32 UTC | 29 Jul 24 19:32 UTC |
	|         | config                                               |                       |         |         |                     |                     |
	| delete  | -p auto-184620                                       | auto-184620           | jenkins | v1.33.1 | 29 Jul 24 19:32 UTC | 29 Jul 24 19:32 UTC |
	| start   | -p custom-flannel-184620                             | custom-flannel-184620 | jenkins | v1.33.1 | 29 Jul 24 19:32 UTC |                     |
	|         | --memory=3072 --alsologtostderr                      |                       |         |         |                     |                     |
	|         | --wait=true --wait-timeout=15m                       |                       |         |         |                     |                     |
	|         | --cni=testdata/kube-flannel.yaml                     |                       |         |         |                     |                     |
	|         | --driver=kvm2                                        |                       |         |         |                     |                     |
	|         | --container-runtime=crio                             |                       |         |         |                     |                     |
	| ssh     | -p kindnet-184620 pgrep -a                           | kindnet-184620        | jenkins | v1.33.1 | 29 Jul 24 19:32 UTC | 29 Jul 24 19:32 UTC |
	|         | kubelet                                              |                       |         |         |                     |                     |
	| ssh     | -p kindnet-184620 sudo cat                           | kindnet-184620        | jenkins | v1.33.1 | 29 Jul 24 19:32 UTC | 29 Jul 24 19:32 UTC |
	|         | /etc/nsswitch.conf                                   |                       |         |         |                     |                     |
	| ssh     | -p kindnet-184620 sudo cat                           | kindnet-184620        | jenkins | v1.33.1 | 29 Jul 24 19:32 UTC | 29 Jul 24 19:32 UTC |
	|         | /etc/hosts                                           |                       |         |         |                     |                     |
	| ssh     | -p kindnet-184620 sudo cat                           | kindnet-184620        | jenkins | v1.33.1 | 29 Jul 24 19:32 UTC |                     |
	|         | /etc/resolv.conf                                     |                       |         |         |                     |                     |
	|---------|------------------------------------------------------|-----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 19:32:24
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 19:32:24.845746 1107895 out.go:291] Setting OutFile to fd 1 ...
	I0729 19:32:24.845986 1107895 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 19:32:24.846027 1107895 out.go:304] Setting ErrFile to fd 2...
	I0729 19:32:24.846044 1107895 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 19:32:24.846374 1107895 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19312-1055011/.minikube/bin
	I0729 19:32:24.847318 1107895 out.go:298] Setting JSON to false
	I0729 19:32:24.848966 1107895 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":11697,"bootTime":1722269848,"procs":304,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 19:32:24.849090 1107895 start.go:139] virtualization: kvm guest
	I0729 19:32:24.851670 1107895 out.go:177] * [custom-flannel-184620] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 19:32:24.853439 1107895 out.go:177]   - MINIKUBE_LOCATION=19312
	I0729 19:32:24.853519 1107895 notify.go:220] Checking for updates...
	I0729 19:32:24.856065 1107895 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 19:32:24.857373 1107895 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19312-1055011/kubeconfig
	I0729 19:32:24.858741 1107895 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19312-1055011/.minikube
	I0729 19:32:24.859957 1107895 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 19:32:24.861163 1107895 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 19:32:24.862927 1107895 config.go:182] Loaded profile config "calico-184620": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 19:32:24.863112 1107895 config.go:182] Loaded profile config "kindnet-184620": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 19:32:24.863334 1107895 config.go:182] Loaded profile config "pause-464015": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 19:32:24.863481 1107895 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 19:32:24.906678 1107895 out.go:177] * Using the kvm2 driver based on user configuration
	I0729 19:32:24.907860 1107895 start.go:297] selected driver: kvm2
	I0729 19:32:24.907881 1107895 start.go:901] validating driver "kvm2" against <nil>
	I0729 19:32:24.907896 1107895 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 19:32:24.908769 1107895 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 19:32:24.908857 1107895 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19312-1055011/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 19:32:24.925816 1107895 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0729 19:32:24.925862 1107895 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 19:32:24.926071 1107895 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 19:32:24.926103 1107895 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I0729 19:32:24.926111 1107895 start_flags.go:319] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I0729 19:32:24.926163 1107895 start.go:340] cluster config:
	{Name:custom-flannel-184620 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:custom-flannel-184620 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: So
cketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 19:32:24.926255 1107895 iso.go:125] acquiring lock: {Name:mk0af61c0fec1fd47930e548d03010a532c687b7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 19:32:24.927950 1107895 out.go:177] * Starting "custom-flannel-184620" primary control-plane node in "custom-flannel-184620" cluster
	I0729 19:32:24.929318 1107895 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 19:32:24.929363 1107895 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0729 19:32:24.929372 1107895 cache.go:56] Caching tarball of preloaded images
	I0729 19:32:24.929464 1107895 preload.go:172] Found /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 19:32:24.929477 1107895 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0729 19:32:24.929605 1107895 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/custom-flannel-184620/config.json ...
	I0729 19:32:24.929646 1107895 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/custom-flannel-184620/config.json: {Name:mk5453bac3d654cb42ac26382f99c3498ff9dc70 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 19:32:24.929809 1107895 start.go:360] acquireMachinesLock for custom-flannel-184620: {Name:mk0d8d947666df844b5fc2c0e0eebbfed69b4140 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 19:32:24.929855 1107895 start.go:364] duration metric: took 24.142µs to acquireMachinesLock for "custom-flannel-184620"
	I0729 19:32:24.929878 1107895 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-184620 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.30.3 ClusterName:custom-flannel-184620 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: Moun
tMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 19:32:24.929964 1107895 start.go:125] createHost starting for "" (driver="kvm2")
	I0729 19:32:22.037232 1105342 node_ready.go:53] node "kindnet-184620" has status "Ready":"False"
	I0729 19:32:24.038570 1105342 node_ready.go:53] node "kindnet-184620" has status "Ready":"False"
	I0729 19:32:23.197368 1106090 main.go:141] libmachine: (pause-464015) Calling .GetIP
	I0729 19:32:23.200705 1106090 main.go:141] libmachine: (pause-464015) DBG | domain pause-464015 has defined MAC address 52:54:00:bf:64:0f in network mk-pause-464015
	I0729 19:32:23.201260 1106090 main.go:141] libmachine: (pause-464015) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:64:0f", ip: ""} in network mk-pause-464015: {Iface:virbr2 ExpiryTime:2024-07-29 20:30:46 +0000 UTC Type:0 Mac:52:54:00:bf:64:0f Iaid: IPaddr:192.168.50.50 Prefix:24 Hostname:pause-464015 Clientid:01:52:54:00:bf:64:0f}
	I0729 19:32:23.201289 1106090 main.go:141] libmachine: (pause-464015) DBG | domain pause-464015 has defined IP address 192.168.50.50 and MAC address 52:54:00:bf:64:0f in network mk-pause-464015
	I0729 19:32:23.201568 1106090 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0729 19:32:23.210137 1106090 kubeadm.go:883] updating cluster {Name:pause-464015 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3
ClusterName:pause-464015 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.50 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false
olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 19:32:23.210386 1106090 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 19:32:23.210472 1106090 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 19:32:23.274986 1106090 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 19:32:23.275010 1106090 crio.go:433] Images already preloaded, skipping extraction
	I0729 19:32:23.275067 1106090 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 19:32:23.311365 1106090 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 19:32:23.311401 1106090 cache_images.go:84] Images are preloaded, skipping loading
	I0729 19:32:23.311412 1106090 kubeadm.go:934] updating node { 192.168.50.50 8443 v1.30.3 crio true true} ...
	I0729 19:32:23.311557 1106090 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-464015 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.50
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:pause-464015 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 19:32:23.311648 1106090 ssh_runner.go:195] Run: crio config
	I0729 19:32:23.361857 1106090 cni.go:84] Creating CNI manager for ""
	I0729 19:32:23.361890 1106090 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 19:32:23.361907 1106090 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 19:32:23.361939 1106090 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.50 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-464015 NodeName:pause-464015 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.50"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.50 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubern
etes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 19:32:23.362173 1106090 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.50
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-464015"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.50
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.50"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0729 19:32:23.362298 1106090 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0729 19:32:23.377145 1106090 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 19:32:23.377228 1106090 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 19:32:23.390203 1106090 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0729 19:32:23.415068 1106090 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 19:32:23.446202 1106090 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0729 19:32:23.471917 1106090 ssh_runner.go:195] Run: grep 192.168.50.50	control-plane.minikube.internal$ /etc/hosts
	I0729 19:32:23.477475 1106090 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 19:32:23.644724 1106090 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 19:32:23.663898 1106090 certs.go:68] Setting up /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/pause-464015 for IP: 192.168.50.50
	I0729 19:32:23.663923 1106090 certs.go:194] generating shared ca certs ...
	I0729 19:32:23.663946 1106090 certs.go:226] acquiring lock for ca certs: {Name:mkd1f0b3d7e82ac23e713dd6b75409e103935b02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 19:32:23.664147 1106090 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.key
	I0729 19:32:23.664212 1106090 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/proxy-client-ca.key
	I0729 19:32:23.664238 1106090 certs.go:256] generating profile certs ...
	I0729 19:32:23.664364 1106090 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/pause-464015/client.key
	I0729 19:32:23.664476 1106090 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/pause-464015/apiserver.key.228b1ddc
	I0729 19:32:23.664548 1106090 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/pause-464015/proxy-client.key
	I0729 19:32:23.664732 1106090 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/1062272.pem (1338 bytes)
	W0729 19:32:23.664775 1106090 certs.go:480] ignoring /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/1062272_empty.pem, impossibly tiny 0 bytes
	I0729 19:32:23.664787 1106090 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 19:32:23.664829 1106090 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem (1082 bytes)
	I0729 19:32:23.664865 1106090 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/cert.pem (1123 bytes)
	I0729 19:32:23.664907 1106090 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/key.pem (1679 bytes)
	I0729 19:32:23.664973 1106090 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/files/etc/ssl/certs/10622722.pem (1708 bytes)
	I0729 19:32:23.665911 1106090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 19:32:23.707738 1106090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0729 19:32:23.758865 1106090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 19:32:23.795078 1106090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0729 19:32:23.830906 1106090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/pause-464015/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0729 19:32:23.867396 1106090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/pause-464015/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0729 19:32:23.897777 1106090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/pause-464015/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 19:32:23.929847 1106090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/pause-464015/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0729 19:32:23.959905 1106090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/files/etc/ssl/certs/10622722.pem --> /usr/share/ca-certificates/10622722.pem (1708 bytes)
	I0729 19:32:23.992631 1106090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 19:32:24.106199 1106090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/1062272.pem --> /usr/share/ca-certificates/1062272.pem (1338 bytes)
	I0729 19:32:24.287939 1106090 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 19:32:24.404108 1106090 ssh_runner.go:195] Run: openssl version
	I0729 19:32:24.444334 1106090 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 19:32:24.495532 1106090 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 19:32:24.518944 1106090 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 18:17 /usr/share/ca-certificates/minikubeCA.pem
	I0729 19:32:24.519016 1106090 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 19:32:24.547074 1106090 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 19:32:24.631763 1106090 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1062272.pem && ln -fs /usr/share/ca-certificates/1062272.pem /etc/ssl/certs/1062272.pem"
	I0729 19:32:24.780703 1106090 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1062272.pem
	I0729 19:32:24.827898 1106090 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 18:30 /usr/share/ca-certificates/1062272.pem
	I0729 19:32:24.827985 1106090 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1062272.pem
	I0729 19:32:24.865716 1106090 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1062272.pem /etc/ssl/certs/51391683.0"
	I0729 19:32:24.915150 1106090 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10622722.pem && ln -fs /usr/share/ca-certificates/10622722.pem /etc/ssl/certs/10622722.pem"
	I0729 19:32:24.979807 1106090 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10622722.pem
	I0729 19:32:25.007922 1106090 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 18:30 /usr/share/ca-certificates/10622722.pem
	I0729 19:32:25.007991 1106090 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10622722.pem
	I0729 19:32:25.058343 1106090 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10622722.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 19:32:25.153845 1106090 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 19:32:25.174380 1106090 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 19:32:25.181338 1106090 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 19:32:25.191851 1106090 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 19:32:25.204027 1106090 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 19:32:25.211232 1106090 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 19:32:25.224739 1106090 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0729 19:32:25.234699 1106090 kubeadm.go:392] StartCluster: {Name:pause-464015 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Cl
usterName:pause-464015 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.50 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false ol
m:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 19:32:25.234882 1106090 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 19:32:25.234966 1106090 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 19:32:25.315658 1106090 cri.go:89] found id: "841e7e305bb95c21975773a4d4d4e755792303dae99a116af2db97f4fe3f081e"
	I0729 19:32:25.315688 1106090 cri.go:89] found id: "4b7b16b0bd9c0ed1a12aa9d3dbb0492cf3ecdd58f8cfdab74195a862cf719121"
	I0729 19:32:25.315694 1106090 cri.go:89] found id: "6edbae7cc2a33063c7cceb1f30b8a07f80cc6b66241ce129b4a119cba77d5ee4"
	I0729 19:32:25.315700 1106090 cri.go:89] found id: "ff1daf93690fccd1dc759001f6697612af0553c0f96dcc83e9aa68ea5197ddbe"
	I0729 19:32:25.315705 1106090 cri.go:89] found id: "e093776f38c9029647c3f7c44eb0803af0bd1cb67b37112e7ee9594e860db6c1"
	I0729 19:32:25.315711 1106090 cri.go:89] found id: "f9fb15b1131c5ede4eddfa3701ce04f731640d804947e136fea265259ce58da5"
	I0729 19:32:25.315716 1106090 cri.go:89] found id: "f7e2f6542993c27494df40acb566a59b0ec9380d92415bf86ba3cf30d637c09e"
	I0729 19:32:25.315721 1106090 cri.go:89] found id: "c93fd01a84a8e0ffc9cf0c4ce1fa2c3e29c507c0cdde637fa39766ccefeb76b0"
	I0729 19:32:25.315727 1106090 cri.go:89] found id: "f2aefe8cc3d7017580e9cd35ff69e152c10b7823a0bc7b7643df5dab76bb4239"
	I0729 19:32:25.315737 1106090 cri.go:89] found id: "f2c9e7b86f1009db6c84e377ee6fdcae0bfafc0957af91280bf142f86dadd4b0"
	I0729 19:32:25.315743 1106090 cri.go:89] found id: ""
	I0729 19:32:25.315798 1106090 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Jul 29 19:33:00 pause-464015 crio[2225]: time="2024-07-29 19:33:00.259759870Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722281580259727708,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124365,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2b714ccc-35be-48ee-b84d-31e630d5d7cc name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 19:33:00 pause-464015 crio[2225]: time="2024-07-29 19:33:00.261068005Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=680840a1-c873-43da-a2e1-e9b7b4f726cd name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 19:33:00 pause-464015 crio[2225]: time="2024-07-29 19:33:00.261222499Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=680840a1-c873-43da-a2e1-e9b7b4f726cd name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 19:33:00 pause-464015 crio[2225]: time="2024-07-29 19:33:00.261539940Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:20617830601df6c479f2994467411b171ad1552c3f05871c43583e65795db7d7,PodSandboxId:161d8d86c5dd0c9b5655ba6044ab69142c386ceee59a9f58aecd252b2bf7f31e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722281558246575109,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-464015,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 894c94b69ed750ac73d1f00d869bf369,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.term
inationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b62ff99eeaf9e0454dbadae3cf4ec70b609fb7795068db3c8882b6eb083fccb0,PodSandboxId:ac5a07f3837953f7ba4919c9d6bfb1aa0d35aea289a5996bd2c0a5b7a291d174,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722281558198385393,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-464015,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02d6444c82d3bc2e023c903971c2842f,},Annotations:map[string]string{io.kubernetes.container.hash: cc0bdb8b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc778cb13a1d80930529267dbbe0e8cdf18a533cb71ba795706a640242ceb573,PodSandboxId:158bae2c06db05b5cfc89d1224d60bd03ec34c2869d6401431d87de924cc26c2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722281558211772073,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-464015,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d39a0af6b9fc7b6fd082fdc3066f5e3,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-lo
g,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3de9287727286378155d0686a12b796de28e3da99fbf812ff57e07a0d08aec7b,PodSandboxId:1ff59d2b8563faab674bd0b7d0817a727d80db15e39797b8204c2f3b0fbc44dc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722281558227034810,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-464015,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 392448e9cc04dc57679875fe40d7ddbc,},Annotations:map[string]string{io.kubernetes.container.hash: b941963f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.co
ntainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:669aff584c28d163a60090c11bd6b3f339f049cdd24a2cff0277db48f8d25e63,PodSandboxId:f4677d0834576c8bd5bf8d215718ab7f7d09bf36d6b046ce05306c305430d787,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722281545527265652,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-j6d5l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e11c129e-19c1-460c-9d15-10a235d29e06,},Annotations:map[string]string{io.kubernetes.container.hash: 469d745e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"
},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49088d6b4a706030cbf20d3804d012c617948033707d5ebd3b2043eb8c164d50,PodSandboxId:6fc782ae0ba2261407e6f708fdee5908604f655278d93b40b5124b931ba317c2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722281544716052782,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6bztz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8eea26ce-59ee-46bd-a9c4-18477db50d96,},Annotations:map[string]string{io
.kubernetes.container.hash: fdfe374d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:841e7e305bb95c21975773a4d4d4e755792303dae99a116af2db97f4fe3f081e,PodSandboxId:1ff59d2b8563faab674bd0b7d0817a727d80db15e39797b8204c2f3b0fbc44dc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722281544601901356,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-464015,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 392448e9cc04dc57679875fe40d7ddbc,},Annotations:map[string]string{io.kubernetes.contain
er.hash: b941963f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b7b16b0bd9c0ed1a12aa9d3dbb0492cf3ecdd58f8cfdab74195a862cf719121,PodSandboxId:158bae2c06db05b5cfc89d1224d60bd03ec34c2869d6401431d87de924cc26c2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722281544493556455,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-464015,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d39a0af6b9fc7b6fd082fdc3066f5e3,},Annotations:map[string]string{io.kubernetes
.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff1daf93690fccd1dc759001f6697612af0553c0f96dcc83e9aa68ea5197ddbe,PodSandboxId:ac5a07f3837953f7ba4919c9d6bfb1aa0d35aea289a5996bd2c0a5b7a291d174,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722281544379290082,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-464015,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02d6444c82d3bc2e023c903971c2842f,},Annotations:map[string]string{io.kubernetes.container.hash: cc0bdb8b,io.kubernetes.container
.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6edbae7cc2a33063c7cceb1f30b8a07f80cc6b66241ce129b4a119cba77d5ee4,PodSandboxId:161d8d86c5dd0c9b5655ba6044ab69142c386ceee59a9f58aecd252b2bf7f31e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722281544421655845,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-464015,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 894c94b69ed750ac73d1f00d869bf369,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e093776f38c9029647c3f7c44eb0803af0bd1cb67b37112e7ee9594e860db6c1,PodSandboxId:d567aaa9c08c259d790c56d77e1150347479910bfbccae4d7369864c1687d872,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722281490198586073,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-j6d5l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e11c129e-19c1-460c-9d15-10a235d29e06,},Annotations:map[string]string{io.kubernetes.container.hash: 469d745e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"p
rotocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9fb15b1131c5ede4eddfa3701ce04f731640d804947e136fea265259ce58da5,PodSandboxId:cb68f6f36a8c2304bd6e9e276f0e6c8f46c94874e404cd048653ebf290e9d119,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722281489421297743,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6bztz,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 8eea26ce-59ee-46bd-a9c4-18477db50d96,},Annotations:map[string]string{io.kubernetes.container.hash: fdfe374d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=680840a1-c873-43da-a2e1-e9b7b4f726cd name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 19:33:00 pause-464015 crio[2225]: time="2024-07-29 19:33:00.318755589Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3a4b7743-747c-41ef-9f33-164544ad95a1 name=/runtime.v1.RuntimeService/Version
	Jul 29 19:33:00 pause-464015 crio[2225]: time="2024-07-29 19:33:00.318847428Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3a4b7743-747c-41ef-9f33-164544ad95a1 name=/runtime.v1.RuntimeService/Version
	Jul 29 19:33:00 pause-464015 crio[2225]: time="2024-07-29 19:33:00.320621701Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d68f323c-e46c-4f17-84c1-817a361a79b7 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 19:33:00 pause-464015 crio[2225]: time="2024-07-29 19:33:00.321304732Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722281580321268577,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124365,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d68f323c-e46c-4f17-84c1-817a361a79b7 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 19:33:00 pause-464015 crio[2225]: time="2024-07-29 19:33:00.321911042Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4b6e12cd-178a-43f3-bc39-d6901d000c11 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 19:33:00 pause-464015 crio[2225]: time="2024-07-29 19:33:00.322011434Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4b6e12cd-178a-43f3-bc39-d6901d000c11 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 19:33:00 pause-464015 crio[2225]: time="2024-07-29 19:33:00.322439818Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:20617830601df6c479f2994467411b171ad1552c3f05871c43583e65795db7d7,PodSandboxId:161d8d86c5dd0c9b5655ba6044ab69142c386ceee59a9f58aecd252b2bf7f31e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722281558246575109,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-464015,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 894c94b69ed750ac73d1f00d869bf369,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.term
inationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b62ff99eeaf9e0454dbadae3cf4ec70b609fb7795068db3c8882b6eb083fccb0,PodSandboxId:ac5a07f3837953f7ba4919c9d6bfb1aa0d35aea289a5996bd2c0a5b7a291d174,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722281558198385393,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-464015,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02d6444c82d3bc2e023c903971c2842f,},Annotations:map[string]string{io.kubernetes.container.hash: cc0bdb8b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc778cb13a1d80930529267dbbe0e8cdf18a533cb71ba795706a640242ceb573,PodSandboxId:158bae2c06db05b5cfc89d1224d60bd03ec34c2869d6401431d87de924cc26c2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722281558211772073,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-464015,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d39a0af6b9fc7b6fd082fdc3066f5e3,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-lo
g,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3de9287727286378155d0686a12b796de28e3da99fbf812ff57e07a0d08aec7b,PodSandboxId:1ff59d2b8563faab674bd0b7d0817a727d80db15e39797b8204c2f3b0fbc44dc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722281558227034810,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-464015,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 392448e9cc04dc57679875fe40d7ddbc,},Annotations:map[string]string{io.kubernetes.container.hash: b941963f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.co
ntainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:669aff584c28d163a60090c11bd6b3f339f049cdd24a2cff0277db48f8d25e63,PodSandboxId:f4677d0834576c8bd5bf8d215718ab7f7d09bf36d6b046ce05306c305430d787,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722281545527265652,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-j6d5l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e11c129e-19c1-460c-9d15-10a235d29e06,},Annotations:map[string]string{io.kubernetes.container.hash: 469d745e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"
},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49088d6b4a706030cbf20d3804d012c617948033707d5ebd3b2043eb8c164d50,PodSandboxId:6fc782ae0ba2261407e6f708fdee5908604f655278d93b40b5124b931ba317c2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722281544716052782,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6bztz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8eea26ce-59ee-46bd-a9c4-18477db50d96,},Annotations:map[string]string{io
.kubernetes.container.hash: fdfe374d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:841e7e305bb95c21975773a4d4d4e755792303dae99a116af2db97f4fe3f081e,PodSandboxId:1ff59d2b8563faab674bd0b7d0817a727d80db15e39797b8204c2f3b0fbc44dc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722281544601901356,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-464015,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 392448e9cc04dc57679875fe40d7ddbc,},Annotations:map[string]string{io.kubernetes.contain
er.hash: b941963f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b7b16b0bd9c0ed1a12aa9d3dbb0492cf3ecdd58f8cfdab74195a862cf719121,PodSandboxId:158bae2c06db05b5cfc89d1224d60bd03ec34c2869d6401431d87de924cc26c2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722281544493556455,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-464015,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d39a0af6b9fc7b6fd082fdc3066f5e3,},Annotations:map[string]string{io.kubernetes
.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff1daf93690fccd1dc759001f6697612af0553c0f96dcc83e9aa68ea5197ddbe,PodSandboxId:ac5a07f3837953f7ba4919c9d6bfb1aa0d35aea289a5996bd2c0a5b7a291d174,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722281544379290082,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-464015,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02d6444c82d3bc2e023c903971c2842f,},Annotations:map[string]string{io.kubernetes.container.hash: cc0bdb8b,io.kubernetes.container
.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6edbae7cc2a33063c7cceb1f30b8a07f80cc6b66241ce129b4a119cba77d5ee4,PodSandboxId:161d8d86c5dd0c9b5655ba6044ab69142c386ceee59a9f58aecd252b2bf7f31e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722281544421655845,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-464015,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 894c94b69ed750ac73d1f00d869bf369,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e093776f38c9029647c3f7c44eb0803af0bd1cb67b37112e7ee9594e860db6c1,PodSandboxId:d567aaa9c08c259d790c56d77e1150347479910bfbccae4d7369864c1687d872,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722281490198586073,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-j6d5l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e11c129e-19c1-460c-9d15-10a235d29e06,},Annotations:map[string]string{io.kubernetes.container.hash: 469d745e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"p
rotocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9fb15b1131c5ede4eddfa3701ce04f731640d804947e136fea265259ce58da5,PodSandboxId:cb68f6f36a8c2304bd6e9e276f0e6c8f46c94874e404cd048653ebf290e9d119,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722281489421297743,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6bztz,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 8eea26ce-59ee-46bd-a9c4-18477db50d96,},Annotations:map[string]string{io.kubernetes.container.hash: fdfe374d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4b6e12cd-178a-43f3-bc39-d6901d000c11 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 19:33:00 pause-464015 crio[2225]: time="2024-07-29 19:33:00.378435561Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6f835a44-882f-4039-a936-36b68c195e04 name=/runtime.v1.RuntimeService/Version
	Jul 29 19:33:00 pause-464015 crio[2225]: time="2024-07-29 19:33:00.378579461Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6f835a44-882f-4039-a936-36b68c195e04 name=/runtime.v1.RuntimeService/Version
	Jul 29 19:33:00 pause-464015 crio[2225]: time="2024-07-29 19:33:00.379620762Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=82a583df-8b3b-461e-817f-70bad42e289a name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 19:33:00 pause-464015 crio[2225]: time="2024-07-29 19:33:00.379972809Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722281580379947647,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124365,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=82a583df-8b3b-461e-817f-70bad42e289a name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 19:33:00 pause-464015 crio[2225]: time="2024-07-29 19:33:00.380463029Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=accf9616-64b6-47f7-b00c-8ee7d1739348 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 19:33:00 pause-464015 crio[2225]: time="2024-07-29 19:33:00.380512612Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=accf9616-64b6-47f7-b00c-8ee7d1739348 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 19:33:00 pause-464015 crio[2225]: time="2024-07-29 19:33:00.381090349Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:20617830601df6c479f2994467411b171ad1552c3f05871c43583e65795db7d7,PodSandboxId:161d8d86c5dd0c9b5655ba6044ab69142c386ceee59a9f58aecd252b2bf7f31e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722281558246575109,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-464015,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 894c94b69ed750ac73d1f00d869bf369,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.term
inationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b62ff99eeaf9e0454dbadae3cf4ec70b609fb7795068db3c8882b6eb083fccb0,PodSandboxId:ac5a07f3837953f7ba4919c9d6bfb1aa0d35aea289a5996bd2c0a5b7a291d174,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722281558198385393,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-464015,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02d6444c82d3bc2e023c903971c2842f,},Annotations:map[string]string{io.kubernetes.container.hash: cc0bdb8b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc778cb13a1d80930529267dbbe0e8cdf18a533cb71ba795706a640242ceb573,PodSandboxId:158bae2c06db05b5cfc89d1224d60bd03ec34c2869d6401431d87de924cc26c2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722281558211772073,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-464015,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d39a0af6b9fc7b6fd082fdc3066f5e3,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-lo
g,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3de9287727286378155d0686a12b796de28e3da99fbf812ff57e07a0d08aec7b,PodSandboxId:1ff59d2b8563faab674bd0b7d0817a727d80db15e39797b8204c2f3b0fbc44dc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722281558227034810,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-464015,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 392448e9cc04dc57679875fe40d7ddbc,},Annotations:map[string]string{io.kubernetes.container.hash: b941963f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.co
ntainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:669aff584c28d163a60090c11bd6b3f339f049cdd24a2cff0277db48f8d25e63,PodSandboxId:f4677d0834576c8bd5bf8d215718ab7f7d09bf36d6b046ce05306c305430d787,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722281545527265652,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-j6d5l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e11c129e-19c1-460c-9d15-10a235d29e06,},Annotations:map[string]string{io.kubernetes.container.hash: 469d745e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"
},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49088d6b4a706030cbf20d3804d012c617948033707d5ebd3b2043eb8c164d50,PodSandboxId:6fc782ae0ba2261407e6f708fdee5908604f655278d93b40b5124b931ba317c2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722281544716052782,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6bztz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8eea26ce-59ee-46bd-a9c4-18477db50d96,},Annotations:map[string]string{io
.kubernetes.container.hash: fdfe374d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:841e7e305bb95c21975773a4d4d4e755792303dae99a116af2db97f4fe3f081e,PodSandboxId:1ff59d2b8563faab674bd0b7d0817a727d80db15e39797b8204c2f3b0fbc44dc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722281544601901356,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-464015,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 392448e9cc04dc57679875fe40d7ddbc,},Annotations:map[string]string{io.kubernetes.contain
er.hash: b941963f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b7b16b0bd9c0ed1a12aa9d3dbb0492cf3ecdd58f8cfdab74195a862cf719121,PodSandboxId:158bae2c06db05b5cfc89d1224d60bd03ec34c2869d6401431d87de924cc26c2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722281544493556455,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-464015,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d39a0af6b9fc7b6fd082fdc3066f5e3,},Annotations:map[string]string{io.kubernetes
.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff1daf93690fccd1dc759001f6697612af0553c0f96dcc83e9aa68ea5197ddbe,PodSandboxId:ac5a07f3837953f7ba4919c9d6bfb1aa0d35aea289a5996bd2c0a5b7a291d174,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722281544379290082,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-464015,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02d6444c82d3bc2e023c903971c2842f,},Annotations:map[string]string{io.kubernetes.container.hash: cc0bdb8b,io.kubernetes.container
.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6edbae7cc2a33063c7cceb1f30b8a07f80cc6b66241ce129b4a119cba77d5ee4,PodSandboxId:161d8d86c5dd0c9b5655ba6044ab69142c386ceee59a9f58aecd252b2bf7f31e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722281544421655845,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-464015,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 894c94b69ed750ac73d1f00d869bf369,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e093776f38c9029647c3f7c44eb0803af0bd1cb67b37112e7ee9594e860db6c1,PodSandboxId:d567aaa9c08c259d790c56d77e1150347479910bfbccae4d7369864c1687d872,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722281490198586073,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-j6d5l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e11c129e-19c1-460c-9d15-10a235d29e06,},Annotations:map[string]string{io.kubernetes.container.hash: 469d745e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"p
rotocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9fb15b1131c5ede4eddfa3701ce04f731640d804947e136fea265259ce58da5,PodSandboxId:cb68f6f36a8c2304bd6e9e276f0e6c8f46c94874e404cd048653ebf290e9d119,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722281489421297743,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6bztz,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 8eea26ce-59ee-46bd-a9c4-18477db50d96,},Annotations:map[string]string{io.kubernetes.container.hash: fdfe374d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=accf9616-64b6-47f7-b00c-8ee7d1739348 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 19:33:00 pause-464015 crio[2225]: time="2024-07-29 19:33:00.426670156Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4fe59e55-b93d-4033-8d36-78d2e3bbee77 name=/runtime.v1.RuntimeService/Version
	Jul 29 19:33:00 pause-464015 crio[2225]: time="2024-07-29 19:33:00.426757950Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4fe59e55-b93d-4033-8d36-78d2e3bbee77 name=/runtime.v1.RuntimeService/Version
	Jul 29 19:33:00 pause-464015 crio[2225]: time="2024-07-29 19:33:00.427875922Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6f22292f-a2be-439e-9072-7cb126a2536a name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 19:33:00 pause-464015 crio[2225]: time="2024-07-29 19:33:00.428434344Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722281580428401045,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124365,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6f22292f-a2be-439e-9072-7cb126a2536a name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 19:33:00 pause-464015 crio[2225]: time="2024-07-29 19:33:00.428976435Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=882ece9a-895f-4f3f-b66a-1e35d51fb9cd name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 19:33:00 pause-464015 crio[2225]: time="2024-07-29 19:33:00.429047772Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=882ece9a-895f-4f3f-b66a-1e35d51fb9cd name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 19:33:00 pause-464015 crio[2225]: time="2024-07-29 19:33:00.429459891Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:20617830601df6c479f2994467411b171ad1552c3f05871c43583e65795db7d7,PodSandboxId:161d8d86c5dd0c9b5655ba6044ab69142c386ceee59a9f58aecd252b2bf7f31e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722281558246575109,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-464015,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 894c94b69ed750ac73d1f00d869bf369,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.term
inationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b62ff99eeaf9e0454dbadae3cf4ec70b609fb7795068db3c8882b6eb083fccb0,PodSandboxId:ac5a07f3837953f7ba4919c9d6bfb1aa0d35aea289a5996bd2c0a5b7a291d174,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722281558198385393,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-464015,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02d6444c82d3bc2e023c903971c2842f,},Annotations:map[string]string{io.kubernetes.container.hash: cc0bdb8b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc778cb13a1d80930529267dbbe0e8cdf18a533cb71ba795706a640242ceb573,PodSandboxId:158bae2c06db05b5cfc89d1224d60bd03ec34c2869d6401431d87de924cc26c2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722281558211772073,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-464015,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d39a0af6b9fc7b6fd082fdc3066f5e3,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-lo
g,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3de9287727286378155d0686a12b796de28e3da99fbf812ff57e07a0d08aec7b,PodSandboxId:1ff59d2b8563faab674bd0b7d0817a727d80db15e39797b8204c2f3b0fbc44dc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722281558227034810,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-464015,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 392448e9cc04dc57679875fe40d7ddbc,},Annotations:map[string]string{io.kubernetes.container.hash: b941963f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.co
ntainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:669aff584c28d163a60090c11bd6b3f339f049cdd24a2cff0277db48f8d25e63,PodSandboxId:f4677d0834576c8bd5bf8d215718ab7f7d09bf36d6b046ce05306c305430d787,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722281545527265652,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-j6d5l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e11c129e-19c1-460c-9d15-10a235d29e06,},Annotations:map[string]string{io.kubernetes.container.hash: 469d745e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"
},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49088d6b4a706030cbf20d3804d012c617948033707d5ebd3b2043eb8c164d50,PodSandboxId:6fc782ae0ba2261407e6f708fdee5908604f655278d93b40b5124b931ba317c2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722281544716052782,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6bztz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8eea26ce-59ee-46bd-a9c4-18477db50d96,},Annotations:map[string]string{io
.kubernetes.container.hash: fdfe374d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:841e7e305bb95c21975773a4d4d4e755792303dae99a116af2db97f4fe3f081e,PodSandboxId:1ff59d2b8563faab674bd0b7d0817a727d80db15e39797b8204c2f3b0fbc44dc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722281544601901356,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-464015,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 392448e9cc04dc57679875fe40d7ddbc,},Annotations:map[string]string{io.kubernetes.contain
er.hash: b941963f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b7b16b0bd9c0ed1a12aa9d3dbb0492cf3ecdd58f8cfdab74195a862cf719121,PodSandboxId:158bae2c06db05b5cfc89d1224d60bd03ec34c2869d6401431d87de924cc26c2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722281544493556455,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-464015,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d39a0af6b9fc7b6fd082fdc3066f5e3,},Annotations:map[string]string{io.kubernetes
.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff1daf93690fccd1dc759001f6697612af0553c0f96dcc83e9aa68ea5197ddbe,PodSandboxId:ac5a07f3837953f7ba4919c9d6bfb1aa0d35aea289a5996bd2c0a5b7a291d174,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722281544379290082,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-464015,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02d6444c82d3bc2e023c903971c2842f,},Annotations:map[string]string{io.kubernetes.container.hash: cc0bdb8b,io.kubernetes.container
.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6edbae7cc2a33063c7cceb1f30b8a07f80cc6b66241ce129b4a119cba77d5ee4,PodSandboxId:161d8d86c5dd0c9b5655ba6044ab69142c386ceee59a9f58aecd252b2bf7f31e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722281544421655845,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-464015,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 894c94b69ed750ac73d1f00d869bf369,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e093776f38c9029647c3f7c44eb0803af0bd1cb67b37112e7ee9594e860db6c1,PodSandboxId:d567aaa9c08c259d790c56d77e1150347479910bfbccae4d7369864c1687d872,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722281490198586073,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-j6d5l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e11c129e-19c1-460c-9d15-10a235d29e06,},Annotations:map[string]string{io.kubernetes.container.hash: 469d745e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"p
rotocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9fb15b1131c5ede4eddfa3701ce04f731640d804947e136fea265259ce58da5,PodSandboxId:cb68f6f36a8c2304bd6e9e276f0e6c8f46c94874e404cd048653ebf290e9d119,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722281489421297743,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6bztz,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 8eea26ce-59ee-46bd-a9c4-18477db50d96,},Annotations:map[string]string{io.kubernetes.container.hash: fdfe374d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=882ece9a-895f-4f3f-b66a-1e35d51fb9cd name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	20617830601df       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2   22 seconds ago       Running             kube-scheduler            2                   161d8d86c5dd0       kube-scheduler-pause-464015
	3de9287727286       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d   22 seconds ago       Running             kube-apiserver            2                   1ff59d2b8563f       kube-apiserver-pause-464015
	cc778cb13a1d8       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e   22 seconds ago       Running             kube-controller-manager   2                   158bae2c06db0       kube-controller-manager-pause-464015
	b62ff99eeaf9e       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   22 seconds ago       Running             etcd                      2                   ac5a07f383795       etcd-pause-464015
	669aff584c28d       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   35 seconds ago       Running             coredns                   1                   f4677d0834576       coredns-7db6d8ff4d-j6d5l
	49088d6b4a706       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1   36 seconds ago       Running             kube-proxy                1                   6fc782ae0ba22       kube-proxy-6bztz
	841e7e305bb95       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d   36 seconds ago       Exited              kube-apiserver            1                   1ff59d2b8563f       kube-apiserver-pause-464015
	4b7b16b0bd9c0       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e   36 seconds ago       Exited              kube-controller-manager   1                   158bae2c06db0       kube-controller-manager-pause-464015
	6edbae7cc2a33       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2   36 seconds ago       Exited              kube-scheduler            1                   161d8d86c5dd0       kube-scheduler-pause-464015
	ff1daf93690fc       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   36 seconds ago       Exited              etcd                      1                   ac5a07f383795       etcd-pause-464015
	e093776f38c90       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   About a minute ago   Exited              coredns                   0                   d567aaa9c08c2       coredns-7db6d8ff4d-j6d5l
	f9fb15b1131c5       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1   About a minute ago   Exited              kube-proxy                0                   cb68f6f36a8c2       kube-proxy-6bztz
	
	
	==> coredns [669aff584c28d163a60090c11bd6b3f339f049cdd24a2cff0277db48f8d25e63] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b1a455670bb5d7fd89b1330085768be95fff75b4bad96aaa3401a966da5ff6e52d21b0affe6cd8290073027bf8b56132a09c4a2bf21619d1b275a531761a3578
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:36279 - 53358 "HINFO IN 2526986688340446305.175117106379564337. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.00961475s
	
	
	==> coredns [e093776f38c9029647c3f7c44eb0803af0bd1cb67b37112e7ee9594e860db6c1] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b1a455670bb5d7fd89b1330085768be95fff75b4bad96aaa3401a966da5ff6e52d21b0affe6cd8290073027bf8b56132a09c4a2bf21619d1b275a531761a3578
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:56523 - 4198 "HINFO IN 636979211025756034.7820473512945332659. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.00897904s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               pause-464015
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-464015
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ff26f50543a99741e8a988a34136ffea2b41dbf0
	                    minikube.k8s.io/name=pause-464015
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_29T19_31_13_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 19:31:10 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-464015
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 19:32:51 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 19:32:41 +0000   Mon, 29 Jul 2024 19:31:08 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 19:32:41 +0000   Mon, 29 Jul 2024 19:31:08 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 19:32:41 +0000   Mon, 29 Jul 2024 19:31:08 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 19:32:41 +0000   Mon, 29 Jul 2024 19:31:13 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.50
	  Hostname:    pause-464015
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	System Info:
	  Machine ID:                 f6f81a3dca5b459e84987ee70e6a92a0
	  System UUID:                f6f81a3d-ca5b-459e-8498-7ee70e6a92a0
	  Boot ID:                    767e5d6c-01e0-44f1-b969-1706327dab4b
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-j6d5l                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     95s
	  kube-system                 etcd-pause-464015                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         109s
	  kube-system                 kube-apiserver-pause-464015             250m (12%)    0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 kube-controller-manager-pause-464015    200m (10%)    0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 kube-proxy-6bztz                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         94s
	  kube-system                 kube-scheduler-pause-464015             100m (5%)     0 (0%)      0 (0%)           0 (0%)         109s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 91s                kube-proxy       
	  Normal   Starting                 32s                kube-proxy       
	  Normal   NodeHasSufficientMemory  109s               kubelet          Node pause-464015 status is now: NodeHasSufficientMemory
	  Normal   NodeAllocatableEnforced  109s               kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasNoDiskPressure    109s               kubelet          Node pause-464015 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     109s               kubelet          Node pause-464015 status is now: NodeHasSufficientPID
	  Normal   Starting                 109s               kubelet          Starting kubelet.
	  Normal   NodeReady                108s               kubelet          Node pause-464015 status is now: NodeReady
	  Normal   RegisteredNode           96s                node-controller  Node pause-464015 event: Registered Node pause-464015 in Controller
	  Warning  ContainerGCFailed        49s                kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   Starting                 24s                kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  24s (x8 over 24s)  kubelet          Node pause-464015 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    24s (x8 over 24s)  kubelet          Node pause-464015 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     24s (x7 over 24s)  kubelet          Node pause-464015 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  24s                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           8s                 node-controller  Node pause-464015 event: Registered Node pause-464015 in Controller
	
	
	==> dmesg <==
	[  +0.059066] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.062101] systemd-fstab-generator[608]: Ignoring "noauto" option for root device
	[  +0.192147] systemd-fstab-generator[622]: Ignoring "noauto" option for root device
	[  +0.140008] systemd-fstab-generator[634]: Ignoring "noauto" option for root device
	[  +0.279583] systemd-fstab-generator[664]: Ignoring "noauto" option for root device
	[Jul29 19:31] systemd-fstab-generator[760]: Ignoring "noauto" option for root device
	[  +0.062960] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.672450] systemd-fstab-generator[937]: Ignoring "noauto" option for root device
	[  +0.417821] kauditd_printk_skb: 46 callbacks suppressed
	[  +5.626812] systemd-fstab-generator[1268]: Ignoring "noauto" option for root device
	[  +0.079854] kauditd_printk_skb: 41 callbacks suppressed
	[ +14.421422] systemd-fstab-generator[1487]: Ignoring "noauto" option for root device
	[  +0.092305] kauditd_printk_skb: 21 callbacks suppressed
	[Jul29 19:32] systemd-fstab-generator[2143]: Ignoring "noauto" option for root device
	[  +0.089869] kauditd_printk_skb: 71 callbacks suppressed
	[  +0.068059] systemd-fstab-generator[2155]: Ignoring "noauto" option for root device
	[  +0.212310] systemd-fstab-generator[2169]: Ignoring "noauto" option for root device
	[  +0.159003] systemd-fstab-generator[2181]: Ignoring "noauto" option for root device
	[  +0.332600] systemd-fstab-generator[2209]: Ignoring "noauto" option for root device
	[  +3.838800] systemd-fstab-generator[2337]: Ignoring "noauto" option for root device
	[  +0.521426] kauditd_printk_skb: 122 callbacks suppressed
	[ +12.190497] kauditd_printk_skb: 63 callbacks suppressed
	[  +1.230515] systemd-fstab-generator[3107]: Ignoring "noauto" option for root device
	[ +11.463589] kauditd_printk_skb: 40 callbacks suppressed
	[  +5.871800] systemd-fstab-generator[3482]: Ignoring "noauto" option for root device
	
	
	==> etcd [b62ff99eeaf9e0454dbadae3cf4ec70b609fb7795068db3c8882b6eb083fccb0] <==
	{"level":"info","ts":"2024-07-29T19:32:38.650048Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6b98348baa467fce","local-member-id":"c0dcbd712fbd8799","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T19:32:38.650241Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T19:32:38.655249Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-29T19:32:38.658339Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"c0dcbd712fbd8799","initial-advertise-peer-urls":["https://192.168.50.50:2380"],"listen-peer-urls":["https://192.168.50.50:2380"],"advertise-client-urls":["https://192.168.50.50:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.50:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-29T19:32:38.658395Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-29T19:32:38.65851Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.50.50:2380"}
	{"level":"info","ts":"2024-07-29T19:32:38.660198Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.50.50:2380"}
	{"level":"info","ts":"2024-07-29T19:32:39.60922Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c0dcbd712fbd8799 is starting a new election at term 3"}
	{"level":"info","ts":"2024-07-29T19:32:39.60928Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c0dcbd712fbd8799 became pre-candidate at term 3"}
	{"level":"info","ts":"2024-07-29T19:32:39.609312Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c0dcbd712fbd8799 received MsgPreVoteResp from c0dcbd712fbd8799 at term 3"}
	{"level":"info","ts":"2024-07-29T19:32:39.60933Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c0dcbd712fbd8799 became candidate at term 4"}
	{"level":"info","ts":"2024-07-29T19:32:39.609336Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c0dcbd712fbd8799 received MsgVoteResp from c0dcbd712fbd8799 at term 4"}
	{"level":"info","ts":"2024-07-29T19:32:39.609344Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c0dcbd712fbd8799 became leader at term 4"}
	{"level":"info","ts":"2024-07-29T19:32:39.609351Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: c0dcbd712fbd8799 elected leader c0dcbd712fbd8799 at term 4"}
	{"level":"info","ts":"2024-07-29T19:32:39.61438Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"c0dcbd712fbd8799","local-member-attributes":"{Name:pause-464015 ClientURLs:[https://192.168.50.50:2379]}","request-path":"/0/members/c0dcbd712fbd8799/attributes","cluster-id":"6b98348baa467fce","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-29T19:32:39.614515Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T19:32:39.616127Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.50:2379"}
	{"level":"info","ts":"2024-07-29T19:32:39.616643Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T19:32:39.618055Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-29T19:32:39.62723Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-29T19:32:39.627276Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"warn","ts":"2024-07-29T19:32:57.825865Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"160.511863ms","expected-duration":"100ms","prefix":"","request":"header:<ID:9771000295659295559 > lease_revoke:<id:079990fffb02521c>","response":"size:28"}
	{"level":"info","ts":"2024-07-29T19:32:57.825959Z","caller":"traceutil/trace.go:171","msg":"trace[543841593] linearizableReadLoop","detail":"{readStateIndex:520; appliedIndex:519; }","duration":"134.90408ms","start":"2024-07-29T19:32:57.69104Z","end":"2024-07-29T19:32:57.825944Z","steps":["trace[543841593] 'read index received'  (duration: 34.833µs)","trace[543841593] 'applied index is now lower than readState.Index'  (duration: 134.868485ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-29T19:32:57.826037Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"134.983527ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/\" range_end:\"/registry/minions0\" ","response":"range_response_count:1 size:5426"}
	{"level":"info","ts":"2024-07-29T19:32:57.826051Z","caller":"traceutil/trace.go:171","msg":"trace[815592013] range","detail":"{range_begin:/registry/minions/; range_end:/registry/minions0; response_count:1; response_revision:474; }","duration":"135.041041ms","start":"2024-07-29T19:32:57.691006Z","end":"2024-07-29T19:32:57.826047Z","steps":["trace[815592013] 'agreement among raft nodes before linearized reading'  (duration: 134.97749ms)"],"step_count":1}
	
	
	==> etcd [ff1daf93690fccd1dc759001f6697612af0553c0f96dcc83e9aa68ea5197ddbe] <==
	{"level":"info","ts":"2024-07-29T19:32:25.250061Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.50.50:2380"}
	{"level":"info","ts":"2024-07-29T19:32:26.153215Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c0dcbd712fbd8799 is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-29T19:32:26.153316Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c0dcbd712fbd8799 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-29T19:32:26.153368Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c0dcbd712fbd8799 received MsgPreVoteResp from c0dcbd712fbd8799 at term 2"}
	{"level":"info","ts":"2024-07-29T19:32:26.153404Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c0dcbd712fbd8799 became candidate at term 3"}
	{"level":"info","ts":"2024-07-29T19:32:26.153428Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c0dcbd712fbd8799 received MsgVoteResp from c0dcbd712fbd8799 at term 3"}
	{"level":"info","ts":"2024-07-29T19:32:26.153455Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c0dcbd712fbd8799 became leader at term 3"}
	{"level":"info","ts":"2024-07-29T19:32:26.153486Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: c0dcbd712fbd8799 elected leader c0dcbd712fbd8799 at term 3"}
	{"level":"info","ts":"2024-07-29T19:32:26.171564Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"c0dcbd712fbd8799","local-member-attributes":"{Name:pause-464015 ClientURLs:[https://192.168.50.50:2379]}","request-path":"/0/members/c0dcbd712fbd8799/attributes","cluster-id":"6b98348baa467fce","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-29T19:32:26.171816Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T19:32:26.173215Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-29T19:32:26.173261Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-29T19:32:26.17328Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T19:32:26.183198Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.50:2379"}
	{"level":"info","ts":"2024-07-29T19:32:26.19739Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-29T19:32:36.032497Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-07-29T19:32:36.032581Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"pause-464015","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.50.50:2380"],"advertise-client-urls":["https://192.168.50.50:2379"]}
	{"level":"warn","ts":"2024-07-29T19:32:36.03272Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-29T19:32:36.032752Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-29T19:32:36.033387Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.50.50:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-29T19:32:36.033493Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.50.50:2379: use of closed network connection"}
	{"level":"info","ts":"2024-07-29T19:32:36.033601Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"c0dcbd712fbd8799","current-leader-member-id":"c0dcbd712fbd8799"}
	{"level":"info","ts":"2024-07-29T19:32:36.03757Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.50.50:2380"}
	{"level":"info","ts":"2024-07-29T19:32:36.037796Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.50.50:2380"}
	{"level":"info","ts":"2024-07-29T19:32:36.03791Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"pause-464015","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.50.50:2380"],"advertise-client-urls":["https://192.168.50.50:2379"]}
	
	
	==> kernel <==
	 19:33:01 up 2 min,  0 users,  load average: 1.25, 0.45, 0.16
	Linux pause-464015 5.10.207 #1 SMP Tue Jul 23 04:25:44 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [3de9287727286378155d0686a12b796de28e3da99fbf812ff57e07a0d08aec7b] <==
	I0729 19:32:41.176081       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0729 19:32:41.176116       1 policy_source.go:224] refreshing policies
	I0729 19:32:41.176353       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0729 19:32:41.205026       1 shared_informer.go:320] Caches are synced for configmaps
	I0729 19:32:41.205090       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0729 19:32:41.205098       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0729 19:32:41.208419       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0729 19:32:41.209065       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0729 19:32:41.211908       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0729 19:32:41.212643       1 aggregator.go:165] initial CRD sync complete...
	I0729 19:32:41.212684       1 autoregister_controller.go:141] Starting autoregister controller
	I0729 19:32:41.212693       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0729 19:32:41.212699       1 cache.go:39] Caches are synced for autoregister controller
	I0729 19:32:41.214450       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0729 19:32:41.214645       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0729 19:32:41.222754       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0729 19:32:42.008851       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0729 19:32:42.324395       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.50.50]
	I0729 19:32:42.326105       1 controller.go:615] quota admission added evaluator for: endpoints
	I0729 19:32:42.333445       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0729 19:32:42.514187       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0729 19:32:42.529429       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0729 19:32:42.571768       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0729 19:32:42.619982       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0729 19:32:42.629240       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	
	==> kube-apiserver [841e7e305bb95c21975773a4d4d4e755792303dae99a116af2db97f4fe3f081e] <==
	I0729 19:32:28.909806       1 controller.go:86] Shutting down OpenAPI V3 AggregationController
	I0729 19:32:28.914410       1 secure_serving.go:258] Stopped listening on [::]:8443
	I0729 19:32:28.914479       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0729 19:32:28.916555       1 dynamic_serving_content.go:146] "Shutting down controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I0729 19:32:28.916669       1 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0729 19:32:28.929125       1 controller.go:157] Shutting down quota evaluator
	I0729 19:32:28.929904       1 controller.go:176] quota evaluator worker shutdown
	I0729 19:32:28.930594       1 controller.go:176] quota evaluator worker shutdown
	I0729 19:32:28.931232       1 controller.go:176] quota evaluator worker shutdown
	I0729 19:32:28.931277       1 controller.go:176] quota evaluator worker shutdown
	I0729 19:32:28.931292       1 controller.go:176] quota evaluator worker shutdown
	W0729 19:32:29.599550       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E0729 19:32:29.599843       1 storage_rbac.go:187] unable to initialize clusterroles: Get "https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles": dial tcp 127.0.0.1:8443: connect: connection refused
	W0729 19:32:30.598834       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E0729 19:32:30.599928       1 storage_rbac.go:187] unable to initialize clusterroles: Get "https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles": dial tcp 127.0.0.1:8443: connect: connection refused
	W0729 19:32:31.599607       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E0729 19:32:31.599761       1 storage_rbac.go:187] unable to initialize clusterroles: Get "https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles": dial tcp 127.0.0.1:8443: connect: connection refused
	W0729 19:32:32.599080       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E0729 19:32:32.599413       1 storage_rbac.go:187] unable to initialize clusterroles: Get "https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles": dial tcp 127.0.0.1:8443: connect: connection refused
	W0729 19:32:33.599113       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E0729 19:32:33.599126       1 storage_rbac.go:187] unable to initialize clusterroles: Get "https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles": dial tcp 127.0.0.1:8443: connect: connection refused
	E0729 19:32:34.599383       1 storage_rbac.go:187] unable to initialize clusterroles: Get "https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles": dial tcp 127.0.0.1:8443: connect: connection refused
	W0729 19:32:34.599401       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	W0729 19:32:35.599407       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E0729 19:32:35.600230       1 storage_rbac.go:187] unable to initialize clusterroles: Get "https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles": dial tcp 127.0.0.1:8443: connect: connection refused
	
	
	==> kube-controller-manager [4b7b16b0bd9c0ed1a12aa9d3dbb0492cf3ecdd58f8cfdab74195a862cf719121] <==
	I0729 19:32:26.055431       1 serving.go:380] Generated self-signed cert in-memory
	I0729 19:32:27.339439       1 controllermanager.go:189] "Starting" version="v1.30.3"
	I0729 19:32:27.339486       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 19:32:27.345362       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0729 19:32:27.346540       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0729 19:32:27.346717       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0729 19:32:27.346850       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	
	
	==> kube-controller-manager [cc778cb13a1d80930529267dbbe0e8cdf18a533cb71ba795706a640242ceb573] <==
	I0729 19:32:53.526187       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0729 19:32:53.530637       1 shared_informer.go:320] Caches are synced for PVC protection
	I0729 19:32:53.530750       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0729 19:32:53.533217       1 shared_informer.go:320] Caches are synced for ephemeral
	I0729 19:32:53.536856       1 shared_informer.go:320] Caches are synced for deployment
	I0729 19:32:53.538110       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0729 19:32:53.543132       1 shared_informer.go:320] Caches are synced for disruption
	I0729 19:32:53.547506       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0729 19:32:53.549097       1 shared_informer.go:320] Caches are synced for taint
	I0729 19:32:53.549398       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0729 19:32:53.549898       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-464015"
	I0729 19:32:53.551018       1 node_lifecycle_controller.go:1073] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0729 19:32:53.566349       1 shared_informer.go:320] Caches are synced for endpoint
	I0729 19:32:53.582976       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0729 19:32:53.583662       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="251.16µs"
	I0729 19:32:53.632647       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0729 19:32:53.654419       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0729 19:32:53.707415       1 shared_informer.go:320] Caches are synced for attach detach
	I0729 19:32:53.710406       1 shared_informer.go:320] Caches are synced for daemon sets
	I0729 19:32:53.713251       1 shared_informer.go:320] Caches are synced for stateful set
	I0729 19:32:53.716739       1 shared_informer.go:320] Caches are synced for resource quota
	I0729 19:32:53.726766       1 shared_informer.go:320] Caches are synced for resource quota
	I0729 19:32:54.152422       1 shared_informer.go:320] Caches are synced for garbage collector
	I0729 19:32:54.196690       1 shared_informer.go:320] Caches are synced for garbage collector
	I0729 19:32:54.196743       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [49088d6b4a706030cbf20d3804d012c617948033707d5ebd3b2043eb8c164d50] <==
	W0729 19:32:28.976809       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.50.50:8443: connect: connection refused
	E0729 19:32:28.976884       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.50.50:8443: connect: connection refused
	W0729 19:32:28.976970       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.50.50:8443: connect: connection refused
	E0729 19:32:28.977030       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.50.50:8443: connect: connection refused
	W0729 19:32:29.911029       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.50.50:8443: connect: connection refused
	E0729 19:32:29.911248       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.50.50:8443: connect: connection refused
	W0729 19:32:30.024584       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)pause-464015&limit=500&resourceVersion=0": dial tcp 192.168.50.50:8443: connect: connection refused
	E0729 19:32:30.024651       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)pause-464015&limit=500&resourceVersion=0": dial tcp 192.168.50.50:8443: connect: connection refused
	W0729 19:32:30.215693       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.50.50:8443: connect: connection refused
	E0729 19:32:30.215838       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.50.50:8443: connect: connection refused
	W0729 19:32:31.959021       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.50.50:8443: connect: connection refused
	E0729 19:32:31.959087       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.50.50:8443: connect: connection refused
	W0729 19:32:32.566768       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.50.50:8443: connect: connection refused
	E0729 19:32:32.566974       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.50.50:8443: connect: connection refused
	W0729 19:32:32.933085       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)pause-464015&limit=500&resourceVersion=0": dial tcp 192.168.50.50:8443: connect: connection refused
	E0729 19:32:32.933374       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)pause-464015&limit=500&resourceVersion=0": dial tcp 192.168.50.50:8443: connect: connection refused
	W0729 19:32:36.147003       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.50.50:8443: connect: connection refused
	E0729 19:32:36.147061       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.50.50:8443: connect: connection refused
	W0729 19:32:36.378218       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.50.50:8443: connect: connection refused
	E0729 19:32:36.378320       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.50.50:8443: connect: connection refused
	W0729 19:32:38.182895       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)pause-464015&limit=500&resourceVersion=0": dial tcp 192.168.50.50:8443: connect: connection refused
	E0729 19:32:38.182938       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)pause-464015&limit=500&resourceVersion=0": dial tcp 192.168.50.50:8443: connect: connection refused
	I0729 19:32:47.375189       1 shared_informer.go:320] Caches are synced for service config
	I0729 19:32:48.874993       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0729 19:32:49.376055       1 shared_informer.go:320] Caches are synced for node config
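Editor's note on the garbled URLs in the kube-proxy section above: fragments such as %!s(MISSING), %!F(MISSING), %!C(MISSING), and %!D(MISSING) are Go's fmt notation for a format verb with no matching argument. The label/field selectors in these requests are URL-encoded (%21 = '!', %2F = '/', %2C = ',', %3D = '='), and those percent-escapes appear to have been passed back through a printf-style formatter somewhere in the logging path, which mangles them; the requests themselves are well-formed. A minimal illustrative Go sketch of the effect follows (not kube-proxy or minikube code; the selector string is shortened from the requests logged above):

package main

import "fmt"

func main() {
	// URL-encoded selector, shortened from the requests logged above
	// (%21 = '!', %2F = '/', %2C = ',').
	sel := "?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name"

	// Using the string as the format itself (with no arguments) makes fmt
	// read "%21s" as verb 's' with width 21, and "%2F"/"%2C" as verbs 'F'/'C',
	// all missing their arguments -- exactly the mistake being demonstrated:
	fmt.Printf(sel + "\n")
	// Prints: ?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name
}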
	
	
	==> kube-proxy [f9fb15b1131c5ede4eddfa3701ce04f731640d804947e136fea265259ce58da5] <==
	I0729 19:31:29.678406       1 server_linux.go:69] "Using iptables proxy"
	I0729 19:31:29.695665       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.50.50"]
	I0729 19:31:29.752024       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0729 19:31:29.752194       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0729 19:31:29.752242       1 server_linux.go:165] "Using iptables Proxier"
	I0729 19:31:29.755649       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0729 19:31:29.756265       1 server.go:872] "Version info" version="v1.30.3"
	I0729 19:31:29.756316       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 19:31:29.759210       1 config.go:192] "Starting service config controller"
	I0729 19:31:29.759598       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0729 19:31:29.759664       1 config.go:101] "Starting endpoint slice config controller"
	I0729 19:31:29.759688       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0729 19:31:29.761943       1 config.go:319] "Starting node config controller"
	I0729 19:31:29.761979       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0729 19:31:29.859849       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0729 19:31:29.859944       1 shared_informer.go:320] Caches are synced for service config
	I0729 19:31:29.862114       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [20617830601df6c479f2994467411b171ad1552c3f05871c43583e65795db7d7] <==
	I0729 19:32:39.291293       1 serving.go:380] Generated self-signed cert in-memory
	W0729 19:32:41.100602       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0729 19:32:41.101113       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0729 19:32:41.101237       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0729 19:32:41.101263       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0729 19:32:41.124091       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0729 19:32:41.124123       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 19:32:41.129921       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0729 19:32:41.130027       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0729 19:32:41.131026       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0729 19:32:41.131099       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	W0729 19:32:41.139519       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0729 19:32:41.139581       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0729 19:32:42.630741       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [6edbae7cc2a33063c7cceb1f30b8a07f80cc6b66241ce129b4a119cba77d5ee4] <==
	I0729 19:32:27.386826       1 serving.go:380] Generated self-signed cert in-memory
	W0729 19:32:28.705806       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0729 19:32:28.706773       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0729 19:32:28.706897       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0729 19:32:28.706931       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0729 19:32:28.731943       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0729 19:32:28.732027       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 19:32:28.736323       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0729 19:32:28.737441       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0729 19:32:28.745766       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0729 19:32:28.737478       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0729 19:32:28.846400       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0729 19:32:35.891446       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0729 19:32:35.892089       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0729 19:32:35.892428       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0729 19:32:35.892922       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Jul 29 19:32:38 pause-464015 kubelet[3114]: I0729 19:32:38.164724    3114 scope.go:117] "RemoveContainer" containerID="841e7e305bb95c21975773a4d4d4e755792303dae99a116af2db97f4fe3f081e"
	Jul 29 19:32:38 pause-464015 kubelet[3114]: I0729 19:32:38.167732    3114 scope.go:117] "RemoveContainer" containerID="6edbae7cc2a33063c7cceb1f30b8a07f80cc6b66241ce129b4a119cba77d5ee4"
	Jul 29 19:32:38 pause-464015 kubelet[3114]: E0729 19:32:38.178428    3114 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 192.168.50.50:8443: connect: connection refused" event="&Event{ObjectMeta:{pause-464015.17e6c5f3a342ec3a  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:pause-464015,UID:pause-464015,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:pause-464015,},FirstTimestamp:2024-07-29 19:32:37.722590266 +0000 UTC m=+0.134556810,LastTimestamp:2024-07-29 19:32:37.722590266 +0000 UTC m=+0.134556810,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:pause-464015,}"
	Jul 29 19:32:38 pause-464015 kubelet[3114]: E0729 19:32:38.357631    3114 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-464015?timeout=10s\": dial tcp 192.168.50.50:8443: connect: connection refused" interval="800ms"
	Jul 29 19:32:38 pause-464015 kubelet[3114]: I0729 19:32:38.456299    3114 kubelet_node_status.go:73] "Attempting to register node" node="pause-464015"
	Jul 29 19:32:38 pause-464015 kubelet[3114]: E0729 19:32:38.460302    3114 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.50.50:8443: connect: connection refused" node="pause-464015"
	Jul 29 19:32:38 pause-464015 kubelet[3114]: W0729 19:32:38.559856    3114 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)pause-464015&limit=500&resourceVersion=0": dial tcp 192.168.50.50:8443: connect: connection refused
	Jul 29 19:32:38 pause-464015 kubelet[3114]: E0729 19:32:38.559920    3114 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)pause-464015&limit=500&resourceVersion=0": dial tcp 192.168.50.50:8443: connect: connection refused
	Jul 29 19:32:39 pause-464015 kubelet[3114]: I0729 19:32:39.262454    3114 kubelet_node_status.go:73] "Attempting to register node" node="pause-464015"
	Jul 29 19:32:41 pause-464015 kubelet[3114]: I0729 19:32:41.196692    3114 kubelet_node_status.go:112] "Node was previously registered" node="pause-464015"
	Jul 29 19:32:41 pause-464015 kubelet[3114]: I0729 19:32:41.197276    3114 kubelet_node_status.go:76] "Successfully registered node" node="pause-464015"
	Jul 29 19:32:41 pause-464015 kubelet[3114]: I0729 19:32:41.199761    3114 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Jul 29 19:32:41 pause-464015 kubelet[3114]: I0729 19:32:41.201493    3114 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Jul 29 19:32:41 pause-464015 kubelet[3114]: E0729 19:32:41.220575    3114 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"pause-464015\" not found"
	Jul 29 19:32:41 pause-464015 kubelet[3114]: E0729 19:32:41.321725    3114 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"pause-464015\" not found"
	Jul 29 19:32:41 pause-464015 kubelet[3114]: E0729 19:32:41.422760    3114 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"pause-464015\" not found"
	Jul 29 19:32:41 pause-464015 kubelet[3114]: E0729 19:32:41.523639    3114 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"pause-464015\" not found"
	Jul 29 19:32:41 pause-464015 kubelet[3114]: E0729 19:32:41.624640    3114 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"pause-464015\" not found"
	Jul 29 19:32:41 pause-464015 kubelet[3114]: E0729 19:32:41.725398    3114 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"pause-464015\" not found"
	Jul 29 19:32:41 pause-464015 kubelet[3114]: I0729 19:32:41.737211    3114 apiserver.go:52] "Watching apiserver"
	Jul 29 19:32:41 pause-464015 kubelet[3114]: I0729 19:32:41.739710    3114 topology_manager.go:215] "Topology Admit Handler" podUID="e11c129e-19c1-460c-9d15-10a235d29e06" podNamespace="kube-system" podName="coredns-7db6d8ff4d-j6d5l"
	Jul 29 19:32:41 pause-464015 kubelet[3114]: I0729 19:32:41.740029    3114 topology_manager.go:215] "Topology Admit Handler" podUID="8eea26ce-59ee-46bd-a9c4-18477db50d96" podNamespace="kube-system" podName="kube-proxy-6bztz"
	Jul 29 19:32:41 pause-464015 kubelet[3114]: I0729 19:32:41.839796    3114 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Jul 29 19:32:41 pause-464015 kubelet[3114]: I0729 19:32:41.914784    3114 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8eea26ce-59ee-46bd-a9c4-18477db50d96-xtables-lock\") pod \"kube-proxy-6bztz\" (UID: \"8eea26ce-59ee-46bd-a9c4-18477db50d96\") " pod="kube-system/kube-proxy-6bztz"
	Jul 29 19:32:41 pause-464015 kubelet[3114]: I0729 19:32:41.914882    3114 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8eea26ce-59ee-46bd-a9c4-18477db50d96-lib-modules\") pod \"kube-proxy-6bztz\" (UID: \"8eea26ce-59ee-46bd-a9c4-18477db50d96\") " pod="kube-system/kube-proxy-6bztz"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0729 19:32:59.810673 1108437 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19312-1055011/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
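Editor's note on the stderr above: "bufio.Scanner: token too long" is bufio.ErrTooLong, returned when a single line exceeds the Scanner's buffer limit (bufio.MaxScanTokenSize, 64 KiB, by default), which is why minikube could not re-emit the previous start log from lastStart.txt here. Below is a minimal sketch of that behavior and the usual workaround, Scanner.Buffer; it is illustrative only, not minikube's logs.go, and the file name is simply the one named in the error:

package main

import (
	"bufio"
	"fmt"
	"os"
)

func main() {
	f, err := os.Open("lastStart.txt") // the file named in the error above
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	// By default a line longer than bufio.MaxScanTokenSize (64 KiB) stops
	// sc.Scan() and makes sc.Err() return bufio.ErrTooLong, i.e. the
	// "bufio.Scanner: token too long" seen above. Raising the limit avoids it:
	sc.Buffer(make([]byte, 0, 64*1024), 10*1024*1024) // allow lines up to 10 MiB
	for sc.Scan() {
		fmt.Println(sc.Text())
	}
	if err := sc.Err(); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}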
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-464015 -n pause-464015
helpers_test.go:261: (dbg) Run:  kubectl --context pause-464015 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-464015 -n pause-464015
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-464015 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-464015 logs -n 25: (1.72164652s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|-----------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |        Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|-----------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p auto-184620 sudo cat                              | auto-184620           | jenkins | v1.33.1 | 29 Jul 24 19:32 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                       |         |         |                     |                     |
	| ssh     | -p auto-184620 sudo cat                              | auto-184620           | jenkins | v1.33.1 | 29 Jul 24 19:32 UTC | 29 Jul 24 19:32 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |                       |         |         |                     |                     |
	| ssh     | -p auto-184620 sudo                                  | auto-184620           | jenkins | v1.33.1 | 29 Jul 24 19:32 UTC | 29 Jul 24 19:32 UTC |
	|         | cri-dockerd --version                                |                       |         |         |                     |                     |
	| ssh     | -p auto-184620 sudo systemctl                        | auto-184620           | jenkins | v1.33.1 | 29 Jul 24 19:32 UTC |                     |
	|         | status containerd --all --full                       |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p auto-184620 sudo systemctl                        | auto-184620           | jenkins | v1.33.1 | 29 Jul 24 19:32 UTC | 29 Jul 24 19:32 UTC |
	|         | cat containerd --no-pager                            |                       |         |         |                     |                     |
	| ssh     | -p auto-184620 sudo cat                              | auto-184620           | jenkins | v1.33.1 | 29 Jul 24 19:32 UTC | 29 Jul 24 19:32 UTC |
	|         | /lib/systemd/system/containerd.service               |                       |         |         |                     |                     |
	| ssh     | -p auto-184620 sudo cat                              | auto-184620           | jenkins | v1.33.1 | 29 Jul 24 19:32 UTC | 29 Jul 24 19:32 UTC |
	|         | /etc/containerd/config.toml                          |                       |         |         |                     |                     |
	| ssh     | -p auto-184620 sudo containerd                       | auto-184620           | jenkins | v1.33.1 | 29 Jul 24 19:32 UTC | 29 Jul 24 19:32 UTC |
	|         | config dump                                          |                       |         |         |                     |                     |
	| ssh     | -p auto-184620 sudo systemctl                        | auto-184620           | jenkins | v1.33.1 | 29 Jul 24 19:32 UTC | 29 Jul 24 19:32 UTC |
	|         | status crio --all --full                             |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p auto-184620 sudo systemctl                        | auto-184620           | jenkins | v1.33.1 | 29 Jul 24 19:32 UTC | 29 Jul 24 19:32 UTC |
	|         | cat crio --no-pager                                  |                       |         |         |                     |                     |
	| ssh     | -p auto-184620 sudo find                             | auto-184620           | jenkins | v1.33.1 | 29 Jul 24 19:32 UTC | 29 Jul 24 19:32 UTC |
	|         | /etc/crio -type f -exec sh -c                        |                       |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                       |         |         |                     |                     |
	| ssh     | -p auto-184620 sudo crio                             | auto-184620           | jenkins | v1.33.1 | 29 Jul 24 19:32 UTC | 29 Jul 24 19:32 UTC |
	|         | config                                               |                       |         |         |                     |                     |
	| delete  | -p auto-184620                                       | auto-184620           | jenkins | v1.33.1 | 29 Jul 24 19:32 UTC | 29 Jul 24 19:32 UTC |
	| start   | -p custom-flannel-184620                             | custom-flannel-184620 | jenkins | v1.33.1 | 29 Jul 24 19:32 UTC |                     |
	|         | --memory=3072 --alsologtostderr                      |                       |         |         |                     |                     |
	|         | --wait=true --wait-timeout=15m                       |                       |         |         |                     |                     |
	|         | --cni=testdata/kube-flannel.yaml                     |                       |         |         |                     |                     |
	|         | --driver=kvm2                                        |                       |         |         |                     |                     |
	|         | --container-runtime=crio                             |                       |         |         |                     |                     |
	| ssh     | -p kindnet-184620 pgrep -a                           | kindnet-184620        | jenkins | v1.33.1 | 29 Jul 24 19:32 UTC | 29 Jul 24 19:32 UTC |
	|         | kubelet                                              |                       |         |         |                     |                     |
	| ssh     | -p kindnet-184620 sudo cat                           | kindnet-184620        | jenkins | v1.33.1 | 29 Jul 24 19:32 UTC | 29 Jul 24 19:32 UTC |
	|         | /etc/nsswitch.conf                                   |                       |         |         |                     |                     |
	| ssh     | -p kindnet-184620 sudo cat                           | kindnet-184620        | jenkins | v1.33.1 | 29 Jul 24 19:32 UTC | 29 Jul 24 19:32 UTC |
	|         | /etc/hosts                                           |                       |         |         |                     |                     |
	| ssh     | -p kindnet-184620 sudo cat                           | kindnet-184620        | jenkins | v1.33.1 | 29 Jul 24 19:32 UTC | 29 Jul 24 19:32 UTC |
	|         | /etc/resolv.conf                                     |                       |         |         |                     |                     |
	| ssh     | -p kindnet-184620 sudo crictl                        | kindnet-184620        | jenkins | v1.33.1 | 29 Jul 24 19:32 UTC | 29 Jul 24 19:33 UTC |
	|         | pods                                                 |                       |         |         |                     |                     |
	| ssh     | -p kindnet-184620 sudo crictl                        | kindnet-184620        | jenkins | v1.33.1 | 29 Jul 24 19:33 UTC | 29 Jul 24 19:33 UTC |
	|         | ps --all                                             |                       |         |         |                     |                     |
	| ssh     | -p kindnet-184620 sudo find                          | kindnet-184620        | jenkins | v1.33.1 | 29 Jul 24 19:33 UTC | 29 Jul 24 19:33 UTC |
	|         | /etc/cni -type f -exec sh -c                         |                       |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                       |         |         |                     |                     |
	| ssh     | -p kindnet-184620 sudo ip a s                        | kindnet-184620        | jenkins | v1.33.1 | 29 Jul 24 19:33 UTC | 29 Jul 24 19:33 UTC |
	| ssh     | -p kindnet-184620 sudo ip r s                        | kindnet-184620        | jenkins | v1.33.1 | 29 Jul 24 19:33 UTC | 29 Jul 24 19:33 UTC |
	| ssh     | -p kindnet-184620 sudo                               | kindnet-184620        | jenkins | v1.33.1 | 29 Jul 24 19:33 UTC | 29 Jul 24 19:33 UTC |
	|         | iptables-save                                        |                       |         |         |                     |                     |
	| ssh     | -p kindnet-184620 sudo                               | kindnet-184620        | jenkins | v1.33.1 | 29 Jul 24 19:33 UTC |                     |
	|         | iptables -t nat -L -n -v                             |                       |         |         |                     |                     |
	|---------|------------------------------------------------------|-----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 19:32:24
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 19:32:24.845746 1107895 out.go:291] Setting OutFile to fd 1 ...
	I0729 19:32:24.845986 1107895 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 19:32:24.846027 1107895 out.go:304] Setting ErrFile to fd 2...
	I0729 19:32:24.846044 1107895 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 19:32:24.846374 1107895 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19312-1055011/.minikube/bin
	I0729 19:32:24.847318 1107895 out.go:298] Setting JSON to false
	I0729 19:32:24.848966 1107895 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":11697,"bootTime":1722269848,"procs":304,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 19:32:24.849090 1107895 start.go:139] virtualization: kvm guest
	I0729 19:32:24.851670 1107895 out.go:177] * [custom-flannel-184620] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 19:32:24.853439 1107895 out.go:177]   - MINIKUBE_LOCATION=19312
	I0729 19:32:24.853519 1107895 notify.go:220] Checking for updates...
	I0729 19:32:24.856065 1107895 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 19:32:24.857373 1107895 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19312-1055011/kubeconfig
	I0729 19:32:24.858741 1107895 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19312-1055011/.minikube
	I0729 19:32:24.859957 1107895 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 19:32:24.861163 1107895 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 19:32:24.862927 1107895 config.go:182] Loaded profile config "calico-184620": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 19:32:24.863112 1107895 config.go:182] Loaded profile config "kindnet-184620": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 19:32:24.863334 1107895 config.go:182] Loaded profile config "pause-464015": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 19:32:24.863481 1107895 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 19:32:24.906678 1107895 out.go:177] * Using the kvm2 driver based on user configuration
	I0729 19:32:24.907860 1107895 start.go:297] selected driver: kvm2
	I0729 19:32:24.907881 1107895 start.go:901] validating driver "kvm2" against <nil>
	I0729 19:32:24.907896 1107895 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 19:32:24.908769 1107895 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 19:32:24.908857 1107895 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19312-1055011/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 19:32:24.925816 1107895 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0729 19:32:24.925862 1107895 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 19:32:24.926071 1107895 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 19:32:24.926103 1107895 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I0729 19:32:24.926111 1107895 start_flags.go:319] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I0729 19:32:24.926163 1107895 start.go:340] cluster config:
	{Name:custom-flannel-184620 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:custom-flannel-184620 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: So
cketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 19:32:24.926255 1107895 iso.go:125] acquiring lock: {Name:mk0af61c0fec1fd47930e548d03010a532c687b7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 19:32:24.927950 1107895 out.go:177] * Starting "custom-flannel-184620" primary control-plane node in "custom-flannel-184620" cluster
	I0729 19:32:24.929318 1107895 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 19:32:24.929363 1107895 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0729 19:32:24.929372 1107895 cache.go:56] Caching tarball of preloaded images
	I0729 19:32:24.929464 1107895 preload.go:172] Found /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 19:32:24.929477 1107895 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0729 19:32:24.929605 1107895 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/custom-flannel-184620/config.json ...
	I0729 19:32:24.929646 1107895 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/custom-flannel-184620/config.json: {Name:mk5453bac3d654cb42ac26382f99c3498ff9dc70 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 19:32:24.929809 1107895 start.go:360] acquireMachinesLock for custom-flannel-184620: {Name:mk0d8d947666df844b5fc2c0e0eebbfed69b4140 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 19:32:24.929855 1107895 start.go:364] duration metric: took 24.142µs to acquireMachinesLock for "custom-flannel-184620"
	I0729 19:32:24.929878 1107895 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-184620 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.30.3 ClusterName:custom-flannel-184620 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: Moun
tMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 19:32:24.929964 1107895 start.go:125] createHost starting for "" (driver="kvm2")
	I0729 19:32:22.037232 1105342 node_ready.go:53] node "kindnet-184620" has status "Ready":"False"
	I0729 19:32:24.038570 1105342 node_ready.go:53] node "kindnet-184620" has status "Ready":"False"
	I0729 19:32:23.197368 1106090 main.go:141] libmachine: (pause-464015) Calling .GetIP
	I0729 19:32:23.200705 1106090 main.go:141] libmachine: (pause-464015) DBG | domain pause-464015 has defined MAC address 52:54:00:bf:64:0f in network mk-pause-464015
	I0729 19:32:23.201260 1106090 main.go:141] libmachine: (pause-464015) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:64:0f", ip: ""} in network mk-pause-464015: {Iface:virbr2 ExpiryTime:2024-07-29 20:30:46 +0000 UTC Type:0 Mac:52:54:00:bf:64:0f Iaid: IPaddr:192.168.50.50 Prefix:24 Hostname:pause-464015 Clientid:01:52:54:00:bf:64:0f}
	I0729 19:32:23.201289 1106090 main.go:141] libmachine: (pause-464015) DBG | domain pause-464015 has defined IP address 192.168.50.50 and MAC address 52:54:00:bf:64:0f in network mk-pause-464015
	I0729 19:32:23.201568 1106090 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0729 19:32:23.210137 1106090 kubeadm.go:883] updating cluster {Name:pause-464015 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3
ClusterName:pause-464015 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.50 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false
olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 19:32:23.210386 1106090 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 19:32:23.210472 1106090 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 19:32:23.274986 1106090 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 19:32:23.275010 1106090 crio.go:433] Images already preloaded, skipping extraction
	I0729 19:32:23.275067 1106090 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 19:32:23.311365 1106090 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 19:32:23.311401 1106090 cache_images.go:84] Images are preloaded, skipping loading
	I0729 19:32:23.311412 1106090 kubeadm.go:934] updating node { 192.168.50.50 8443 v1.30.3 crio true true} ...
	I0729 19:32:23.311557 1106090 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-464015 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.50
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:pause-464015 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 19:32:23.311648 1106090 ssh_runner.go:195] Run: crio config
	I0729 19:32:23.361857 1106090 cni.go:84] Creating CNI manager for ""
	I0729 19:32:23.361890 1106090 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 19:32:23.361907 1106090 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 19:32:23.361939 1106090 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.50 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-464015 NodeName:pause-464015 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.50"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.50 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubern
etes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 19:32:23.362173 1106090 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.50
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-464015"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.50
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.50"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0729 19:32:23.362298 1106090 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0729 19:32:23.377145 1106090 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 19:32:23.377228 1106090 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 19:32:23.390203 1106090 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0729 19:32:23.415068 1106090 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 19:32:23.446202 1106090 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0729 19:32:23.471917 1106090 ssh_runner.go:195] Run: grep 192.168.50.50	control-plane.minikube.internal$ /etc/hosts
	I0729 19:32:23.477475 1106090 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 19:32:23.644724 1106090 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 19:32:23.663898 1106090 certs.go:68] Setting up /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/pause-464015 for IP: 192.168.50.50
	I0729 19:32:23.663923 1106090 certs.go:194] generating shared ca certs ...
	I0729 19:32:23.663946 1106090 certs.go:226] acquiring lock for ca certs: {Name:mkd1f0b3d7e82ac23e713dd6b75409e103935b02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 19:32:23.664147 1106090 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.key
	I0729 19:32:23.664212 1106090 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/proxy-client-ca.key
	I0729 19:32:23.664238 1106090 certs.go:256] generating profile certs ...
	I0729 19:32:23.664364 1106090 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/pause-464015/client.key
	I0729 19:32:23.664476 1106090 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/pause-464015/apiserver.key.228b1ddc
	I0729 19:32:23.664548 1106090 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/pause-464015/proxy-client.key
	I0729 19:32:23.664732 1106090 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/1062272.pem (1338 bytes)
	W0729 19:32:23.664775 1106090 certs.go:480] ignoring /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/1062272_empty.pem, impossibly tiny 0 bytes
	I0729 19:32:23.664787 1106090 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 19:32:23.664829 1106090 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem (1082 bytes)
	I0729 19:32:23.664865 1106090 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/cert.pem (1123 bytes)
	I0729 19:32:23.664907 1106090 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/key.pem (1679 bytes)
	I0729 19:32:23.664973 1106090 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/files/etc/ssl/certs/10622722.pem (1708 bytes)
	I0729 19:32:23.665911 1106090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 19:32:23.707738 1106090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0729 19:32:23.758865 1106090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 19:32:23.795078 1106090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0729 19:32:23.830906 1106090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/pause-464015/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0729 19:32:23.867396 1106090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/pause-464015/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0729 19:32:23.897777 1106090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/pause-464015/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 19:32:23.929847 1106090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/pause-464015/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0729 19:32:23.959905 1106090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/files/etc/ssl/certs/10622722.pem --> /usr/share/ca-certificates/10622722.pem (1708 bytes)
	I0729 19:32:23.992631 1106090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 19:32:24.106199 1106090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/1062272.pem --> /usr/share/ca-certificates/1062272.pem (1338 bytes)
	I0729 19:32:24.287939 1106090 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 19:32:24.404108 1106090 ssh_runner.go:195] Run: openssl version
	I0729 19:32:24.444334 1106090 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 19:32:24.495532 1106090 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 19:32:24.518944 1106090 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 18:17 /usr/share/ca-certificates/minikubeCA.pem
	I0729 19:32:24.519016 1106090 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 19:32:24.547074 1106090 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 19:32:24.631763 1106090 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1062272.pem && ln -fs /usr/share/ca-certificates/1062272.pem /etc/ssl/certs/1062272.pem"
	I0729 19:32:24.780703 1106090 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1062272.pem
	I0729 19:32:24.827898 1106090 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 18:30 /usr/share/ca-certificates/1062272.pem
	I0729 19:32:24.827985 1106090 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1062272.pem
	I0729 19:32:24.865716 1106090 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1062272.pem /etc/ssl/certs/51391683.0"
	I0729 19:32:24.915150 1106090 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10622722.pem && ln -fs /usr/share/ca-certificates/10622722.pem /etc/ssl/certs/10622722.pem"
	I0729 19:32:24.979807 1106090 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10622722.pem
	I0729 19:32:25.007922 1106090 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 18:30 /usr/share/ca-certificates/10622722.pem
	I0729 19:32:25.007991 1106090 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10622722.pem
	I0729 19:32:25.058343 1106090 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10622722.pem /etc/ssl/certs/3ec20f2e.0"
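	Note: the three ln -fs calls above (link names b5213941.0, 51391683.0 and 3ec20f2e.0) follow OpenSSL's subject-hash lookup convention for /etc/ssl/certs: the link name is the 8-hex-digit hash printed by "openssl x509 -hash", plus a ".0" suffix. A minimal sketch of the same step, assuming the stock openssl CLI:

	    # derive the subject hash and create the "<hash>.0" symlink OpenSSL expects
	    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)  # prints e.g. b5213941
	    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"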
	I0729 19:32:25.153845 1106090 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 19:32:25.174380 1106090 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 19:32:25.181338 1106090 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 19:32:25.191851 1106090 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 19:32:25.204027 1106090 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 19:32:25.211232 1106090 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 19:32:25.224739 1106090 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
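	Note: each "-checkend 86400" call above asks openssl whether the certificate expires within the next 86400 seconds (24 hours); the command exits non-zero if it does, which presumably is what would trigger regenerating the cert. A minimal sketch, assuming the stock openssl CLI:

	    # exit status 0 => still valid 24h from now; non-zero => expiring soon
	    if ! openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400; then
	        echo "apiserver-kubelet-client.crt expires within 24h"
	    fi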
	I0729 19:32:25.234699 1106090 kubeadm.go:392] StartCluster: {Name:pause-464015 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Cl
usterName:pause-464015 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.50 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false ol
m:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 19:32:25.234882 1106090 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 19:32:25.234966 1106090 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 19:32:25.315658 1106090 cri.go:89] found id: "841e7e305bb95c21975773a4d4d4e755792303dae99a116af2db97f4fe3f081e"
	I0729 19:32:25.315688 1106090 cri.go:89] found id: "4b7b16b0bd9c0ed1a12aa9d3dbb0492cf3ecdd58f8cfdab74195a862cf719121"
	I0729 19:32:25.315694 1106090 cri.go:89] found id: "6edbae7cc2a33063c7cceb1f30b8a07f80cc6b66241ce129b4a119cba77d5ee4"
	I0729 19:32:25.315700 1106090 cri.go:89] found id: "ff1daf93690fccd1dc759001f6697612af0553c0f96dcc83e9aa68ea5197ddbe"
	I0729 19:32:25.315705 1106090 cri.go:89] found id: "e093776f38c9029647c3f7c44eb0803af0bd1cb67b37112e7ee9594e860db6c1"
	I0729 19:32:25.315711 1106090 cri.go:89] found id: "f9fb15b1131c5ede4eddfa3701ce04f731640d804947e136fea265259ce58da5"
	I0729 19:32:25.315716 1106090 cri.go:89] found id: "f7e2f6542993c27494df40acb566a59b0ec9380d92415bf86ba3cf30d637c09e"
	I0729 19:32:25.315721 1106090 cri.go:89] found id: "c93fd01a84a8e0ffc9cf0c4ce1fa2c3e29c507c0cdde637fa39766ccefeb76b0"
	I0729 19:32:25.315727 1106090 cri.go:89] found id: "f2aefe8cc3d7017580e9cd35ff69e152c10b7823a0bc7b7643df5dab76bb4239"
	I0729 19:32:25.315737 1106090 cri.go:89] found id: "f2c9e7b86f1009db6c84e377ee6fdcae0bfafc0957af91280bf142f86dadd4b0"
	I0729 19:32:25.315743 1106090 cri.go:89] found id: ""
	I0729 19:32:25.315798 1106090 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Jul 29 19:33:03 pause-464015 crio[2225]: time="2024-07-29 19:33:03.750728865Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:f4677d0834576c8bd5bf8d215718ab7f7d09bf36d6b046ce05306c305430d787,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-j6d5l,Uid:e11c129e-19c1-460c-9d15-10a235d29e06,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1722281544248711789,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-j6d5l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e11c129e-19c1-460c-9d15-10a235d29e06,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-29T19:31:28.071442970Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:1ff59d2b8563faab674bd0b7d0817a727d80db15e39797b8204c2f3b0fbc44dc,Metadata:&PodSandboxMetadata{Name:kube-apiserver-pause-464015,Uid:392448e9cc04dc57679875fe40d7ddbc,Namespace:kube-system,
Attempt:1,},State:SANDBOX_READY,CreatedAt:1722281544239441613,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-pause-464015,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 392448e9cc04dc57679875fe40d7ddbc,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.50.50:8443,kubernetes.io/config.hash: 392448e9cc04dc57679875fe40d7ddbc,kubernetes.io/config.seen: 2024-07-29T19:31:12.655881556Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:6fc782ae0ba2261407e6f708fdee5908604f655278d93b40b5124b931ba317c2,Metadata:&PodSandboxMetadata{Name:kube-proxy-6bztz,Uid:8eea26ce-59ee-46bd-a9c4-18477db50d96,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1722281544097579238,Labels:map[string]string{controller-revision-hash: 5bbc78d4f8,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-6bztz,io.kubernetes.pod.namespace
: kube-system,io.kubernetes.pod.uid: 8eea26ce-59ee-46bd-a9c4-18477db50d96,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-29T19:31:28.100851300Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:158bae2c06db05b5cfc89d1224d60bd03ec34c2869d6401431d87de924cc26c2,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-pause-464015,Uid:5d39a0af6b9fc7b6fd082fdc3066f5e3,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1722281544078472858,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-pause-464015,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d39a0af6b9fc7b6fd082fdc3066f5e3,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 5d39a0af6b9fc7b6fd082fdc3066f5e3,kubernetes.io/config.seen: 2024-07-29T19:31:12.655876037Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{
Id:161d8d86c5dd0c9b5655ba6044ab69142c386ceee59a9f58aecd252b2bf7f31e,Metadata:&PodSandboxMetadata{Name:kube-scheduler-pause-464015,Uid:894c94b69ed750ac73d1f00d869bf369,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1722281544069634394,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-pause-464015,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 894c94b69ed750ac73d1f00d869bf369,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 894c94b69ed750ac73d1f00d869bf369,kubernetes.io/config.seen: 2024-07-29T19:31:12.655879185Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:ac5a07f3837953f7ba4919c9d6bfb1aa0d35aea289a5996bd2c0a5b7a291d174,Metadata:&PodSandboxMetadata{Name:etcd-pause-464015,Uid:02d6444c82d3bc2e023c903971c2842f,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1722281544057376981,Labels:map[string]string{component: etcd,io.kubernetes.contain
er.name: POD,io.kubernetes.pod.name: etcd-pause-464015,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02d6444c82d3bc2e023c903971c2842f,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.50.50:2379,kubernetes.io/config.hash: 02d6444c82d3bc2e023c903971c2842f,kubernetes.io/config.seen: 2024-07-29T19:31:12.655880382Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=0fda5f23-a757-40d7-9826-5d6f612fb442 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 29 19:33:03 pause-464015 crio[2225]: time="2024-07-29 19:33:03.751829947Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c63e9cdb-111e-47f3-a177-434d137b7585 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 19:33:03 pause-464015 crio[2225]: time="2024-07-29 19:33:03.751913122Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c63e9cdb-111e-47f3-a177-434d137b7585 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 19:33:03 pause-464015 crio[2225]: time="2024-07-29 19:33:03.752131577Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:20617830601df6c479f2994467411b171ad1552c3f05871c43583e65795db7d7,PodSandboxId:161d8d86c5dd0c9b5655ba6044ab69142c386ceee59a9f58aecd252b2bf7f31e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722281558246575109,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-464015,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 894c94b69ed750ac73d1f00d869bf369,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.term
inationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b62ff99eeaf9e0454dbadae3cf4ec70b609fb7795068db3c8882b6eb083fccb0,PodSandboxId:ac5a07f3837953f7ba4919c9d6bfb1aa0d35aea289a5996bd2c0a5b7a291d174,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722281558198385393,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-464015,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02d6444c82d3bc2e023c903971c2842f,},Annotations:map[string]string{io.kubernetes.container.hash: cc0bdb8b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc778cb13a1d80930529267dbbe0e8cdf18a533cb71ba795706a640242ceb573,PodSandboxId:158bae2c06db05b5cfc89d1224d60bd03ec34c2869d6401431d87de924cc26c2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722281558211772073,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-464015,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d39a0af6b9fc7b6fd082fdc3066f5e3,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-lo
g,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3de9287727286378155d0686a12b796de28e3da99fbf812ff57e07a0d08aec7b,PodSandboxId:1ff59d2b8563faab674bd0b7d0817a727d80db15e39797b8204c2f3b0fbc44dc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722281558227034810,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-464015,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 392448e9cc04dc57679875fe40d7ddbc,},Annotations:map[string]string{io.kubernetes.container.hash: b941963f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.co
ntainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:669aff584c28d163a60090c11bd6b3f339f049cdd24a2cff0277db48f8d25e63,PodSandboxId:f4677d0834576c8bd5bf8d215718ab7f7d09bf36d6b046ce05306c305430d787,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722281545527265652,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-j6d5l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e11c129e-19c1-460c-9d15-10a235d29e06,},Annotations:map[string]string{io.kubernetes.container.hash: 469d745e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"
},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49088d6b4a706030cbf20d3804d012c617948033707d5ebd3b2043eb8c164d50,PodSandboxId:6fc782ae0ba2261407e6f708fdee5908604f655278d93b40b5124b931ba317c2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722281544716052782,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6bztz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8eea26ce-59ee-46bd-a9c4-18477db50d96,},Annotations:map[string]string{io
.kubernetes.container.hash: fdfe374d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c63e9cdb-111e-47f3-a177-434d137b7585 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 19:33:03 pause-464015 crio[2225]: time="2024-07-29 19:33:03.760297143Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0f9ae629-468e-41f0-881d-da8301138a2f name=/runtime.v1.RuntimeService/Version
	Jul 29 19:33:03 pause-464015 crio[2225]: time="2024-07-29 19:33:03.760413940Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0f9ae629-468e-41f0-881d-da8301138a2f name=/runtime.v1.RuntimeService/Version
	Jul 29 19:33:03 pause-464015 crio[2225]: time="2024-07-29 19:33:03.762770683Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f5e62994-4c09-47b0-bb2f-3fd561282cba name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 19:33:03 pause-464015 crio[2225]: time="2024-07-29 19:33:03.763297373Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722281583763267440,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124365,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f5e62994-4c09-47b0-bb2f-3fd561282cba name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 19:33:03 pause-464015 crio[2225]: time="2024-07-29 19:33:03.764127132Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e89f1e6b-85fc-4c39-97e8-35941ea2bd66 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 19:33:03 pause-464015 crio[2225]: time="2024-07-29 19:33:03.764282361Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e89f1e6b-85fc-4c39-97e8-35941ea2bd66 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 19:33:03 pause-464015 crio[2225]: time="2024-07-29 19:33:03.764654214Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:20617830601df6c479f2994467411b171ad1552c3f05871c43583e65795db7d7,PodSandboxId:161d8d86c5dd0c9b5655ba6044ab69142c386ceee59a9f58aecd252b2bf7f31e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722281558246575109,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-464015,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 894c94b69ed750ac73d1f00d869bf369,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.term
inationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b62ff99eeaf9e0454dbadae3cf4ec70b609fb7795068db3c8882b6eb083fccb0,PodSandboxId:ac5a07f3837953f7ba4919c9d6bfb1aa0d35aea289a5996bd2c0a5b7a291d174,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722281558198385393,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-464015,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02d6444c82d3bc2e023c903971c2842f,},Annotations:map[string]string{io.kubernetes.container.hash: cc0bdb8b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc778cb13a1d80930529267dbbe0e8cdf18a533cb71ba795706a640242ceb573,PodSandboxId:158bae2c06db05b5cfc89d1224d60bd03ec34c2869d6401431d87de924cc26c2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722281558211772073,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-464015,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d39a0af6b9fc7b6fd082fdc3066f5e3,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-lo
g,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3de9287727286378155d0686a12b796de28e3da99fbf812ff57e07a0d08aec7b,PodSandboxId:1ff59d2b8563faab674bd0b7d0817a727d80db15e39797b8204c2f3b0fbc44dc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722281558227034810,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-464015,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 392448e9cc04dc57679875fe40d7ddbc,},Annotations:map[string]string{io.kubernetes.container.hash: b941963f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.co
ntainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:669aff584c28d163a60090c11bd6b3f339f049cdd24a2cff0277db48f8d25e63,PodSandboxId:f4677d0834576c8bd5bf8d215718ab7f7d09bf36d6b046ce05306c305430d787,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722281545527265652,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-j6d5l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e11c129e-19c1-460c-9d15-10a235d29e06,},Annotations:map[string]string{io.kubernetes.container.hash: 469d745e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"
},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49088d6b4a706030cbf20d3804d012c617948033707d5ebd3b2043eb8c164d50,PodSandboxId:6fc782ae0ba2261407e6f708fdee5908604f655278d93b40b5124b931ba317c2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722281544716052782,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6bztz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8eea26ce-59ee-46bd-a9c4-18477db50d96,},Annotations:map[string]string{io
.kubernetes.container.hash: fdfe374d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:841e7e305bb95c21975773a4d4d4e755792303dae99a116af2db97f4fe3f081e,PodSandboxId:1ff59d2b8563faab674bd0b7d0817a727d80db15e39797b8204c2f3b0fbc44dc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722281544601901356,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-464015,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 392448e9cc04dc57679875fe40d7ddbc,},Annotations:map[string]string{io.kubernetes.contain
er.hash: b941963f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b7b16b0bd9c0ed1a12aa9d3dbb0492cf3ecdd58f8cfdab74195a862cf719121,PodSandboxId:158bae2c06db05b5cfc89d1224d60bd03ec34c2869d6401431d87de924cc26c2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722281544493556455,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-464015,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d39a0af6b9fc7b6fd082fdc3066f5e3,},Annotations:map[string]string{io.kubernetes
.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff1daf93690fccd1dc759001f6697612af0553c0f96dcc83e9aa68ea5197ddbe,PodSandboxId:ac5a07f3837953f7ba4919c9d6bfb1aa0d35aea289a5996bd2c0a5b7a291d174,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722281544379290082,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-464015,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02d6444c82d3bc2e023c903971c2842f,},Annotations:map[string]string{io.kubernetes.container.hash: cc0bdb8b,io.kubernetes.container
.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6edbae7cc2a33063c7cceb1f30b8a07f80cc6b66241ce129b4a119cba77d5ee4,PodSandboxId:161d8d86c5dd0c9b5655ba6044ab69142c386ceee59a9f58aecd252b2bf7f31e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722281544421655845,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-464015,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 894c94b69ed750ac73d1f00d869bf369,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e093776f38c9029647c3f7c44eb0803af0bd1cb67b37112e7ee9594e860db6c1,PodSandboxId:d567aaa9c08c259d790c56d77e1150347479910bfbccae4d7369864c1687d872,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722281490198586073,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-j6d5l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e11c129e-19c1-460c-9d15-10a235d29e06,},Annotations:map[string]string{io.kubernetes.container.hash: 469d745e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"p
rotocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9fb15b1131c5ede4eddfa3701ce04f731640d804947e136fea265259ce58da5,PodSandboxId:cb68f6f36a8c2304bd6e9e276f0e6c8f46c94874e404cd048653ebf290e9d119,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722281489421297743,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6bztz,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 8eea26ce-59ee-46bd-a9c4-18477db50d96,},Annotations:map[string]string{io.kubernetes.container.hash: fdfe374d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e89f1e6b-85fc-4c39-97e8-35941ea2bd66 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 19:33:03 pause-464015 crio[2225]: time="2024-07-29 19:33:03.824055725Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6a9bf594-de37-4e8e-b9aa-e8eeec3c4413 name=/runtime.v1.RuntimeService/Version
	Jul 29 19:33:03 pause-464015 crio[2225]: time="2024-07-29 19:33:03.824129636Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6a9bf594-de37-4e8e-b9aa-e8eeec3c4413 name=/runtime.v1.RuntimeService/Version
	Jul 29 19:33:03 pause-464015 crio[2225]: time="2024-07-29 19:33:03.826235511Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=efba843b-8fe7-4b48-86dd-e530de237a3f name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 19:33:03 pause-464015 crio[2225]: time="2024-07-29 19:33:03.826958144Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722281583826829762,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124365,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=efba843b-8fe7-4b48-86dd-e530de237a3f name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 19:33:03 pause-464015 crio[2225]: time="2024-07-29 19:33:03.827703378Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2d48a2c2-afa8-416e-9bf8-3b18f4668fa4 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 19:33:03 pause-464015 crio[2225]: time="2024-07-29 19:33:03.827773915Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2d48a2c2-afa8-416e-9bf8-3b18f4668fa4 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 19:33:03 pause-464015 crio[2225]: time="2024-07-29 19:33:03.828098288Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:20617830601df6c479f2994467411b171ad1552c3f05871c43583e65795db7d7,PodSandboxId:161d8d86c5dd0c9b5655ba6044ab69142c386ceee59a9f58aecd252b2bf7f31e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722281558246575109,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-464015,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 894c94b69ed750ac73d1f00d869bf369,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.term
inationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b62ff99eeaf9e0454dbadae3cf4ec70b609fb7795068db3c8882b6eb083fccb0,PodSandboxId:ac5a07f3837953f7ba4919c9d6bfb1aa0d35aea289a5996bd2c0a5b7a291d174,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722281558198385393,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-464015,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02d6444c82d3bc2e023c903971c2842f,},Annotations:map[string]string{io.kubernetes.container.hash: cc0bdb8b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc778cb13a1d80930529267dbbe0e8cdf18a533cb71ba795706a640242ceb573,PodSandboxId:158bae2c06db05b5cfc89d1224d60bd03ec34c2869d6401431d87de924cc26c2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722281558211772073,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-464015,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d39a0af6b9fc7b6fd082fdc3066f5e3,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-lo
g,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3de9287727286378155d0686a12b796de28e3da99fbf812ff57e07a0d08aec7b,PodSandboxId:1ff59d2b8563faab674bd0b7d0817a727d80db15e39797b8204c2f3b0fbc44dc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722281558227034810,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-464015,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 392448e9cc04dc57679875fe40d7ddbc,},Annotations:map[string]string{io.kubernetes.container.hash: b941963f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.co
ntainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:669aff584c28d163a60090c11bd6b3f339f049cdd24a2cff0277db48f8d25e63,PodSandboxId:f4677d0834576c8bd5bf8d215718ab7f7d09bf36d6b046ce05306c305430d787,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722281545527265652,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-j6d5l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e11c129e-19c1-460c-9d15-10a235d29e06,},Annotations:map[string]string{io.kubernetes.container.hash: 469d745e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"
},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49088d6b4a706030cbf20d3804d012c617948033707d5ebd3b2043eb8c164d50,PodSandboxId:6fc782ae0ba2261407e6f708fdee5908604f655278d93b40b5124b931ba317c2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722281544716052782,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6bztz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8eea26ce-59ee-46bd-a9c4-18477db50d96,},Annotations:map[string]string{io
.kubernetes.container.hash: fdfe374d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:841e7e305bb95c21975773a4d4d4e755792303dae99a116af2db97f4fe3f081e,PodSandboxId:1ff59d2b8563faab674bd0b7d0817a727d80db15e39797b8204c2f3b0fbc44dc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722281544601901356,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-464015,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 392448e9cc04dc57679875fe40d7ddbc,},Annotations:map[string]string{io.kubernetes.contain
er.hash: b941963f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b7b16b0bd9c0ed1a12aa9d3dbb0492cf3ecdd58f8cfdab74195a862cf719121,PodSandboxId:158bae2c06db05b5cfc89d1224d60bd03ec34c2869d6401431d87de924cc26c2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722281544493556455,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-464015,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d39a0af6b9fc7b6fd082fdc3066f5e3,},Annotations:map[string]string{io.kubernetes
.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff1daf93690fccd1dc759001f6697612af0553c0f96dcc83e9aa68ea5197ddbe,PodSandboxId:ac5a07f3837953f7ba4919c9d6bfb1aa0d35aea289a5996bd2c0a5b7a291d174,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722281544379290082,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-464015,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02d6444c82d3bc2e023c903971c2842f,},Annotations:map[string]string{io.kubernetes.container.hash: cc0bdb8b,io.kubernetes.container
.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6edbae7cc2a33063c7cceb1f30b8a07f80cc6b66241ce129b4a119cba77d5ee4,PodSandboxId:161d8d86c5dd0c9b5655ba6044ab69142c386ceee59a9f58aecd252b2bf7f31e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722281544421655845,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-464015,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 894c94b69ed750ac73d1f00d869bf369,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e093776f38c9029647c3f7c44eb0803af0bd1cb67b37112e7ee9594e860db6c1,PodSandboxId:d567aaa9c08c259d790c56d77e1150347479910bfbccae4d7369864c1687d872,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722281490198586073,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-j6d5l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e11c129e-19c1-460c-9d15-10a235d29e06,},Annotations:map[string]string{io.kubernetes.container.hash: 469d745e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"p
rotocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9fb15b1131c5ede4eddfa3701ce04f731640d804947e136fea265259ce58da5,PodSandboxId:cb68f6f36a8c2304bd6e9e276f0e6c8f46c94874e404cd048653ebf290e9d119,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722281489421297743,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6bztz,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 8eea26ce-59ee-46bd-a9c4-18477db50d96,},Annotations:map[string]string{io.kubernetes.container.hash: fdfe374d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2d48a2c2-afa8-416e-9bf8-3b18f4668fa4 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 19:33:03 pause-464015 crio[2225]: time="2024-07-29 19:33:03.880437522Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=001bb6d4-6f86-4d25-8f3c-4a4f275a6d11 name=/runtime.v1.RuntimeService/Version
	Jul 29 19:33:03 pause-464015 crio[2225]: time="2024-07-29 19:33:03.880558431Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=001bb6d4-6f86-4d25-8f3c-4a4f275a6d11 name=/runtime.v1.RuntimeService/Version
	Jul 29 19:33:03 pause-464015 crio[2225]: time="2024-07-29 19:33:03.881750120Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f15a9ee6-de3b-4267-a5e0-98706df210b7 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 19:33:03 pause-464015 crio[2225]: time="2024-07-29 19:33:03.882332864Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722281583882299976,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124365,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f15a9ee6-de3b-4267-a5e0-98706df210b7 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 19:33:03 pause-464015 crio[2225]: time="2024-07-29 19:33:03.882843234Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=af23ba5e-e3a4-47fd-a758-b96b278c88f0 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 19:33:03 pause-464015 crio[2225]: time="2024-07-29 19:33:03.882937077Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=af23ba5e-e3a4-47fd-a758-b96b278c88f0 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 19:33:03 pause-464015 crio[2225]: time="2024-07-29 19:33:03.883421746Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:20617830601df6c479f2994467411b171ad1552c3f05871c43583e65795db7d7,PodSandboxId:161d8d86c5dd0c9b5655ba6044ab69142c386ceee59a9f58aecd252b2bf7f31e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722281558246575109,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-464015,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 894c94b69ed750ac73d1f00d869bf369,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.term
inationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b62ff99eeaf9e0454dbadae3cf4ec70b609fb7795068db3c8882b6eb083fccb0,PodSandboxId:ac5a07f3837953f7ba4919c9d6bfb1aa0d35aea289a5996bd2c0a5b7a291d174,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722281558198385393,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-464015,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02d6444c82d3bc2e023c903971c2842f,},Annotations:map[string]string{io.kubernetes.container.hash: cc0bdb8b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc778cb13a1d80930529267dbbe0e8cdf18a533cb71ba795706a640242ceb573,PodSandboxId:158bae2c06db05b5cfc89d1224d60bd03ec34c2869d6401431d87de924cc26c2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722281558211772073,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-464015,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d39a0af6b9fc7b6fd082fdc3066f5e3,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-lo
g,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3de9287727286378155d0686a12b796de28e3da99fbf812ff57e07a0d08aec7b,PodSandboxId:1ff59d2b8563faab674bd0b7d0817a727d80db15e39797b8204c2f3b0fbc44dc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722281558227034810,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-464015,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 392448e9cc04dc57679875fe40d7ddbc,},Annotations:map[string]string{io.kubernetes.container.hash: b941963f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.co
ntainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:669aff584c28d163a60090c11bd6b3f339f049cdd24a2cff0277db48f8d25e63,PodSandboxId:f4677d0834576c8bd5bf8d215718ab7f7d09bf36d6b046ce05306c305430d787,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722281545527265652,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-j6d5l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e11c129e-19c1-460c-9d15-10a235d29e06,},Annotations:map[string]string{io.kubernetes.container.hash: 469d745e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"
},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49088d6b4a706030cbf20d3804d012c617948033707d5ebd3b2043eb8c164d50,PodSandboxId:6fc782ae0ba2261407e6f708fdee5908604f655278d93b40b5124b931ba317c2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722281544716052782,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6bztz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8eea26ce-59ee-46bd-a9c4-18477db50d96,},Annotations:map[string]string{io
.kubernetes.container.hash: fdfe374d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:841e7e305bb95c21975773a4d4d4e755792303dae99a116af2db97f4fe3f081e,PodSandboxId:1ff59d2b8563faab674bd0b7d0817a727d80db15e39797b8204c2f3b0fbc44dc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722281544601901356,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-464015,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 392448e9cc04dc57679875fe40d7ddbc,},Annotations:map[string]string{io.kubernetes.contain
er.hash: b941963f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b7b16b0bd9c0ed1a12aa9d3dbb0492cf3ecdd58f8cfdab74195a862cf719121,PodSandboxId:158bae2c06db05b5cfc89d1224d60bd03ec34c2869d6401431d87de924cc26c2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722281544493556455,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-464015,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d39a0af6b9fc7b6fd082fdc3066f5e3,},Annotations:map[string]string{io.kubernetes
.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff1daf93690fccd1dc759001f6697612af0553c0f96dcc83e9aa68ea5197ddbe,PodSandboxId:ac5a07f3837953f7ba4919c9d6bfb1aa0d35aea289a5996bd2c0a5b7a291d174,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722281544379290082,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-464015,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02d6444c82d3bc2e023c903971c2842f,},Annotations:map[string]string{io.kubernetes.container.hash: cc0bdb8b,io.kubernetes.container
.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6edbae7cc2a33063c7cceb1f30b8a07f80cc6b66241ce129b4a119cba77d5ee4,PodSandboxId:161d8d86c5dd0c9b5655ba6044ab69142c386ceee59a9f58aecd252b2bf7f31e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722281544421655845,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-464015,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 894c94b69ed750ac73d1f00d869bf369,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e093776f38c9029647c3f7c44eb0803af0bd1cb67b37112e7ee9594e860db6c1,PodSandboxId:d567aaa9c08c259d790c56d77e1150347479910bfbccae4d7369864c1687d872,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722281490198586073,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-j6d5l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e11c129e-19c1-460c-9d15-10a235d29e06,},Annotations:map[string]string{io.kubernetes.container.hash: 469d745e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"p
rotocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9fb15b1131c5ede4eddfa3701ce04f731640d804947e136fea265259ce58da5,PodSandboxId:cb68f6f36a8c2304bd6e9e276f0e6c8f46c94874e404cd048653ebf290e9d119,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722281489421297743,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6bztz,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 8eea26ce-59ee-46bd-a9c4-18477db50d96,},Annotations:map[string]string{io.kubernetes.container.hash: fdfe374d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=af23ba5e-e3a4-47fd-a758-b96b278c88f0 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	20617830601df       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2   25 seconds ago       Running             kube-scheduler            2                   161d8d86c5dd0       kube-scheduler-pause-464015
	3de9287727286       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d   25 seconds ago       Running             kube-apiserver            2                   1ff59d2b8563f       kube-apiserver-pause-464015
	cc778cb13a1d8       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e   25 seconds ago       Running             kube-controller-manager   2                   158bae2c06db0       kube-controller-manager-pause-464015
	b62ff99eeaf9e       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   25 seconds ago       Running             etcd                      2                   ac5a07f383795       etcd-pause-464015
	669aff584c28d       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   38 seconds ago       Running             coredns                   1                   f4677d0834576       coredns-7db6d8ff4d-j6d5l
	49088d6b4a706       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1   39 seconds ago       Running             kube-proxy                1                   6fc782ae0ba22       kube-proxy-6bztz
	841e7e305bb95       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d   39 seconds ago       Exited              kube-apiserver            1                   1ff59d2b8563f       kube-apiserver-pause-464015
	4b7b16b0bd9c0       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e   39 seconds ago       Exited              kube-controller-manager   1                   158bae2c06db0       kube-controller-manager-pause-464015
	6edbae7cc2a33       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2   39 seconds ago       Exited              kube-scheduler            1                   161d8d86c5dd0       kube-scheduler-pause-464015
	ff1daf93690fc       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   39 seconds ago       Exited              etcd                      1                   ac5a07f383795       etcd-pause-464015
	e093776f38c90       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   About a minute ago   Exited              coredns                   0                   d567aaa9c08c2       coredns-7db6d8ff4d-j6d5l
	f9fb15b1131c5       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1   About a minute ago   Exited              kube-proxy                0                   cb68f6f36a8c2       kube-proxy-6bztz
	
	
	==> coredns [669aff584c28d163a60090c11bd6b3f339f049cdd24a2cff0277db48f8d25e63] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b1a455670bb5d7fd89b1330085768be95fff75b4bad96aaa3401a966da5ff6e52d21b0affe6cd8290073027bf8b56132a09c4a2bf21619d1b275a531761a3578
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:36279 - 53358 "HINFO IN 2526986688340446305.175117106379564337. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.00961475s
	
	
	==> coredns [e093776f38c9029647c3f7c44eb0803af0bd1cb67b37112e7ee9594e860db6c1] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b1a455670bb5d7fd89b1330085768be95fff75b4bad96aaa3401a966da5ff6e52d21b0affe6cd8290073027bf8b56132a09c4a2bf21619d1b275a531761a3578
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:56523 - 4198 "HINFO IN 636979211025756034.7820473512945332659. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.00897904s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               pause-464015
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-464015
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ff26f50543a99741e8a988a34136ffea2b41dbf0
	                    minikube.k8s.io/name=pause-464015
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_29T19_31_13_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 19:31:10 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-464015
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 19:33:01 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 19:32:41 +0000   Mon, 29 Jul 2024 19:31:08 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 19:32:41 +0000   Mon, 29 Jul 2024 19:31:08 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 19:32:41 +0000   Mon, 29 Jul 2024 19:31:08 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 19:32:41 +0000   Mon, 29 Jul 2024 19:31:13 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.50
	  Hostname:    pause-464015
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	System Info:
	  Machine ID:                 f6f81a3dca5b459e84987ee70e6a92a0
	  System UUID:                f6f81a3d-ca5b-459e-8498-7ee70e6a92a0
	  Boot ID:                    767e5d6c-01e0-44f1-b969-1706327dab4b
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-j6d5l                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     98s
	  kube-system                 etcd-pause-464015                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         112s
	  kube-system                 kube-apiserver-pause-464015             250m (12%)    0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 kube-controller-manager-pause-464015    200m (10%)    0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 kube-proxy-6bztz                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         97s
	  kube-system                 kube-scheduler-pause-464015             100m (5%)     0 (0%)      0 (0%)           0 (0%)         112s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 94s                kube-proxy       
	  Normal   Starting                 35s                kube-proxy       
	  Normal   NodeHasSufficientMemory  112s               kubelet          Node pause-464015 status is now: NodeHasSufficientMemory
	  Normal   NodeAllocatableEnforced  112s               kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasNoDiskPressure    112s               kubelet          Node pause-464015 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     112s               kubelet          Node pause-464015 status is now: NodeHasSufficientPID
	  Normal   Starting                 112s               kubelet          Starting kubelet.
	  Normal   NodeReady                111s               kubelet          Node pause-464015 status is now: NodeReady
	  Normal   RegisteredNode           99s                node-controller  Node pause-464015 event: Registered Node pause-464015 in Controller
	  Warning  ContainerGCFailed        52s                kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   Starting                 27s                kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  27s (x8 over 27s)  kubelet          Node pause-464015 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    27s (x8 over 27s)  kubelet          Node pause-464015 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     27s (x7 over 27s)  kubelet          Node pause-464015 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  27s                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           11s                node-controller  Node pause-464015 event: Registered Node pause-464015 in Controller
	
	
	==> dmesg <==
	[  +0.059066] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.062101] systemd-fstab-generator[608]: Ignoring "noauto" option for root device
	[  +0.192147] systemd-fstab-generator[622]: Ignoring "noauto" option for root device
	[  +0.140008] systemd-fstab-generator[634]: Ignoring "noauto" option for root device
	[  +0.279583] systemd-fstab-generator[664]: Ignoring "noauto" option for root device
	[Jul29 19:31] systemd-fstab-generator[760]: Ignoring "noauto" option for root device
	[  +0.062960] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.672450] systemd-fstab-generator[937]: Ignoring "noauto" option for root device
	[  +0.417821] kauditd_printk_skb: 46 callbacks suppressed
	[  +5.626812] systemd-fstab-generator[1268]: Ignoring "noauto" option for root device
	[  +0.079854] kauditd_printk_skb: 41 callbacks suppressed
	[ +14.421422] systemd-fstab-generator[1487]: Ignoring "noauto" option for root device
	[  +0.092305] kauditd_printk_skb: 21 callbacks suppressed
	[Jul29 19:32] systemd-fstab-generator[2143]: Ignoring "noauto" option for root device
	[  +0.089869] kauditd_printk_skb: 71 callbacks suppressed
	[  +0.068059] systemd-fstab-generator[2155]: Ignoring "noauto" option for root device
	[  +0.212310] systemd-fstab-generator[2169]: Ignoring "noauto" option for root device
	[  +0.159003] systemd-fstab-generator[2181]: Ignoring "noauto" option for root device
	[  +0.332600] systemd-fstab-generator[2209]: Ignoring "noauto" option for root device
	[  +3.838800] systemd-fstab-generator[2337]: Ignoring "noauto" option for root device
	[  +0.521426] kauditd_printk_skb: 122 callbacks suppressed
	[ +12.190497] kauditd_printk_skb: 63 callbacks suppressed
	[  +1.230515] systemd-fstab-generator[3107]: Ignoring "noauto" option for root device
	[ +11.463589] kauditd_printk_skb: 40 callbacks suppressed
	[  +5.871800] systemd-fstab-generator[3482]: Ignoring "noauto" option for root device
	
	
	==> etcd [b62ff99eeaf9e0454dbadae3cf4ec70b609fb7795068db3c8882b6eb083fccb0] <==
	{"level":"info","ts":"2024-07-29T19:32:38.650048Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6b98348baa467fce","local-member-id":"c0dcbd712fbd8799","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T19:32:38.650241Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T19:32:38.655249Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-29T19:32:38.658339Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"c0dcbd712fbd8799","initial-advertise-peer-urls":["https://192.168.50.50:2380"],"listen-peer-urls":["https://192.168.50.50:2380"],"advertise-client-urls":["https://192.168.50.50:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.50:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-29T19:32:38.658395Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-29T19:32:38.65851Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.50.50:2380"}
	{"level":"info","ts":"2024-07-29T19:32:38.660198Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.50.50:2380"}
	{"level":"info","ts":"2024-07-29T19:32:39.60922Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c0dcbd712fbd8799 is starting a new election at term 3"}
	{"level":"info","ts":"2024-07-29T19:32:39.60928Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c0dcbd712fbd8799 became pre-candidate at term 3"}
	{"level":"info","ts":"2024-07-29T19:32:39.609312Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c0dcbd712fbd8799 received MsgPreVoteResp from c0dcbd712fbd8799 at term 3"}
	{"level":"info","ts":"2024-07-29T19:32:39.60933Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c0dcbd712fbd8799 became candidate at term 4"}
	{"level":"info","ts":"2024-07-29T19:32:39.609336Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c0dcbd712fbd8799 received MsgVoteResp from c0dcbd712fbd8799 at term 4"}
	{"level":"info","ts":"2024-07-29T19:32:39.609344Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c0dcbd712fbd8799 became leader at term 4"}
	{"level":"info","ts":"2024-07-29T19:32:39.609351Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: c0dcbd712fbd8799 elected leader c0dcbd712fbd8799 at term 4"}
	{"level":"info","ts":"2024-07-29T19:32:39.61438Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"c0dcbd712fbd8799","local-member-attributes":"{Name:pause-464015 ClientURLs:[https://192.168.50.50:2379]}","request-path":"/0/members/c0dcbd712fbd8799/attributes","cluster-id":"6b98348baa467fce","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-29T19:32:39.614515Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T19:32:39.616127Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.50:2379"}
	{"level":"info","ts":"2024-07-29T19:32:39.616643Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T19:32:39.618055Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-29T19:32:39.62723Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-29T19:32:39.627276Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"warn","ts":"2024-07-29T19:32:57.825865Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"160.511863ms","expected-duration":"100ms","prefix":"","request":"header:<ID:9771000295659295559 > lease_revoke:<id:079990fffb02521c>","response":"size:28"}
	{"level":"info","ts":"2024-07-29T19:32:57.825959Z","caller":"traceutil/trace.go:171","msg":"trace[543841593] linearizableReadLoop","detail":"{readStateIndex:520; appliedIndex:519; }","duration":"134.90408ms","start":"2024-07-29T19:32:57.69104Z","end":"2024-07-29T19:32:57.825944Z","steps":["trace[543841593] 'read index received'  (duration: 34.833µs)","trace[543841593] 'applied index is now lower than readState.Index'  (duration: 134.868485ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-29T19:32:57.826037Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"134.983527ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/\" range_end:\"/registry/minions0\" ","response":"range_response_count:1 size:5426"}
	{"level":"info","ts":"2024-07-29T19:32:57.826051Z","caller":"traceutil/trace.go:171","msg":"trace[815592013] range","detail":"{range_begin:/registry/minions/; range_end:/registry/minions0; response_count:1; response_revision:474; }","duration":"135.041041ms","start":"2024-07-29T19:32:57.691006Z","end":"2024-07-29T19:32:57.826047Z","steps":["trace[815592013] 'agreement among raft nodes before linearized reading'  (duration: 134.97749ms)"],"step_count":1}
	
	
	==> etcd [ff1daf93690fccd1dc759001f6697612af0553c0f96dcc83e9aa68ea5197ddbe] <==
	{"level":"info","ts":"2024-07-29T19:32:25.250061Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.50.50:2380"}
	{"level":"info","ts":"2024-07-29T19:32:26.153215Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c0dcbd712fbd8799 is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-29T19:32:26.153316Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c0dcbd712fbd8799 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-29T19:32:26.153368Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c0dcbd712fbd8799 received MsgPreVoteResp from c0dcbd712fbd8799 at term 2"}
	{"level":"info","ts":"2024-07-29T19:32:26.153404Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c0dcbd712fbd8799 became candidate at term 3"}
	{"level":"info","ts":"2024-07-29T19:32:26.153428Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c0dcbd712fbd8799 received MsgVoteResp from c0dcbd712fbd8799 at term 3"}
	{"level":"info","ts":"2024-07-29T19:32:26.153455Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c0dcbd712fbd8799 became leader at term 3"}
	{"level":"info","ts":"2024-07-29T19:32:26.153486Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: c0dcbd712fbd8799 elected leader c0dcbd712fbd8799 at term 3"}
	{"level":"info","ts":"2024-07-29T19:32:26.171564Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"c0dcbd712fbd8799","local-member-attributes":"{Name:pause-464015 ClientURLs:[https://192.168.50.50:2379]}","request-path":"/0/members/c0dcbd712fbd8799/attributes","cluster-id":"6b98348baa467fce","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-29T19:32:26.171816Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T19:32:26.173215Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-29T19:32:26.173261Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-29T19:32:26.17328Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T19:32:26.183198Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.50:2379"}
	{"level":"info","ts":"2024-07-29T19:32:26.19739Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-29T19:32:36.032497Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-07-29T19:32:36.032581Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"pause-464015","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.50.50:2380"],"advertise-client-urls":["https://192.168.50.50:2379"]}
	{"level":"warn","ts":"2024-07-29T19:32:36.03272Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-29T19:32:36.032752Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-29T19:32:36.033387Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.50.50:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-29T19:32:36.033493Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.50.50:2379: use of closed network connection"}
	{"level":"info","ts":"2024-07-29T19:32:36.033601Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"c0dcbd712fbd8799","current-leader-member-id":"c0dcbd712fbd8799"}
	{"level":"info","ts":"2024-07-29T19:32:36.03757Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.50.50:2380"}
	{"level":"info","ts":"2024-07-29T19:32:36.037796Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.50.50:2380"}
	{"level":"info","ts":"2024-07-29T19:32:36.03791Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"pause-464015","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.50.50:2380"],"advertise-client-urls":["https://192.168.50.50:2379"]}
	
	
	==> kernel <==
	 19:33:04 up 2 min,  0 users,  load average: 1.25, 0.45, 0.16
	Linux pause-464015 5.10.207 #1 SMP Tue Jul 23 04:25:44 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [3de9287727286378155d0686a12b796de28e3da99fbf812ff57e07a0d08aec7b] <==
	I0729 19:32:41.176081       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0729 19:32:41.176116       1 policy_source.go:224] refreshing policies
	I0729 19:32:41.176353       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0729 19:32:41.205026       1 shared_informer.go:320] Caches are synced for configmaps
	I0729 19:32:41.205090       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0729 19:32:41.205098       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0729 19:32:41.208419       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0729 19:32:41.209065       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0729 19:32:41.211908       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0729 19:32:41.212643       1 aggregator.go:165] initial CRD sync complete...
	I0729 19:32:41.212684       1 autoregister_controller.go:141] Starting autoregister controller
	I0729 19:32:41.212693       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0729 19:32:41.212699       1 cache.go:39] Caches are synced for autoregister controller
	I0729 19:32:41.214450       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0729 19:32:41.214645       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0729 19:32:41.222754       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0729 19:32:42.008851       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0729 19:32:42.324395       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.50.50]
	I0729 19:32:42.326105       1 controller.go:615] quota admission added evaluator for: endpoints
	I0729 19:32:42.333445       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0729 19:32:42.514187       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0729 19:32:42.529429       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0729 19:32:42.571768       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0729 19:32:42.619982       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0729 19:32:42.629240       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	
	==> kube-apiserver [841e7e305bb95c21975773a4d4d4e755792303dae99a116af2db97f4fe3f081e] <==
	I0729 19:32:28.909806       1 controller.go:86] Shutting down OpenAPI V3 AggregationController
	I0729 19:32:28.914410       1 secure_serving.go:258] Stopped listening on [::]:8443
	I0729 19:32:28.914479       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0729 19:32:28.916555       1 dynamic_serving_content.go:146] "Shutting down controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I0729 19:32:28.916669       1 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0729 19:32:28.929125       1 controller.go:157] Shutting down quota evaluator
	I0729 19:32:28.929904       1 controller.go:176] quota evaluator worker shutdown
	I0729 19:32:28.930594       1 controller.go:176] quota evaluator worker shutdown
	I0729 19:32:28.931232       1 controller.go:176] quota evaluator worker shutdown
	I0729 19:32:28.931277       1 controller.go:176] quota evaluator worker shutdown
	I0729 19:32:28.931292       1 controller.go:176] quota evaluator worker shutdown
	W0729 19:32:29.599550       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E0729 19:32:29.599843       1 storage_rbac.go:187] unable to initialize clusterroles: Get "https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles": dial tcp 127.0.0.1:8443: connect: connection refused
	W0729 19:32:30.598834       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E0729 19:32:30.599928       1 storage_rbac.go:187] unable to initialize clusterroles: Get "https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles": dial tcp 127.0.0.1:8443: connect: connection refused
	W0729 19:32:31.599607       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E0729 19:32:31.599761       1 storage_rbac.go:187] unable to initialize clusterroles: Get "https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles": dial tcp 127.0.0.1:8443: connect: connection refused
	W0729 19:32:32.599080       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E0729 19:32:32.599413       1 storage_rbac.go:187] unable to initialize clusterroles: Get "https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles": dial tcp 127.0.0.1:8443: connect: connection refused
	W0729 19:32:33.599113       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E0729 19:32:33.599126       1 storage_rbac.go:187] unable to initialize clusterroles: Get "https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles": dial tcp 127.0.0.1:8443: connect: connection refused
	E0729 19:32:34.599383       1 storage_rbac.go:187] unable to initialize clusterroles: Get "https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles": dial tcp 127.0.0.1:8443: connect: connection refused
	W0729 19:32:34.599401       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	W0729 19:32:35.599407       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E0729 19:32:35.600230       1 storage_rbac.go:187] unable to initialize clusterroles: Get "https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles": dial tcp 127.0.0.1:8443: connect: connection refused
	
	
	==> kube-controller-manager [4b7b16b0bd9c0ed1a12aa9d3dbb0492cf3ecdd58f8cfdab74195a862cf719121] <==
	I0729 19:32:26.055431       1 serving.go:380] Generated self-signed cert in-memory
	I0729 19:32:27.339439       1 controllermanager.go:189] "Starting" version="v1.30.3"
	I0729 19:32:27.339486       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 19:32:27.345362       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0729 19:32:27.346540       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0729 19:32:27.346717       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0729 19:32:27.346850       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	
	
	==> kube-controller-manager [cc778cb13a1d80930529267dbbe0e8cdf18a533cb71ba795706a640242ceb573] <==
	I0729 19:32:53.526187       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0729 19:32:53.530637       1 shared_informer.go:320] Caches are synced for PVC protection
	I0729 19:32:53.530750       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0729 19:32:53.533217       1 shared_informer.go:320] Caches are synced for ephemeral
	I0729 19:32:53.536856       1 shared_informer.go:320] Caches are synced for deployment
	I0729 19:32:53.538110       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0729 19:32:53.543132       1 shared_informer.go:320] Caches are synced for disruption
	I0729 19:32:53.547506       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0729 19:32:53.549097       1 shared_informer.go:320] Caches are synced for taint
	I0729 19:32:53.549398       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0729 19:32:53.549898       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-464015"
	I0729 19:32:53.551018       1 node_lifecycle_controller.go:1073] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0729 19:32:53.566349       1 shared_informer.go:320] Caches are synced for endpoint
	I0729 19:32:53.582976       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0729 19:32:53.583662       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="251.16µs"
	I0729 19:32:53.632647       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0729 19:32:53.654419       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0729 19:32:53.707415       1 shared_informer.go:320] Caches are synced for attach detach
	I0729 19:32:53.710406       1 shared_informer.go:320] Caches are synced for daemon sets
	I0729 19:32:53.713251       1 shared_informer.go:320] Caches are synced for stateful set
	I0729 19:32:53.716739       1 shared_informer.go:320] Caches are synced for resource quota
	I0729 19:32:53.726766       1 shared_informer.go:320] Caches are synced for resource quota
	I0729 19:32:54.152422       1 shared_informer.go:320] Caches are synced for garbage collector
	I0729 19:32:54.196690       1 shared_informer.go:320] Caches are synced for garbage collector
	I0729 19:32:54.196743       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [49088d6b4a706030cbf20d3804d012c617948033707d5ebd3b2043eb8c164d50] <==
	W0729 19:32:28.976809       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.50.50:8443: connect: connection refused
	E0729 19:32:28.976884       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.50.50:8443: connect: connection refused
	W0729 19:32:28.976970       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.50.50:8443: connect: connection refused
	E0729 19:32:28.977030       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.50.50:8443: connect: connection refused
	W0729 19:32:29.911029       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.50.50:8443: connect: connection refused
	E0729 19:32:29.911248       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.50.50:8443: connect: connection refused
	W0729 19:32:30.024584       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)pause-464015&limit=500&resourceVersion=0": dial tcp 192.168.50.50:8443: connect: connection refused
	E0729 19:32:30.024651       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)pause-464015&limit=500&resourceVersion=0": dial tcp 192.168.50.50:8443: connect: connection refused
	W0729 19:32:30.215693       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.50.50:8443: connect: connection refused
	E0729 19:32:30.215838       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.50.50:8443: connect: connection refused
	W0729 19:32:31.959021       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.50.50:8443: connect: connection refused
	E0729 19:32:31.959087       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.50.50:8443: connect: connection refused
	W0729 19:32:32.566768       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.50.50:8443: connect: connection refused
	E0729 19:32:32.566974       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.50.50:8443: connect: connection refused
	W0729 19:32:32.933085       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)pause-464015&limit=500&resourceVersion=0": dial tcp 192.168.50.50:8443: connect: connection refused
	E0729 19:32:32.933374       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)pause-464015&limit=500&resourceVersion=0": dial tcp 192.168.50.50:8443: connect: connection refused
	W0729 19:32:36.147003       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.50.50:8443: connect: connection refused
	E0729 19:32:36.147061       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.50.50:8443: connect: connection refused
	W0729 19:32:36.378218       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.50.50:8443: connect: connection refused
	E0729 19:32:36.378320       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.50.50:8443: connect: connection refused
	W0729 19:32:38.182895       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)pause-464015&limit=500&resourceVersion=0": dial tcp 192.168.50.50:8443: connect: connection refused
	E0729 19:32:38.182938       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)pause-464015&limit=500&resourceVersion=0": dial tcp 192.168.50.50:8443: connect: connection refused
	I0729 19:32:47.375189       1 shared_informer.go:320] Caches are synced for service config
	I0729 19:32:48.874993       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0729 19:32:49.376055       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [f9fb15b1131c5ede4eddfa3701ce04f731640d804947e136fea265259ce58da5] <==
	I0729 19:31:29.678406       1 server_linux.go:69] "Using iptables proxy"
	I0729 19:31:29.695665       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.50.50"]
	I0729 19:31:29.752024       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0729 19:31:29.752194       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0729 19:31:29.752242       1 server_linux.go:165] "Using iptables Proxier"
	I0729 19:31:29.755649       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0729 19:31:29.756265       1 server.go:872] "Version info" version="v1.30.3"
	I0729 19:31:29.756316       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 19:31:29.759210       1 config.go:192] "Starting service config controller"
	I0729 19:31:29.759598       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0729 19:31:29.759664       1 config.go:101] "Starting endpoint slice config controller"
	I0729 19:31:29.759688       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0729 19:31:29.761943       1 config.go:319] "Starting node config controller"
	I0729 19:31:29.761979       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0729 19:31:29.859849       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0729 19:31:29.859944       1 shared_informer.go:320] Caches are synced for service config
	I0729 19:31:29.862114       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [20617830601df6c479f2994467411b171ad1552c3f05871c43583e65795db7d7] <==
	I0729 19:32:39.291293       1 serving.go:380] Generated self-signed cert in-memory
	W0729 19:32:41.100602       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0729 19:32:41.101113       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0729 19:32:41.101237       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0729 19:32:41.101263       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0729 19:32:41.124091       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0729 19:32:41.124123       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 19:32:41.129921       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0729 19:32:41.130027       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0729 19:32:41.131026       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0729 19:32:41.131099       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	W0729 19:32:41.139519       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0729 19:32:41.139581       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0729 19:32:42.630741       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [6edbae7cc2a33063c7cceb1f30b8a07f80cc6b66241ce129b4a119cba77d5ee4] <==
	I0729 19:32:27.386826       1 serving.go:380] Generated self-signed cert in-memory
	W0729 19:32:28.705806       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0729 19:32:28.706773       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0729 19:32:28.706897       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0729 19:32:28.706931       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0729 19:32:28.731943       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0729 19:32:28.732027       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 19:32:28.736323       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0729 19:32:28.737441       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0729 19:32:28.745766       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0729 19:32:28.737478       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0729 19:32:28.846400       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0729 19:32:35.891446       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0729 19:32:35.892089       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0729 19:32:35.892428       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0729 19:32:35.892922       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Jul 29 19:32:38 pause-464015 kubelet[3114]: I0729 19:32:38.164724    3114 scope.go:117] "RemoveContainer" containerID="841e7e305bb95c21975773a4d4d4e755792303dae99a116af2db97f4fe3f081e"
	Jul 29 19:32:38 pause-464015 kubelet[3114]: I0729 19:32:38.167732    3114 scope.go:117] "RemoveContainer" containerID="6edbae7cc2a33063c7cceb1f30b8a07f80cc6b66241ce129b4a119cba77d5ee4"
	Jul 29 19:32:38 pause-464015 kubelet[3114]: E0729 19:32:38.178428    3114 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 192.168.50.50:8443: connect: connection refused" event="&Event{ObjectMeta:{pause-464015.17e6c5f3a342ec3a  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:pause-464015,UID:pause-464015,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:pause-464015,},FirstTimestamp:2024-07-29 19:32:37.722590266 +0000 UTC m=+0.134556810,LastTimestamp:2024-07-29 19:32:37.722590266 +0000 UTC m=+0.134556810,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:pause-464015,}"
	Jul 29 19:32:38 pause-464015 kubelet[3114]: E0729 19:32:38.357631    3114 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-464015?timeout=10s\": dial tcp 192.168.50.50:8443: connect: connection refused" interval="800ms"
	Jul 29 19:32:38 pause-464015 kubelet[3114]: I0729 19:32:38.456299    3114 kubelet_node_status.go:73] "Attempting to register node" node="pause-464015"
	Jul 29 19:32:38 pause-464015 kubelet[3114]: E0729 19:32:38.460302    3114 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.50.50:8443: connect: connection refused" node="pause-464015"
	Jul 29 19:32:38 pause-464015 kubelet[3114]: W0729 19:32:38.559856    3114 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)pause-464015&limit=500&resourceVersion=0": dial tcp 192.168.50.50:8443: connect: connection refused
	Jul 29 19:32:38 pause-464015 kubelet[3114]: E0729 19:32:38.559920    3114 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)pause-464015&limit=500&resourceVersion=0": dial tcp 192.168.50.50:8443: connect: connection refused
	Jul 29 19:32:39 pause-464015 kubelet[3114]: I0729 19:32:39.262454    3114 kubelet_node_status.go:73] "Attempting to register node" node="pause-464015"
	Jul 29 19:32:41 pause-464015 kubelet[3114]: I0729 19:32:41.196692    3114 kubelet_node_status.go:112] "Node was previously registered" node="pause-464015"
	Jul 29 19:32:41 pause-464015 kubelet[3114]: I0729 19:32:41.197276    3114 kubelet_node_status.go:76] "Successfully registered node" node="pause-464015"
	Jul 29 19:32:41 pause-464015 kubelet[3114]: I0729 19:32:41.199761    3114 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Jul 29 19:32:41 pause-464015 kubelet[3114]: I0729 19:32:41.201493    3114 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Jul 29 19:32:41 pause-464015 kubelet[3114]: E0729 19:32:41.220575    3114 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"pause-464015\" not found"
	Jul 29 19:32:41 pause-464015 kubelet[3114]: E0729 19:32:41.321725    3114 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"pause-464015\" not found"
	Jul 29 19:32:41 pause-464015 kubelet[3114]: E0729 19:32:41.422760    3114 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"pause-464015\" not found"
	Jul 29 19:32:41 pause-464015 kubelet[3114]: E0729 19:32:41.523639    3114 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"pause-464015\" not found"
	Jul 29 19:32:41 pause-464015 kubelet[3114]: E0729 19:32:41.624640    3114 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"pause-464015\" not found"
	Jul 29 19:32:41 pause-464015 kubelet[3114]: E0729 19:32:41.725398    3114 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"pause-464015\" not found"
	Jul 29 19:32:41 pause-464015 kubelet[3114]: I0729 19:32:41.737211    3114 apiserver.go:52] "Watching apiserver"
	Jul 29 19:32:41 pause-464015 kubelet[3114]: I0729 19:32:41.739710    3114 topology_manager.go:215] "Topology Admit Handler" podUID="e11c129e-19c1-460c-9d15-10a235d29e06" podNamespace="kube-system" podName="coredns-7db6d8ff4d-j6d5l"
	Jul 29 19:32:41 pause-464015 kubelet[3114]: I0729 19:32:41.740029    3114 topology_manager.go:215] "Topology Admit Handler" podUID="8eea26ce-59ee-46bd-a9c4-18477db50d96" podNamespace="kube-system" podName="kube-proxy-6bztz"
	Jul 29 19:32:41 pause-464015 kubelet[3114]: I0729 19:32:41.839796    3114 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Jul 29 19:32:41 pause-464015 kubelet[3114]: I0729 19:32:41.914784    3114 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8eea26ce-59ee-46bd-a9c4-18477db50d96-xtables-lock\") pod \"kube-proxy-6bztz\" (UID: \"8eea26ce-59ee-46bd-a9c4-18477db50d96\") " pod="kube-system/kube-proxy-6bztz"
	Jul 29 19:32:41 pause-464015 kubelet[3114]: I0729 19:32:41.914882    3114 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8eea26ce-59ee-46bd-a9c4-18477db50d96-lib-modules\") pod \"kube-proxy-6bztz\" (UID: \"8eea26ce-59ee-46bd-a9c4-18477db50d96\") " pod="kube-system/kube-proxy-6bztz"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0729 19:33:03.272732 1108904 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19312-1055011/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-464015 -n pause-464015
helpers_test.go:261: (dbg) Run:  kubectl --context pause-464015 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (92.79s)
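The "bufio.Scanner: token too long" error in the stderr above is the standard error Go's bufio.Scanner returns when a single line exceeds its token limit (64 KiB by default), so one very long line in lastStart.txt is enough for logs.go to fail to read the file. The sketch below is not minikube's actual logs.go code, only a minimal illustration of that failure mode and of the Scanner.Buffer workaround; the file path and the 10 MiB limit are assumptions chosen for the example.

	// linereader.go: minimal sketch of reading a log file line-by-line with bufio.Scanner.
	package main

	import (
		"bufio"
		"fmt"
		"os"
	)

	// readLines reads path one line at a time. With the scanner's default limit
	// (bufio.MaxScanTokenSize, 64 KiB) a longer line makes Scan() stop and
	// Err() return bufio.ErrTooLong ("bufio.Scanner: token too long").
	// Passing a larger max via Buffer avoids that.
	func readLines(path string, maxLine int) ([]string, error) {
		f, err := os.Open(path)
		if err != nil {
			return nil, err
		}
		defer f.Close()

		sc := bufio.NewScanner(f)
		sc.Buffer(make([]byte, 0, 64*1024), maxLine) // raise the per-line limit

		var lines []string
		for sc.Scan() {
			lines = append(lines, sc.Text())
		}
		return lines, sc.Err()
	}

	func main() {
		// 10 MiB is an illustrative limit; the default 64 KiB reproduces the error above.
		if _, err := readLines("lastStart.txt", 10*1024*1024); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}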

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (294.61s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-021528 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-021528 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (4m54.31462983s)

                                                
                                                
-- stdout --
	* [old-k8s-version-021528] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19312
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19312-1055011/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19312-1055011/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "old-k8s-version-021528" primary control-plane node in "old-k8s-version-021528" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 19:34:13.736434 1113418 out.go:291] Setting OutFile to fd 1 ...
	I0729 19:34:13.736689 1113418 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 19:34:13.736698 1113418 out.go:304] Setting ErrFile to fd 2...
	I0729 19:34:13.736702 1113418 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 19:34:13.736910 1113418 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19312-1055011/.minikube/bin
	I0729 19:34:13.737542 1113418 out.go:298] Setting JSON to false
	I0729 19:34:13.738720 1113418 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":11806,"bootTime":1722269848,"procs":297,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 19:34:13.738778 1113418 start.go:139] virtualization: kvm guest
	I0729 19:34:13.741165 1113418 out.go:177] * [old-k8s-version-021528] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 19:34:13.742422 1113418 out.go:177]   - MINIKUBE_LOCATION=19312
	I0729 19:34:13.742461 1113418 notify.go:220] Checking for updates...
	I0729 19:34:13.744563 1113418 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 19:34:13.745725 1113418 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19312-1055011/kubeconfig
	I0729 19:34:13.746753 1113418 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19312-1055011/.minikube
	I0729 19:34:13.747760 1113418 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 19:34:13.748750 1113418 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 19:34:13.750396 1113418 config.go:182] Loaded profile config "bridge-184620": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 19:34:13.750552 1113418 config.go:182] Loaded profile config "enable-default-cni-184620": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 19:34:13.750685 1113418 config.go:182] Loaded profile config "flannel-184620": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 19:34:13.750801 1113418 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 19:34:13.789223 1113418 out.go:177] * Using the kvm2 driver based on user configuration
	I0729 19:34:13.790390 1113418 start.go:297] selected driver: kvm2
	I0729 19:34:13.790438 1113418 start.go:901] validating driver "kvm2" against <nil>
	I0729 19:34:13.790453 1113418 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 19:34:13.791607 1113418 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 19:34:13.791687 1113418 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19312-1055011/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 19:34:13.815033 1113418 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0729 19:34:13.815099 1113418 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 19:34:13.815336 1113418 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 19:34:13.815402 1113418 cni.go:84] Creating CNI manager for ""
	I0729 19:34:13.815418 1113418 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 19:34:13.815428 1113418 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 19:34:13.815520 1113418 start.go:340] cluster config:
	{Name:old-k8s-version-021528 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-021528 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 19:34:13.815661 1113418 iso.go:125] acquiring lock: {Name:mk0af61c0fec1fd47930e548d03010a532c687b7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 19:34:13.817424 1113418 out.go:177] * Starting "old-k8s-version-021528" primary control-plane node in "old-k8s-version-021528" cluster
	I0729 19:34:13.818697 1113418 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0729 19:34:13.818748 1113418 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0729 19:34:13.818760 1113418 cache.go:56] Caching tarball of preloaded images
	I0729 19:34:13.818867 1113418 preload.go:172] Found /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 19:34:13.818880 1113418 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0729 19:34:13.819001 1113418 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/old-k8s-version-021528/config.json ...
	I0729 19:34:13.819026 1113418 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/old-k8s-version-021528/config.json: {Name:mk956e741caefd34d37605eb1d444ddbc287a03b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 19:34:13.819187 1113418 start.go:360] acquireMachinesLock for old-k8s-version-021528: {Name:mk0d8d947666df844b5fc2c0e0eebbfed69b4140 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 19:34:30.247721 1113418 start.go:364] duration metric: took 16.428501352s to acquireMachinesLock for "old-k8s-version-021528"
	I0729 19:34:30.247809 1113418 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-021528 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-021528 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 19:34:30.247934 1113418 start.go:125] createHost starting for "" (driver="kvm2")
	I0729 19:34:30.249919 1113418 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 19:34:30.250165 1113418 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:34:30.250220 1113418 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:34:30.270615 1113418 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45379
	I0729 19:34:30.271102 1113418 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:34:30.271720 1113418 main.go:141] libmachine: Using API Version  1
	I0729 19:34:30.271754 1113418 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:34:30.272126 1113418 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:34:30.272346 1113418 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetMachineName
	I0729 19:34:30.272502 1113418 main.go:141] libmachine: (old-k8s-version-021528) Calling .DriverName
	I0729 19:34:30.272668 1113418 start.go:159] libmachine.API.Create for "old-k8s-version-021528" (driver="kvm2")
	I0729 19:34:30.272695 1113418 client.go:168] LocalClient.Create starting
	I0729 19:34:30.272728 1113418 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem
	I0729 19:34:30.272770 1113418 main.go:141] libmachine: Decoding PEM data...
	I0729 19:34:30.272792 1113418 main.go:141] libmachine: Parsing certificate...
	I0729 19:34:30.272879 1113418 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/cert.pem
	I0729 19:34:30.272913 1113418 main.go:141] libmachine: Decoding PEM data...
	I0729 19:34:30.272933 1113418 main.go:141] libmachine: Parsing certificate...
	I0729 19:34:30.272958 1113418 main.go:141] libmachine: Running pre-create checks...
	I0729 19:34:30.272976 1113418 main.go:141] libmachine: (old-k8s-version-021528) Calling .PreCreateCheck
	I0729 19:34:30.273363 1113418 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetConfigRaw
	I0729 19:34:30.273849 1113418 main.go:141] libmachine: Creating machine...
	I0729 19:34:30.273869 1113418 main.go:141] libmachine: (old-k8s-version-021528) Calling .Create
	I0729 19:34:30.274008 1113418 main.go:141] libmachine: (old-k8s-version-021528) Creating KVM machine...
	I0729 19:34:30.275299 1113418 main.go:141] libmachine: (old-k8s-version-021528) DBG | found existing default KVM network
	I0729 19:34:30.277492 1113418 main.go:141] libmachine: (old-k8s-version-021528) DBG | I0729 19:34:30.277300 1113637 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015930}
	I0729 19:34:30.277524 1113418 main.go:141] libmachine: (old-k8s-version-021528) DBG | created network xml: 
	I0729 19:34:30.277541 1113418 main.go:141] libmachine: (old-k8s-version-021528) DBG | <network>
	I0729 19:34:30.277550 1113418 main.go:141] libmachine: (old-k8s-version-021528) DBG |   <name>mk-old-k8s-version-021528</name>
	I0729 19:34:30.277562 1113418 main.go:141] libmachine: (old-k8s-version-021528) DBG |   <dns enable='no'/>
	I0729 19:34:30.277572 1113418 main.go:141] libmachine: (old-k8s-version-021528) DBG |   
	I0729 19:34:30.277585 1113418 main.go:141] libmachine: (old-k8s-version-021528) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0729 19:34:30.277601 1113418 main.go:141] libmachine: (old-k8s-version-021528) DBG |     <dhcp>
	I0729 19:34:30.277631 1113418 main.go:141] libmachine: (old-k8s-version-021528) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0729 19:34:30.277654 1113418 main.go:141] libmachine: (old-k8s-version-021528) DBG |     </dhcp>
	I0729 19:34:30.277678 1113418 main.go:141] libmachine: (old-k8s-version-021528) DBG |   </ip>
	I0729 19:34:30.277684 1113418 main.go:141] libmachine: (old-k8s-version-021528) DBG |   
	I0729 19:34:30.277695 1113418 main.go:141] libmachine: (old-k8s-version-021528) DBG | </network>
	I0729 19:34:30.277706 1113418 main.go:141] libmachine: (old-k8s-version-021528) DBG | 
	I0729 19:34:30.282635 1113418 main.go:141] libmachine: (old-k8s-version-021528) DBG | trying to create private KVM network mk-old-k8s-version-021528 192.168.39.0/24...
	I0729 19:34:30.359701 1113418 main.go:141] libmachine: (old-k8s-version-021528) DBG | private KVM network mk-old-k8s-version-021528 192.168.39.0/24 created
	I0729 19:34:30.359736 1113418 main.go:141] libmachine: (old-k8s-version-021528) DBG | I0729 19:34:30.359666 1113637 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19312-1055011/.minikube
	I0729 19:34:30.359767 1113418 main.go:141] libmachine: (old-k8s-version-021528) Setting up store path in /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/old-k8s-version-021528 ...
	I0729 19:34:30.359799 1113418 main.go:141] libmachine: (old-k8s-version-021528) Building disk image from file:///home/jenkins/minikube-integration/19312-1055011/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso
	I0729 19:34:30.359816 1113418 main.go:141] libmachine: (old-k8s-version-021528) Downloading /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19312-1055011/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso...
	I0729 19:34:30.634541 1113418 main.go:141] libmachine: (old-k8s-version-021528) DBG | I0729 19:34:30.634410 1113637 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/old-k8s-version-021528/id_rsa...
	I0729 19:34:30.814817 1113418 main.go:141] libmachine: (old-k8s-version-021528) DBG | I0729 19:34:30.814655 1113637 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/old-k8s-version-021528/old-k8s-version-021528.rawdisk...
	I0729 19:34:30.814871 1113418 main.go:141] libmachine: (old-k8s-version-021528) DBG | Writing magic tar header
	I0729 19:34:30.814902 1113418 main.go:141] libmachine: (old-k8s-version-021528) DBG | Writing SSH key tar header
	I0729 19:34:30.814922 1113418 main.go:141] libmachine: (old-k8s-version-021528) DBG | I0729 19:34:30.814815 1113637 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/old-k8s-version-021528 ...
	I0729 19:34:30.815051 1113418 main.go:141] libmachine: (old-k8s-version-021528) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/old-k8s-version-021528
	I0729 19:34:30.815086 1113418 main.go:141] libmachine: (old-k8s-version-021528) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19312-1055011/.minikube/machines
	I0729 19:34:30.815101 1113418 main.go:141] libmachine: (old-k8s-version-021528) Setting executable bit set on /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/old-k8s-version-021528 (perms=drwx------)
	I0729 19:34:30.815114 1113418 main.go:141] libmachine: (old-k8s-version-021528) Setting executable bit set on /home/jenkins/minikube-integration/19312-1055011/.minikube/machines (perms=drwxr-xr-x)
	I0729 19:34:30.815120 1113418 main.go:141] libmachine: (old-k8s-version-021528) Setting executable bit set on /home/jenkins/minikube-integration/19312-1055011/.minikube (perms=drwxr-xr-x)
	I0729 19:34:30.815128 1113418 main.go:141] libmachine: (old-k8s-version-021528) Setting executable bit set on /home/jenkins/minikube-integration/19312-1055011 (perms=drwxrwxr-x)
	I0729 19:34:30.815141 1113418 main.go:141] libmachine: (old-k8s-version-021528) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0729 19:34:30.815153 1113418 main.go:141] libmachine: (old-k8s-version-021528) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0729 19:34:30.815165 1113418 main.go:141] libmachine: (old-k8s-version-021528) Creating domain...
	I0729 19:34:30.815178 1113418 main.go:141] libmachine: (old-k8s-version-021528) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19312-1055011/.minikube
	I0729 19:34:30.815232 1113418 main.go:141] libmachine: (old-k8s-version-021528) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19312-1055011
	I0729 19:34:30.815267 1113418 main.go:141] libmachine: (old-k8s-version-021528) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0729 19:34:30.815281 1113418 main.go:141] libmachine: (old-k8s-version-021528) DBG | Checking permissions on dir: /home/jenkins
	I0729 19:34:30.815293 1113418 main.go:141] libmachine: (old-k8s-version-021528) DBG | Checking permissions on dir: /home
	I0729 19:34:30.815312 1113418 main.go:141] libmachine: (old-k8s-version-021528) DBG | Skipping /home - not owner
	I0729 19:34:30.816256 1113418 main.go:141] libmachine: (old-k8s-version-021528) define libvirt domain using xml: 
	I0729 19:34:30.816295 1113418 main.go:141] libmachine: (old-k8s-version-021528) <domain type='kvm'>
	I0729 19:34:30.816309 1113418 main.go:141] libmachine: (old-k8s-version-021528)   <name>old-k8s-version-021528</name>
	I0729 19:34:30.816321 1113418 main.go:141] libmachine: (old-k8s-version-021528)   <memory unit='MiB'>2200</memory>
	I0729 19:34:30.816328 1113418 main.go:141] libmachine: (old-k8s-version-021528)   <vcpu>2</vcpu>
	I0729 19:34:30.816333 1113418 main.go:141] libmachine: (old-k8s-version-021528)   <features>
	I0729 19:34:30.816338 1113418 main.go:141] libmachine: (old-k8s-version-021528)     <acpi/>
	I0729 19:34:30.816342 1113418 main.go:141] libmachine: (old-k8s-version-021528)     <apic/>
	I0729 19:34:30.816347 1113418 main.go:141] libmachine: (old-k8s-version-021528)     <pae/>
	I0729 19:34:30.816362 1113418 main.go:141] libmachine: (old-k8s-version-021528)     
	I0729 19:34:30.816367 1113418 main.go:141] libmachine: (old-k8s-version-021528)   </features>
	I0729 19:34:30.816372 1113418 main.go:141] libmachine: (old-k8s-version-021528)   <cpu mode='host-passthrough'>
	I0729 19:34:30.816390 1113418 main.go:141] libmachine: (old-k8s-version-021528)   
	I0729 19:34:30.816399 1113418 main.go:141] libmachine: (old-k8s-version-021528)   </cpu>
	I0729 19:34:30.816407 1113418 main.go:141] libmachine: (old-k8s-version-021528)   <os>
	I0729 19:34:30.816414 1113418 main.go:141] libmachine: (old-k8s-version-021528)     <type>hvm</type>
	I0729 19:34:30.816421 1113418 main.go:141] libmachine: (old-k8s-version-021528)     <boot dev='cdrom'/>
	I0729 19:34:30.816426 1113418 main.go:141] libmachine: (old-k8s-version-021528)     <boot dev='hd'/>
	I0729 19:34:30.816432 1113418 main.go:141] libmachine: (old-k8s-version-021528)     <bootmenu enable='no'/>
	I0729 19:34:30.816438 1113418 main.go:141] libmachine: (old-k8s-version-021528)   </os>
	I0729 19:34:30.816446 1113418 main.go:141] libmachine: (old-k8s-version-021528)   <devices>
	I0729 19:34:30.816453 1113418 main.go:141] libmachine: (old-k8s-version-021528)     <disk type='file' device='cdrom'>
	I0729 19:34:30.816465 1113418 main.go:141] libmachine: (old-k8s-version-021528)       <source file='/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/old-k8s-version-021528/boot2docker.iso'/>
	I0729 19:34:30.816472 1113418 main.go:141] libmachine: (old-k8s-version-021528)       <target dev='hdc' bus='scsi'/>
	I0729 19:34:30.816480 1113418 main.go:141] libmachine: (old-k8s-version-021528)       <readonly/>
	I0729 19:34:30.816487 1113418 main.go:141] libmachine: (old-k8s-version-021528)     </disk>
	I0729 19:34:30.816495 1113418 main.go:141] libmachine: (old-k8s-version-021528)     <disk type='file' device='disk'>
	I0729 19:34:30.816503 1113418 main.go:141] libmachine: (old-k8s-version-021528)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0729 19:34:30.816516 1113418 main.go:141] libmachine: (old-k8s-version-021528)       <source file='/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/old-k8s-version-021528/old-k8s-version-021528.rawdisk'/>
	I0729 19:34:30.816523 1113418 main.go:141] libmachine: (old-k8s-version-021528)       <target dev='hda' bus='virtio'/>
	I0729 19:34:30.816531 1113418 main.go:141] libmachine: (old-k8s-version-021528)     </disk>
	I0729 19:34:30.816544 1113418 main.go:141] libmachine: (old-k8s-version-021528)     <interface type='network'>
	I0729 19:34:30.816554 1113418 main.go:141] libmachine: (old-k8s-version-021528)       <source network='mk-old-k8s-version-021528'/>
	I0729 19:34:30.816562 1113418 main.go:141] libmachine: (old-k8s-version-021528)       <model type='virtio'/>
	I0729 19:34:30.816579 1113418 main.go:141] libmachine: (old-k8s-version-021528)     </interface>
	I0729 19:34:30.816587 1113418 main.go:141] libmachine: (old-k8s-version-021528)     <interface type='network'>
	I0729 19:34:30.816596 1113418 main.go:141] libmachine: (old-k8s-version-021528)       <source network='default'/>
	I0729 19:34:30.816603 1113418 main.go:141] libmachine: (old-k8s-version-021528)       <model type='virtio'/>
	I0729 19:34:30.816609 1113418 main.go:141] libmachine: (old-k8s-version-021528)     </interface>
	I0729 19:34:30.816618 1113418 main.go:141] libmachine: (old-k8s-version-021528)     <serial type='pty'>
	I0729 19:34:30.816649 1113418 main.go:141] libmachine: (old-k8s-version-021528)       <target port='0'/>
	I0729 19:34:30.816668 1113418 main.go:141] libmachine: (old-k8s-version-021528)     </serial>
	I0729 19:34:30.816679 1113418 main.go:141] libmachine: (old-k8s-version-021528)     <console type='pty'>
	I0729 19:34:30.816687 1113418 main.go:141] libmachine: (old-k8s-version-021528)       <target type='serial' port='0'/>
	I0729 19:34:30.816697 1113418 main.go:141] libmachine: (old-k8s-version-021528)     </console>
	I0729 19:34:30.816705 1113418 main.go:141] libmachine: (old-k8s-version-021528)     <rng model='virtio'>
	I0729 19:34:30.816716 1113418 main.go:141] libmachine: (old-k8s-version-021528)       <backend model='random'>/dev/random</backend>
	I0729 19:34:30.816722 1113418 main.go:141] libmachine: (old-k8s-version-021528)     </rng>
	I0729 19:34:30.816730 1113418 main.go:141] libmachine: (old-k8s-version-021528)     
	I0729 19:34:30.816736 1113418 main.go:141] libmachine: (old-k8s-version-021528)     
	I0729 19:34:30.816756 1113418 main.go:141] libmachine: (old-k8s-version-021528)   </devices>
	I0729 19:34:30.816773 1113418 main.go:141] libmachine: (old-k8s-version-021528) </domain>
	I0729 19:34:30.816784 1113418 main.go:141] libmachine: (old-k8s-version-021528) 
	I0729 19:34:30.821067 1113418 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:7e:b4:fa in network default
	I0729 19:34:30.821679 1113418 main.go:141] libmachine: (old-k8s-version-021528) Ensuring networks are active...
	I0729 19:34:30.821703 1113418 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:34:30.822524 1113418 main.go:141] libmachine: (old-k8s-version-021528) Ensuring network default is active
	I0729 19:34:30.822827 1113418 main.go:141] libmachine: (old-k8s-version-021528) Ensuring network mk-old-k8s-version-021528 is active
	I0729 19:34:30.823413 1113418 main.go:141] libmachine: (old-k8s-version-021528) Getting domain xml...
	I0729 19:34:30.824203 1113418 main.go:141] libmachine: (old-k8s-version-021528) Creating domain...
	I0729 19:34:32.292906 1113418 main.go:141] libmachine: (old-k8s-version-021528) Waiting to get IP...
	I0729 19:34:32.293821 1113418 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:34:32.294348 1113418 main.go:141] libmachine: (old-k8s-version-021528) DBG | unable to find current IP address of domain old-k8s-version-021528 in network mk-old-k8s-version-021528
	I0729 19:34:32.294375 1113418 main.go:141] libmachine: (old-k8s-version-021528) DBG | I0729 19:34:32.294330 1113637 retry.go:31] will retry after 210.617574ms: waiting for machine to come up
	I0729 19:34:32.510302 1113418 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:34:32.511984 1113418 main.go:141] libmachine: (old-k8s-version-021528) DBG | unable to find current IP address of domain old-k8s-version-021528 in network mk-old-k8s-version-021528
	I0729 19:34:32.512016 1113418 main.go:141] libmachine: (old-k8s-version-021528) DBG | I0729 19:34:32.511953 1113637 retry.go:31] will retry after 370.681205ms: waiting for machine to come up
	I0729 19:34:32.885636 1113418 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:34:32.886736 1113418 main.go:141] libmachine: (old-k8s-version-021528) DBG | unable to find current IP address of domain old-k8s-version-021528 in network mk-old-k8s-version-021528
	I0729 19:34:32.886760 1113418 main.go:141] libmachine: (old-k8s-version-021528) DBG | I0729 19:34:32.886649 1113637 retry.go:31] will retry after 390.095435ms: waiting for machine to come up
	I0729 19:34:33.279909 1113418 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:34:33.280354 1113418 main.go:141] libmachine: (old-k8s-version-021528) DBG | unable to find current IP address of domain old-k8s-version-021528 in network mk-old-k8s-version-021528
	I0729 19:34:33.280382 1113418 main.go:141] libmachine: (old-k8s-version-021528) DBG | I0729 19:34:33.280322 1113637 retry.go:31] will retry after 387.880335ms: waiting for machine to come up
	I0729 19:34:33.672960 1113418 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:34:33.675113 1113418 main.go:141] libmachine: (old-k8s-version-021528) DBG | unable to find current IP address of domain old-k8s-version-021528 in network mk-old-k8s-version-021528
	I0729 19:34:33.675140 1113418 main.go:141] libmachine: (old-k8s-version-021528) DBG | I0729 19:34:33.675013 1113637 retry.go:31] will retry after 688.254582ms: waiting for machine to come up
	I0729 19:34:34.364518 1113418 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:34:34.365099 1113418 main.go:141] libmachine: (old-k8s-version-021528) DBG | unable to find current IP address of domain old-k8s-version-021528 in network mk-old-k8s-version-021528
	I0729 19:34:34.365125 1113418 main.go:141] libmachine: (old-k8s-version-021528) DBG | I0729 19:34:34.365019 1113637 retry.go:31] will retry after 758.544526ms: waiting for machine to come up
	I0729 19:34:35.125169 1113418 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:34:35.125646 1113418 main.go:141] libmachine: (old-k8s-version-021528) DBG | unable to find current IP address of domain old-k8s-version-021528 in network mk-old-k8s-version-021528
	I0729 19:34:35.125683 1113418 main.go:141] libmachine: (old-k8s-version-021528) DBG | I0729 19:34:35.125608 1113637 retry.go:31] will retry after 737.485255ms: waiting for machine to come up
	I0729 19:34:35.865107 1113418 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:34:35.865711 1113418 main.go:141] libmachine: (old-k8s-version-021528) DBG | unable to find current IP address of domain old-k8s-version-021528 in network mk-old-k8s-version-021528
	I0729 19:34:35.865754 1113418 main.go:141] libmachine: (old-k8s-version-021528) DBG | I0729 19:34:35.865620 1113637 retry.go:31] will retry after 1.420035355s: waiting for machine to come up
	I0729 19:34:37.287973 1113418 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:34:37.288668 1113418 main.go:141] libmachine: (old-k8s-version-021528) DBG | unable to find current IP address of domain old-k8s-version-021528 in network mk-old-k8s-version-021528
	I0729 19:34:37.288697 1113418 main.go:141] libmachine: (old-k8s-version-021528) DBG | I0729 19:34:37.288564 1113637 retry.go:31] will retry after 1.648688258s: waiting for machine to come up
	I0729 19:34:38.938573 1113418 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:34:38.939181 1113418 main.go:141] libmachine: (old-k8s-version-021528) DBG | unable to find current IP address of domain old-k8s-version-021528 in network mk-old-k8s-version-021528
	I0729 19:34:38.939210 1113418 main.go:141] libmachine: (old-k8s-version-021528) DBG | I0729 19:34:38.939129 1113637 retry.go:31] will retry after 1.912797583s: waiting for machine to come up
	I0729 19:34:40.853376 1113418 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:34:40.854035 1113418 main.go:141] libmachine: (old-k8s-version-021528) DBG | unable to find current IP address of domain old-k8s-version-021528 in network mk-old-k8s-version-021528
	I0729 19:34:40.854062 1113418 main.go:141] libmachine: (old-k8s-version-021528) DBG | I0729 19:34:40.853931 1113637 retry.go:31] will retry after 2.744984198s: waiting for machine to come up
	I0729 19:34:43.743520 1113418 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:34:43.744117 1113418 main.go:141] libmachine: (old-k8s-version-021528) DBG | unable to find current IP address of domain old-k8s-version-021528 in network mk-old-k8s-version-021528
	I0729 19:34:43.744146 1113418 main.go:141] libmachine: (old-k8s-version-021528) DBG | I0729 19:34:43.744077 1113637 retry.go:31] will retry after 2.621851369s: waiting for machine to come up
	I0729 19:34:46.367596 1113418 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:34:46.368158 1113418 main.go:141] libmachine: (old-k8s-version-021528) DBG | unable to find current IP address of domain old-k8s-version-021528 in network mk-old-k8s-version-021528
	I0729 19:34:46.368201 1113418 main.go:141] libmachine: (old-k8s-version-021528) DBG | I0729 19:34:46.368104 1113637 retry.go:31] will retry after 4.090468469s: waiting for machine to come up
	I0729 19:34:50.460420 1113418 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:34:50.461001 1113418 main.go:141] libmachine: (old-k8s-version-021528) DBG | unable to find current IP address of domain old-k8s-version-021528 in network mk-old-k8s-version-021528
	I0729 19:34:50.461027 1113418 main.go:141] libmachine: (old-k8s-version-021528) DBG | I0729 19:34:50.460962 1113637 retry.go:31] will retry after 5.492855236s: waiting for machine to come up
	I0729 19:34:55.955369 1113418 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:34:55.955907 1113418 main.go:141] libmachine: (old-k8s-version-021528) Found IP for machine: 192.168.39.65
	I0729 19:34:55.955931 1113418 main.go:141] libmachine: (old-k8s-version-021528) Reserving static IP address...
	I0729 19:34:55.955960 1113418 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has current primary IP address 192.168.39.65 and MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:34:55.956338 1113418 main.go:141] libmachine: (old-k8s-version-021528) DBG | unable to find host DHCP lease matching {name: "old-k8s-version-021528", mac: "52:54:00:12:c7:d2", ip: "192.168.39.65"} in network mk-old-k8s-version-021528
	I0729 19:34:56.031369 1113418 main.go:141] libmachine: (old-k8s-version-021528) DBG | Getting to WaitForSSH function...
	I0729 19:34:56.031400 1113418 main.go:141] libmachine: (old-k8s-version-021528) Reserved static IP address: 192.168.39.65
	I0729 19:34:56.031432 1113418 main.go:141] libmachine: (old-k8s-version-021528) Waiting for SSH to be available...
	I0729 19:34:56.034216 1113418 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:34:56.034549 1113418 main.go:141] libmachine: (old-k8s-version-021528) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:12:c7:d2", ip: ""} in network mk-old-k8s-version-021528
	I0729 19:34:56.034573 1113418 main.go:141] libmachine: (old-k8s-version-021528) DBG | unable to find defined IP address of network mk-old-k8s-version-021528 interface with MAC address 52:54:00:12:c7:d2
	I0729 19:34:56.034737 1113418 main.go:141] libmachine: (old-k8s-version-021528) DBG | Using SSH client type: external
	I0729 19:34:56.034762 1113418 main.go:141] libmachine: (old-k8s-version-021528) DBG | Using SSH private key: /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/old-k8s-version-021528/id_rsa (-rw-------)
	I0729 19:34:56.034870 1113418 main.go:141] libmachine: (old-k8s-version-021528) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/old-k8s-version-021528/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 19:34:56.034909 1113418 main.go:141] libmachine: (old-k8s-version-021528) DBG | About to run SSH command:
	I0729 19:34:56.034933 1113418 main.go:141] libmachine: (old-k8s-version-021528) DBG | exit 0
	I0729 19:34:56.038300 1113418 main.go:141] libmachine: (old-k8s-version-021528) DBG | SSH cmd err, output: exit status 255: 
	I0729 19:34:56.038325 1113418 main.go:141] libmachine: (old-k8s-version-021528) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0729 19:34:56.038335 1113418 main.go:141] libmachine: (old-k8s-version-021528) DBG | command : exit 0
	I0729 19:34:56.038350 1113418 main.go:141] libmachine: (old-k8s-version-021528) DBG | err     : exit status 255
	I0729 19:34:56.038363 1113418 main.go:141] libmachine: (old-k8s-version-021528) DBG | output  : 
	I0729 19:34:59.039015 1113418 main.go:141] libmachine: (old-k8s-version-021528) DBG | Getting to WaitForSSH function...
	I0729 19:34:59.041658 1113418 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:34:59.042117 1113418 main.go:141] libmachine: (old-k8s-version-021528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:c7:d2", ip: ""} in network mk-old-k8s-version-021528: {Iface:virbr1 ExpiryTime:2024-07-29 20:34:47 +0000 UTC Type:0 Mac:52:54:00:12:c7:d2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:old-k8s-version-021528 Clientid:01:52:54:00:12:c7:d2}
	I0729 19:34:59.042145 1113418 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined IP address 192.168.39.65 and MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:34:59.042275 1113418 main.go:141] libmachine: (old-k8s-version-021528) DBG | Using SSH client type: external
	I0729 19:34:59.042313 1113418 main.go:141] libmachine: (old-k8s-version-021528) DBG | Using SSH private key: /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/old-k8s-version-021528/id_rsa (-rw-------)
	I0729 19:34:59.042378 1113418 main.go:141] libmachine: (old-k8s-version-021528) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.65 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/old-k8s-version-021528/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 19:34:59.042408 1113418 main.go:141] libmachine: (old-k8s-version-021528) DBG | About to run SSH command:
	I0729 19:34:59.042423 1113418 main.go:141] libmachine: (old-k8s-version-021528) DBG | exit 0
	I0729 19:34:59.171182 1113418 main.go:141] libmachine: (old-k8s-version-021528) DBG | SSH cmd err, output: <nil>: 
	I0729 19:34:59.171428 1113418 main.go:141] libmachine: (old-k8s-version-021528) KVM machine creation complete!
	I0729 19:34:59.171803 1113418 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetConfigRaw
	I0729 19:34:59.172407 1113418 main.go:141] libmachine: (old-k8s-version-021528) Calling .DriverName
	I0729 19:34:59.172644 1113418 main.go:141] libmachine: (old-k8s-version-021528) Calling .DriverName
	I0729 19:34:59.172810 1113418 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0729 19:34:59.172825 1113418 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetState
	I0729 19:34:59.174198 1113418 main.go:141] libmachine: Detecting operating system of created instance...
	I0729 19:34:59.174216 1113418 main.go:141] libmachine: Waiting for SSH to be available...
	I0729 19:34:59.174223 1113418 main.go:141] libmachine: Getting to WaitForSSH function...
	I0729 19:34:59.174231 1113418 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHHostname
	I0729 19:34:59.176686 1113418 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:34:59.177035 1113418 main.go:141] libmachine: (old-k8s-version-021528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:c7:d2", ip: ""} in network mk-old-k8s-version-021528: {Iface:virbr1 ExpiryTime:2024-07-29 20:34:47 +0000 UTC Type:0 Mac:52:54:00:12:c7:d2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:old-k8s-version-021528 Clientid:01:52:54:00:12:c7:d2}
	I0729 19:34:59.177062 1113418 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined IP address 192.168.39.65 and MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:34:59.177222 1113418 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHPort
	I0729 19:34:59.177415 1113418 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHKeyPath
	I0729 19:34:59.177585 1113418 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHKeyPath
	I0729 19:34:59.177730 1113418 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHUsername
	I0729 19:34:59.177863 1113418 main.go:141] libmachine: Using SSH client type: native
	I0729 19:34:59.178064 1113418 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.65 22 <nil> <nil>}
	I0729 19:34:59.178078 1113418 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0729 19:34:59.286271 1113418 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 19:34:59.286296 1113418 main.go:141] libmachine: Detecting the provisioner...
	I0729 19:34:59.286304 1113418 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHHostname
	I0729 19:34:59.288950 1113418 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:34:59.289326 1113418 main.go:141] libmachine: (old-k8s-version-021528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:c7:d2", ip: ""} in network mk-old-k8s-version-021528: {Iface:virbr1 ExpiryTime:2024-07-29 20:34:47 +0000 UTC Type:0 Mac:52:54:00:12:c7:d2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:old-k8s-version-021528 Clientid:01:52:54:00:12:c7:d2}
	I0729 19:34:59.289355 1113418 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined IP address 192.168.39.65 and MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:34:59.289551 1113418 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHPort
	I0729 19:34:59.289747 1113418 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHKeyPath
	I0729 19:34:59.289877 1113418 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHKeyPath
	I0729 19:34:59.290000 1113418 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHUsername
	I0729 19:34:59.290162 1113418 main.go:141] libmachine: Using SSH client type: native
	I0729 19:34:59.290342 1113418 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.65 22 <nil> <nil>}
	I0729 19:34:59.290360 1113418 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0729 19:34:59.399526 1113418 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0729 19:34:59.399643 1113418 main.go:141] libmachine: found compatible host: buildroot
	I0729 19:34:59.399660 1113418 main.go:141] libmachine: Provisioning with buildroot...
	I0729 19:34:59.399671 1113418 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetMachineName
	I0729 19:34:59.399958 1113418 buildroot.go:166] provisioning hostname "old-k8s-version-021528"
	I0729 19:34:59.399986 1113418 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetMachineName
	I0729 19:34:59.400182 1113418 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHHostname
	I0729 19:34:59.403243 1113418 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:34:59.403634 1113418 main.go:141] libmachine: (old-k8s-version-021528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:c7:d2", ip: ""} in network mk-old-k8s-version-021528: {Iface:virbr1 ExpiryTime:2024-07-29 20:34:47 +0000 UTC Type:0 Mac:52:54:00:12:c7:d2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:old-k8s-version-021528 Clientid:01:52:54:00:12:c7:d2}
	I0729 19:34:59.403661 1113418 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined IP address 192.168.39.65 and MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:34:59.403845 1113418 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHPort
	I0729 19:34:59.404035 1113418 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHKeyPath
	I0729 19:34:59.404222 1113418 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHKeyPath
	I0729 19:34:59.404354 1113418 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHUsername
	I0729 19:34:59.404511 1113418 main.go:141] libmachine: Using SSH client type: native
	I0729 19:34:59.404753 1113418 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.65 22 <nil> <nil>}
	I0729 19:34:59.404774 1113418 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-021528 && echo "old-k8s-version-021528" | sudo tee /etc/hostname
	I0729 19:34:59.528939 1113418 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-021528
	
	I0729 19:34:59.528968 1113418 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHHostname
	I0729 19:34:59.531922 1113418 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:34:59.532292 1113418 main.go:141] libmachine: (old-k8s-version-021528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:c7:d2", ip: ""} in network mk-old-k8s-version-021528: {Iface:virbr1 ExpiryTime:2024-07-29 20:34:47 +0000 UTC Type:0 Mac:52:54:00:12:c7:d2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:old-k8s-version-021528 Clientid:01:52:54:00:12:c7:d2}
	I0729 19:34:59.532315 1113418 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined IP address 192.168.39.65 and MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:34:59.532504 1113418 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHPort
	I0729 19:34:59.532728 1113418 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHKeyPath
	I0729 19:34:59.532884 1113418 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHKeyPath
	I0729 19:34:59.533033 1113418 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHUsername
	I0729 19:34:59.533185 1113418 main.go:141] libmachine: Using SSH client type: native
	I0729 19:34:59.533353 1113418 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.65 22 <nil> <nil>}
	I0729 19:34:59.533369 1113418 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-021528' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-021528/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-021528' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 19:34:59.651226 1113418 main.go:141] libmachine: SSH cmd err, output: <nil>: 
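Note on the "Using SSH client type: native" steps above: minikube drives the guest over SSH from Go rather than shelling out once provisioning starts. A minimal sketch of that pattern with golang.org/x/crypto/ssh, reusing the address and user from the log but with an otherwise hypothetical key path; this is not minikube's sshutil/ssh_runner code:

package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Hypothetical key path for illustration only.
	keyBytes, err := os.ReadFile("/home/jenkins/.minikube/machines/example/id_rsa")
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // same effect as StrictHostKeyChecking=no above
	}
	client, err := ssh.Dial("tcp", "192.168.39.65:22", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer session.Close()

	// Same liveness probe the log shows: run "exit 0" and report error plus output.
	out, err := session.CombinedOutput("exit 0")
	fmt.Printf("SSH cmd err, output: %v: %s\n", err, out)
}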
	I0729 19:34:59.651271 1113418 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19312-1055011/.minikube CaCertPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19312-1055011/.minikube}
	I0729 19:34:59.651289 1113418 buildroot.go:174] setting up certificates
	I0729 19:34:59.651297 1113418 provision.go:84] configureAuth start
	I0729 19:34:59.651306 1113418 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetMachineName
	I0729 19:34:59.651561 1113418 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetIP
	I0729 19:34:59.654388 1113418 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:34:59.654772 1113418 main.go:141] libmachine: (old-k8s-version-021528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:c7:d2", ip: ""} in network mk-old-k8s-version-021528: {Iface:virbr1 ExpiryTime:2024-07-29 20:34:47 +0000 UTC Type:0 Mac:52:54:00:12:c7:d2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:old-k8s-version-021528 Clientid:01:52:54:00:12:c7:d2}
	I0729 19:34:59.654800 1113418 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined IP address 192.168.39.65 and MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:34:59.654932 1113418 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHHostname
	I0729 19:34:59.657129 1113418 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:34:59.657548 1113418 main.go:141] libmachine: (old-k8s-version-021528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:c7:d2", ip: ""} in network mk-old-k8s-version-021528: {Iface:virbr1 ExpiryTime:2024-07-29 20:34:47 +0000 UTC Type:0 Mac:52:54:00:12:c7:d2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:old-k8s-version-021528 Clientid:01:52:54:00:12:c7:d2}
	I0729 19:34:59.657570 1113418 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined IP address 192.168.39.65 and MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:34:59.657733 1113418 provision.go:143] copyHostCerts
	I0729 19:34:59.657814 1113418 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.pem, removing ...
	I0729 19:34:59.657829 1113418 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.pem
	I0729 19:34:59.657883 1113418 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.pem (1082 bytes)
	I0729 19:34:59.657974 1113418 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-1055011/.minikube/cert.pem, removing ...
	I0729 19:34:59.657981 1113418 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-1055011/.minikube/cert.pem
	I0729 19:34:59.658002 1113418 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19312-1055011/.minikube/cert.pem (1123 bytes)
	I0729 19:34:59.658063 1113418 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-1055011/.minikube/key.pem, removing ...
	I0729 19:34:59.658070 1113418 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-1055011/.minikube/key.pem
	I0729 19:34:59.658086 1113418 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19312-1055011/.minikube/key.pem (1679 bytes)
	I0729 19:34:59.658142 1113418 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-021528 san=[127.0.0.1 192.168.39.65 localhost minikube old-k8s-version-021528]
	I0729 19:34:59.796497 1113418 provision.go:177] copyRemoteCerts
	I0729 19:34:59.796572 1113418 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 19:34:59.796605 1113418 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHHostname
	I0729 19:34:59.799203 1113418 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:34:59.799574 1113418 main.go:141] libmachine: (old-k8s-version-021528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:c7:d2", ip: ""} in network mk-old-k8s-version-021528: {Iface:virbr1 ExpiryTime:2024-07-29 20:34:47 +0000 UTC Type:0 Mac:52:54:00:12:c7:d2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:old-k8s-version-021528 Clientid:01:52:54:00:12:c7:d2}
	I0729 19:34:59.799604 1113418 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined IP address 192.168.39.65 and MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:34:59.799805 1113418 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHPort
	I0729 19:34:59.799996 1113418 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHKeyPath
	I0729 19:34:59.800134 1113418 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHUsername
	I0729 19:34:59.800247 1113418 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/old-k8s-version-021528/id_rsa Username:docker}
	I0729 19:34:59.889485 1113418 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0729 19:34:59.914421 1113418 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0729 19:34:59.940494 1113418 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0729 19:34:59.967198 1113418 provision.go:87] duration metric: took 315.887721ms to configureAuth
	I0729 19:34:59.967224 1113418 buildroot.go:189] setting minikube options for container-runtime
	I0729 19:34:59.967438 1113418 config.go:182] Loaded profile config "old-k8s-version-021528": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0729 19:34:59.967534 1113418 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHHostname
	I0729 19:34:59.970409 1113418 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:34:59.970783 1113418 main.go:141] libmachine: (old-k8s-version-021528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:c7:d2", ip: ""} in network mk-old-k8s-version-021528: {Iface:virbr1 ExpiryTime:2024-07-29 20:34:47 +0000 UTC Type:0 Mac:52:54:00:12:c7:d2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:old-k8s-version-021528 Clientid:01:52:54:00:12:c7:d2}
	I0729 19:34:59.970819 1113418 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined IP address 192.168.39.65 and MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:34:59.971086 1113418 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHPort
	I0729 19:34:59.971302 1113418 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHKeyPath
	I0729 19:34:59.971451 1113418 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHKeyPath
	I0729 19:34:59.971638 1113418 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHUsername
	I0729 19:34:59.971850 1113418 main.go:141] libmachine: Using SSH client type: native
	I0729 19:34:59.972026 1113418 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.65 22 <nil> <nil>}
	I0729 19:34:59.972041 1113418 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 19:35:00.254296 1113418 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 19:35:00.254328 1113418 main.go:141] libmachine: Checking connection to Docker...
	I0729 19:35:00.254340 1113418 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetURL
	I0729 19:35:00.255707 1113418 main.go:141] libmachine: (old-k8s-version-021528) DBG | Using libvirt version 6000000
	I0729 19:35:00.257951 1113418 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:35:00.258354 1113418 main.go:141] libmachine: (old-k8s-version-021528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:c7:d2", ip: ""} in network mk-old-k8s-version-021528: {Iface:virbr1 ExpiryTime:2024-07-29 20:34:47 +0000 UTC Type:0 Mac:52:54:00:12:c7:d2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:old-k8s-version-021528 Clientid:01:52:54:00:12:c7:d2}
	I0729 19:35:00.258385 1113418 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined IP address 192.168.39.65 and MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:35:00.258572 1113418 main.go:141] libmachine: Docker is up and running!
	I0729 19:35:00.258592 1113418 main.go:141] libmachine: Reticulating splines...
	I0729 19:35:00.258601 1113418 client.go:171] duration metric: took 29.985895589s to LocalClient.Create
	I0729 19:35:00.258630 1113418 start.go:167] duration metric: took 29.985964236s to libmachine.API.Create "old-k8s-version-021528"
	I0729 19:35:00.258639 1113418 start.go:293] postStartSetup for "old-k8s-version-021528" (driver="kvm2")
	I0729 19:35:00.258650 1113418 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 19:35:00.258667 1113418 main.go:141] libmachine: (old-k8s-version-021528) Calling .DriverName
	I0729 19:35:00.258931 1113418 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 19:35:00.258956 1113418 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHHostname
	I0729 19:35:00.261399 1113418 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:35:00.261758 1113418 main.go:141] libmachine: (old-k8s-version-021528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:c7:d2", ip: ""} in network mk-old-k8s-version-021528: {Iface:virbr1 ExpiryTime:2024-07-29 20:34:47 +0000 UTC Type:0 Mac:52:54:00:12:c7:d2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:old-k8s-version-021528 Clientid:01:52:54:00:12:c7:d2}
	I0729 19:35:00.261783 1113418 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined IP address 192.168.39.65 and MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:35:00.261891 1113418 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHPort
	I0729 19:35:00.262084 1113418 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHKeyPath
	I0729 19:35:00.262233 1113418 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHUsername
	I0729 19:35:00.262470 1113418 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/old-k8s-version-021528/id_rsa Username:docker}
	I0729 19:35:00.350575 1113418 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 19:35:00.355210 1113418 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 19:35:00.355247 1113418 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-1055011/.minikube/addons for local assets ...
	I0729 19:35:00.355317 1113418 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-1055011/.minikube/files for local assets ...
	I0729 19:35:00.355438 1113418 filesync.go:149] local asset: /home/jenkins/minikube-integration/19312-1055011/.minikube/files/etc/ssl/certs/10622722.pem -> 10622722.pem in /etc/ssl/certs
	I0729 19:35:00.355561 1113418 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 19:35:00.365845 1113418 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/files/etc/ssl/certs/10622722.pem --> /etc/ssl/certs/10622722.pem (1708 bytes)
	I0729 19:35:00.392865 1113418 start.go:296] duration metric: took 134.207648ms for postStartSetup
	I0729 19:35:00.392934 1113418 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetConfigRaw
	I0729 19:35:00.393625 1113418 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetIP
	I0729 19:35:00.396571 1113418 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:35:00.397010 1113418 main.go:141] libmachine: (old-k8s-version-021528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:c7:d2", ip: ""} in network mk-old-k8s-version-021528: {Iface:virbr1 ExpiryTime:2024-07-29 20:34:47 +0000 UTC Type:0 Mac:52:54:00:12:c7:d2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:old-k8s-version-021528 Clientid:01:52:54:00:12:c7:d2}
	I0729 19:35:00.397041 1113418 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined IP address 192.168.39.65 and MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:35:00.397248 1113418 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/old-k8s-version-021528/config.json ...
	I0729 19:35:00.397480 1113418 start.go:128] duration metric: took 30.149531346s to createHost
	I0729 19:35:00.397507 1113418 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHHostname
	I0729 19:35:00.400026 1113418 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:35:00.400390 1113418 main.go:141] libmachine: (old-k8s-version-021528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:c7:d2", ip: ""} in network mk-old-k8s-version-021528: {Iface:virbr1 ExpiryTime:2024-07-29 20:34:47 +0000 UTC Type:0 Mac:52:54:00:12:c7:d2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:old-k8s-version-021528 Clientid:01:52:54:00:12:c7:d2}
	I0729 19:35:00.400421 1113418 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined IP address 192.168.39.65 and MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:35:00.400585 1113418 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHPort
	I0729 19:35:00.400772 1113418 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHKeyPath
	I0729 19:35:00.400966 1113418 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHKeyPath
	I0729 19:35:00.401144 1113418 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHUsername
	I0729 19:35:00.401378 1113418 main.go:141] libmachine: Using SSH client type: native
	I0729 19:35:00.401603 1113418 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.65 22 <nil> <nil>}
	I0729 19:35:00.401632 1113418 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0729 19:35:00.520176 1113418 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722281700.506578621
	
	I0729 19:35:00.520218 1113418 fix.go:216] guest clock: 1722281700.506578621
	I0729 19:35:00.520225 1113418 fix.go:229] Guest: 2024-07-29 19:35:00.506578621 +0000 UTC Remote: 2024-07-29 19:35:00.397494773 +0000 UTC m=+46.696038150 (delta=109.083848ms)
	I0729 19:35:00.520257 1113418 fix.go:200] guest clock delta is within tolerance: 109.083848ms
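The fix.go lines above compare the guest's `date +%s.%N` output against the host clock and accept the ~109ms delta as within tolerance. A small sketch of that comparison, reusing the exact values from the log; the tolerance constant here is illustrative, not minikube's actual threshold:

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// guestClockDelta parses `date +%s.%N` output (seconds.nanoseconds, nine
// fractional digits assumed) and returns guest time minus host time.
func guestClockDelta(dateOutput string, host time.Time) (time.Duration, error) {
	parts := strings.SplitN(strings.TrimSpace(dateOutput), ".", 2)
	secs, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return 0, err
	}
	nanos := int64(0)
	if len(parts) == 2 {
		if nanos, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
			return 0, err
		}
	}
	return time.Unix(secs, nanos).Sub(host), nil
}

func main() {
	// Host-side timestamp taken from the "Remote:" field in the log above.
	host, _ := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST",
		"2024-07-29 19:35:00.397494773 +0000 UTC")
	delta, err := guestClockDelta("1722281700.506578621", host)
	if err != nil {
		panic(err)
	}
	const tolerance = 2 * time.Second // illustrative threshold only
	fmt.Printf("guest clock delta: %v, within tolerance: %v\n",
		delta, delta > -tolerance && delta < tolerance)
}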
	I0729 19:35:00.520268 1113418 start.go:83] releasing machines lock for "old-k8s-version-021528", held for 30.272497001s
	I0729 19:35:00.520299 1113418 main.go:141] libmachine: (old-k8s-version-021528) Calling .DriverName
	I0729 19:35:00.520579 1113418 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetIP
	I0729 19:35:00.523649 1113418 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:35:00.524025 1113418 main.go:141] libmachine: (old-k8s-version-021528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:c7:d2", ip: ""} in network mk-old-k8s-version-021528: {Iface:virbr1 ExpiryTime:2024-07-29 20:34:47 +0000 UTC Type:0 Mac:52:54:00:12:c7:d2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:old-k8s-version-021528 Clientid:01:52:54:00:12:c7:d2}
	I0729 19:35:00.524058 1113418 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined IP address 192.168.39.65 and MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:35:00.524185 1113418 main.go:141] libmachine: (old-k8s-version-021528) Calling .DriverName
	I0729 19:35:00.524746 1113418 main.go:141] libmachine: (old-k8s-version-021528) Calling .DriverName
	I0729 19:35:00.524944 1113418 main.go:141] libmachine: (old-k8s-version-021528) Calling .DriverName
	I0729 19:35:00.525055 1113418 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 19:35:00.525110 1113418 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHHostname
	I0729 19:35:00.525177 1113418 ssh_runner.go:195] Run: cat /version.json
	I0729 19:35:00.525199 1113418 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHHostname
	I0729 19:35:00.527977 1113418 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:35:00.528305 1113418 main.go:141] libmachine: (old-k8s-version-021528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:c7:d2", ip: ""} in network mk-old-k8s-version-021528: {Iface:virbr1 ExpiryTime:2024-07-29 20:34:47 +0000 UTC Type:0 Mac:52:54:00:12:c7:d2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:old-k8s-version-021528 Clientid:01:52:54:00:12:c7:d2}
	I0729 19:35:00.528337 1113418 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined IP address 192.168.39.65 and MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:35:00.528358 1113418 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:35:00.528496 1113418 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHPort
	I0729 19:35:00.528692 1113418 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHKeyPath
	I0729 19:35:00.528862 1113418 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHUsername
	I0729 19:35:00.528912 1113418 main.go:141] libmachine: (old-k8s-version-021528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:c7:d2", ip: ""} in network mk-old-k8s-version-021528: {Iface:virbr1 ExpiryTime:2024-07-29 20:34:47 +0000 UTC Type:0 Mac:52:54:00:12:c7:d2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:old-k8s-version-021528 Clientid:01:52:54:00:12:c7:d2}
	I0729 19:35:00.528940 1113418 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined IP address 192.168.39.65 and MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:35:00.529015 1113418 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/old-k8s-version-021528/id_rsa Username:docker}
	I0729 19:35:00.529128 1113418 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHPort
	I0729 19:35:00.529329 1113418 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHKeyPath
	I0729 19:35:00.529499 1113418 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHUsername
	I0729 19:35:00.529684 1113418 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/old-k8s-version-021528/id_rsa Username:docker}
	I0729 19:35:00.639507 1113418 ssh_runner.go:195] Run: systemctl --version
	I0729 19:35:00.648375 1113418 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 19:35:00.821205 1113418 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 19:35:00.828048 1113418 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 19:35:00.828143 1113418 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 19:35:00.846281 1113418 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 19:35:00.846318 1113418 start.go:495] detecting cgroup driver to use...
	I0729 19:35:00.846405 1113418 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 19:35:00.864526 1113418 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 19:35:00.883716 1113418 docker.go:217] disabling cri-docker service (if available) ...
	I0729 19:35:00.883780 1113418 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 19:35:00.899233 1113418 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 19:35:00.915912 1113418 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 19:35:01.047810 1113418 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 19:35:01.231307 1113418 docker.go:233] disabling docker service ...
	I0729 19:35:01.231381 1113418 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 19:35:01.247031 1113418 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 19:35:01.262229 1113418 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 19:35:01.397618 1113418 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 19:35:01.537000 1113418 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 19:35:01.554183 1113418 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 19:35:01.574722 1113418 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0729 19:35:01.574780 1113418 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:35:01.585521 1113418 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 19:35:01.585610 1113418 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:35:01.596463 1113418 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:35:01.609747 1113418 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:35:01.621499 1113418 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 19:35:01.632696 1113418 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 19:35:01.642719 1113418 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 19:35:01.642811 1113418 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 19:35:01.657422 1113418 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 19:35:01.667492 1113418 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 19:35:01.791811 1113418 ssh_runner.go:195] Run: sudo systemctl restart crio
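The crio.go steps above reconfigure cri-o by sed-editing the /etc/crio/crio.conf.d/02-crio.conf drop-in (pause image, cgroupfs cgroup manager, conmon cgroup) and then reloading systemd and restarting the service. A sketch that only assembles those command strings as logged; the remote execution over ssh_runner is omitted:

package main

import "fmt"

// buildCrioConfigCmds returns the shell commands shown in the log for pointing
// cri-o at a pause image and the cgroupfs cgroup manager. Only the command
// strings are built here; running them on the guest is out of scope.
func buildCrioConfigCmds(pauseImage, cgroupManager string) []string {
	conf := "/etc/crio/crio.conf.d/02-crio.conf"
	return []string{
		fmt.Sprintf(`sudo sed -i 's|^.*pause_image = .*$|pause_image = "%s"|' %s`, pauseImage, conf),
		fmt.Sprintf(`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "%s"|' %s`, cgroupManager, conf),
		`sudo sed -i '/conmon_cgroup = .*/d' ` + conf,
		`sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' ` + conf,
		"sudo systemctl daemon-reload",
		"sudo systemctl restart crio",
	}
}

func main() {
	for _, cmd := range buildCrioConfigCmds("registry.k8s.io/pause:3.2", "cgroupfs") {
		fmt.Println(cmd)
	}
}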
	I0729 19:35:01.949310 1113418 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 19:35:01.949400 1113418 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 19:35:01.955527 1113418 start.go:563] Will wait 60s for crictl version
	I0729 19:35:01.955597 1113418 ssh_runner.go:195] Run: which crictl
	I0729 19:35:01.959937 1113418 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 19:35:02.001607 1113418 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 19:35:02.001714 1113418 ssh_runner.go:195] Run: crio --version
	I0729 19:35:02.032220 1113418 ssh_runner.go:195] Run: crio --version
	I0729 19:35:02.064085 1113418 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0729 19:35:02.065166 1113418 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetIP
	I0729 19:35:02.068371 1113418 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:35:02.068754 1113418 main.go:141] libmachine: (old-k8s-version-021528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:c7:d2", ip: ""} in network mk-old-k8s-version-021528: {Iface:virbr1 ExpiryTime:2024-07-29 20:34:47 +0000 UTC Type:0 Mac:52:54:00:12:c7:d2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:old-k8s-version-021528 Clientid:01:52:54:00:12:c7:d2}
	I0729 19:35:02.068785 1113418 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined IP address 192.168.39.65 and MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:35:02.069051 1113418 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0729 19:35:02.073167 1113418 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 19:35:02.086617 1113418 kubeadm.go:883] updating cluster {Name:old-k8s-version-021528 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-021528 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.65 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 19:35:02.086765 1113418 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0729 19:35:02.086826 1113418 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 19:35:02.124555 1113418 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0729 19:35:02.124624 1113418 ssh_runner.go:195] Run: which lz4
	I0729 19:35:02.129189 1113418 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0729 19:35:02.134215 1113418 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0729 19:35:02.134258 1113418 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0729 19:35:03.903816 1113418 crio.go:462] duration metric: took 1.774666249s to copy over tarball
	I0729 19:35:03.903900 1113418 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0729 19:35:07.009196 1113418 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.105259532s)
	I0729 19:35:07.009241 1113418 crio.go:469] duration metric: took 3.10539516s to extract the tarball
	I0729 19:35:07.009252 1113418 ssh_runner.go:146] rm: /preloaded.tar.lz4
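The preload step above copies the v1.20.0 cri-o preload tarball to /preloaded.tar.lz4 and unpacks it into /var with xattrs preserved. A local sketch of the same tar invocation via os/exec; the scp/SSH plumbing is omitted and the paths are taken from the log, so treat this as illustrative rather than minikube's actual code:

package main

import (
	"fmt"
	"os/exec"
)

// extractPreload unpacks a preload tarball into dir using the tar flags the
// log shows (lz4 decompression, security.capability xattrs kept so image
// layers retain file capabilities). Assumes tar and lz4 are on PATH and the
// caller can sudo.
func extractPreload(tarball, dir string) error {
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", dir, "-xf", tarball)
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("tar: %v: %s", err, out)
	}
	return nil
}

func main() {
	if err := extractPreload("/preloaded.tar.lz4", "/var"); err != nil {
		fmt.Println("extract failed:", err)
	}
}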
	I0729 19:35:07.053305 1113418 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 19:35:07.102437 1113418 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0729 19:35:07.102468 1113418 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0729 19:35:07.102565 1113418 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 19:35:07.102589 1113418 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0729 19:35:07.102630 1113418 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0729 19:35:07.102641 1113418 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0729 19:35:07.102665 1113418 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0729 19:35:07.102821 1113418 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0729 19:35:07.102605 1113418 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 19:35:07.102827 1113418 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0729 19:35:07.104022 1113418 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 19:35:07.104090 1113418 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0729 19:35:07.104089 1113418 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0729 19:35:07.104094 1113418 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0729 19:35:07.104097 1113418 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0729 19:35:07.104163 1113418 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0729 19:35:07.104097 1113418 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0729 19:35:07.104635 1113418 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 19:35:07.265012 1113418 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 19:35:07.272810 1113418 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0729 19:35:07.281020 1113418 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0729 19:35:07.285683 1113418 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0729 19:35:07.288164 1113418 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0729 19:35:07.317114 1113418 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0729 19:35:07.348748 1113418 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0729 19:35:07.348796 1113418 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 19:35:07.348851 1113418 ssh_runner.go:195] Run: which crictl
	I0729 19:35:07.358573 1113418 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0729 19:35:07.371246 1113418 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 19:35:07.442411 1113418 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0729 19:35:07.442464 1113418 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0729 19:35:07.442474 1113418 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0729 19:35:07.442513 1113418 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0729 19:35:07.442528 1113418 ssh_runner.go:195] Run: which crictl
	I0729 19:35:07.442556 1113418 ssh_runner.go:195] Run: which crictl
	I0729 19:35:07.476462 1113418 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0729 19:35:07.476521 1113418 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0729 19:35:07.476524 1113418 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0729 19:35:07.476562 1113418 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0729 19:35:07.476578 1113418 ssh_runner.go:195] Run: which crictl
	I0729 19:35:07.476625 1113418 ssh_runner.go:195] Run: which crictl
	I0729 19:35:07.525954 1113418 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0729 19:35:07.525999 1113418 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0729 19:35:07.526002 1113418 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 19:35:07.526019 1113418 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0729 19:35:07.526038 1113418 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0729 19:35:07.526040 1113418 ssh_runner.go:195] Run: which crictl
	I0729 19:35:07.526066 1113418 ssh_runner.go:195] Run: which crictl
	I0729 19:35:07.609311 1113418 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0729 19:35:07.609344 1113418 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0729 19:35:07.609406 1113418 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0729 19:35:07.609421 1113418 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0729 19:35:07.609466 1113418 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0729 19:35:07.609497 1113418 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 19:35:07.609528 1113418 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0729 19:35:07.787347 1113418 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0729 19:35:07.787452 1113418 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0729 19:35:07.787470 1113418 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0729 19:35:07.787571 1113418 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 19:35:07.787650 1113418 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0729 19:35:07.787666 1113418 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0729 19:35:07.787895 1113418 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0729 19:35:07.943213 1113418 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0729 19:35:07.943259 1113418 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0729 19:35:07.943268 1113418 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0729 19:35:07.943333 1113418 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0729 19:35:07.943367 1113418 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0729 19:35:07.943427 1113418 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0729 19:35:07.943449 1113418 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0729 19:35:08.054210 1113418 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0729 19:35:08.081511 1113418 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0729 19:35:08.081519 1113418 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0729 19:35:08.081601 1113418 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0729 19:35:08.081637 1113418 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0729 19:35:08.088730 1113418 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0729 19:35:08.088792 1113418 cache_images.go:92] duration metric: took 986.295877ms to LoadCachedImages
	W0729 19:35:08.088888 1113418 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
	I0729 19:35:08.088907 1113418 kubeadm.go:934] updating node { 192.168.39.65 8443 v1.20.0 crio true true} ...
	I0729 19:35:08.089063 1113418 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-021528 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.65
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-021528 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 19:35:08.089133 1113418 ssh_runner.go:195] Run: crio config
	I0729 19:35:08.135999 1113418 cni.go:84] Creating CNI manager for ""
	I0729 19:35:08.136033 1113418 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 19:35:08.136045 1113418 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 19:35:08.136072 1113418 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.65 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-021528 NodeName:old-k8s-version-021528 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.65"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.65 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0729 19:35:08.136261 1113418 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.65
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-021528"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.65
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.65"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
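The kubeadm.go:187 block above is the generated kubeadm config for v1.20.0 (v1beta2 InitConfiguration and ClusterConfiguration plus KubeletConfiguration and KubeProxyConfiguration). A stripped-down sketch of rendering such a config with text/template; the struct and field names are illustrative, not minikube's bootstrapper types, and only a subset of the YAML above is reproduced:

package main

import (
	"os"
	"text/template"
)

// kubeadmParams is a simplified stand-in for the options logged at
// kubeadm.go:181; the field names are hypothetical.
type kubeadmParams struct {
	AdvertiseAddress string
	BindPort         int
	NodeName         string
	PodSubnet        string
	ServiceSubnet    string
	K8sVersion       string
	CRISocket        string
}

const tmpl = `apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  criSocket: {{.CRISocket}}
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.AdvertiseAddress}}
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
controlPlaneEndpoint: control-plane.minikube.internal:{{.BindPort}}
kubernetesVersion: {{.K8sVersion}}
networking:
  dnsDomain: cluster.local
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceSubnet}}
`

func main() {
	p := kubeadmParams{
		AdvertiseAddress: "192.168.39.65",
		BindPort:         8443,
		NodeName:         "old-k8s-version-021528",
		PodSubnet:        "10.244.0.0/16",
		ServiceSubnet:    "10.96.0.0/12",
		K8sVersion:       "v1.20.0",
		CRISocket:        "/var/run/crio/crio.sock",
	}
	t := template.Must(template.New("kubeadm").Parse(tmpl))
	_ = t.Execute(os.Stdout, p)
}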
	
	I0729 19:35:08.136333 1113418 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0729 19:35:08.147350 1113418 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 19:35:08.147430 1113418 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 19:35:08.158167 1113418 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0729 19:35:08.180245 1113418 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 19:35:08.201691 1113418 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0729 19:35:08.223296 1113418 ssh_runner.go:195] Run: grep 192.168.39.65	control-plane.minikube.internal$ /etc/hosts
	I0729 19:35:08.231015 1113418 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.65	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 19:35:08.248209 1113418 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 19:35:08.394076 1113418 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 19:35:08.412408 1113418 certs.go:68] Setting up /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/old-k8s-version-021528 for IP: 192.168.39.65
	I0729 19:35:08.412437 1113418 certs.go:194] generating shared ca certs ...
	I0729 19:35:08.412457 1113418 certs.go:226] acquiring lock for ca certs: {Name:mkd1f0b3d7e82ac23e713dd6b75409e103935b02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 19:35:08.412637 1113418 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.key
	I0729 19:35:08.412699 1113418 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/proxy-client-ca.key
	I0729 19:35:08.412715 1113418 certs.go:256] generating profile certs ...
	I0729 19:35:08.412792 1113418 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/old-k8s-version-021528/client.key
	I0729 19:35:08.412812 1113418 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/old-k8s-version-021528/client.crt with IP's: []
	I0729 19:35:08.588200 1113418 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/old-k8s-version-021528/client.crt ...
	I0729 19:35:08.588233 1113418 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/old-k8s-version-021528/client.crt: {Name:mk79265f7e4e8f2a1c7cae0f161ed1f49b6bd9de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 19:35:08.606500 1113418 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/old-k8s-version-021528/client.key ...
	I0729 19:35:08.606539 1113418 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/old-k8s-version-021528/client.key: {Name:mka8afd28e0df6b10f22b32fd80519fe2c75f176 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 19:35:08.606769 1113418 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/old-k8s-version-021528/apiserver.key.1bfec4c5
	I0729 19:35:08.606800 1113418 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/old-k8s-version-021528/apiserver.crt.1bfec4c5 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.65]
	I0729 19:35:08.840337 1113418 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/old-k8s-version-021528/apiserver.crt.1bfec4c5 ...
	I0729 19:35:08.840379 1113418 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/old-k8s-version-021528/apiserver.crt.1bfec4c5: {Name:mk4c3889f19377860e930a0b13606dc585d817fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 19:35:08.847678 1113418 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/old-k8s-version-021528/apiserver.key.1bfec4c5 ...
	I0729 19:35:08.847719 1113418 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/old-k8s-version-021528/apiserver.key.1bfec4c5: {Name:mk614014e9bfe4f8e185a039ab5c41025e55c6a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 19:35:08.847852 1113418 certs.go:381] copying /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/old-k8s-version-021528/apiserver.crt.1bfec4c5 -> /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/old-k8s-version-021528/apiserver.crt
	I0729 19:35:08.847957 1113418 certs.go:385] copying /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/old-k8s-version-021528/apiserver.key.1bfec4c5 -> /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/old-k8s-version-021528/apiserver.key
	I0729 19:35:08.848043 1113418 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/old-k8s-version-021528/proxy-client.key
	I0729 19:35:08.848081 1113418 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/old-k8s-version-021528/proxy-client.crt with IP's: []
	I0729 19:35:08.948029 1113418 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/old-k8s-version-021528/proxy-client.crt ...
	I0729 19:35:08.948064 1113418 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/old-k8s-version-021528/proxy-client.crt: {Name:mk00791235583fe81d37a14428f36c2b931f2c01 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 19:35:08.948292 1113418 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/old-k8s-version-021528/proxy-client.key ...
	I0729 19:35:08.948314 1113418 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/old-k8s-version-021528/proxy-client.key: {Name:mk58dc7332a8f3ed921a1ae3bf4d49011f43ddba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 19:35:08.948594 1113418 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/1062272.pem (1338 bytes)
	W0729 19:35:08.948649 1113418 certs.go:480] ignoring /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/1062272_empty.pem, impossibly tiny 0 bytes
	I0729 19:35:08.948665 1113418 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 19:35:08.948700 1113418 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem (1082 bytes)
	I0729 19:35:08.948736 1113418 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/cert.pem (1123 bytes)
	I0729 19:35:08.948768 1113418 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/key.pem (1679 bytes)
	I0729 19:35:08.948823 1113418 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/files/etc/ssl/certs/10622722.pem (1708 bytes)
	I0729 19:35:08.949478 1113418 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 19:35:08.979303 1113418 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0729 19:35:09.006372 1113418 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 19:35:09.032911 1113418 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0729 19:35:09.059602 1113418 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/old-k8s-version-021528/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0729 19:35:09.088266 1113418 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/old-k8s-version-021528/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0729 19:35:09.117719 1113418 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/old-k8s-version-021528/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 19:35:09.144822 1113418 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/old-k8s-version-021528/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0729 19:35:09.172052 1113418 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/1062272.pem --> /usr/share/ca-certificates/1062272.pem (1338 bytes)
	I0729 19:35:09.200147 1113418 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/files/etc/ssl/certs/10622722.pem --> /usr/share/ca-certificates/10622722.pem (1708 bytes)
	I0729 19:35:09.226286 1113418 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 19:35:09.257033 1113418 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 19:35:09.285562 1113418 ssh_runner.go:195] Run: openssl version
	I0729 19:35:09.294478 1113418 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10622722.pem && ln -fs /usr/share/ca-certificates/10622722.pem /etc/ssl/certs/10622722.pem"
	I0729 19:35:09.309696 1113418 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10622722.pem
	I0729 19:35:09.314952 1113418 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 18:30 /usr/share/ca-certificates/10622722.pem
	I0729 19:35:09.315042 1113418 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10622722.pem
	I0729 19:35:09.322333 1113418 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10622722.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 19:35:09.343868 1113418 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 19:35:09.363290 1113418 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 19:35:09.371126 1113418 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 18:17 /usr/share/ca-certificates/minikubeCA.pem
	I0729 19:35:09.371205 1113418 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 19:35:09.380576 1113418 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 19:35:09.396003 1113418 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1062272.pem && ln -fs /usr/share/ca-certificates/1062272.pem /etc/ssl/certs/1062272.pem"
	I0729 19:35:09.410150 1113418 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1062272.pem
	I0729 19:35:09.415887 1113418 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 18:30 /usr/share/ca-certificates/1062272.pem
	I0729 19:35:09.415966 1113418 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1062272.pem
	I0729 19:35:09.422023 1113418 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1062272.pem /etc/ssl/certs/51391683.0"
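	Note: the test/ln/openssl sequences above are how minikube installs its CA certificates into the node's trust store: each PEM is copied under /usr/share/ca-certificates, its OpenSSL subject hash is computed, and a <hash>.0 symlink is created in /etc/ssl/certs so TLS clients on the node can resolve it. A minimal sketch of the same pattern for one certificate (file name taken from the log; the HASH variable is illustrative):
	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"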
	I0729 19:35:09.435122 1113418 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 19:35:09.440030 1113418 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
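	Note: the failed stat above is how minikube infers a likely first start: when /var/lib/minikube/certs/apiserver-kubelet-client.crt is absent it proceeds with a full kubeadm init instead of reusing existing cluster state. Run by hand, the same check would look roughly like this (the echoed message is illustrative):
	sudo stat /var/lib/minikube/certs/apiserver-kubelet-client.crt 2>/dev/null || echo "no client cert yet, assuming first start"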
	I0729 19:35:09.440105 1113418 kubeadm.go:392] StartCluster: {Name:old-k8s-version-021528 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-021528 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.65 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 19:35:09.440214 1113418 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 19:35:09.440290 1113418 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 19:35:09.484761 1113418 cri.go:89] found id: ""
	I0729 19:35:09.484854 1113418 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 19:35:09.497709 1113418 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 19:35:09.512948 1113418 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 19:35:09.528085 1113418 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 19:35:09.528116 1113418 kubeadm.go:157] found existing configuration files:
	
	I0729 19:35:09.528173 1113418 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 19:35:09.540151 1113418 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 19:35:09.540229 1113418 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 19:35:09.550546 1113418 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 19:35:09.560169 1113418 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 19:35:09.560248 1113418 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 19:35:09.570994 1113418 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 19:35:09.581443 1113418 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 19:35:09.581524 1113418 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 19:35:09.593207 1113418 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 19:35:09.605053 1113418 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 19:35:09.605129 1113418 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 19:35:09.615110 1113418 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 19:35:09.925548 1113418 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 19:37:08.839483 1113418 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0729 19:37:08.839577 1113418 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0729 19:37:08.841265 1113418 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0729 19:37:08.841328 1113418 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 19:37:08.841438 1113418 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 19:37:08.841571 1113418 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 19:37:08.841721 1113418 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0729 19:37:08.841825 1113418 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 19:37:08.843251 1113418 out.go:204]   - Generating certificates and keys ...
	I0729 19:37:08.843347 1113418 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 19:37:08.843438 1113418 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 19:37:08.843522 1113418 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0729 19:37:08.843596 1113418 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0729 19:37:08.843655 1113418 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0729 19:37:08.843698 1113418 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0729 19:37:08.843742 1113418 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0729 19:37:08.843889 1113418 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-021528] and IPs [192.168.39.65 127.0.0.1 ::1]
	I0729 19:37:08.843957 1113418 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0729 19:37:08.844118 1113418 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-021528] and IPs [192.168.39.65 127.0.0.1 ::1]
	I0729 19:37:08.844176 1113418 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0729 19:37:08.844266 1113418 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0729 19:37:08.844322 1113418 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0729 19:37:08.844404 1113418 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 19:37:08.844475 1113418 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 19:37:08.844526 1113418 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 19:37:08.844622 1113418 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 19:37:08.844707 1113418 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 19:37:08.844845 1113418 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 19:37:08.844921 1113418 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 19:37:08.844978 1113418 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 19:37:08.845076 1113418 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 19:37:08.846472 1113418 out.go:204]   - Booting up control plane ...
	I0729 19:37:08.846564 1113418 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 19:37:08.846648 1113418 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 19:37:08.846732 1113418 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 19:37:08.846822 1113418 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 19:37:08.846998 1113418 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0729 19:37:08.847045 1113418 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0729 19:37:08.847102 1113418 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 19:37:08.847283 1113418 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 19:37:08.847394 1113418 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 19:37:08.847618 1113418 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 19:37:08.847692 1113418 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 19:37:08.847884 1113418 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 19:37:08.847984 1113418 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 19:37:08.848240 1113418 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 19:37:08.848303 1113418 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 19:37:08.848456 1113418 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 19:37:08.848463 1113418 kubeadm.go:310] 
	I0729 19:37:08.848513 1113418 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0729 19:37:08.848571 1113418 kubeadm.go:310] 		timed out waiting for the condition
	I0729 19:37:08.848581 1113418 kubeadm.go:310] 
	I0729 19:37:08.848629 1113418 kubeadm.go:310] 	This error is likely caused by:
	I0729 19:37:08.848677 1113418 kubeadm.go:310] 		- The kubelet is not running
	I0729 19:37:08.848779 1113418 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0729 19:37:08.848792 1113418 kubeadm.go:310] 
	I0729 19:37:08.848935 1113418 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0729 19:37:08.848986 1113418 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0729 19:37:08.849033 1113418 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0729 19:37:08.849042 1113418 kubeadm.go:310] 
	I0729 19:37:08.849192 1113418 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0729 19:37:08.849313 1113418 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0729 19:37:08.849323 1113418 kubeadm.go:310] 
	I0729 19:37:08.849448 1113418 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0729 19:37:08.849566 1113418 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0729 19:37:08.849637 1113418 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0729 19:37:08.849696 1113418 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0729 19:37:08.849732 1113418 kubeadm.go:310] 
	W0729 19:37:08.849841 1113418 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-021528] and IPs [192.168.39.65 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-021528] and IPs [192.168.39.65 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
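	Note: this init attempt (and the retry later in the log) fails the same way: the kubelet never answers http://localhost:10248/healthz, so kubeadm times out waiting for the control plane. The troubleshooting commands are already spelled out in the message above; run by hand on the node they would look roughly like this (the crio.sock path matches the CRI-O runtime this job uses, and the tail length is arbitrary):
	sudo systemctl status kubelet
	sudo journalctl -xeu kubelet | tail -n 100
	curl -sSL http://localhost:10248/healthz
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause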
	
	I0729 19:37:08.849899 1113418 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0729 19:37:11.125730 1113418 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.275798259s)
	I0729 19:37:11.125828 1113418 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 19:37:11.141216 1113418 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 19:37:11.152076 1113418 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 19:37:11.152098 1113418 kubeadm.go:157] found existing configuration files:
	
	I0729 19:37:11.152144 1113418 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 19:37:11.162492 1113418 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 19:37:11.162556 1113418 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 19:37:11.173631 1113418 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 19:37:11.183034 1113418 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 19:37:11.183109 1113418 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 19:37:11.192831 1113418 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 19:37:11.202396 1113418 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 19:37:11.202459 1113418 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 19:37:11.212157 1113418 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 19:37:11.221578 1113418 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 19:37:11.221649 1113418 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 19:37:11.234622 1113418 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 19:37:11.318047 1113418 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0729 19:37:11.318171 1113418 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 19:37:11.461420 1113418 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 19:37:11.461551 1113418 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 19:37:11.461658 1113418 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0729 19:37:11.641300 1113418 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 19:37:11.643199 1113418 out.go:204]   - Generating certificates and keys ...
	I0729 19:37:11.643311 1113418 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 19:37:11.643400 1113418 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 19:37:11.643513 1113418 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0729 19:37:11.643624 1113418 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0729 19:37:11.643745 1113418 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0729 19:37:11.643828 1113418 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0729 19:37:11.643939 1113418 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0729 19:37:11.644364 1113418 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0729 19:37:11.644687 1113418 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0729 19:37:11.644989 1113418 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0729 19:37:11.645090 1113418 kubeadm.go:310] [certs] Using the existing "sa" key
	I0729 19:37:11.645186 1113418 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 19:37:11.834348 1113418 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 19:37:11.929750 1113418 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 19:37:12.113097 1113418 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 19:37:12.190944 1113418 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 19:37:12.208918 1113418 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 19:37:12.210120 1113418 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 19:37:12.210179 1113418 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 19:37:12.395641 1113418 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 19:37:12.397266 1113418 out.go:204]   - Booting up control plane ...
	I0729 19:37:12.397413 1113418 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 19:37:12.404969 1113418 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 19:37:12.408985 1113418 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 19:37:12.409102 1113418 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 19:37:12.411300 1113418 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0729 19:37:52.412766 1113418 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0729 19:37:52.413014 1113418 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 19:37:52.413225 1113418 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 19:37:57.413843 1113418 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 19:37:57.414073 1113418 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 19:38:07.414613 1113418 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 19:38:07.414933 1113418 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 19:38:27.415750 1113418 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 19:38:27.415942 1113418 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 19:39:07.415244 1113418 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 19:39:07.415512 1113418 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 19:39:07.415522 1113418 kubeadm.go:310] 
	I0729 19:39:07.415556 1113418 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0729 19:39:07.415619 1113418 kubeadm.go:310] 		timed out waiting for the condition
	I0729 19:39:07.415645 1113418 kubeadm.go:310] 
	I0729 19:39:07.415695 1113418 kubeadm.go:310] 	This error is likely caused by:
	I0729 19:39:07.415737 1113418 kubeadm.go:310] 		- The kubelet is not running
	I0729 19:39:07.415877 1113418 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0729 19:39:07.415890 1113418 kubeadm.go:310] 
	I0729 19:39:07.416039 1113418 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0729 19:39:07.416086 1113418 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0729 19:39:07.416132 1113418 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0729 19:39:07.416143 1113418 kubeadm.go:310] 
	I0729 19:39:07.416369 1113418 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0729 19:39:07.416462 1113418 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0729 19:39:07.416477 1113418 kubeadm.go:310] 
	I0729 19:39:07.416609 1113418 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0729 19:39:07.416695 1113418 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0729 19:39:07.416815 1113418 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0729 19:39:07.416895 1113418 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0729 19:39:07.416906 1113418 kubeadm.go:310] 
	I0729 19:39:07.418102 1113418 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 19:39:07.418221 1113418 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0729 19:39:07.418318 1113418 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0729 19:39:07.418402 1113418 kubeadm.go:394] duration metric: took 3m57.97830366s to StartCluster
	I0729 19:39:07.418469 1113418 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:39:07.418527 1113418 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:39:07.461297 1113418 cri.go:89] found id: ""
	I0729 19:39:07.461332 1113418 logs.go:276] 0 containers: []
	W0729 19:39:07.461340 1113418 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:39:07.461357 1113418 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:39:07.461415 1113418 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:39:07.494969 1113418 cri.go:89] found id: ""
	I0729 19:39:07.495002 1113418 logs.go:276] 0 containers: []
	W0729 19:39:07.495009 1113418 logs.go:278] No container was found matching "etcd"
	I0729 19:39:07.495016 1113418 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:39:07.495075 1113418 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:39:07.528137 1113418 cri.go:89] found id: ""
	I0729 19:39:07.528166 1113418 logs.go:276] 0 containers: []
	W0729 19:39:07.528173 1113418 logs.go:278] No container was found matching "coredns"
	I0729 19:39:07.528179 1113418 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:39:07.528232 1113418 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:39:07.560947 1113418 cri.go:89] found id: ""
	I0729 19:39:07.560973 1113418 logs.go:276] 0 containers: []
	W0729 19:39:07.560980 1113418 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:39:07.560987 1113418 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:39:07.561049 1113418 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:39:07.593607 1113418 cri.go:89] found id: ""
	I0729 19:39:07.593639 1113418 logs.go:276] 0 containers: []
	W0729 19:39:07.593648 1113418 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:39:07.593657 1113418 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:39:07.593711 1113418 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:39:07.626423 1113418 cri.go:89] found id: ""
	I0729 19:39:07.626452 1113418 logs.go:276] 0 containers: []
	W0729 19:39:07.626460 1113418 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:39:07.626466 1113418 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:39:07.626524 1113418 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:39:07.665053 1113418 cri.go:89] found id: ""
	I0729 19:39:07.665080 1113418 logs.go:276] 0 containers: []
	W0729 19:39:07.665087 1113418 logs.go:278] No container was found matching "kindnet"
	I0729 19:39:07.665096 1113418 logs.go:123] Gathering logs for kubelet ...
	I0729 19:39:07.665107 1113418 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:39:07.717897 1113418 logs.go:123] Gathering logs for dmesg ...
	I0729 19:39:07.717937 1113418 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:39:07.732905 1113418 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:39:07.732932 1113418 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:39:07.842732 1113418 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:39:07.842759 1113418 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:39:07.842777 1113418 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:39:07.945203 1113418 logs.go:123] Gathering logs for container status ...
	I0729 19:39:07.945255 1113418 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0729 19:39:07.996675 1113418 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0729 19:39:07.996734 1113418 out.go:239] * 
	* 
	W0729 19:39:07.996808 1113418 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0729 19:39:07.996841 1113418 out.go:239] * 
	* 
	W0729 19:39:07.998193 1113418 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 19:39:08.001849 1113418 out.go:177] 
	W0729 19:39:08.003111 1113418 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0729 19:39:08.003183 1113418 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0729 19:39:08.003215 1113418 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0729 19:39:08.004807 1113418 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-linux-amd64 start -p old-k8s-version-021528 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-021528 -n old-k8s-version-021528
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-021528 -n old-k8s-version-021528: exit status 6 (227.862075ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0729 19:39:08.286425 1120072 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-021528" does not appear in /home/jenkins/minikube-integration/19312-1055011/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-021528" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (294.61s)
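
The kubeadm output captured above shows the kubelet on the v1.20.0 node never answered its health check, and minikube's own suggestion points at the kubelet cgroup driver as the most likely mismatch. Below is a minimal triage sketch based only on the hints printed in this run: the binary path, profile name (old-k8s-version-021528), and flags are copied from the failing command, and --extra-config=kubelet.cgroup-driver=systemd is the workaround minikube suggests, not a confirmed fix.

    # Retry the failing start with the suggested kubelet cgroup-driver override
    # (assumption: otherwise the same flags as the failing run above)
    out/minikube-linux-amd64 start -p old-k8s-version-021528 --memory=2200 --driver=kvm2 \
      --container-runtime=crio --kubernetes-version=v1.20.0 \
      --extra-config=kubelet.cgroup-driver=systemd

    # Inspect the kubelet and control-plane containers inside the VM,
    # exactly as the kubeadm error text above recommends
    out/minikube-linux-amd64 ssh -p old-k8s-version-021528 "sudo systemctl status kubelet"
    out/minikube-linux-amd64 ssh -p old-k8s-version-021528 "sudo journalctl -xeu kubelet"
    out/minikube-linux-amd64 ssh -p old-k8s-version-021528 \
      "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"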

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (138.98s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-843792 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p no-preload-843792 --alsologtostderr -v=3: exit status 82 (2m0.499525491s)

                                                
                                                
-- stdout --
	* Stopping node "no-preload-843792"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 19:36:22.335320 1118905 out.go:291] Setting OutFile to fd 1 ...
	I0729 19:36:22.335452 1118905 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 19:36:22.335463 1118905 out.go:304] Setting ErrFile to fd 2...
	I0729 19:36:22.335470 1118905 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 19:36:22.335669 1118905 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19312-1055011/.minikube/bin
	I0729 19:36:22.335906 1118905 out.go:298] Setting JSON to false
	I0729 19:36:22.335976 1118905 mustload.go:65] Loading cluster: no-preload-843792
	I0729 19:36:22.336314 1118905 config.go:182] Loaded profile config "no-preload-843792": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0729 19:36:22.336380 1118905 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/no-preload-843792/config.json ...
	I0729 19:36:22.336538 1118905 mustload.go:65] Loading cluster: no-preload-843792
	I0729 19:36:22.336679 1118905 config.go:182] Loaded profile config "no-preload-843792": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0729 19:36:22.336716 1118905 stop.go:39] StopHost: no-preload-843792
	I0729 19:36:22.337213 1118905 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:36:22.337312 1118905 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:36:22.352379 1118905 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39421
	I0729 19:36:22.352865 1118905 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:36:22.353459 1118905 main.go:141] libmachine: Using API Version  1
	I0729 19:36:22.353486 1118905 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:36:22.353848 1118905 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:36:22.355587 1118905 out.go:177] * Stopping node "no-preload-843792"  ...
	I0729 19:36:22.356988 1118905 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0729 19:36:22.357021 1118905 main.go:141] libmachine: (no-preload-843792) Calling .DriverName
	I0729 19:36:22.357267 1118905 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0729 19:36:22.357296 1118905 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHHostname
	I0729 19:36:22.360110 1118905 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:36:22.360529 1118905 main.go:141] libmachine: (no-preload-843792) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:0e:8c", ip: ""} in network mk-no-preload-843792: {Iface:virbr2 ExpiryTime:2024-07-29 20:35:17 +0000 UTC Type:0 Mac:52:54:00:ae:0e:8c Iaid: IPaddr:192.168.50.248 Prefix:24 Hostname:no-preload-843792 Clientid:01:52:54:00:ae:0e:8c}
	I0729 19:36:22.360548 1118905 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined IP address 192.168.50.248 and MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:36:22.360721 1118905 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHPort
	I0729 19:36:22.360883 1118905 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHKeyPath
	I0729 19:36:22.361072 1118905 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHUsername
	I0729 19:36:22.361211 1118905 sshutil.go:53] new ssh client: &{IP:192.168.50.248 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/no-preload-843792/id_rsa Username:docker}
	I0729 19:36:22.460967 1118905 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0729 19:36:22.520315 1118905 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0729 19:36:22.580575 1118905 main.go:141] libmachine: Stopping "no-preload-843792"...
	I0729 19:36:22.580646 1118905 main.go:141] libmachine: (no-preload-843792) Calling .GetState
	I0729 19:36:22.582669 1118905 main.go:141] libmachine: (no-preload-843792) Calling .Stop
	I0729 19:36:22.586919 1118905 main.go:141] libmachine: (no-preload-843792) Waiting for machine to stop 0/120
	I0729 19:36:23.588531 1118905 main.go:141] libmachine: (no-preload-843792) Waiting for machine to stop 1/120
	I0729 19:36:24.590722 1118905 main.go:141] libmachine: (no-preload-843792) Waiting for machine to stop 2/120
	I0729 19:36:25.592403 1118905 main.go:141] libmachine: (no-preload-843792) Waiting for machine to stop 3/120
	I0729 19:36:26.594082 1118905 main.go:141] libmachine: (no-preload-843792) Waiting for machine to stop 4/120
	I0729 19:36:27.596231 1118905 main.go:141] libmachine: (no-preload-843792) Waiting for machine to stop 5/120
	I0729 19:36:28.597801 1118905 main.go:141] libmachine: (no-preload-843792) Waiting for machine to stop 6/120
	I0729 19:36:29.599407 1118905 main.go:141] libmachine: (no-preload-843792) Waiting for machine to stop 7/120
	I0729 19:36:30.600886 1118905 main.go:141] libmachine: (no-preload-843792) Waiting for machine to stop 8/120
	I0729 19:36:31.602526 1118905 main.go:141] libmachine: (no-preload-843792) Waiting for machine to stop 9/120
	I0729 19:36:32.604937 1118905 main.go:141] libmachine: (no-preload-843792) Waiting for machine to stop 10/120
	I0729 19:36:33.606323 1118905 main.go:141] libmachine: (no-preload-843792) Waiting for machine to stop 11/120
	I0729 19:36:34.608096 1118905 main.go:141] libmachine: (no-preload-843792) Waiting for machine to stop 12/120
	I0729 19:36:35.609550 1118905 main.go:141] libmachine: (no-preload-843792) Waiting for machine to stop 13/120
	I0729 19:36:36.611362 1118905 main.go:141] libmachine: (no-preload-843792) Waiting for machine to stop 14/120
	I0729 19:36:37.613139 1118905 main.go:141] libmachine: (no-preload-843792) Waiting for machine to stop 15/120
	I0729 19:36:38.614297 1118905 main.go:141] libmachine: (no-preload-843792) Waiting for machine to stop 16/120
	I0729 19:36:39.616433 1118905 main.go:141] libmachine: (no-preload-843792) Waiting for machine to stop 17/120
	I0729 19:36:40.618107 1118905 main.go:141] libmachine: (no-preload-843792) Waiting for machine to stop 18/120
	I0729 19:36:41.619597 1118905 main.go:141] libmachine: (no-preload-843792) Waiting for machine to stop 19/120
	I0729 19:36:42.621188 1118905 main.go:141] libmachine: (no-preload-843792) Waiting for machine to stop 20/120
	I0729 19:36:43.622521 1118905 main.go:141] libmachine: (no-preload-843792) Waiting for machine to stop 21/120
	I0729 19:36:44.623882 1118905 main.go:141] libmachine: (no-preload-843792) Waiting for machine to stop 22/120
	I0729 19:36:45.625662 1118905 main.go:141] libmachine: (no-preload-843792) Waiting for machine to stop 23/120
	I0729 19:36:46.627042 1118905 main.go:141] libmachine: (no-preload-843792) Waiting for machine to stop 24/120
	I0729 19:36:47.629713 1118905 main.go:141] libmachine: (no-preload-843792) Waiting for machine to stop 25/120
	I0729 19:36:48.631701 1118905 main.go:141] libmachine: (no-preload-843792) Waiting for machine to stop 26/120
	I0729 19:36:49.633002 1118905 main.go:141] libmachine: (no-preload-843792) Waiting for machine to stop 27/120
	I0729 19:36:50.634614 1118905 main.go:141] libmachine: (no-preload-843792) Waiting for machine to stop 28/120
	I0729 19:36:51.636010 1118905 main.go:141] libmachine: (no-preload-843792) Waiting for machine to stop 29/120
	I0729 19:36:52.638133 1118905 main.go:141] libmachine: (no-preload-843792) Waiting for machine to stop 30/120
	I0729 19:36:53.639628 1118905 main.go:141] libmachine: (no-preload-843792) Waiting for machine to stop 31/120
	I0729 19:36:54.641013 1118905 main.go:141] libmachine: (no-preload-843792) Waiting for machine to stop 32/120
	I0729 19:36:55.642539 1118905 main.go:141] libmachine: (no-preload-843792) Waiting for machine to stop 33/120
	I0729 19:36:56.643913 1118905 main.go:141] libmachine: (no-preload-843792) Waiting for machine to stop 34/120
	I0729 19:36:57.645708 1118905 main.go:141] libmachine: (no-preload-843792) Waiting for machine to stop 35/120
	I0729 19:36:58.647199 1118905 main.go:141] libmachine: (no-preload-843792) Waiting for machine to stop 36/120
	I0729 19:36:59.648691 1118905 main.go:141] libmachine: (no-preload-843792) Waiting for machine to stop 37/120
	I0729 19:37:00.650876 1118905 main.go:141] libmachine: (no-preload-843792) Waiting for machine to stop 38/120
	I0729 19:37:01.652355 1118905 main.go:141] libmachine: (no-preload-843792) Waiting for machine to stop 39/120
	I0729 19:37:02.654391 1118905 main.go:141] libmachine: (no-preload-843792) Waiting for machine to stop 40/120
	I0729 19:37:03.655802 1118905 main.go:141] libmachine: (no-preload-843792) Waiting for machine to stop 41/120
	I0729 19:37:04.657336 1118905 main.go:141] libmachine: (no-preload-843792) Waiting for machine to stop 42/120
	I0729 19:37:05.658665 1118905 main.go:141] libmachine: (no-preload-843792) Waiting for machine to stop 43/120
	I0729 19:37:06.660162 1118905 main.go:141] libmachine: (no-preload-843792) Waiting for machine to stop 44/120
	I0729 19:37:07.662136 1118905 main.go:141] libmachine: (no-preload-843792) Waiting for machine to stop 45/120
	I0729 19:37:08.663691 1118905 main.go:141] libmachine: (no-preload-843792) Waiting for machine to stop 46/120
	I0729 19:37:09.665208 1118905 main.go:141] libmachine: (no-preload-843792) Waiting for machine to stop 47/120
	I0729 19:37:10.666625 1118905 main.go:141] libmachine: (no-preload-843792) Waiting for machine to stop 48/120
	I0729 19:37:11.668029 1118905 main.go:141] libmachine: (no-preload-843792) Waiting for machine to stop 49/120
	I0729 19:37:12.669997 1118905 main.go:141] libmachine: (no-preload-843792) Waiting for machine to stop 50/120
	I0729 19:37:13.671447 1118905 main.go:141] libmachine: (no-preload-843792) Waiting for machine to stop 51/120
	I0729 19:37:14.673326 1118905 main.go:141] libmachine: (no-preload-843792) Waiting for machine to stop 52/120
	I0729 19:37:15.675051 1118905 main.go:141] libmachine: (no-preload-843792) Waiting for machine to stop 53/120
	I0729 19:37:16.677362 1118905 main.go:141] libmachine: (no-preload-843792) Waiting for machine to stop 54/120
	I0729 19:37:17.679626 1118905 main.go:141] libmachine: (no-preload-843792) Waiting for machine to stop 55/120
	I0729 19:37:18.680936 1118905 main.go:141] libmachine: (no-preload-843792) Waiting for machine to stop 56/120
	I0729 19:37:19.682270 1118905 main.go:141] libmachine: (no-preload-843792) Waiting for machine to stop 57/120
	I0729 19:37:20.683844 1118905 main.go:141] libmachine: (no-preload-843792) Waiting for machine to stop 58/120
	I0729 19:37:21.685432 1118905 main.go:141] libmachine: (no-preload-843792) Waiting for machine to stop 59/120
	I0729 19:37:22.687786 1118905 main.go:141] libmachine: (no-preload-843792) Waiting for machine to stop 60/120
	I0729 19:37:23.689399 1118905 main.go:141] libmachine: (no-preload-843792) Waiting for machine to stop 61/120
	I0729 19:37:24.691070 1118905 main.go:141] libmachine: (no-preload-843792) Waiting for machine to stop 62/120
	I0729 19:37:25.692435 1118905 main.go:141] libmachine: (no-preload-843792) Waiting for machine to stop 63/120
	I0729 19:37:26.693658 1118905 main.go:141] libmachine: (no-preload-843792) Waiting for machine to stop 64/120
	I0729 19:37:27.695591 1118905 main.go:141] libmachine: (no-preload-843792) Waiting for machine to stop 65/120
	I0729 19:37:28.697152 1118905 main.go:141] libmachine: (no-preload-843792) Waiting for machine to stop 66/120
	I0729 19:37:29.698373 1118905 main.go:141] libmachine: (no-preload-843792) Waiting for machine to stop 67/120
	I0729 19:37:30.699655 1118905 main.go:141] libmachine: (no-preload-843792) Waiting for machine to stop 68/120
	I0729 19:37:31.701166 1118905 main.go:141] libmachine: (no-preload-843792) Waiting for machine to stop 69/120
	I0729 19:37:32.703280 1118905 main.go:141] libmachine: (no-preload-843792) Waiting for machine to stop 70/120
	I0729 19:37:33.704527 1118905 main.go:141] libmachine: (no-preload-843792) Waiting for machine to stop 71/120
	I0729 19:37:34.705716 1118905 main.go:141] libmachine: (no-preload-843792) Waiting for machine to stop 72/120
	I0729 19:37:35.707163 1118905 main.go:141] libmachine: (no-preload-843792) Waiting for machine to stop 73/120
	I0729 19:37:36.708361 1118905 main.go:141] libmachine: (no-preload-843792) Waiting for machine to stop 74/120
	I0729 19:37:37.710203 1118905 main.go:141] libmachine: (no-preload-843792) Waiting for machine to stop 75/120
	I0729 19:37:38.712141 1118905 main.go:141] libmachine: (no-preload-843792) Waiting for machine to stop 76/120
	I0729 19:37:39.713443 1118905 main.go:141] libmachine: (no-preload-843792) Waiting for machine to stop 77/120
	I0729 19:37:40.714925 1118905 main.go:141] libmachine: (no-preload-843792) Waiting for machine to stop 78/120
	I0729 19:37:41.716357 1118905 main.go:141] libmachine: (no-preload-843792) Waiting for machine to stop 79/120
	I0729 19:37:42.718634 1118905 main.go:141] libmachine: (no-preload-843792) Waiting for machine to stop 80/120
	I0729 19:37:43.719944 1118905 main.go:141] libmachine: (no-preload-843792) Waiting for machine to stop 81/120
	I0729 19:37:44.721383 1118905 main.go:141] libmachine: (no-preload-843792) Waiting for machine to stop 82/120
	I0729 19:37:45.722609 1118905 main.go:141] libmachine: (no-preload-843792) Waiting for machine to stop 83/120
	I0729 19:37:46.723994 1118905 main.go:141] libmachine: (no-preload-843792) Waiting for machine to stop 84/120
	I0729 19:37:47.725787 1118905 main.go:141] libmachine: (no-preload-843792) Waiting for machine to stop 85/120
	I0729 19:37:48.727251 1118905 main.go:141] libmachine: (no-preload-843792) Waiting for machine to stop 86/120
	I0729 19:37:49.728571 1118905 main.go:141] libmachine: (no-preload-843792) Waiting for machine to stop 87/120
	I0729 19:37:50.729954 1118905 main.go:141] libmachine: (no-preload-843792) Waiting for machine to stop 88/120
	I0729 19:37:51.731232 1118905 main.go:141] libmachine: (no-preload-843792) Waiting for machine to stop 89/120
	I0729 19:37:52.733125 1118905 main.go:141] libmachine: (no-preload-843792) Waiting for machine to stop 90/120
	I0729 19:37:53.734533 1118905 main.go:141] libmachine: (no-preload-843792) Waiting for machine to stop 91/120
	I0729 19:37:54.735867 1118905 main.go:141] libmachine: (no-preload-843792) Waiting for machine to stop 92/120
	I0729 19:37:55.737284 1118905 main.go:141] libmachine: (no-preload-843792) Waiting for machine to stop 93/120
	I0729 19:37:56.738561 1118905 main.go:141] libmachine: (no-preload-843792) Waiting for machine to stop 94/120
	I0729 19:37:57.740710 1118905 main.go:141] libmachine: (no-preload-843792) Waiting for machine to stop 95/120
	I0729 19:37:58.741950 1118905 main.go:141] libmachine: (no-preload-843792) Waiting for machine to stop 96/120
	I0729 19:37:59.743382 1118905 main.go:141] libmachine: (no-preload-843792) Waiting for machine to stop 97/120
	I0729 19:38:00.744676 1118905 main.go:141] libmachine: (no-preload-843792) Waiting for machine to stop 98/120
	I0729 19:38:01.746139 1118905 main.go:141] libmachine: (no-preload-843792) Waiting for machine to stop 99/120
	I0729 19:38:02.748442 1118905 main.go:141] libmachine: (no-preload-843792) Waiting for machine to stop 100/120
	I0729 19:38:03.749791 1118905 main.go:141] libmachine: (no-preload-843792) Waiting for machine to stop 101/120
	I0729 19:38:04.752058 1118905 main.go:141] libmachine: (no-preload-843792) Waiting for machine to stop 102/120
	I0729 19:38:05.753326 1118905 main.go:141] libmachine: (no-preload-843792) Waiting for machine to stop 103/120
	I0729 19:38:06.754534 1118905 main.go:141] libmachine: (no-preload-843792) Waiting for machine to stop 104/120
	I0729 19:38:07.756391 1118905 main.go:141] libmachine: (no-preload-843792) Waiting for machine to stop 105/120
	I0729 19:38:08.757661 1118905 main.go:141] libmachine: (no-preload-843792) Waiting for machine to stop 106/120
	I0729 19:38:09.759054 1118905 main.go:141] libmachine: (no-preload-843792) Waiting for machine to stop 107/120
	I0729 19:38:10.760463 1118905 main.go:141] libmachine: (no-preload-843792) Waiting for machine to stop 108/120
	I0729 19:38:11.761757 1118905 main.go:141] libmachine: (no-preload-843792) Waiting for machine to stop 109/120
	I0729 19:38:12.763854 1118905 main.go:141] libmachine: (no-preload-843792) Waiting for machine to stop 110/120
	I0729 19:38:13.765188 1118905 main.go:141] libmachine: (no-preload-843792) Waiting for machine to stop 111/120
	I0729 19:38:14.766515 1118905 main.go:141] libmachine: (no-preload-843792) Waiting for machine to stop 112/120
	I0729 19:38:15.768019 1118905 main.go:141] libmachine: (no-preload-843792) Waiting for machine to stop 113/120
	I0729 19:38:16.769424 1118905 main.go:141] libmachine: (no-preload-843792) Waiting for machine to stop 114/120
	I0729 19:38:17.771562 1118905 main.go:141] libmachine: (no-preload-843792) Waiting for machine to stop 115/120
	I0729 19:38:18.772962 1118905 main.go:141] libmachine: (no-preload-843792) Waiting for machine to stop 116/120
	I0729 19:38:19.774359 1118905 main.go:141] libmachine: (no-preload-843792) Waiting for machine to stop 117/120
	I0729 19:38:20.775741 1118905 main.go:141] libmachine: (no-preload-843792) Waiting for machine to stop 118/120
	I0729 19:38:21.777034 1118905 main.go:141] libmachine: (no-preload-843792) Waiting for machine to stop 119/120
	I0729 19:38:22.778096 1118905 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0729 19:38:22.778156 1118905 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0729 19:38:22.780131 1118905 out.go:177] 
	W0729 19:38:22.781403 1118905 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0729 19:38:22.781416 1118905 out.go:239] * 
	* 
	W0729 19:38:22.785715 1118905 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 19:38:22.787063 1118905 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p no-preload-843792 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-843792 -n no-preload-843792
E0729 19:38:26.650360 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/calico-184620/client.crt: no such file or directory
E0729 19:38:36.891275 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/calico-184620/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-843792 -n no-preload-843792: exit status 3 (18.482867844s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0729 19:38:41.271192 1119711 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.248:22: connect: no route to host
	E0729 19:38:41.271212 1119711 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.248:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-843792" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/Stop (138.98s)
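
Here the stop command polled the VM once per second for the full 120 attempts ("Waiting for machine to stop 0/120" through "119/120") and then exited with GUEST_STOP_TIMEOUT, and the follow-up status check could no longer reach the guest over SSH. A hedged sketch for gathering the diagnostics minikube asks for in the box above, assuming the same workspace layout as this run (the stop log path is copied verbatim from the output):

    # Collect the logs minikube requests for a GUEST_STOP_TIMEOUT report
    out/minikube-linux-amd64 logs -p no-preload-843792 --file=logs.txt
    cp /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log .

    # Re-check whether the guest eventually went down before filing the issue
    out/minikube-linux-amd64 status -p no-preload-843792

    # Attach logs.txt and the stop log to a new issue:
    # https://github.com/kubernetes/minikube/issues/new/choose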

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (139.2s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-358053 --alsologtostderr -v=3
E0729 19:36:56.609980 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/auto-184620/client.crt: no such file or directory
E0729 19:36:56.615276 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/auto-184620/client.crt: no such file or directory
E0729 19:36:56.625500 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/auto-184620/client.crt: no such file or directory
E0729 19:36:56.646144 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/auto-184620/client.crt: no such file or directory
E0729 19:36:56.686431 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/auto-184620/client.crt: no such file or directory
E0729 19:36:56.766757 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/auto-184620/client.crt: no such file or directory
E0729 19:36:56.927055 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/auto-184620/client.crt: no such file or directory
E0729 19:36:57.184490 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/addons-685520/client.crt: no such file or directory
E0729 19:36:57.247746 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/auto-184620/client.crt: no such file or directory
E0729 19:36:57.887940 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/auto-184620/client.crt: no such file or directory
E0729 19:36:59.168835 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/auto-184620/client.crt: no such file or directory
E0729 19:37:01.729711 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/auto-184620/client.crt: no such file or directory
E0729 19:37:06.850384 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/auto-184620/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p embed-certs-358053 --alsologtostderr -v=3: exit status 82 (2m0.52849129s)

                                                
                                                
-- stdout --
	* Stopping node "embed-certs-358053"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 19:36:48.488492 1119172 out.go:291] Setting OutFile to fd 1 ...
	I0729 19:36:48.488677 1119172 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 19:36:48.488692 1119172 out.go:304] Setting ErrFile to fd 2...
	I0729 19:36:48.488698 1119172 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 19:36:48.489030 1119172 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19312-1055011/.minikube/bin
	I0729 19:36:48.489947 1119172 out.go:298] Setting JSON to false
	I0729 19:36:48.490036 1119172 mustload.go:65] Loading cluster: embed-certs-358053
	I0729 19:36:48.490923 1119172 config.go:182] Loaded profile config "embed-certs-358053": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 19:36:48.491057 1119172 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/embed-certs-358053/config.json ...
	I0729 19:36:48.491278 1119172 mustload.go:65] Loading cluster: embed-certs-358053
	I0729 19:36:48.491408 1119172 config.go:182] Loaded profile config "embed-certs-358053": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 19:36:48.491452 1119172 stop.go:39] StopHost: embed-certs-358053
	I0729 19:36:48.491911 1119172 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:36:48.491969 1119172 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:36:48.509367 1119172 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46429
	I0729 19:36:48.514783 1119172 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:36:48.515976 1119172 main.go:141] libmachine: Using API Version  1
	I0729 19:36:48.516036 1119172 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:36:48.516408 1119172 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:36:48.518785 1119172 out.go:177] * Stopping node "embed-certs-358053"  ...
	I0729 19:36:48.520113 1119172 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0729 19:36:48.520148 1119172 main.go:141] libmachine: (embed-certs-358053) Calling .DriverName
	I0729 19:36:48.520416 1119172 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0729 19:36:48.520447 1119172 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHHostname
	I0729 19:36:48.523845 1119172 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:36:48.524334 1119172 main.go:141] libmachine: (embed-certs-358053) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:9e:78", ip: ""} in network mk-embed-certs-358053: {Iface:virbr3 ExpiryTime:2024-07-29 20:35:53 +0000 UTC Type:0 Mac:52:54:00:b7:9e:78 Iaid: IPaddr:192.168.61.201 Prefix:24 Hostname:embed-certs-358053 Clientid:01:52:54:00:b7:9e:78}
	I0729 19:36:48.524362 1119172 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined IP address 192.168.61.201 and MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:36:48.524489 1119172 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHPort
	I0729 19:36:48.524678 1119172 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHKeyPath
	I0729 19:36:48.524825 1119172 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHUsername
	I0729 19:36:48.524982 1119172 sshutil.go:53] new ssh client: &{IP:192.168.61.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/embed-certs-358053/id_rsa Username:docker}
	I0729 19:36:48.625648 1119172 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0729 19:36:48.685772 1119172 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0729 19:36:48.766822 1119172 main.go:141] libmachine: Stopping "embed-certs-358053"...
	I0729 19:36:48.766966 1119172 main.go:141] libmachine: (embed-certs-358053) Calling .GetState
	I0729 19:36:48.768924 1119172 main.go:141] libmachine: (embed-certs-358053) Calling .Stop
	I0729 19:36:48.773115 1119172 main.go:141] libmachine: (embed-certs-358053) Waiting for machine to stop 0/120
	I0729 19:36:49.774535 1119172 main.go:141] libmachine: (embed-certs-358053) Waiting for machine to stop 1/120
	I0729 19:36:50.776341 1119172 main.go:141] libmachine: (embed-certs-358053) Waiting for machine to stop 2/120
	I0729 19:36:51.777889 1119172 main.go:141] libmachine: (embed-certs-358053) Waiting for machine to stop 3/120
	I0729 19:36:52.779398 1119172 main.go:141] libmachine: (embed-certs-358053) Waiting for machine to stop 4/120
	I0729 19:36:53.781472 1119172 main.go:141] libmachine: (embed-certs-358053) Waiting for machine to stop 5/120
	I0729 19:36:54.783005 1119172 main.go:141] libmachine: (embed-certs-358053) Waiting for machine to stop 6/120
	I0729 19:36:55.784366 1119172 main.go:141] libmachine: (embed-certs-358053) Waiting for machine to stop 7/120
	I0729 19:36:56.785727 1119172 main.go:141] libmachine: (embed-certs-358053) Waiting for machine to stop 8/120
	I0729 19:36:57.786929 1119172 main.go:141] libmachine: (embed-certs-358053) Waiting for machine to stop 9/120
	I0729 19:36:58.788138 1119172 main.go:141] libmachine: (embed-certs-358053) Waiting for machine to stop 10/120
	I0729 19:36:59.789957 1119172 main.go:141] libmachine: (embed-certs-358053) Waiting for machine to stop 11/120
	I0729 19:37:00.791287 1119172 main.go:141] libmachine: (embed-certs-358053) Waiting for machine to stop 12/120
	I0729 19:37:01.792704 1119172 main.go:141] libmachine: (embed-certs-358053) Waiting for machine to stop 13/120
	I0729 19:37:02.794040 1119172 main.go:141] libmachine: (embed-certs-358053) Waiting for machine to stop 14/120
	I0729 19:37:03.795985 1119172 main.go:141] libmachine: (embed-certs-358053) Waiting for machine to stop 15/120
	I0729 19:37:04.797338 1119172 main.go:141] libmachine: (embed-certs-358053) Waiting for machine to stop 16/120
	I0729 19:37:05.798747 1119172 main.go:141] libmachine: (embed-certs-358053) Waiting for machine to stop 17/120
	I0729 19:37:06.799993 1119172 main.go:141] libmachine: (embed-certs-358053) Waiting for machine to stop 18/120
	I0729 19:37:07.801298 1119172 main.go:141] libmachine: (embed-certs-358053) Waiting for machine to stop 19/120
	I0729 19:37:08.803400 1119172 main.go:141] libmachine: (embed-certs-358053) Waiting for machine to stop 20/120
	I0729 19:37:09.805318 1119172 main.go:141] libmachine: (embed-certs-358053) Waiting for machine to stop 21/120
	I0729 19:37:10.806743 1119172 main.go:141] libmachine: (embed-certs-358053) Waiting for machine to stop 22/120
	I0729 19:37:11.808321 1119172 main.go:141] libmachine: (embed-certs-358053) Waiting for machine to stop 23/120
	I0729 19:37:12.809780 1119172 main.go:141] libmachine: (embed-certs-358053) Waiting for machine to stop 24/120
	I0729 19:37:13.811154 1119172 main.go:141] libmachine: (embed-certs-358053) Waiting for machine to stop 25/120
	I0729 19:37:14.813641 1119172 main.go:141] libmachine: (embed-certs-358053) Waiting for machine to stop 26/120
	I0729 19:37:15.815307 1119172 main.go:141] libmachine: (embed-certs-358053) Waiting for machine to stop 27/120
	I0729 19:37:16.817153 1119172 main.go:141] libmachine: (embed-certs-358053) Waiting for machine to stop 28/120
	I0729 19:37:17.818764 1119172 main.go:141] libmachine: (embed-certs-358053) Waiting for machine to stop 29/120
	I0729 19:37:18.821115 1119172 main.go:141] libmachine: (embed-certs-358053) Waiting for machine to stop 30/120
	I0729 19:37:19.823096 1119172 main.go:141] libmachine: (embed-certs-358053) Waiting for machine to stop 31/120
	I0729 19:37:20.824789 1119172 main.go:141] libmachine: (embed-certs-358053) Waiting for machine to stop 32/120
	I0729 19:37:21.826200 1119172 main.go:141] libmachine: (embed-certs-358053) Waiting for machine to stop 33/120
	I0729 19:37:22.827682 1119172 main.go:141] libmachine: (embed-certs-358053) Waiting for machine to stop 34/120
	I0729 19:37:23.829977 1119172 main.go:141] libmachine: (embed-certs-358053) Waiting for machine to stop 35/120
	I0729 19:37:24.831589 1119172 main.go:141] libmachine: (embed-certs-358053) Waiting for machine to stop 36/120
	I0729 19:37:25.832895 1119172 main.go:141] libmachine: (embed-certs-358053) Waiting for machine to stop 37/120
	I0729 19:37:26.834106 1119172 main.go:141] libmachine: (embed-certs-358053) Waiting for machine to stop 38/120
	I0729 19:37:27.835349 1119172 main.go:141] libmachine: (embed-certs-358053) Waiting for machine to stop 39/120
	I0729 19:37:28.837467 1119172 main.go:141] libmachine: (embed-certs-358053) Waiting for machine to stop 40/120
	I0729 19:37:29.838797 1119172 main.go:141] libmachine: (embed-certs-358053) Waiting for machine to stop 41/120
	I0729 19:37:30.840108 1119172 main.go:141] libmachine: (embed-certs-358053) Waiting for machine to stop 42/120
	I0729 19:37:31.841397 1119172 main.go:141] libmachine: (embed-certs-358053) Waiting for machine to stop 43/120
	I0729 19:37:32.842697 1119172 main.go:141] libmachine: (embed-certs-358053) Waiting for machine to stop 44/120
	I0729 19:37:33.844554 1119172 main.go:141] libmachine: (embed-certs-358053) Waiting for machine to stop 45/120
	I0729 19:37:34.845850 1119172 main.go:141] libmachine: (embed-certs-358053) Waiting for machine to stop 46/120
	I0729 19:37:35.847092 1119172 main.go:141] libmachine: (embed-certs-358053) Waiting for machine to stop 47/120
	I0729 19:37:36.848555 1119172 main.go:141] libmachine: (embed-certs-358053) Waiting for machine to stop 48/120
	I0729 19:37:37.849908 1119172 main.go:141] libmachine: (embed-certs-358053) Waiting for machine to stop 49/120
	I0729 19:37:38.851909 1119172 main.go:141] libmachine: (embed-certs-358053) Waiting for machine to stop 50/120
	I0729 19:37:39.853201 1119172 main.go:141] libmachine: (embed-certs-358053) Waiting for machine to stop 51/120
	I0729 19:37:40.854443 1119172 main.go:141] libmachine: (embed-certs-358053) Waiting for machine to stop 52/120
	I0729 19:37:41.855723 1119172 main.go:141] libmachine: (embed-certs-358053) Waiting for machine to stop 53/120
	I0729 19:37:42.857007 1119172 main.go:141] libmachine: (embed-certs-358053) Waiting for machine to stop 54/120
	I0729 19:37:43.858984 1119172 main.go:141] libmachine: (embed-certs-358053) Waiting for machine to stop 55/120
	I0729 19:37:44.860363 1119172 main.go:141] libmachine: (embed-certs-358053) Waiting for machine to stop 56/120
	I0729 19:37:45.861608 1119172 main.go:141] libmachine: (embed-certs-358053) Waiting for machine to stop 57/120
	I0729 19:37:46.863199 1119172 main.go:141] libmachine: (embed-certs-358053) Waiting for machine to stop 58/120
	I0729 19:37:47.864796 1119172 main.go:141] libmachine: (embed-certs-358053) Waiting for machine to stop 59/120
	I0729 19:37:48.866973 1119172 main.go:141] libmachine: (embed-certs-358053) Waiting for machine to stop 60/120
	I0729 19:37:49.868239 1119172 main.go:141] libmachine: (embed-certs-358053) Waiting for machine to stop 61/120
	I0729 19:37:50.869939 1119172 main.go:141] libmachine: (embed-certs-358053) Waiting for machine to stop 62/120
	I0729 19:37:51.871207 1119172 main.go:141] libmachine: (embed-certs-358053) Waiting for machine to stop 63/120
	I0729 19:37:52.872584 1119172 main.go:141] libmachine: (embed-certs-358053) Waiting for machine to stop 64/120
	I0729 19:37:53.874339 1119172 main.go:141] libmachine: (embed-certs-358053) Waiting for machine to stop 65/120
	I0729 19:37:54.875631 1119172 main.go:141] libmachine: (embed-certs-358053) Waiting for machine to stop 66/120
	I0729 19:37:55.877389 1119172 main.go:141] libmachine: (embed-certs-358053) Waiting for machine to stop 67/120
	I0729 19:37:56.878934 1119172 main.go:141] libmachine: (embed-certs-358053) Waiting for machine to stop 68/120
	I0729 19:37:57.880251 1119172 main.go:141] libmachine: (embed-certs-358053) Waiting for machine to stop 69/120
	I0729 19:37:58.882502 1119172 main.go:141] libmachine: (embed-certs-358053) Waiting for machine to stop 70/120
	I0729 19:37:59.883741 1119172 main.go:141] libmachine: (embed-certs-358053) Waiting for machine to stop 71/120
	I0729 19:38:00.885408 1119172 main.go:141] libmachine: (embed-certs-358053) Waiting for machine to stop 72/120
	I0729 19:38:01.886756 1119172 main.go:141] libmachine: (embed-certs-358053) Waiting for machine to stop 73/120
	I0729 19:38:02.888172 1119172 main.go:141] libmachine: (embed-certs-358053) Waiting for machine to stop 74/120
	I0729 19:38:03.890289 1119172 main.go:141] libmachine: (embed-certs-358053) Waiting for machine to stop 75/120
	I0729 19:38:04.891670 1119172 main.go:141] libmachine: (embed-certs-358053) Waiting for machine to stop 76/120
	I0729 19:38:05.893346 1119172 main.go:141] libmachine: (embed-certs-358053) Waiting for machine to stop 77/120
	I0729 19:38:06.894679 1119172 main.go:141] libmachine: (embed-certs-358053) Waiting for machine to stop 78/120
	I0729 19:38:07.896161 1119172 main.go:141] libmachine: (embed-certs-358053) Waiting for machine to stop 79/120
	I0729 19:38:08.898627 1119172 main.go:141] libmachine: (embed-certs-358053) Waiting for machine to stop 80/120
	I0729 19:38:09.900128 1119172 main.go:141] libmachine: (embed-certs-358053) Waiting for machine to stop 81/120
	I0729 19:38:10.901360 1119172 main.go:141] libmachine: (embed-certs-358053) Waiting for machine to stop 82/120
	I0729 19:38:11.902778 1119172 main.go:141] libmachine: (embed-certs-358053) Waiting for machine to stop 83/120
	I0729 19:38:12.904155 1119172 main.go:141] libmachine: (embed-certs-358053) Waiting for machine to stop 84/120
	I0729 19:38:13.906437 1119172 main.go:141] libmachine: (embed-certs-358053) Waiting for machine to stop 85/120
	I0729 19:38:14.907625 1119172 main.go:141] libmachine: (embed-certs-358053) Waiting for machine to stop 86/120
	I0729 19:38:15.909246 1119172 main.go:141] libmachine: (embed-certs-358053) Waiting for machine to stop 87/120
	I0729 19:38:16.910656 1119172 main.go:141] libmachine: (embed-certs-358053) Waiting for machine to stop 88/120
	I0729 19:38:17.912446 1119172 main.go:141] libmachine: (embed-certs-358053) Waiting for machine to stop 89/120
	I0729 19:38:18.914591 1119172 main.go:141] libmachine: (embed-certs-358053) Waiting for machine to stop 90/120
	I0729 19:38:19.915832 1119172 main.go:141] libmachine: (embed-certs-358053) Waiting for machine to stop 91/120
	I0729 19:38:20.917194 1119172 main.go:141] libmachine: (embed-certs-358053) Waiting for machine to stop 92/120
	I0729 19:38:21.919644 1119172 main.go:141] libmachine: (embed-certs-358053) Waiting for machine to stop 93/120
	I0729 19:38:22.921104 1119172 main.go:141] libmachine: (embed-certs-358053) Waiting for machine to stop 94/120
	I0729 19:38:23.923241 1119172 main.go:141] libmachine: (embed-certs-358053) Waiting for machine to stop 95/120
	I0729 19:38:24.924693 1119172 main.go:141] libmachine: (embed-certs-358053) Waiting for machine to stop 96/120
	I0729 19:38:25.925854 1119172 main.go:141] libmachine: (embed-certs-358053) Waiting for machine to stop 97/120
	I0729 19:38:26.927354 1119172 main.go:141] libmachine: (embed-certs-358053) Waiting for machine to stop 98/120
	I0729 19:38:27.928712 1119172 main.go:141] libmachine: (embed-certs-358053) Waiting for machine to stop 99/120
	I0729 19:38:28.931012 1119172 main.go:141] libmachine: (embed-certs-358053) Waiting for machine to stop 100/120
	I0729 19:38:29.932503 1119172 main.go:141] libmachine: (embed-certs-358053) Waiting for machine to stop 101/120
	I0729 19:38:30.933671 1119172 main.go:141] libmachine: (embed-certs-358053) Waiting for machine to stop 102/120
	I0729 19:38:31.935210 1119172 main.go:141] libmachine: (embed-certs-358053) Waiting for machine to stop 103/120
	I0729 19:38:32.937315 1119172 main.go:141] libmachine: (embed-certs-358053) Waiting for machine to stop 104/120
	I0729 19:38:33.938747 1119172 main.go:141] libmachine: (embed-certs-358053) Waiting for machine to stop 105/120
	I0729 19:38:34.939994 1119172 main.go:141] libmachine: (embed-certs-358053) Waiting for machine to stop 106/120
	I0729 19:38:35.941125 1119172 main.go:141] libmachine: (embed-certs-358053) Waiting for machine to stop 107/120
	I0729 19:38:36.942438 1119172 main.go:141] libmachine: (embed-certs-358053) Waiting for machine to stop 108/120
	I0729 19:38:37.943851 1119172 main.go:141] libmachine: (embed-certs-358053) Waiting for machine to stop 109/120
	I0729 19:38:38.946211 1119172 main.go:141] libmachine: (embed-certs-358053) Waiting for machine to stop 110/120
	I0729 19:38:39.947491 1119172 main.go:141] libmachine: (embed-certs-358053) Waiting for machine to stop 111/120
	I0729 19:38:40.949320 1119172 main.go:141] libmachine: (embed-certs-358053) Waiting for machine to stop 112/120
	I0729 19:38:41.950619 1119172 main.go:141] libmachine: (embed-certs-358053) Waiting for machine to stop 113/120
	I0729 19:38:42.952070 1119172 main.go:141] libmachine: (embed-certs-358053) Waiting for machine to stop 114/120
	I0729 19:38:43.954065 1119172 main.go:141] libmachine: (embed-certs-358053) Waiting for machine to stop 115/120
	I0729 19:38:44.955433 1119172 main.go:141] libmachine: (embed-certs-358053) Waiting for machine to stop 116/120
	I0729 19:38:45.957059 1119172 main.go:141] libmachine: (embed-certs-358053) Waiting for machine to stop 117/120
	I0729 19:38:46.958414 1119172 main.go:141] libmachine: (embed-certs-358053) Waiting for machine to stop 118/120
	I0729 19:38:47.959704 1119172 main.go:141] libmachine: (embed-certs-358053) Waiting for machine to stop 119/120
	I0729 19:38:48.960576 1119172 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0729 19:38:48.960650 1119172 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0729 19:38:48.962641 1119172 out.go:177] 
	W0729 19:38:48.963762 1119172 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0729 19:38:48.963779 1119172 out.go:239] * 
	* 
	W0729 19:38:48.967710 1119172 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 19:38:48.968950 1119172 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p embed-certs-358053 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-358053 -n embed-certs-358053
E0729 19:38:49.252073 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/custom-flannel-184620/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-358053 -n embed-certs-358053: exit status 3 (18.669302908s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0729 19:39:07.639179 1119872 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.201:22: connect: no route to host
	E0729 19:39:07.639198 1119872 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.201:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-358053" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/Stop (139.20s)
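
For reference, the stderr above shows the stop path asking the kvm2 driver to stop the domain and then polling its state about once per second for 120 attempts ("Waiting for machine to stop 0/120" through "119/120") before giving up with GUEST_STOP_TIMEOUT while the guest still reports "Running". The following is a minimal sketch of that poll-with-cap pattern; the function names, the one-second interval, and the stub driver in main are assumptions for illustration only, not minikube's actual implementation.

package main

import (
	"fmt"
	"time"
)

// stopAndWait requests a VM stop and then polls getState once per second for
// up to maxAttempts polls, mirroring the "Waiting for machine to stop N/120"
// lines above. requestStop and getState are hypothetical hooks standing in
// for the driver calls.
func stopAndWait(requestStop func() error, getState func() (string, error), maxAttempts int) error {
	if err := requestStop(); err != nil {
		return fmt.Errorf("requesting stop: %w", err)
	}
	state := "Running"
	for attempt := 0; attempt < maxAttempts; attempt++ {
		fmt.Printf("Waiting for machine to stop %d/%d\n", attempt, maxAttempts)
		s, err := getState()
		if err != nil {
			return fmt.Errorf("querying state: %w", err)
		}
		if s == "Stopped" {
			return nil
		}
		state = s
		time.Sleep(time.Second)
	}
	// Analogous to the error the log reports after the 120th poll.
	return fmt.Errorf("unable to stop vm, current state %q", state)
}

func main() {
	// Stub driver that never leaves "Running", reproducing the timeout above
	// (3 attempts here instead of 120 to keep the demo short).
	err := stopAndWait(
		func() error { return nil },
		func() (string, error) { return "Running", nil },
		3,
	)
	fmt.Println("stop result:", err)
}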

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (138.9s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-024652 --alsologtostderr -v=3
E0729 19:37:35.028411 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/kindnet-184620/client.crt: no such file or directory
E0729 19:37:35.033753 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/kindnet-184620/client.crt: no such file or directory
E0729 19:37:35.043997 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/kindnet-184620/client.crt: no such file or directory
E0729 19:37:35.064339 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/kindnet-184620/client.crt: no such file or directory
E0729 19:37:35.104670 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/kindnet-184620/client.crt: no such file or directory
E0729 19:37:35.185278 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/kindnet-184620/client.crt: no such file or directory
E0729 19:37:35.345676 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/kindnet-184620/client.crt: no such file or directory
E0729 19:37:35.666318 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/kindnet-184620/client.crt: no such file or directory
E0729 19:37:36.306886 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/kindnet-184620/client.crt: no such file or directory
E0729 19:37:37.571464 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/auto-184620/client.crt: no such file or directory
E0729 19:37:37.587650 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/kindnet-184620/client.crt: no such file or directory
E0729 19:37:40.148331 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/kindnet-184620/client.crt: no such file or directory
E0729 19:37:45.269372 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/kindnet-184620/client.crt: no such file or directory
E0729 19:37:55.510035 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/kindnet-184620/client.crt: no such file or directory
E0729 19:38:00.968928 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/functional-728029/client.crt: no such file or directory
E0729 19:38:15.990625 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/kindnet-184620/client.crt: no such file or directory
E0729 19:38:16.410695 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/calico-184620/client.crt: no such file or directory
E0729 19:38:16.415977 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/calico-184620/client.crt: no such file or directory
E0729 19:38:16.426238 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/calico-184620/client.crt: no such file or directory
E0729 19:38:16.446578 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/calico-184620/client.crt: no such file or directory
E0729 19:38:16.486940 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/calico-184620/client.crt: no such file or directory
E0729 19:38:16.567311 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/calico-184620/client.crt: no such file or directory
E0729 19:38:16.727843 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/calico-184620/client.crt: no such file or directory
E0729 19:38:17.047974 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/calico-184620/client.crt: no such file or directory
E0729 19:38:17.688179 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/calico-184620/client.crt: no such file or directory
E0729 19:38:18.532477 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/auto-184620/client.crt: no such file or directory
E0729 19:38:18.968915 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/calico-184620/client.crt: no such file or directory
E0729 19:38:21.529293 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/calico-184620/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p default-k8s-diff-port-024652 --alsologtostderr -v=3: exit status 82 (2m0.469540271s)

                                                
                                                
-- stdout --
	* Stopping node "default-k8s-diff-port-024652"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 19:37:24.872814 1119473 out.go:291] Setting OutFile to fd 1 ...
	I0729 19:37:24.873098 1119473 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 19:37:24.873109 1119473 out.go:304] Setting ErrFile to fd 2...
	I0729 19:37:24.873116 1119473 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 19:37:24.873333 1119473 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19312-1055011/.minikube/bin
	I0729 19:37:24.873584 1119473 out.go:298] Setting JSON to false
	I0729 19:37:24.873684 1119473 mustload.go:65] Loading cluster: default-k8s-diff-port-024652
	I0729 19:37:24.874022 1119473 config.go:182] Loaded profile config "default-k8s-diff-port-024652": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 19:37:24.874106 1119473 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/default-k8s-diff-port-024652/config.json ...
	I0729 19:37:24.874293 1119473 mustload.go:65] Loading cluster: default-k8s-diff-port-024652
	I0729 19:37:24.874428 1119473 config.go:182] Loaded profile config "default-k8s-diff-port-024652": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 19:37:24.874477 1119473 stop.go:39] StopHost: default-k8s-diff-port-024652
	I0729 19:37:24.874961 1119473 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:37:24.875017 1119473 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:37:24.889605 1119473 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35031
	I0729 19:37:24.890064 1119473 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:37:24.890684 1119473 main.go:141] libmachine: Using API Version  1
	I0729 19:37:24.890710 1119473 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:37:24.891109 1119473 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:37:24.893477 1119473 out.go:177] * Stopping node "default-k8s-diff-port-024652"  ...
	I0729 19:37:24.894702 1119473 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0729 19:37:24.894725 1119473 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .DriverName
	I0729 19:37:24.894951 1119473 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0729 19:37:24.894990 1119473 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHHostname
	I0729 19:37:24.897831 1119473 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:37:24.898291 1119473 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:73:cb", ip: ""} in network mk-default-k8s-diff-port-024652: {Iface:virbr4 ExpiryTime:2024-07-29 20:36:30 +0000 UTC Type:0 Mac:52:54:00:4c:73:cb Iaid: IPaddr:192.168.72.100 Prefix:24 Hostname:default-k8s-diff-port-024652 Clientid:01:52:54:00:4c:73:cb}
	I0729 19:37:24.898322 1119473 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined IP address 192.168.72.100 and MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:37:24.898487 1119473 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHPort
	I0729 19:37:24.898735 1119473 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHKeyPath
	I0729 19:37:24.898952 1119473 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHUsername
	I0729 19:37:24.899113 1119473 sshutil.go:53] new ssh client: &{IP:192.168.72.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/default-k8s-diff-port-024652/id_rsa Username:docker}
	I0729 19:37:24.992473 1119473 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0729 19:37:25.056380 1119473 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0729 19:37:25.097867 1119473 main.go:141] libmachine: Stopping "default-k8s-diff-port-024652"...
	I0729 19:37:25.097910 1119473 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetState
	I0729 19:37:25.099850 1119473 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .Stop
	I0729 19:37:25.103682 1119473 main.go:141] libmachine: (default-k8s-diff-port-024652) Waiting for machine to stop 0/120
	I0729 19:37:26.105139 1119473 main.go:141] libmachine: (default-k8s-diff-port-024652) Waiting for machine to stop 1/120
	I0729 19:37:27.106428 1119473 main.go:141] libmachine: (default-k8s-diff-port-024652) Waiting for machine to stop 2/120
	I0729 19:37:28.107988 1119473 main.go:141] libmachine: (default-k8s-diff-port-024652) Waiting for machine to stop 3/120
	I0729 19:37:29.109322 1119473 main.go:141] libmachine: (default-k8s-diff-port-024652) Waiting for machine to stop 4/120
	I0729 19:37:30.111448 1119473 main.go:141] libmachine: (default-k8s-diff-port-024652) Waiting for machine to stop 5/120
	I0729 19:37:31.113084 1119473 main.go:141] libmachine: (default-k8s-diff-port-024652) Waiting for machine to stop 6/120
	I0729 19:37:32.114354 1119473 main.go:141] libmachine: (default-k8s-diff-port-024652) Waiting for machine to stop 7/120
	I0729 19:37:33.115633 1119473 main.go:141] libmachine: (default-k8s-diff-port-024652) Waiting for machine to stop 8/120
	I0729 19:37:34.117017 1119473 main.go:141] libmachine: (default-k8s-diff-port-024652) Waiting for machine to stop 9/120
	I0729 19:37:35.118966 1119473 main.go:141] libmachine: (default-k8s-diff-port-024652) Waiting for machine to stop 10/120
	I0729 19:37:36.120395 1119473 main.go:141] libmachine: (default-k8s-diff-port-024652) Waiting for machine to stop 11/120
	I0729 19:37:37.121749 1119473 main.go:141] libmachine: (default-k8s-diff-port-024652) Waiting for machine to stop 12/120
	I0729 19:37:38.123095 1119473 main.go:141] libmachine: (default-k8s-diff-port-024652) Waiting for machine to stop 13/120
	I0729 19:37:39.124429 1119473 main.go:141] libmachine: (default-k8s-diff-port-024652) Waiting for machine to stop 14/120
	I0729 19:37:40.126033 1119473 main.go:141] libmachine: (default-k8s-diff-port-024652) Waiting for machine to stop 15/120
	I0729 19:37:41.128029 1119473 main.go:141] libmachine: (default-k8s-diff-port-024652) Waiting for machine to stop 16/120
	I0729 19:37:42.129356 1119473 main.go:141] libmachine: (default-k8s-diff-port-024652) Waiting for machine to stop 17/120
	I0729 19:37:43.131385 1119473 main.go:141] libmachine: (default-k8s-diff-port-024652) Waiting for machine to stop 18/120
	I0729 19:37:44.132739 1119473 main.go:141] libmachine: (default-k8s-diff-port-024652) Waiting for machine to stop 19/120
	I0729 19:37:45.134887 1119473 main.go:141] libmachine: (default-k8s-diff-port-024652) Waiting for machine to stop 20/120
	I0729 19:37:46.136969 1119473 main.go:141] libmachine: (default-k8s-diff-port-024652) Waiting for machine to stop 21/120
	I0729 19:37:47.138295 1119473 main.go:141] libmachine: (default-k8s-diff-port-024652) Waiting for machine to stop 22/120
	I0729 19:37:48.140023 1119473 main.go:141] libmachine: (default-k8s-diff-port-024652) Waiting for machine to stop 23/120
	I0729 19:37:49.141226 1119473 main.go:141] libmachine: (default-k8s-diff-port-024652) Waiting for machine to stop 24/120
	I0729 19:37:50.143321 1119473 main.go:141] libmachine: (default-k8s-diff-port-024652) Waiting for machine to stop 25/120
	I0729 19:37:51.144597 1119473 main.go:141] libmachine: (default-k8s-diff-port-024652) Waiting for machine to stop 26/120
	I0729 19:37:52.146050 1119473 main.go:141] libmachine: (default-k8s-diff-port-024652) Waiting for machine to stop 27/120
	I0729 19:37:53.147508 1119473 main.go:141] libmachine: (default-k8s-diff-port-024652) Waiting for machine to stop 28/120
	I0729 19:37:54.148802 1119473 main.go:141] libmachine: (default-k8s-diff-port-024652) Waiting for machine to stop 29/120
	I0729 19:37:55.150978 1119473 main.go:141] libmachine: (default-k8s-diff-port-024652) Waiting for machine to stop 30/120
	I0729 19:37:56.152327 1119473 main.go:141] libmachine: (default-k8s-diff-port-024652) Waiting for machine to stop 31/120
	I0729 19:37:57.153931 1119473 main.go:141] libmachine: (default-k8s-diff-port-024652) Waiting for machine to stop 32/120
	I0729 19:37:58.155264 1119473 main.go:141] libmachine: (default-k8s-diff-port-024652) Waiting for machine to stop 33/120
	I0729 19:37:59.156745 1119473 main.go:141] libmachine: (default-k8s-diff-port-024652) Waiting for machine to stop 34/120
	I0729 19:38:00.158842 1119473 main.go:141] libmachine: (default-k8s-diff-port-024652) Waiting for machine to stop 35/120
	I0729 19:38:01.160255 1119473 main.go:141] libmachine: (default-k8s-diff-port-024652) Waiting for machine to stop 36/120
	I0729 19:38:02.161744 1119473 main.go:141] libmachine: (default-k8s-diff-port-024652) Waiting for machine to stop 37/120
	I0729 19:38:03.163142 1119473 main.go:141] libmachine: (default-k8s-diff-port-024652) Waiting for machine to stop 38/120
	I0729 19:38:04.164607 1119473 main.go:141] libmachine: (default-k8s-diff-port-024652) Waiting for machine to stop 39/120
	I0729 19:38:05.166723 1119473 main.go:141] libmachine: (default-k8s-diff-port-024652) Waiting for machine to stop 40/120
	I0729 19:38:06.168048 1119473 main.go:141] libmachine: (default-k8s-diff-port-024652) Waiting for machine to stop 41/120
	I0729 19:38:07.169638 1119473 main.go:141] libmachine: (default-k8s-diff-port-024652) Waiting for machine to stop 42/120
	I0729 19:38:08.171415 1119473 main.go:141] libmachine: (default-k8s-diff-port-024652) Waiting for machine to stop 43/120
	I0729 19:38:09.172745 1119473 main.go:141] libmachine: (default-k8s-diff-port-024652) Waiting for machine to stop 44/120
	I0729 19:38:10.174636 1119473 main.go:141] libmachine: (default-k8s-diff-port-024652) Waiting for machine to stop 45/120
	I0729 19:38:11.175979 1119473 main.go:141] libmachine: (default-k8s-diff-port-024652) Waiting for machine to stop 46/120
	I0729 19:38:12.177283 1119473 main.go:141] libmachine: (default-k8s-diff-port-024652) Waiting for machine to stop 47/120
	I0729 19:38:13.178665 1119473 main.go:141] libmachine: (default-k8s-diff-port-024652) Waiting for machine to stop 48/120
	I0729 19:38:14.180017 1119473 main.go:141] libmachine: (default-k8s-diff-port-024652) Waiting for machine to stop 49/120
	I0729 19:38:15.182378 1119473 main.go:141] libmachine: (default-k8s-diff-port-024652) Waiting for machine to stop 50/120
	I0729 19:38:16.184068 1119473 main.go:141] libmachine: (default-k8s-diff-port-024652) Waiting for machine to stop 51/120
	I0729 19:38:17.185380 1119473 main.go:141] libmachine: (default-k8s-diff-port-024652) Waiting for machine to stop 52/120
	I0729 19:38:18.186858 1119473 main.go:141] libmachine: (default-k8s-diff-port-024652) Waiting for machine to stop 53/120
	I0729 19:38:19.188117 1119473 main.go:141] libmachine: (default-k8s-diff-port-024652) Waiting for machine to stop 54/120
	I0729 19:38:20.190060 1119473 main.go:141] libmachine: (default-k8s-diff-port-024652) Waiting for machine to stop 55/120
	I0729 19:38:21.191458 1119473 main.go:141] libmachine: (default-k8s-diff-port-024652) Waiting for machine to stop 56/120
	I0729 19:38:22.192800 1119473 main.go:141] libmachine: (default-k8s-diff-port-024652) Waiting for machine to stop 57/120
	I0729 19:38:23.194074 1119473 main.go:141] libmachine: (default-k8s-diff-port-024652) Waiting for machine to stop 58/120
	I0729 19:38:24.195459 1119473 main.go:141] libmachine: (default-k8s-diff-port-024652) Waiting for machine to stop 59/120
	I0729 19:38:25.196676 1119473 main.go:141] libmachine: (default-k8s-diff-port-024652) Waiting for machine to stop 60/120
	I0729 19:38:26.198112 1119473 main.go:141] libmachine: (default-k8s-diff-port-024652) Waiting for machine to stop 61/120
	I0729 19:38:27.199377 1119473 main.go:141] libmachine: (default-k8s-diff-port-024652) Waiting for machine to stop 62/120
	I0729 19:38:28.200863 1119473 main.go:141] libmachine: (default-k8s-diff-port-024652) Waiting for machine to stop 63/120
	I0729 19:38:29.202331 1119473 main.go:141] libmachine: (default-k8s-diff-port-024652) Waiting for machine to stop 64/120
	I0729 19:38:30.204344 1119473 main.go:141] libmachine: (default-k8s-diff-port-024652) Waiting for machine to stop 65/120
	I0729 19:38:31.205716 1119473 main.go:141] libmachine: (default-k8s-diff-port-024652) Waiting for machine to stop 66/120
	I0729 19:38:32.207492 1119473 main.go:141] libmachine: (default-k8s-diff-port-024652) Waiting for machine to stop 67/120
	I0729 19:38:33.209045 1119473 main.go:141] libmachine: (default-k8s-diff-port-024652) Waiting for machine to stop 68/120
	I0729 19:38:34.210999 1119473 main.go:141] libmachine: (default-k8s-diff-port-024652) Waiting for machine to stop 69/120
	I0729 19:38:35.212946 1119473 main.go:141] libmachine: (default-k8s-diff-port-024652) Waiting for machine to stop 70/120
	I0729 19:38:36.214383 1119473 main.go:141] libmachine: (default-k8s-diff-port-024652) Waiting for machine to stop 71/120
	I0729 19:38:37.215766 1119473 main.go:141] libmachine: (default-k8s-diff-port-024652) Waiting for machine to stop 72/120
	I0729 19:38:38.217296 1119473 main.go:141] libmachine: (default-k8s-diff-port-024652) Waiting for machine to stop 73/120
	I0729 19:38:39.219032 1119473 main.go:141] libmachine: (default-k8s-diff-port-024652) Waiting for machine to stop 74/120
	I0729 19:38:40.221146 1119473 main.go:141] libmachine: (default-k8s-diff-port-024652) Waiting for machine to stop 75/120
	I0729 19:38:41.222653 1119473 main.go:141] libmachine: (default-k8s-diff-port-024652) Waiting for machine to stop 76/120
	I0729 19:38:42.224040 1119473 main.go:141] libmachine: (default-k8s-diff-port-024652) Waiting for machine to stop 77/120
	I0729 19:38:43.225570 1119473 main.go:141] libmachine: (default-k8s-diff-port-024652) Waiting for machine to stop 78/120
	I0729 19:38:44.226778 1119473 main.go:141] libmachine: (default-k8s-diff-port-024652) Waiting for machine to stop 79/120
	I0729 19:38:45.228966 1119473 main.go:141] libmachine: (default-k8s-diff-port-024652) Waiting for machine to stop 80/120
	I0729 19:38:46.230491 1119473 main.go:141] libmachine: (default-k8s-diff-port-024652) Waiting for machine to stop 81/120
	I0729 19:38:47.231863 1119473 main.go:141] libmachine: (default-k8s-diff-port-024652) Waiting for machine to stop 82/120
	I0729 19:38:48.233310 1119473 main.go:141] libmachine: (default-k8s-diff-port-024652) Waiting for machine to stop 83/120
	I0729 19:38:49.234738 1119473 main.go:141] libmachine: (default-k8s-diff-port-024652) Waiting for machine to stop 84/120
	I0729 19:38:50.236707 1119473 main.go:141] libmachine: (default-k8s-diff-port-024652) Waiting for machine to stop 85/120
	I0729 19:38:51.238120 1119473 main.go:141] libmachine: (default-k8s-diff-port-024652) Waiting for machine to stop 86/120
	I0729 19:38:52.239719 1119473 main.go:141] libmachine: (default-k8s-diff-port-024652) Waiting for machine to stop 87/120
	I0729 19:38:53.241113 1119473 main.go:141] libmachine: (default-k8s-diff-port-024652) Waiting for machine to stop 88/120
	I0729 19:38:54.242663 1119473 main.go:141] libmachine: (default-k8s-diff-port-024652) Waiting for machine to stop 89/120
	I0729 19:38:55.244853 1119473 main.go:141] libmachine: (default-k8s-diff-port-024652) Waiting for machine to stop 90/120
	I0729 19:38:56.246363 1119473 main.go:141] libmachine: (default-k8s-diff-port-024652) Waiting for machine to stop 91/120
	I0729 19:38:57.247779 1119473 main.go:141] libmachine: (default-k8s-diff-port-024652) Waiting for machine to stop 92/120
	I0729 19:38:58.249202 1119473 main.go:141] libmachine: (default-k8s-diff-port-024652) Waiting for machine to stop 93/120
	I0729 19:38:59.250593 1119473 main.go:141] libmachine: (default-k8s-diff-port-024652) Waiting for machine to stop 94/120
	I0729 19:39:00.252495 1119473 main.go:141] libmachine: (default-k8s-diff-port-024652) Waiting for machine to stop 95/120
	I0729 19:39:01.253889 1119473 main.go:141] libmachine: (default-k8s-diff-port-024652) Waiting for machine to stop 96/120
	I0729 19:39:02.255159 1119473 main.go:141] libmachine: (default-k8s-diff-port-024652) Waiting for machine to stop 97/120
	I0729 19:39:03.256426 1119473 main.go:141] libmachine: (default-k8s-diff-port-024652) Waiting for machine to stop 98/120
	I0729 19:39:04.257856 1119473 main.go:141] libmachine: (default-k8s-diff-port-024652) Waiting for machine to stop 99/120
	I0729 19:39:05.260287 1119473 main.go:141] libmachine: (default-k8s-diff-port-024652) Waiting for machine to stop 100/120
	I0729 19:39:06.261721 1119473 main.go:141] libmachine: (default-k8s-diff-port-024652) Waiting for machine to stop 101/120
	I0729 19:39:07.263038 1119473 main.go:141] libmachine: (default-k8s-diff-port-024652) Waiting for machine to stop 102/120
	I0729 19:39:08.265253 1119473 main.go:141] libmachine: (default-k8s-diff-port-024652) Waiting for machine to stop 103/120
	I0729 19:39:09.266488 1119473 main.go:141] libmachine: (default-k8s-diff-port-024652) Waiting for machine to stop 104/120
	I0729 19:39:10.268328 1119473 main.go:141] libmachine: (default-k8s-diff-port-024652) Waiting for machine to stop 105/120
	I0729 19:39:11.269873 1119473 main.go:141] libmachine: (default-k8s-diff-port-024652) Waiting for machine to stop 106/120
	I0729 19:39:12.271274 1119473 main.go:141] libmachine: (default-k8s-diff-port-024652) Waiting for machine to stop 107/120
	I0729 19:39:13.272729 1119473 main.go:141] libmachine: (default-k8s-diff-port-024652) Waiting for machine to stop 108/120
	I0729 19:39:14.273983 1119473 main.go:141] libmachine: (default-k8s-diff-port-024652) Waiting for machine to stop 109/120
	I0729 19:39:15.275905 1119473 main.go:141] libmachine: (default-k8s-diff-port-024652) Waiting for machine to stop 110/120
	I0729 19:39:16.277382 1119473 main.go:141] libmachine: (default-k8s-diff-port-024652) Waiting for machine to stop 111/120
	I0729 19:39:17.278578 1119473 main.go:141] libmachine: (default-k8s-diff-port-024652) Waiting for machine to stop 112/120
	I0729 19:39:18.279867 1119473 main.go:141] libmachine: (default-k8s-diff-port-024652) Waiting for machine to stop 113/120
	I0729 19:39:19.281346 1119473 main.go:141] libmachine: (default-k8s-diff-port-024652) Waiting for machine to stop 114/120
	I0729 19:39:20.283171 1119473 main.go:141] libmachine: (default-k8s-diff-port-024652) Waiting for machine to stop 115/120
	I0729 19:39:21.284489 1119473 main.go:141] libmachine: (default-k8s-diff-port-024652) Waiting for machine to stop 116/120
	I0729 19:39:22.285891 1119473 main.go:141] libmachine: (default-k8s-diff-port-024652) Waiting for machine to stop 117/120
	I0729 19:39:23.287344 1119473 main.go:141] libmachine: (default-k8s-diff-port-024652) Waiting for machine to stop 118/120
	I0729 19:39:24.289613 1119473 main.go:141] libmachine: (default-k8s-diff-port-024652) Waiting for machine to stop 119/120
	I0729 19:39:25.290996 1119473 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0729 19:39:25.291073 1119473 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0729 19:39:25.293140 1119473 out.go:177] 
	W0729 19:39:25.294610 1119473 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0729 19:39:25.294634 1119473 out.go:239] * 
	* 
	W0729 19:39:25.298538 1119473 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 19:39:25.299717 1119473 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p default-k8s-diff-port-024652 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-024652 -n default-k8s-diff-port-024652
E0729 19:39:35.044475 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/enable-default-cni-184620/client.crt: no such file or directory
E0729 19:39:38.333287 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/calico-184620/client.crt: no such file or directory
E0729 19:39:40.453429 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/auto-184620/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-024652 -n default-k8s-diff-port-024652: exit status 3 (18.433395373s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0729 19:39:43.735196 1120347 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.100:22: connect: no route to host
	E0729 19:39:43.735218 1120347 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.100:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-024652" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Stop (138.90s)
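
The same stop/verify sequence drives this failure: the harness runs `out/minikube-linux-amd64 stop -p <profile> --alsologtostderr -v=3`, and when that exits with status 82 the post-mortem checks `status --format={{.Host}}`, expecting "Stopped" but finding "Error". Below is a hedged sketch of that sequence using os/exec; the binary path, flags, and profile name are copied from the log, but the helper itself is illustrative and not the test's real code (which lives in start_stop_delete_test.go and helpers_test.go).

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

const minikubeBin = "out/minikube-linux-amd64" // path as used in the log above

// stopAndVerify stops the profile and then checks the host state via
// `minikube status --format={{.Host}}`, which the test expects to print "Stopped".
func stopAndVerify(profile string) error {
	stop := exec.Command(minikubeBin, "stop", "-p", profile, "--alsologtostderr", "-v=3")
	if out, err := stop.CombinedOutput(); err != nil {
		// In the failures above this is where exit status 82 (GUEST_STOP_TIMEOUT) surfaces.
		return fmt.Errorf("stop failed: %v\n%s", err, out)
	}
	status := exec.Command(minikubeBin, "status", "--format={{.Host}}", "-p", profile, "-n", profile)
	out, err := status.CombinedOutput()
	host := strings.TrimSpace(string(out))
	if err != nil || host != "Stopped" {
		// The post-mortems above hit this branch: exit status 3 with host state "Error".
		return fmt.Errorf("expected host state %q, got %q (err: %v)", "Stopped", host, err)
	}
	return nil
}

func main() {
	if err := stopAndVerify("default-k8s-diff-port-024652"); err != nil {
		fmt.Println(err)
	}
}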

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-843792 -n no-preload-843792
E0729 19:38:44.131626 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/custom-flannel-184620/client.crt: no such file or directory
E0729 19:38:44.136921 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/custom-flannel-184620/client.crt: no such file or directory
E0729 19:38:44.147171 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/custom-flannel-184620/client.crt: no such file or directory
E0729 19:38:44.167382 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/custom-flannel-184620/client.crt: no such file or directory
E0729 19:38:44.207646 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/custom-flannel-184620/client.crt: no such file or directory
E0729 19:38:44.288030 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/custom-flannel-184620/client.crt: no such file or directory
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-843792 -n no-preload-843792: exit status 3 (3.167517621s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0729 19:38:44.439211 1119807 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.248:22: connect: no route to host
	E0729 19:38:44.439233 1119807 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.248:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-843792 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E0729 19:38:44.448723 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/custom-flannel-184620/client.crt: no such file or directory
E0729 19:38:44.769372 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/custom-flannel-184620/client.crt: no such file or directory
E0729 19:38:45.410447 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/custom-flannel-184620/client.crt: no such file or directory
E0729 19:38:46.690864 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/custom-flannel-184620/client.crt: no such file or directory
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p no-preload-843792 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.154242313s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.50.248:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p no-preload-843792 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-843792 -n no-preload-843792
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-843792 -n no-preload-843792: exit status 3 (3.06153095s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0729 19:38:53.655314 1119902 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.248:22: connect: no route to host
	E0729 19:38:53.655344 1119902 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.248:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-843792" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.38s)
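
Each post-mortem status error above reduces to the same symptom: the status probe cannot open an SSH session to the guest ("dial tcp <ip>:22: connect: no route to host"), so the host is reported as "Error" rather than "Stopped" and the command exits with status 3; the subsequent `addons enable dashboard` then fails for the same reason. The sketch below shows a reachability probe of that kind; the mapping from a failed dial to an "Error" state is illustrative only, not minikube's actual status logic.

package main

import (
	"fmt"
	"net"
	"time"
)

// probeHostState dials the guest's SSH port and returns "Running" if the
// connection succeeds within the timeout, "Error" otherwise - roughly the
// condition behind the `status --format={{.Host}}` output above when the
// dial fails with "no route to host".
func probeHostState(ip string, timeout time.Duration) string {
	conn, err := net.DialTimeout("tcp", net.JoinHostPort(ip, "22"), timeout)
	if err != nil {
		fmt.Printf("status error: %v\n", err)
		return "Error"
	}
	conn.Close()
	return "Running"
}

func main() {
	// 192.168.50.248 is the no-preload guest IP from the log above.
	fmt.Println(probeHostState("192.168.50.248", 3*time.Second))
}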

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-358053 -n embed-certs-358053
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-358053 -n embed-certs-358053: exit status 3 (3.16757577s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0729 19:39:10.807177 1120040 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.201:22: connect: no route to host
	E0729 19:39:10.807199 1120040 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.201:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-358053 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E0729 19:39:14.564539 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/enable-default-cni-184620/client.crt: no such file or directory
E0729 19:39:14.569795 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/enable-default-cni-184620/client.crt: no such file or directory
E0729 19:39:14.580092 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/enable-default-cni-184620/client.crt: no such file or directory
E0729 19:39:14.600333 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/enable-default-cni-184620/client.crt: no such file or directory
E0729 19:39:14.640554 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/enable-default-cni-184620/client.crt: no such file or directory
E0729 19:39:14.720841 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/enable-default-cni-184620/client.crt: no such file or directory
E0729 19:39:14.881283 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/enable-default-cni-184620/client.crt: no such file or directory
E0729 19:39:15.201519 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/enable-default-cni-184620/client.crt: no such file or directory
E0729 19:39:15.841713 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/enable-default-cni-184620/client.crt: no such file or directory
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p embed-certs-358053 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.154004075s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.61.201:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p embed-certs-358053 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-358053 -n embed-certs-358053
E0729 19:39:17.122819 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/enable-default-cni-184620/client.crt: no such file or directory
E0729 19:39:19.683032 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/enable-default-cni-184620/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-358053 -n embed-certs-358053: exit status 3 (3.06200023s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0729 19:39:20.023377 1120250 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.201:22: connect: no route to host
	E0729 19:39:20.023399 1120250 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.201:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-358053" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (0.47s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-021528 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-021528 create -f testdata/busybox.yaml: exit status 1 (42.902184ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-021528" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-021528 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-021528 -n old-k8s-version-021528
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-021528 -n old-k8s-version-021528: exit status 6 (215.252034ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0729 19:39:08.546481 1120111 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-021528" does not appear in /home/jenkins/minikube-integration/19312-1055011/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-021528" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-021528 -n old-k8s-version-021528
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-021528 -n old-k8s-version-021528: exit status 6 (214.843029ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0729 19:39:08.761389 1120140 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-021528" does not appear in /home/jenkins/minikube-integration/19312-1055011/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-021528" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.47s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (105.7s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-021528 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-021528 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m45.438654731s)

                                                
                                                
-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:207: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-021528 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-021528 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-021528 describe deploy/metrics-server -n kube-system: exit status 1 (44.826ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-021528" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-021528 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-021528 -n old-k8s-version-021528
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-021528 -n old-k8s-version-021528: exit status 6 (216.133561ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0729 19:40:54.461517 1120835 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-021528" does not appear in /home/jenkins/minikube-integration/19312-1055011/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-021528" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (105.70s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-024652 -n default-k8s-diff-port-024652
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-024652 -n default-k8s-diff-port-024652: exit status 3 (3.167930842s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0729 19:39:46.903274 1120442 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.100:22: connect: no route to host
	E0729 19:39:46.903298 1120442 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.100:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-024652 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-024652 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.153140502s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.72.100:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-024652 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-024652 -n default-k8s-diff-port-024652
E0729 19:39:55.525501 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/enable-default-cni-184620/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-024652 -n default-k8s-diff-port-024652: exit status 3 (3.062347623s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0729 19:39:56.119267 1120541 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.100:22: connect: no route to host
	E0729 19:39:56.119297 1120541 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.100:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-024652" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (703.72s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-021528 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
E0729 19:41:00.254841 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/calico-184620/client.crt: no such file or directory
E0729 19:41:06.918144 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/bridge-184620/client.crt: no such file or directory
E0729 19:41:23.430626 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/flannel-184620/client.crt: no such file or directory
E0729 19:41:27.398369 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/bridge-184620/client.crt: no such file or directory
E0729 19:41:27.975232 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/custom-flannel-184620/client.crt: no such file or directory
E0729 19:41:56.609909 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/auto-184620/client.crt: no such file or directory
E0729 19:41:58.407529 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/enable-default-cni-184620/client.crt: no such file or directory
E0729 19:42:08.359056 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/bridge-184620/client.crt: no such file or directory
E0729 19:42:24.294564 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/auto-184620/client.crt: no such file or directory
E0729 19:42:35.028942 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/kindnet-184620/client.crt: no such file or directory
E0729 19:42:45.351817 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/flannel-184620/client.crt: no such file or directory
E0729 19:43:00.968880 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/functional-728029/client.crt: no such file or directory
E0729 19:43:02.712408 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/kindnet-184620/client.crt: no such file or directory
E0729 19:43:16.410942 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/calico-184620/client.crt: no such file or directory
E0729 19:43:30.280378 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/bridge-184620/client.crt: no such file or directory
E0729 19:43:44.095779 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/calico-184620/client.crt: no such file or directory
E0729 19:43:44.131022 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/custom-flannel-184620/client.crt: no such file or directory
E0729 19:44:11.816390 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/custom-flannel-184620/client.crt: no such file or directory
E0729 19:44:14.563977 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/enable-default-cni-184620/client.crt: no such file or directory
E0729 19:44:24.014237 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/functional-728029/client.crt: no such file or directory
E0729 19:44:42.248740 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/enable-default-cni-184620/client.crt: no such file or directory
E0729 19:45:01.508965 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/flannel-184620/client.crt: no such file or directory
E0729 19:45:29.192805 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/flannel-184620/client.crt: no such file or directory
E0729 19:45:34.135504 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/addons-685520/client.crt: no such file or directory
E0729 19:45:46.437779 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/bridge-184620/client.crt: no such file or directory
E0729 19:46:14.121869 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/bridge-184620/client.crt: no such file or directory
E0729 19:46:56.610654 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/auto-184620/client.crt: no such file or directory
E0729 19:47:35.028402 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/kindnet-184620/client.crt: no such file or directory
E0729 19:48:00.968893 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/functional-728029/client.crt: no such file or directory
E0729 19:48:16.410877 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/calico-184620/client.crt: no such file or directory
E0729 19:48:44.130957 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/custom-flannel-184620/client.crt: no such file or directory
E0729 19:49:14.564757 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/enable-default-cni-184620/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-021528 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (11m40.29846052s)

                                                
                                                
-- stdout --
	* [old-k8s-version-021528] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19312
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19312-1055011/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19312-1055011/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	* Using the kvm2 driver based on existing profile
	* Starting "old-k8s-version-021528" primary control-plane node in "old-k8s-version-021528" cluster
	* Restarting existing kvm2 VM for "old-k8s-version-021528" ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 19:40:57.978681 1120970 out.go:291] Setting OutFile to fd 1 ...
	I0729 19:40:57.978791 1120970 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 19:40:57.978802 1120970 out.go:304] Setting ErrFile to fd 2...
	I0729 19:40:57.978806 1120970 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 19:40:57.979026 1120970 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19312-1055011/.minikube/bin
	I0729 19:40:57.979596 1120970 out.go:298] Setting JSON to false
	I0729 19:40:57.980589 1120970 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":12210,"bootTime":1722269848,"procs":198,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 19:40:57.980644 1120970 start.go:139] virtualization: kvm guest
	I0729 19:40:57.982865 1120970 out.go:177] * [old-k8s-version-021528] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 19:40:57.984265 1120970 out.go:177]   - MINIKUBE_LOCATION=19312
	I0729 19:40:57.984290 1120970 notify.go:220] Checking for updates...
	I0729 19:40:57.986747 1120970 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 19:40:57.987926 1120970 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19312-1055011/kubeconfig
	I0729 19:40:57.989034 1120970 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19312-1055011/.minikube
	I0729 19:40:57.990155 1120970 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 19:40:57.991151 1120970 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 19:40:57.992788 1120970 config.go:182] Loaded profile config "old-k8s-version-021528": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0729 19:40:57.993431 1120970 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:40:57.993513 1120970 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:40:58.008423 1120970 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35781
	I0729 19:40:58.008809 1120970 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:40:58.009278 1120970 main.go:141] libmachine: Using API Version  1
	I0729 19:40:58.009298 1120970 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:40:58.009623 1120970 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:40:58.009801 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .DriverName
	I0729 19:40:58.011523 1120970 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0729 19:40:58.012638 1120970 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 19:40:58.012915 1120970 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:40:58.012949 1120970 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:40:58.027302 1120970 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38245
	I0729 19:40:58.027641 1120970 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:40:58.028112 1120970 main.go:141] libmachine: Using API Version  1
	I0729 19:40:58.028144 1120970 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:40:58.028470 1120970 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:40:58.028677 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .DriverName
	I0729 19:40:58.062833 1120970 out.go:177] * Using the kvm2 driver based on existing profile
	I0729 19:40:58.064034 1120970 start.go:297] selected driver: kvm2
	I0729 19:40:58.064048 1120970 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-021528 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-021528 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.65 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 19:40:58.064180 1120970 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 19:40:58.065210 1120970 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 19:40:58.065308 1120970 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19312-1055011/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 19:40:58.079987 1120970 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0729 19:40:58.080369 1120970 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 19:40:58.080432 1120970 cni.go:84] Creating CNI manager for ""
	I0729 19:40:58.080446 1120970 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 19:40:58.080487 1120970 start.go:340] cluster config:
	{Name:old-k8s-version-021528 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-021528 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.65 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 19:40:58.080598 1120970 iso.go:125] acquiring lock: {Name:mk0af61c0fec1fd47930e548d03010a532c687b7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 19:40:58.082281 1120970 out.go:177] * Starting "old-k8s-version-021528" primary control-plane node in "old-k8s-version-021528" cluster
	I0729 19:40:58.083538 1120970 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0729 19:40:58.083567 1120970 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0729 19:40:58.083574 1120970 cache.go:56] Caching tarball of preloaded images
	I0729 19:40:58.083648 1120970 preload.go:172] Found /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 19:40:58.083657 1120970 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0729 19:40:58.083744 1120970 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/old-k8s-version-021528/config.json ...
	I0729 19:40:58.083909 1120970 start.go:360] acquireMachinesLock for old-k8s-version-021528: {Name:mk0d8d947666df844b5fc2c0e0eebbfed69b4140 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 19:44:11.419483 1120970 start.go:364] duration metric: took 3m13.335541366s to acquireMachinesLock for "old-k8s-version-021528"
	I0729 19:44:11.419549 1120970 start.go:96] Skipping create...Using existing machine configuration
	I0729 19:44:11.419560 1120970 fix.go:54] fixHost starting: 
	I0729 19:44:11.419981 1120970 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:44:11.420020 1120970 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:44:11.437552 1120970 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44419
	I0729 19:44:11.437927 1120970 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:44:11.438424 1120970 main.go:141] libmachine: Using API Version  1
	I0729 19:44:11.438449 1120970 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:44:11.438787 1120970 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:44:11.438995 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .DriverName
	I0729 19:44:11.439201 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetState
	I0729 19:44:11.440476 1120970 fix.go:112] recreateIfNeeded on old-k8s-version-021528: state=Stopped err=<nil>
	I0729 19:44:11.440514 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .DriverName
	W0729 19:44:11.440692 1120970 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 19:44:11.442528 1120970 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-021528" ...
	I0729 19:44:11.443772 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .Start
	I0729 19:44:11.443926 1120970 main.go:141] libmachine: (old-k8s-version-021528) Ensuring networks are active...
	I0729 19:44:11.444570 1120970 main.go:141] libmachine: (old-k8s-version-021528) Ensuring network default is active
	I0729 19:44:11.444890 1120970 main.go:141] libmachine: (old-k8s-version-021528) Ensuring network mk-old-k8s-version-021528 is active
	I0729 19:44:11.445234 1120970 main.go:141] libmachine: (old-k8s-version-021528) Getting domain xml...
	I0729 19:44:11.445994 1120970 main.go:141] libmachine: (old-k8s-version-021528) Creating domain...
	I0729 19:44:12.696734 1120970 main.go:141] libmachine: (old-k8s-version-021528) Waiting to get IP...
	I0729 19:44:12.697599 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:12.697967 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | unable to find current IP address of domain old-k8s-version-021528 in network mk-old-k8s-version-021528
	I0729 19:44:12.698075 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | I0729 19:44:12.697953 1121841 retry.go:31] will retry after 228.228482ms: waiting for machine to come up
	I0729 19:44:12.927713 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:12.928250 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | unable to find current IP address of domain old-k8s-version-021528 in network mk-old-k8s-version-021528
	I0729 19:44:12.928280 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | I0729 19:44:12.928204 1121841 retry.go:31] will retry after 241.659418ms: waiting for machine to come up
	I0729 19:44:13.171713 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:13.172206 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | unable to find current IP address of domain old-k8s-version-021528 in network mk-old-k8s-version-021528
	I0729 19:44:13.172234 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | I0729 19:44:13.172165 1121841 retry.go:31] will retry after 475.69466ms: waiting for machine to come up
	I0729 19:44:13.649741 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:13.650180 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | unable to find current IP address of domain old-k8s-version-021528 in network mk-old-k8s-version-021528
	I0729 19:44:13.650210 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | I0729 19:44:13.650126 1121841 retry.go:31] will retry after 556.03832ms: waiting for machine to come up
	I0729 19:44:14.207549 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:14.208045 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | unable to find current IP address of domain old-k8s-version-021528 in network mk-old-k8s-version-021528
	I0729 19:44:14.208080 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | I0729 19:44:14.207996 1121841 retry.go:31] will retry after 699.802636ms: waiting for machine to come up
	I0729 19:44:14.909153 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:14.909708 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | unable to find current IP address of domain old-k8s-version-021528 in network mk-old-k8s-version-021528
	I0729 19:44:14.909736 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | I0729 19:44:14.909677 1121841 retry.go:31] will retry after 756.053302ms: waiting for machine to come up
	I0729 19:44:15.667015 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:15.667487 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | unable to find current IP address of domain old-k8s-version-021528 in network mk-old-k8s-version-021528
	I0729 19:44:15.667518 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | I0729 19:44:15.667434 1121841 retry.go:31] will retry after 729.442111ms: waiting for machine to come up
	I0729 19:44:16.398540 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:16.399139 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | unable to find current IP address of domain old-k8s-version-021528 in network mk-old-k8s-version-021528
	I0729 19:44:16.399191 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | I0729 19:44:16.399060 1121841 retry.go:31] will retry after 1.131574034s: waiting for machine to come up
	I0729 19:44:17.531966 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:17.532448 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | unable to find current IP address of domain old-k8s-version-021528 in network mk-old-k8s-version-021528
	I0729 19:44:17.532480 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | I0729 19:44:17.532380 1121841 retry.go:31] will retry after 1.546547994s: waiting for machine to come up
	I0729 19:44:19.081196 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:19.081719 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | unable to find current IP address of domain old-k8s-version-021528 in network mk-old-k8s-version-021528
	I0729 19:44:19.081749 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | I0729 19:44:19.081668 1121841 retry.go:31] will retry after 2.079913941s: waiting for machine to come up
	I0729 19:44:21.163461 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:21.163980 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | unable to find current IP address of domain old-k8s-version-021528 in network mk-old-k8s-version-021528
	I0729 19:44:21.164066 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | I0729 19:44:21.163965 1121841 retry.go:31] will retry after 2.355802923s: waiting for machine to come up
	I0729 19:44:23.523186 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:23.523741 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | unable to find current IP address of domain old-k8s-version-021528 in network mk-old-k8s-version-021528
	I0729 19:44:23.523783 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | I0729 19:44:23.523684 1121841 retry.go:31] will retry after 2.899059572s: waiting for machine to come up
	I0729 19:44:26.426805 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:26.427211 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | unable to find current IP address of domain old-k8s-version-021528 in network mk-old-k8s-version-021528
	I0729 19:44:26.427267 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | I0729 19:44:26.427152 1121841 retry.go:31] will retry after 3.723478189s: waiting for machine to come up
	I0729 19:44:30.152545 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:30.153061 1120970 main.go:141] libmachine: (old-k8s-version-021528) Found IP for machine: 192.168.39.65
	I0729 19:44:30.153088 1120970 main.go:141] libmachine: (old-k8s-version-021528) Reserving static IP address...
	I0729 19:44:30.153101 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has current primary IP address 192.168.39.65 and MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:30.153518 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | found host DHCP lease matching {name: "old-k8s-version-021528", mac: "52:54:00:12:c7:d2", ip: "192.168.39.65"} in network mk-old-k8s-version-021528: {Iface:virbr1 ExpiryTime:2024-07-29 20:44:23 +0000 UTC Type:0 Mac:52:54:00:12:c7:d2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:old-k8s-version-021528 Clientid:01:52:54:00:12:c7:d2}
	I0729 19:44:30.153547 1120970 main.go:141] libmachine: (old-k8s-version-021528) Reserved static IP address: 192.168.39.65
	I0729 19:44:30.153567 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | skip adding static IP to network mk-old-k8s-version-021528 - found existing host DHCP lease matching {name: "old-k8s-version-021528", mac: "52:54:00:12:c7:d2", ip: "192.168.39.65"}
	I0729 19:44:30.153606 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | Getting to WaitForSSH function...
	I0729 19:44:30.153646 1120970 main.go:141] libmachine: (old-k8s-version-021528) Waiting for SSH to be available...
	I0729 19:44:30.155687 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:30.155938 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:c7:d2", ip: ""} in network mk-old-k8s-version-021528: {Iface:virbr1 ExpiryTime:2024-07-29 20:44:23 +0000 UTC Type:0 Mac:52:54:00:12:c7:d2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:old-k8s-version-021528 Clientid:01:52:54:00:12:c7:d2}
	I0729 19:44:30.155968 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined IP address 192.168.39.65 and MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:30.156104 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | Using SSH client type: external
	I0729 19:44:30.156126 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | Using SSH private key: /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/old-k8s-version-021528/id_rsa (-rw-------)
	I0729 19:44:30.156157 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.65 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/old-k8s-version-021528/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 19:44:30.156170 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | About to run SSH command:
	I0729 19:44:30.156179 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | exit 0
	I0729 19:44:30.286787 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | SSH cmd err, output: <nil>: 
	I0729 19:44:30.287161 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetConfigRaw
	I0729 19:44:30.287816 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetIP
	I0729 19:44:30.290268 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:30.290614 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:c7:d2", ip: ""} in network mk-old-k8s-version-021528: {Iface:virbr1 ExpiryTime:2024-07-29 20:44:23 +0000 UTC Type:0 Mac:52:54:00:12:c7:d2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:old-k8s-version-021528 Clientid:01:52:54:00:12:c7:d2}
	I0729 19:44:30.290645 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined IP address 192.168.39.65 and MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:30.290866 1120970 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/old-k8s-version-021528/config.json ...
	I0729 19:44:30.291054 1120970 machine.go:94] provisionDockerMachine start ...
	I0729 19:44:30.291074 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .DriverName
	I0729 19:44:30.291307 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHHostname
	I0729 19:44:30.293399 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:30.293729 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:c7:d2", ip: ""} in network mk-old-k8s-version-021528: {Iface:virbr1 ExpiryTime:2024-07-29 20:44:23 +0000 UTC Type:0 Mac:52:54:00:12:c7:d2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:old-k8s-version-021528 Clientid:01:52:54:00:12:c7:d2}
	I0729 19:44:30.293759 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined IP address 192.168.39.65 and MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:30.293872 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHPort
	I0729 19:44:30.294064 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHKeyPath
	I0729 19:44:30.294228 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHKeyPath
	I0729 19:44:30.294362 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHUsername
	I0729 19:44:30.294510 1120970 main.go:141] libmachine: Using SSH client type: native
	I0729 19:44:30.294729 1120970 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.65 22 <nil> <nil>}
	I0729 19:44:30.294741 1120970 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 19:44:30.406918 1120970 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0729 19:44:30.406947 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetMachineName
	I0729 19:44:30.407214 1120970 buildroot.go:166] provisioning hostname "old-k8s-version-021528"
	I0729 19:44:30.407256 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetMachineName
	I0729 19:44:30.407478 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHHostname
	I0729 19:44:30.410077 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:30.410396 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:c7:d2", ip: ""} in network mk-old-k8s-version-021528: {Iface:virbr1 ExpiryTime:2024-07-29 20:44:23 +0000 UTC Type:0 Mac:52:54:00:12:c7:d2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:old-k8s-version-021528 Clientid:01:52:54:00:12:c7:d2}
	I0729 19:44:30.410421 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined IP address 192.168.39.65 and MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:30.410586 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHPort
	I0729 19:44:30.410766 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHKeyPath
	I0729 19:44:30.410932 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHKeyPath
	I0729 19:44:30.411068 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHUsername
	I0729 19:44:30.411245 1120970 main.go:141] libmachine: Using SSH client type: native
	I0729 19:44:30.411488 1120970 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.65 22 <nil> <nil>}
	I0729 19:44:30.411503 1120970 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-021528 && echo "old-k8s-version-021528" | sudo tee /etc/hostname
	I0729 19:44:30.541004 1120970 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-021528
	
	I0729 19:44:30.541037 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHHostname
	I0729 19:44:30.543946 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:30.544343 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:c7:d2", ip: ""} in network mk-old-k8s-version-021528: {Iface:virbr1 ExpiryTime:2024-07-29 20:44:23 +0000 UTC Type:0 Mac:52:54:00:12:c7:d2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:old-k8s-version-021528 Clientid:01:52:54:00:12:c7:d2}
	I0729 19:44:30.544372 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined IP address 192.168.39.65 and MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:30.544503 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHPort
	I0729 19:44:30.544694 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHKeyPath
	I0729 19:44:30.544856 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHKeyPath
	I0729 19:44:30.545032 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHUsername
	I0729 19:44:30.545233 1120970 main.go:141] libmachine: Using SSH client type: native
	I0729 19:44:30.545409 1120970 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.65 22 <nil> <nil>}
	I0729 19:44:30.545424 1120970 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-021528' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-021528/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-021528' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 19:44:30.665246 1120970 main.go:141] libmachine: SSH cmd err, output: <nil>: 
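The shell snippet the provisioner just ran keeps /etc/hosts idempotent: it only rewrites the 127.0.1.1 entry when the node name is not already present, otherwise it appends a new line. A minimal Go sketch of the same check-then-edit logic (the hostname and path are taken from the log; this is an illustration, not minikube's own code):

    package main

    import (
        "fmt"
        "os"
        "regexp"
        "strings"
    )

    // ensureHostsEntry mirrors the logged shell: if hostname is missing from the
    // hosts file, either rewrite an existing 127.0.1.1 line or append a new one.
    func ensureHostsEntry(path, hostname string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        // Already mapped somewhere in the file? Then leave it alone.
        if regexp.MustCompile(`(?m)\s` + regexp.QuoteMeta(hostname) + `$`).Match(data) {
            return nil
        }
        loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
        var out string
        if loopback.Match(data) {
            out = loopback.ReplaceAllString(string(data), "127.0.1.1 "+hostname)
        } else {
            out = strings.TrimRight(string(data), "\n") + fmt.Sprintf("\n127.0.1.1 %s\n", hostname)
        }
        return os.WriteFile(path, []byte(out), 0644)
    }

    func main() {
        if err := ensureHostsEntry("/etc/hosts", "old-k8s-version-021528"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }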
	I0729 19:44:30.665281 1120970 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19312-1055011/.minikube CaCertPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19312-1055011/.minikube}
	I0729 19:44:30.665317 1120970 buildroot.go:174] setting up certificates
	I0729 19:44:30.665328 1120970 provision.go:84] configureAuth start
	I0729 19:44:30.665339 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetMachineName
	I0729 19:44:30.665621 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetIP
	I0729 19:44:30.668162 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:30.668540 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:c7:d2", ip: ""} in network mk-old-k8s-version-021528: {Iface:virbr1 ExpiryTime:2024-07-29 20:44:23 +0000 UTC Type:0 Mac:52:54:00:12:c7:d2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:old-k8s-version-021528 Clientid:01:52:54:00:12:c7:d2}
	I0729 19:44:30.668568 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined IP address 192.168.39.65 and MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:30.668743 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHHostname
	I0729 19:44:30.670898 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:30.671447 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:c7:d2", ip: ""} in network mk-old-k8s-version-021528: {Iface:virbr1 ExpiryTime:2024-07-29 20:44:23 +0000 UTC Type:0 Mac:52:54:00:12:c7:d2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:old-k8s-version-021528 Clientid:01:52:54:00:12:c7:d2}
	I0729 19:44:30.671471 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined IP address 192.168.39.65 and MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:30.671618 1120970 provision.go:143] copyHostCerts
	I0729 19:44:30.671691 1120970 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-1055011/.minikube/cert.pem, removing ...
	I0729 19:44:30.671710 1120970 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-1055011/.minikube/cert.pem
	I0729 19:44:30.671790 1120970 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19312-1055011/.minikube/cert.pem (1123 bytes)
	I0729 19:44:30.671907 1120970 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-1055011/.minikube/key.pem, removing ...
	I0729 19:44:30.671917 1120970 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-1055011/.minikube/key.pem
	I0729 19:44:30.671953 1120970 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19312-1055011/.minikube/key.pem (1679 bytes)
	I0729 19:44:30.672043 1120970 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.pem, removing ...
	I0729 19:44:30.672052 1120970 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.pem
	I0729 19:44:30.672085 1120970 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.pem (1082 bytes)
	I0729 19:44:30.672166 1120970 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-021528 san=[127.0.0.1 192.168.39.65 localhost minikube old-k8s-version-021528]
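The line above is provision.go issuing a server certificate signed by the shared minikube CA, with the SAN list shown (127.0.0.1, 192.168.39.65, localhost, minikube, old-k8s-version-021528). A rough stand-alone sketch of that step with Go's crypto/x509; the file names, PKCS#1 CA key format, and three-year validity are assumptions, and minikube's real implementation differs in detail:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func check(err error) {
        if err != nil {
            panic(err)
        }
    }

    func main() {
        // Load the CA certificate and key (paths are placeholders; assumes an RSA PKCS#1 key).
        caPEM, err := os.ReadFile("ca.pem")
        check(err)
        caKeyPEM, err := os.ReadFile("ca-key.pem")
        check(err)
        caBlock, _ := pem.Decode(caPEM)
        caCert, err := x509.ParseCertificate(caBlock.Bytes)
        check(err)
        keyBlock, _ := pem.Decode(caKeyPEM)
        caKey, err := x509.ParsePKCS1PrivateKey(keyBlock.Bytes)
        check(err)

        // Fresh key pair for the server certificate.
        serverKey, err := rsa.GenerateKey(rand.Reader, 2048)
        check(err)

        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(time.Now().UnixNano()),
            Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-021528"}},
            NotBefore:    time.Now().Add(-time.Hour),
            NotAfter:     time.Now().AddDate(3, 0, 0), // assumed validity window
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:     []string{"localhost", "minikube", "old-k8s-version-021528"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.65")},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
        check(err)
        check(pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}))
    }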
	I0729 19:44:30.888016 1120970 provision.go:177] copyRemoteCerts
	I0729 19:44:30.888072 1120970 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 19:44:30.888115 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHHostname
	I0729 19:44:30.890739 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:30.891115 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:c7:d2", ip: ""} in network mk-old-k8s-version-021528: {Iface:virbr1 ExpiryTime:2024-07-29 20:44:23 +0000 UTC Type:0 Mac:52:54:00:12:c7:d2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:old-k8s-version-021528 Clientid:01:52:54:00:12:c7:d2}
	I0729 19:44:30.891148 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined IP address 192.168.39.65 and MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:30.891288 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHPort
	I0729 19:44:30.891499 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHKeyPath
	I0729 19:44:30.891689 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHUsername
	I0729 19:44:30.891862 1120970 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/old-k8s-version-021528/id_rsa Username:docker}
	I0729 19:44:30.976898 1120970 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0729 19:44:31.000793 1120970 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0729 19:44:31.024837 1120970 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0729 19:44:31.048325 1120970 provision.go:87] duration metric: took 382.981006ms to configureAuth
	I0729 19:44:31.048358 1120970 buildroot.go:189] setting minikube options for container-runtime
	I0729 19:44:31.048560 1120970 config.go:182] Loaded profile config "old-k8s-version-021528": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0729 19:44:31.048640 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHHostname
	I0729 19:44:31.051230 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:31.051576 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:c7:d2", ip: ""} in network mk-old-k8s-version-021528: {Iface:virbr1 ExpiryTime:2024-07-29 20:44:23 +0000 UTC Type:0 Mac:52:54:00:12:c7:d2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:old-k8s-version-021528 Clientid:01:52:54:00:12:c7:d2}
	I0729 19:44:31.051605 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined IP address 192.168.39.65 and MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:31.051754 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHPort
	I0729 19:44:31.051994 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHKeyPath
	I0729 19:44:31.052191 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHKeyPath
	I0729 19:44:31.052368 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHUsername
	I0729 19:44:31.052568 1120970 main.go:141] libmachine: Using SSH client type: native
	I0729 19:44:31.052828 1120970 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.65 22 <nil> <nil>}
	I0729 19:44:31.052853 1120970 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 19:44:31.320227 1120970 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 19:44:31.320259 1120970 machine.go:97] duration metric: took 1.0291903s to provisionDockerMachine
	I0729 19:44:31.320276 1120970 start.go:293] postStartSetup for "old-k8s-version-021528" (driver="kvm2")
	I0729 19:44:31.320297 1120970 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 19:44:31.320335 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .DriverName
	I0729 19:44:31.320669 1120970 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 19:44:31.320702 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHHostname
	I0729 19:44:31.323379 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:31.323774 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:c7:d2", ip: ""} in network mk-old-k8s-version-021528: {Iface:virbr1 ExpiryTime:2024-07-29 20:44:23 +0000 UTC Type:0 Mac:52:54:00:12:c7:d2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:old-k8s-version-021528 Clientid:01:52:54:00:12:c7:d2}
	I0729 19:44:31.323807 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined IP address 192.168.39.65 and MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:31.323903 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHPort
	I0729 19:44:31.324112 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHKeyPath
	I0729 19:44:31.324291 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHUsername
	I0729 19:44:31.324431 1120970 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/old-k8s-version-021528/id_rsa Username:docker}
	I0729 19:44:31.415208 1120970 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 19:44:31.419884 1120970 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 19:44:31.419911 1120970 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-1055011/.minikube/addons for local assets ...
	I0729 19:44:31.419981 1120970 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-1055011/.minikube/files for local assets ...
	I0729 19:44:31.420093 1120970 filesync.go:149] local asset: /home/jenkins/minikube-integration/19312-1055011/.minikube/files/etc/ssl/certs/10622722.pem -> 10622722.pem in /etc/ssl/certs
	I0729 19:44:31.420214 1120970 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 19:44:31.431055 1120970 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/files/etc/ssl/certs/10622722.pem --> /etc/ssl/certs/10622722.pem (1708 bytes)
	I0729 19:44:31.454082 1120970 start.go:296] duration metric: took 133.793908ms for postStartSetup
	I0729 19:44:31.454120 1120970 fix.go:56] duration metric: took 20.034560069s for fixHost
	I0729 19:44:31.454147 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHHostname
	I0729 19:44:31.456757 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:31.457097 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:c7:d2", ip: ""} in network mk-old-k8s-version-021528: {Iface:virbr1 ExpiryTime:2024-07-29 20:44:23 +0000 UTC Type:0 Mac:52:54:00:12:c7:d2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:old-k8s-version-021528 Clientid:01:52:54:00:12:c7:d2}
	I0729 19:44:31.457130 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined IP address 192.168.39.65 and MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:31.457284 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHPort
	I0729 19:44:31.457528 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHKeyPath
	I0729 19:44:31.457737 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHKeyPath
	I0729 19:44:31.457853 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHUsername
	I0729 19:44:31.458027 1120970 main.go:141] libmachine: Using SSH client type: native
	I0729 19:44:31.458189 1120970 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.65 22 <nil> <nil>}
	I0729 19:44:31.458199 1120970 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0729 19:44:31.571713 1120970 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722282271.544930204
	
	I0729 19:44:31.571744 1120970 fix.go:216] guest clock: 1722282271.544930204
	I0729 19:44:31.571758 1120970 fix.go:229] Guest: 2024-07-29 19:44:31.544930204 +0000 UTC Remote: 2024-07-29 19:44:31.454125155 +0000 UTC m=+213.509073295 (delta=90.805049ms)
	I0729 19:44:31.571785 1120970 fix.go:200] guest clock delta is within tolerance: 90.805049ms
	I0729 19:44:31.571791 1120970 start.go:83] releasing machines lock for "old-k8s-version-021528", held for 20.152267504s
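The fix.go lines above compare the guest clock (read over SSH with `date +%s.%N`) against the host clock and only resync when the delta exceeds a tolerance; here the ~91ms skew passes. A small illustrative Go version of that comparison, using the timestamp from the log; the one-second tolerance is an assumption, not the value minikube uses:

    package main

    import (
        "fmt"
        "strconv"
        "strings"
        "time"
    )

    // parseDateNanos turns `date +%s.%N` output (e.g. "1722282271.544930204") into a time.Time.
    func parseDateNanos(s string) (time.Time, error) {
        parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
        sec, err := strconv.ParseInt(parts[0], 10, 64)
        if err != nil {
            return time.Time{}, err
        }
        var nsec int64
        if len(parts) == 2 {
            frac := (parts[1] + "000000000")[:9] // pad/truncate to nanoseconds
            if nsec, err = strconv.ParseInt(frac, 10, 64); err != nil {
                return time.Time{}, err
            }
        }
        return time.Unix(sec, nsec), nil
    }

    func main() {
        guest, _ := parseDateNanos("1722282271.544930204") // value taken from the log above
        delta := time.Since(guest)
        if delta < 0 {
            delta = -delta
        }
        const tolerance = time.Second // assumed; the real threshold lives in minikube's fix.go
        if delta <= tolerance {
            fmt.Printf("guest clock delta %v is within tolerance\n", delta)
        } else {
            fmt.Printf("guest clock skewed by %v, would resync\n", delta)
        }
    }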
	I0729 19:44:31.571817 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .DriverName
	I0729 19:44:31.572102 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetIP
	I0729 19:44:31.575385 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:31.575790 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:c7:d2", ip: ""} in network mk-old-k8s-version-021528: {Iface:virbr1 ExpiryTime:2024-07-29 20:44:23 +0000 UTC Type:0 Mac:52:54:00:12:c7:d2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:old-k8s-version-021528 Clientid:01:52:54:00:12:c7:d2}
	I0729 19:44:31.575815 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined IP address 192.168.39.65 and MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:31.576012 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .DriverName
	I0729 19:44:31.576508 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .DriverName
	I0729 19:44:31.576692 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .DriverName
	I0729 19:44:31.576786 1120970 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 19:44:31.576828 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHHostname
	I0729 19:44:31.576918 1120970 ssh_runner.go:195] Run: cat /version.json
	I0729 19:44:31.576940 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHHostname
	I0729 19:44:31.579737 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:31.579994 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:31.580091 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:c7:d2", ip: ""} in network mk-old-k8s-version-021528: {Iface:virbr1 ExpiryTime:2024-07-29 20:44:23 +0000 UTC Type:0 Mac:52:54:00:12:c7:d2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:old-k8s-version-021528 Clientid:01:52:54:00:12:c7:d2}
	I0729 19:44:31.580130 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined IP address 192.168.39.65 and MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:31.580379 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:c7:d2", ip: ""} in network mk-old-k8s-version-021528: {Iface:virbr1 ExpiryTime:2024-07-29 20:44:23 +0000 UTC Type:0 Mac:52:54:00:12:c7:d2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:old-k8s-version-021528 Clientid:01:52:54:00:12:c7:d2}
	I0729 19:44:31.580409 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined IP address 192.168.39.65 and MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:31.580418 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHPort
	I0729 19:44:31.580577 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHPort
	I0729 19:44:31.580661 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHKeyPath
	I0729 19:44:31.580838 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHKeyPath
	I0729 19:44:31.580879 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHUsername
	I0729 19:44:31.581025 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHUsername
	I0729 19:44:31.581021 1120970 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/old-k8s-version-021528/id_rsa Username:docker}
	I0729 19:44:31.581164 1120970 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/old-k8s-version-021528/id_rsa Username:docker}
	I0729 19:44:31.682902 1120970 ssh_runner.go:195] Run: systemctl --version
	I0729 19:44:31.688675 1120970 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 19:44:31.836374 1120970 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 19:44:31.844215 1120970 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 19:44:31.844275 1120970 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 19:44:31.864647 1120970 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 19:44:31.864671 1120970 start.go:495] detecting cgroup driver to use...
	I0729 19:44:31.864744 1120970 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 19:44:31.881197 1120970 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 19:44:31.895022 1120970 docker.go:217] disabling cri-docker service (if available) ...
	I0729 19:44:31.895085 1120970 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 19:44:31.908584 1120970 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 19:44:31.922321 1120970 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 19:44:32.039427 1120970 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 19:44:32.203236 1120970 docker.go:233] disabling docker service ...
	I0729 19:44:32.203335 1120970 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 19:44:32.217523 1120970 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 19:44:32.236065 1120970 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 19:44:32.355769 1120970 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 19:44:32.473160 1120970 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 19:44:32.486314 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 19:44:32.504270 1120970 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0729 19:44:32.504359 1120970 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:44:32.514928 1120970 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 19:44:32.514995 1120970 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:44:32.528822 1120970 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:44:32.543599 1120970 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:44:32.555853 1120970 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 19:44:32.568184 1120970 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 19:44:32.577443 1120970 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 19:44:32.577580 1120970 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 19:44:32.590636 1120970 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 19:44:32.600995 1120970 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 19:44:32.739544 1120970 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 19:44:32.886433 1120970 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 19:44:32.886507 1120970 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 19:44:32.892072 1120970 start.go:563] Will wait 60s for crictl version
	I0729 19:44:32.892137 1120970 ssh_runner.go:195] Run: which crictl
	I0729 19:44:32.896003 1120970 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 19:44:32.939843 1120970 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 19:44:32.939934 1120970 ssh_runner.go:195] Run: crio --version
	I0729 19:44:32.968301 1120970 ssh_runner.go:195] Run: crio --version
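Between 19:44:32.504 and 19:44:32.739 the runner rewrites /etc/crio/crio.conf.d/02-crio.conf (pause image registry.k8s.io/pause:3.2, cgroupfs cgroup manager, conmon cgroup "pod"), points crictl at the CRI-O socket, and restarts the service. The same sequence driven from Go with os/exec, for a local shell rather than ssh_runner; a sketch only, with the paths and values taken straight from the log:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    // run executes one shell command via sudo, echoing it and aborting on failure.
    func run(cmdline string) {
        fmt.Println("+", cmdline)
        cmd := exec.Command("sudo", "sh", "-c", cmdline)
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
        if err := cmd.Run(); err != nil {
            fmt.Fprintln(os.Stderr, "failed:", err)
            os.Exit(1)
        }
    }

    func main() {
        conf := "/etc/crio/crio.conf.d/02-crio.conf"
        run(`sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' ` + conf)
        run(`sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' ` + conf)
        run(`sed -i '/conmon_cgroup = .*/d' ` + conf)
        run(`sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' ` + conf)
        run(`printf '%s\n' 'runtime-endpoint: unix:///var/run/crio/crio.sock' > /etc/crictl.yaml`)
        run(`systemctl daemon-reload && systemctl restart crio`)
    }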
	I0729 19:44:32.995612 1120970 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0729 19:44:32.996971 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetIP
	I0729 19:44:33.000232 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:33.000668 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:c7:d2", ip: ""} in network mk-old-k8s-version-021528: {Iface:virbr1 ExpiryTime:2024-07-29 20:44:23 +0000 UTC Type:0 Mac:52:54:00:12:c7:d2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:old-k8s-version-021528 Clientid:01:52:54:00:12:c7:d2}
	I0729 19:44:33.000694 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined IP address 192.168.39.65 and MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:33.000856 1120970 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0729 19:44:33.005258 1120970 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 19:44:33.018698 1120970 kubeadm.go:883] updating cluster {Name:old-k8s-version-021528 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.20.0 ClusterName:old-k8s-version-021528 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.65 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fal
se MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 19:44:33.018840 1120970 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0729 19:44:33.018934 1120970 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 19:44:33.089122 1120970 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0729 19:44:33.089197 1120970 ssh_runner.go:195] Run: which lz4
	I0729 19:44:33.093346 1120970 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0729 19:44:33.097766 1120970 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0729 19:44:33.097802 1120970 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0729 19:44:34.739542 1120970 crio.go:462] duration metric: took 1.646235601s to copy over tarball
	I0729 19:44:34.739647 1120970 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0729 19:44:37.734665 1120970 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.994963407s)
	I0729 19:44:37.734702 1120970 crio.go:469] duration metric: took 2.995126134s to extract the tarball
	I0729 19:44:37.734712 1120970 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0729 19:44:37.781443 1120970 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 19:44:37.820392 1120970 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0729 19:44:37.820426 1120970 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0729 19:44:37.820531 1120970 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 19:44:37.820610 1120970 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0729 19:44:37.820708 1120970 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0729 19:44:37.820536 1120970 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 19:44:37.820560 1120970 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0729 19:44:37.820541 1120970 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0729 19:44:37.820573 1120970 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0729 19:44:37.820587 1120970 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0729 19:44:37.822301 1120970 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 19:44:37.822309 1120970 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0729 19:44:37.822313 1120970 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0729 19:44:37.822326 1120970 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0729 19:44:37.822397 1120970 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0729 19:44:37.822432 1120970 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0729 19:44:37.822438 1120970 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0729 19:44:37.822301 1120970 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 19:44:37.982782 1120970 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 19:44:37.994565 1120970 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0729 19:44:37.997227 1120970 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0729 19:44:37.997536 1120970 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0729 19:44:38.011221 1120970 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0729 19:44:38.028869 1120970 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0729 19:44:38.031221 1120970 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0729 19:44:38.054537 1120970 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0729 19:44:38.054599 1120970 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 19:44:38.054660 1120970 ssh_runner.go:195] Run: which crictl
	I0729 19:44:38.104843 1120970 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 19:44:38.182008 1120970 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0729 19:44:38.182064 1120970 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0729 19:44:38.182063 1120970 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0729 19:44:38.182113 1120970 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0729 19:44:38.182118 1120970 ssh_runner.go:195] Run: which crictl
	I0729 19:44:38.182161 1120970 ssh_runner.go:195] Run: which crictl
	I0729 19:44:38.190604 1120970 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0729 19:44:38.190629 1120970 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0729 19:44:38.190652 1120970 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0729 19:44:38.190663 1120970 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0729 19:44:38.190703 1120970 ssh_runner.go:195] Run: which crictl
	I0729 19:44:38.190710 1120970 ssh_runner.go:195] Run: which crictl
	I0729 19:44:38.197293 1120970 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0729 19:44:38.197328 1120970 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0729 19:44:38.197364 1120970 ssh_runner.go:195] Run: which crictl
	I0729 19:44:38.226035 1120970 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 19:44:38.228343 1120970 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0729 19:44:38.228420 1120970 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0729 19:44:38.228467 1120970 ssh_runner.go:195] Run: which crictl
	I0729 19:44:38.335524 1120970 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0729 19:44:38.335607 1120970 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0729 19:44:38.335627 1120970 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0729 19:44:38.335696 1120970 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0729 19:44:38.335705 1120970 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0729 19:44:38.335790 1120970 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 19:44:38.335866 1120970 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0729 19:44:38.483885 1120970 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0729 19:44:38.483976 1120970 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0729 19:44:38.483926 1120970 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0729 19:44:38.484028 1120970 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0729 19:44:38.487155 1120970 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0729 19:44:38.487223 1120970 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 19:44:38.487241 1120970 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0729 19:44:38.635433 1120970 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0729 19:44:38.649661 1120970 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0729 19:44:38.649751 1120970 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0729 19:44:38.649769 1120970 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0729 19:44:38.649831 1120970 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0729 19:44:38.649921 1120970 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0729 19:44:38.649958 1120970 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0729 19:44:38.783607 1120970 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0729 19:44:38.783694 1120970 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0729 19:44:38.783605 1120970 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0729 19:44:38.791756 1120970 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0729 19:44:38.791863 1120970 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0729 19:44:38.791892 1120970 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0729 19:44:38.791939 1120970 cache_images.go:92] duration metric: took 971.499203ms to LoadCachedImages
	W0729 19:44:38.792037 1120970 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
	I0729 19:44:38.792054 1120970 kubeadm.go:934] updating node { 192.168.39.65 8443 v1.20.0 crio true true} ...
	I0729 19:44:38.792200 1120970 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-021528 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.65
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-021528 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 19:44:38.792313 1120970 ssh_runner.go:195] Run: crio config
	I0729 19:44:38.841459 1120970 cni.go:84] Creating CNI manager for ""
	I0729 19:44:38.841484 1120970 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 19:44:38.841496 1120970 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 19:44:38.841515 1120970 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.65 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-021528 NodeName:old-k8s-version-021528 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.65"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.65 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt Stati
cPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0729 19:44:38.841678 1120970 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.65
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-021528"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.65
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.65"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0729 19:44:38.841743 1120970 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0729 19:44:38.852338 1120970 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 19:44:38.852412 1120970 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 19:44:38.862150 1120970 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0729 19:44:38.881108 1120970 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 19:44:38.899034 1120970 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
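The kubeadm.yaml just written pins cgroupDriver: cgroupfs in its KubeletConfiguration block, which has to agree with the cgroup_manager written into the CRI-O config earlier; a mismatch between the two is a classic cause of kubelet start failures on older Kubernetes versions. A quick stdlib-only Go check of that invariant (file paths are the ones from the log; minikube keeps the two consistent by construction rather than with a checker like this):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func mustRead(path string) string {
        data, err := os.ReadFile(path)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        return string(data)
    }

    func main() {
        kubeadmYAML := mustRead("/var/tmp/minikube/kubeadm.yaml.new")
        crioConf := mustRead("/etc/crio/crio.conf.d/02-crio.conf")

        kubeletCgroupfs := strings.Contains(kubeadmYAML, "cgroupDriver: cgroupfs")
        crioCgroupfs := strings.Contains(crioConf, `cgroup_manager = "cgroupfs"`)
        if kubeletCgroupfs != crioCgroupfs {
            fmt.Println("WARNING: kubelet and CRI-O disagree on the cgroup driver")
            os.Exit(1)
        }
        fmt.Println("kubelet and CRI-O both use cgroupfs:", kubeletCgroupfs)
    }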
	I0729 19:44:38.917965 1120970 ssh_runner.go:195] Run: grep 192.168.39.65	control-plane.minikube.internal$ /etc/hosts
	I0729 19:44:38.922064 1120970 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.65	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 19:44:38.935009 1120970 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 19:44:39.058886 1120970 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 19:44:39.078830 1120970 certs.go:68] Setting up /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/old-k8s-version-021528 for IP: 192.168.39.65
	I0729 19:44:39.078902 1120970 certs.go:194] generating shared ca certs ...
	I0729 19:44:39.078943 1120970 certs.go:226] acquiring lock for ca certs: {Name:mkd1f0b3d7e82ac23e713dd6b75409e103935b02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 19:44:39.079139 1120970 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.key
	I0729 19:44:39.079228 1120970 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/proxy-client-ca.key
	I0729 19:44:39.079243 1120970 certs.go:256] generating profile certs ...
	I0729 19:44:39.079418 1120970 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/old-k8s-version-021528/client.key
	I0729 19:44:39.079517 1120970 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/old-k8s-version-021528/apiserver.key.1bfec4c5
	I0729 19:44:39.079603 1120970 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/old-k8s-version-021528/proxy-client.key
	I0729 19:44:39.079814 1120970 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/1062272.pem (1338 bytes)
	W0729 19:44:39.079899 1120970 certs.go:480] ignoring /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/1062272_empty.pem, impossibly tiny 0 bytes
	I0729 19:44:39.079924 1120970 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 19:44:39.079974 1120970 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem (1082 bytes)
	I0729 19:44:39.080079 1120970 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/cert.pem (1123 bytes)
	I0729 19:44:39.080137 1120970 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/key.pem (1679 bytes)
	I0729 19:44:39.080230 1120970 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/files/etc/ssl/certs/10622722.pem (1708 bytes)
	I0729 19:44:39.081417 1120970 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 19:44:39.117623 1120970 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0729 19:44:39.163823 1120970 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 19:44:39.198978 1120970 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0729 19:44:39.229583 1120970 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/old-k8s-version-021528/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0729 19:44:39.270285 1120970 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/old-k8s-version-021528/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0729 19:44:39.320906 1120970 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/old-k8s-version-021528/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 19:44:39.358597 1120970 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/old-k8s-version-021528/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0729 19:44:39.384152 1120970 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/1062272.pem --> /usr/share/ca-certificates/1062272.pem (1338 bytes)
	I0729 19:44:39.409176 1120970 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/files/etc/ssl/certs/10622722.pem --> /usr/share/ca-certificates/10622722.pem (1708 bytes)
	I0729 19:44:39.434095 1120970 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 19:44:39.473901 1120970 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 19:44:39.493117 1120970 ssh_runner.go:195] Run: openssl version
	I0729 19:44:39.499390 1120970 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1062272.pem && ln -fs /usr/share/ca-certificates/1062272.pem /etc/ssl/certs/1062272.pem"
	I0729 19:44:39.513884 1120970 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1062272.pem
	I0729 19:44:39.519775 1120970 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 18:30 /usr/share/ca-certificates/1062272.pem
	I0729 19:44:39.519841 1120970 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1062272.pem
	I0729 19:44:39.526146 1120970 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1062272.pem /etc/ssl/certs/51391683.0"
	I0729 19:44:39.538303 1120970 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10622722.pem && ln -fs /usr/share/ca-certificates/10622722.pem /etc/ssl/certs/10622722.pem"
	I0729 19:44:39.549569 1120970 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10622722.pem
	I0729 19:44:39.554063 1120970 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 18:30 /usr/share/ca-certificates/10622722.pem
	I0729 19:44:39.554125 1120970 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10622722.pem
	I0729 19:44:39.560167 1120970 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10622722.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 19:44:39.572332 1120970 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 19:44:39.583635 1120970 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 19:44:39.588045 1120970 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 18:17 /usr/share/ca-certificates/minikubeCA.pem
	I0729 19:44:39.588126 1120970 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 19:44:39.594105 1120970 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 19:44:39.605557 1120970 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 19:44:39.610321 1120970 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 19:44:39.616786 1120970 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 19:44:39.622941 1120970 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 19:44:39.629109 1120970 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 19:44:39.636558 1120970 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 19:44:39.643073 1120970 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
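The certificate steps above rely on two standard OpenSSL conventions: trusted CAs under /etc/ssl/certs are looked up through <subject-hash>.0 symlinks, and -checkend fails if a certificate expires within the given number of seconds. A minimal stand-alone sketch of both checks, using the same paths as in the log (any host with openssl installed will do):

	# create the <hash>.0 symlink OpenSSL uses to find a trusted CA
	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"
	# exit non-zero if the cert would expire within the next 24h (86400s)
	openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400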
	I0729 19:44:39.648878 1120970 kubeadm.go:392] StartCluster: {Name:old-k8s-version-021528 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-021528 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.65 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 19:44:39.648982 1120970 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 19:44:39.649027 1120970 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 19:44:39.690983 1120970 cri.go:89] found id: ""
	I0729 19:44:39.691075 1120970 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 19:44:39.701985 1120970 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0729 19:44:39.702004 1120970 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0729 19:44:39.702052 1120970 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0729 19:44:39.712284 1120970 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0729 19:44:39.713416 1120970 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-021528" does not appear in /home/jenkins/minikube-integration/19312-1055011/kubeconfig
	I0729 19:44:39.714247 1120970 kubeconfig.go:62] /home/jenkins/minikube-integration/19312-1055011/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-021528" cluster setting kubeconfig missing "old-k8s-version-021528" context setting]
	I0729 19:44:39.715298 1120970 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-1055011/kubeconfig: {Name:mkf834b33d9b214f3561db5b8f8958d26700afbb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 19:44:39.762122 1120970 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0729 19:44:39.773851 1120970 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.65
	I0729 19:44:39.773894 1120970 kubeadm.go:1160] stopping kube-system containers ...
	I0729 19:44:39.773910 1120970 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0729 19:44:39.773968 1120970 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 19:44:39.820190 1120970 cri.go:89] found id: ""
	I0729 19:44:39.820273 1120970 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0729 19:44:39.838497 1120970 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 19:44:39.849060 1120970 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 19:44:39.849087 1120970 kubeadm.go:157] found existing configuration files:
	
	I0729 19:44:39.849142 1120970 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 19:44:39.858834 1120970 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 19:44:39.858920 1120970 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 19:44:39.869962 1120970 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 19:44:39.879690 1120970 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 19:44:39.879754 1120970 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 19:44:39.889334 1120970 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 19:44:39.900671 1120970 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 19:44:39.900789 1120970 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 19:44:39.910365 1120970 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 19:44:39.920056 1120970 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 19:44:39.920119 1120970 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
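The four grep/rm pairs above implement a single stale-config check: each kubeconfig-style file under /etc/kubernetes is kept only if it already points at the expected control-plane endpoint. Condensed into a sketch with the same endpoint and paths as in the log:

	for f in admin kubelet controller-manager scheduler; do
	  sudo grep -q 'https://control-plane.minikube.internal:8443' "/etc/kubernetes/${f}.conf" \
	    || sudo rm -f "/etc/kubernetes/${f}.conf"
	done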
	I0729 19:44:39.929792 1120970 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 19:44:39.939719 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 19:44:40.078003 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 19:44:40.827477 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0729 19:44:41.064614 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 19:44:41.168296 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
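Instead of a full kubeadm init, the restart path replays individual init phases against the generated config. Stripped of the PATH wrapper used above, the sequence is:

	sudo kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml
	sudo kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml
	sudo kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml
	sudo kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml
	sudo kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml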
	I0729 19:44:41.280875 1120970 api_server.go:52] waiting for apiserver process to appear ...
	I0729 19:44:41.280964 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:41.781878 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:42.281683 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:42.781105 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:43.281753 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:43.781580 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:44.281856 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:44.781202 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:45.281035 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:45.781637 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:46.281414 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:46.781327 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:47.281665 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:47.782033 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:48.281371 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:48.781991 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:49.281260 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:49.782025 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:50.281498 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:50.781863 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:51.281653 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:51.781015 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:52.281638 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:52.782023 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:53.281345 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:53.781221 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:54.281939 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:54.781091 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:55.281282 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:55.781375 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:56.282072 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:56.781207 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:57.281436 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:57.781372 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:58.281852 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:58.781637 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:59.281892 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:59.781645 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:00.281405 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:00.782060 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:01.281396 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:01.781327 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:02.281709 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:02.781786 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:03.281567 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:03.781335 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:04.281681 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:04.781803 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:05.281115 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:05.781161 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:06.281699 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:06.781869 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:07.281182 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:07.781016 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:08.281476 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:08.781100 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:09.281248 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:09.781661 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:10.281141 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:10.781357 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:11.281922 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:11.781751 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:12.281024 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:12.781942 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:13.281834 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:13.781128 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:14.281372 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:14.781037 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:15.281715 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:15.781353 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:16.281845 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:16.781224 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:17.281710 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:17.781353 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:18.281504 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:18.781826 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:19.281901 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:19.782011 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:20.281384 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:20.781352 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:21.281834 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:21.781603 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:22.281152 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:22.781351 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:23.281111 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:23.781931 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:24.281455 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:24.781346 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:25.281633 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:25.781092 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:26.281145 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:26.781235 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:27.281327 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:27.781099 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:28.281600 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:28.781033 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:29.281086 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:29.781358 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:30.281478 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:30.781094 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:31.281816 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:31.781092 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:32.281012 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:32.781266 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:33.281410 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:33.781923 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:34.281471 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:34.781303 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:35.281404 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:35.781727 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:36.281960 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:36.781632 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:37.281624 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:37.781232 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:38.281103 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:38.781134 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:39.281907 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:39.781863 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:40.281104 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:40.781928 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
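The block above is the apiserver wait loop: roughly every 500 ms minikube re-runs pgrep until a kube-apiserver process appears, and falls back to container and log inspection when it never does. A hedged stand-alone equivalent (the attempt count here is illustrative, not taken from the log):

	# poll for an apiserver process; give up after ~1 minute at 0.5s intervals
	for i in $(seq 1 120); do
	  if sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; then
	    echo "kube-apiserver is running"; break
	  fi
	  sleep 0.5
	done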
	I0729 19:45:41.281757 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:45:41.281864 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:45:41.322903 1120970 cri.go:89] found id: ""
	I0729 19:45:41.322929 1120970 logs.go:276] 0 containers: []
	W0729 19:45:41.322938 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:45:41.322945 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:45:41.323016 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:45:41.359651 1120970 cri.go:89] found id: ""
	I0729 19:45:41.359679 1120970 logs.go:276] 0 containers: []
	W0729 19:45:41.359687 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:45:41.359692 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:45:41.359744 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:45:41.402317 1120970 cri.go:89] found id: ""
	I0729 19:45:41.402358 1120970 logs.go:276] 0 containers: []
	W0729 19:45:41.402370 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:45:41.402380 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:45:41.402454 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:45:41.438796 1120970 cri.go:89] found id: ""
	I0729 19:45:41.438823 1120970 logs.go:276] 0 containers: []
	W0729 19:45:41.438833 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:45:41.438839 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:45:41.438931 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:45:41.477648 1120970 cri.go:89] found id: ""
	I0729 19:45:41.477677 1120970 logs.go:276] 0 containers: []
	W0729 19:45:41.477685 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:45:41.477692 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:45:41.477761 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:45:41.517603 1120970 cri.go:89] found id: ""
	I0729 19:45:41.517635 1120970 logs.go:276] 0 containers: []
	W0729 19:45:41.517646 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:45:41.517654 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:45:41.517727 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:45:41.553106 1120970 cri.go:89] found id: ""
	I0729 19:45:41.553140 1120970 logs.go:276] 0 containers: []
	W0729 19:45:41.553151 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:45:41.553158 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:45:41.553226 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:45:41.595007 1120970 cri.go:89] found id: ""
	I0729 19:45:41.595035 1120970 logs.go:276] 0 containers: []
	W0729 19:45:41.595044 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:45:41.595054 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:45:41.595069 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:45:41.634927 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:45:41.634966 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:45:41.685871 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:45:41.685906 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:45:41.700701 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:45:41.700735 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:45:41.816575 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:45:41.816598 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:45:41.816611 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
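With no control-plane containers found, the "Gathering logs" steps fall back to host-level diagnostics. Condensed from the commands above (crictl falls back to docker ps when crictl is unavailable):

	sudo journalctl -u kubelet -n 400    # kubelet log tail
	sudo journalctl -u crio -n 400       # CRI-O log tail
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	sudo crictl ps -a || sudo docker ps -a
	sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig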
	I0729 19:45:44.396592 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:44.410567 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:45:44.410644 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:45:44.447450 1120970 cri.go:89] found id: ""
	I0729 19:45:44.447487 1120970 logs.go:276] 0 containers: []
	W0729 19:45:44.447499 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:45:44.447507 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:45:44.447579 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:45:44.487679 1120970 cri.go:89] found id: ""
	I0729 19:45:44.487714 1120970 logs.go:276] 0 containers: []
	W0729 19:45:44.487725 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:45:44.487732 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:45:44.487806 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:45:44.527170 1120970 cri.go:89] found id: ""
	I0729 19:45:44.527211 1120970 logs.go:276] 0 containers: []
	W0729 19:45:44.527219 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:45:44.527226 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:45:44.527282 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:45:44.567585 1120970 cri.go:89] found id: ""
	I0729 19:45:44.567613 1120970 logs.go:276] 0 containers: []
	W0729 19:45:44.567622 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:45:44.567629 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:45:44.567680 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:45:44.605003 1120970 cri.go:89] found id: ""
	I0729 19:45:44.605031 1120970 logs.go:276] 0 containers: []
	W0729 19:45:44.605041 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:45:44.605049 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:45:44.605121 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:45:44.643862 1120970 cri.go:89] found id: ""
	I0729 19:45:44.643887 1120970 logs.go:276] 0 containers: []
	W0729 19:45:44.643894 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:45:44.643901 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:45:44.643950 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:45:44.679814 1120970 cri.go:89] found id: ""
	I0729 19:45:44.679845 1120970 logs.go:276] 0 containers: []
	W0729 19:45:44.679855 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:45:44.679862 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:45:44.679926 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:45:44.714679 1120970 cri.go:89] found id: ""
	I0729 19:45:44.714709 1120970 logs.go:276] 0 containers: []
	W0729 19:45:44.714719 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:45:44.714729 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:45:44.714747 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:45:44.766381 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:45:44.766424 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:45:44.782337 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:45:44.782369 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:45:44.854487 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:45:44.854509 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:45:44.854522 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:45:44.935043 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:45:44.935082 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:45:47.481158 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:47.496559 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:45:47.496649 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:45:47.531949 1120970 cri.go:89] found id: ""
	I0729 19:45:47.531981 1120970 logs.go:276] 0 containers: []
	W0729 19:45:47.531990 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:45:47.531996 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:45:47.532050 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:45:47.571424 1120970 cri.go:89] found id: ""
	I0729 19:45:47.571451 1120970 logs.go:276] 0 containers: []
	W0729 19:45:47.571459 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:45:47.571465 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:45:47.571517 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:45:47.610439 1120970 cri.go:89] found id: ""
	I0729 19:45:47.610474 1120970 logs.go:276] 0 containers: []
	W0729 19:45:47.610485 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:45:47.610494 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:45:47.610561 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:45:47.648351 1120970 cri.go:89] found id: ""
	I0729 19:45:47.648380 1120970 logs.go:276] 0 containers: []
	W0729 19:45:47.648388 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:45:47.648395 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:45:47.648458 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:45:47.686610 1120970 cri.go:89] found id: ""
	I0729 19:45:47.686646 1120970 logs.go:276] 0 containers: []
	W0729 19:45:47.686658 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:45:47.686667 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:45:47.686739 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:45:47.722870 1120970 cri.go:89] found id: ""
	I0729 19:45:47.722901 1120970 logs.go:276] 0 containers: []
	W0729 19:45:47.722909 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:45:47.722916 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:45:47.722978 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:45:47.757651 1120970 cri.go:89] found id: ""
	I0729 19:45:47.757690 1120970 logs.go:276] 0 containers: []
	W0729 19:45:47.757700 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:45:47.757709 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:45:47.757787 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:45:47.792737 1120970 cri.go:89] found id: ""
	I0729 19:45:47.792767 1120970 logs.go:276] 0 containers: []
	W0729 19:45:47.792776 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:45:47.792786 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:45:47.792799 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:45:47.867707 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:45:47.867734 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:45:47.867751 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:45:47.949876 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:45:47.949918 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:45:47.991014 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:45:47.991053 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:45:48.041713 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:45:48.041752 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:45:50.557028 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:50.571918 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:45:50.572012 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:45:50.608752 1120970 cri.go:89] found id: ""
	I0729 19:45:50.608783 1120970 logs.go:276] 0 containers: []
	W0729 19:45:50.608791 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:45:50.608798 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:45:50.608851 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:45:50.644225 1120970 cri.go:89] found id: ""
	I0729 19:45:50.644251 1120970 logs.go:276] 0 containers: []
	W0729 19:45:50.644261 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:45:50.644269 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:45:50.644357 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:45:50.680364 1120970 cri.go:89] found id: ""
	I0729 19:45:50.680400 1120970 logs.go:276] 0 containers: []
	W0729 19:45:50.680412 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:45:50.680420 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:45:50.680487 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:45:50.724418 1120970 cri.go:89] found id: ""
	I0729 19:45:50.724443 1120970 logs.go:276] 0 containers: []
	W0729 19:45:50.724451 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:45:50.724457 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:45:50.724513 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:45:50.768891 1120970 cri.go:89] found id: ""
	I0729 19:45:50.768924 1120970 logs.go:276] 0 containers: []
	W0729 19:45:50.768935 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:45:50.768943 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:45:50.769011 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:45:50.815814 1120970 cri.go:89] found id: ""
	I0729 19:45:50.815847 1120970 logs.go:276] 0 containers: []
	W0729 19:45:50.815858 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:45:50.815866 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:45:50.815935 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:45:50.856823 1120970 cri.go:89] found id: ""
	I0729 19:45:50.856856 1120970 logs.go:276] 0 containers: []
	W0729 19:45:50.856865 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:45:50.856871 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:45:50.856935 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:45:50.890567 1120970 cri.go:89] found id: ""
	I0729 19:45:50.890618 1120970 logs.go:276] 0 containers: []
	W0729 19:45:50.890631 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:45:50.890646 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:45:50.890662 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:45:50.944060 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:45:50.944095 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:45:50.957881 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:45:50.957912 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:45:51.036005 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:45:51.036033 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:45:51.036051 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:45:51.117269 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:45:51.117311 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:45:53.657518 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:53.671405 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:45:53.671499 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:45:53.713703 1120970 cri.go:89] found id: ""
	I0729 19:45:53.713734 1120970 logs.go:276] 0 containers: []
	W0729 19:45:53.713747 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:45:53.713755 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:45:53.713820 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:45:53.752821 1120970 cri.go:89] found id: ""
	I0729 19:45:53.752856 1120970 logs.go:276] 0 containers: []
	W0729 19:45:53.752867 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:45:53.752875 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:45:53.752930 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:45:53.792144 1120970 cri.go:89] found id: ""
	I0729 19:45:53.792172 1120970 logs.go:276] 0 containers: []
	W0729 19:45:53.792198 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:45:53.792204 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:45:53.792264 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:45:53.831123 1120970 cri.go:89] found id: ""
	I0729 19:45:53.831151 1120970 logs.go:276] 0 containers: []
	W0729 19:45:53.831161 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:45:53.831168 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:45:53.831223 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:45:53.870716 1120970 cri.go:89] found id: ""
	I0729 19:45:53.870747 1120970 logs.go:276] 0 containers: []
	W0729 19:45:53.870758 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:45:53.870766 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:45:53.870831 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:45:53.909567 1120970 cri.go:89] found id: ""
	I0729 19:45:53.909602 1120970 logs.go:276] 0 containers: []
	W0729 19:45:53.909611 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:45:53.909619 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:45:53.909679 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:45:53.944134 1120970 cri.go:89] found id: ""
	I0729 19:45:53.944167 1120970 logs.go:276] 0 containers: []
	W0729 19:45:53.944179 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:45:53.944188 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:45:53.944249 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:45:53.979274 1120970 cri.go:89] found id: ""
	I0729 19:45:53.979307 1120970 logs.go:276] 0 containers: []
	W0729 19:45:53.979319 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:45:53.979330 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:45:53.979347 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:45:54.027783 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:45:54.027822 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:45:54.079319 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:45:54.079368 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:45:54.094387 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:45:54.094420 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:45:54.170700 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:45:54.170723 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:45:54.170737 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:45:56.756947 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:56.775456 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:45:56.775539 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:45:56.830999 1120970 cri.go:89] found id: ""
	I0729 19:45:56.831035 1120970 logs.go:276] 0 containers: []
	W0729 19:45:56.831046 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:45:56.831054 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:45:56.831144 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:45:56.868006 1120970 cri.go:89] found id: ""
	I0729 19:45:56.868039 1120970 logs.go:276] 0 containers: []
	W0729 19:45:56.868057 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:45:56.868065 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:45:56.868145 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:45:56.905275 1120970 cri.go:89] found id: ""
	I0729 19:45:56.905311 1120970 logs.go:276] 0 containers: []
	W0729 19:45:56.905322 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:45:56.905330 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:45:56.905401 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:45:56.938507 1120970 cri.go:89] found id: ""
	I0729 19:45:56.938537 1120970 logs.go:276] 0 containers: []
	W0729 19:45:56.938546 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:45:56.938553 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:45:56.938607 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:45:56.974424 1120970 cri.go:89] found id: ""
	I0729 19:45:56.974456 1120970 logs.go:276] 0 containers: []
	W0729 19:45:56.974468 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:45:56.974476 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:45:56.974543 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:45:57.008152 1120970 cri.go:89] found id: ""
	I0729 19:45:57.008191 1120970 logs.go:276] 0 containers: []
	W0729 19:45:57.008203 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:45:57.008211 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:45:57.008281 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:45:57.043904 1120970 cri.go:89] found id: ""
	I0729 19:45:57.043942 1120970 logs.go:276] 0 containers: []
	W0729 19:45:57.043953 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:45:57.043961 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:45:57.044038 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:45:57.078239 1120970 cri.go:89] found id: ""
	I0729 19:45:57.078268 1120970 logs.go:276] 0 containers: []
	W0729 19:45:57.078277 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:45:57.078286 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:45:57.078299 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:45:57.125135 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:45:57.125170 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:45:57.177926 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:45:57.177968 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:45:57.192316 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:45:57.192354 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:45:57.267034 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:45:57.267059 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:45:57.267078 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:45:59.849254 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:59.863328 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:45:59.863437 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:45:59.900024 1120970 cri.go:89] found id: ""
	I0729 19:45:59.900051 1120970 logs.go:276] 0 containers: []
	W0729 19:45:59.900060 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:45:59.900067 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:45:59.900128 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:45:59.935272 1120970 cri.go:89] found id: ""
	I0729 19:45:59.935308 1120970 logs.go:276] 0 containers: []
	W0729 19:45:59.935319 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:45:59.935328 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:45:59.935404 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:45:59.967684 1120970 cri.go:89] found id: ""
	I0729 19:45:59.967712 1120970 logs.go:276] 0 containers: []
	W0729 19:45:59.967725 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:45:59.967733 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:45:59.967791 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:46:00.003354 1120970 cri.go:89] found id: ""
	I0729 19:46:00.003386 1120970 logs.go:276] 0 containers: []
	W0729 19:46:00.003397 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:46:00.003404 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:46:00.003479 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:46:00.042266 1120970 cri.go:89] found id: ""
	I0729 19:46:00.042311 1120970 logs.go:276] 0 containers: []
	W0729 19:46:00.042330 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:46:00.042344 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:46:00.042419 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:46:00.081056 1120970 cri.go:89] found id: ""
	I0729 19:46:00.081085 1120970 logs.go:276] 0 containers: []
	W0729 19:46:00.081095 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:46:00.081102 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:46:00.081179 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:46:00.114102 1120970 cri.go:89] found id: ""
	I0729 19:46:00.114138 1120970 logs.go:276] 0 containers: []
	W0729 19:46:00.114153 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:46:00.114161 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:46:00.114229 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:46:00.152891 1120970 cri.go:89] found id: ""
	I0729 19:46:00.152919 1120970 logs.go:276] 0 containers: []
	W0729 19:46:00.152930 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:46:00.152942 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:46:00.152961 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:46:00.225895 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:46:00.225926 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:46:00.225944 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:46:00.306359 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:46:00.306397 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:46:00.348266 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:46:00.348305 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:46:00.401402 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:46:00.401452 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:46:02.917392 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:46:02.931221 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:46:02.931308 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:46:02.965808 1120970 cri.go:89] found id: ""
	I0729 19:46:02.965839 1120970 logs.go:276] 0 containers: []
	W0729 19:46:02.965850 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:46:02.965857 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:46:02.965924 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:46:03.003125 1120970 cri.go:89] found id: ""
	I0729 19:46:03.003152 1120970 logs.go:276] 0 containers: []
	W0729 19:46:03.003161 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:46:03.003168 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:46:03.003222 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:46:03.042782 1120970 cri.go:89] found id: ""
	I0729 19:46:03.042816 1120970 logs.go:276] 0 containers: []
	W0729 19:46:03.042827 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:46:03.042835 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:46:03.042922 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:46:03.082857 1120970 cri.go:89] found id: ""
	I0729 19:46:03.082891 1120970 logs.go:276] 0 containers: []
	W0729 19:46:03.082910 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:46:03.082918 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:46:03.082975 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:46:03.118096 1120970 cri.go:89] found id: ""
	I0729 19:46:03.118127 1120970 logs.go:276] 0 containers: []
	W0729 19:46:03.118147 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:46:03.118156 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:46:03.118228 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:46:03.155950 1120970 cri.go:89] found id: ""
	I0729 19:46:03.155983 1120970 logs.go:276] 0 containers: []
	W0729 19:46:03.155995 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:46:03.156003 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:46:03.156076 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:46:03.192698 1120970 cri.go:89] found id: ""
	I0729 19:46:03.192729 1120970 logs.go:276] 0 containers: []
	W0729 19:46:03.192741 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:46:03.192749 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:46:03.192822 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:46:03.230228 1120970 cri.go:89] found id: ""
	I0729 19:46:03.230261 1120970 logs.go:276] 0 containers: []
	W0729 19:46:03.230275 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:46:03.230292 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:46:03.230310 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:46:03.269169 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:46:03.269204 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:46:03.325724 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:46:03.325765 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:46:03.339955 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:46:03.339986 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:46:03.415795 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:46:03.415823 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:46:03.415839 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:46:06.002947 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:46:06.017334 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:46:06.017422 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:46:06.051132 1120970 cri.go:89] found id: ""
	I0729 19:46:06.051161 1120970 logs.go:276] 0 containers: []
	W0729 19:46:06.051169 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:46:06.051182 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:46:06.051248 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:46:06.085156 1120970 cri.go:89] found id: ""
	I0729 19:46:06.085185 1120970 logs.go:276] 0 containers: []
	W0729 19:46:06.085194 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:46:06.085200 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:46:06.085252 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:46:06.122263 1120970 cri.go:89] found id: ""
	I0729 19:46:06.122296 1120970 logs.go:276] 0 containers: []
	W0729 19:46:06.122303 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:46:06.122309 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:46:06.122377 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:46:06.158066 1120970 cri.go:89] found id: ""
	I0729 19:46:06.158093 1120970 logs.go:276] 0 containers: []
	W0729 19:46:06.158102 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:46:06.158109 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:46:06.158161 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:46:06.193082 1120970 cri.go:89] found id: ""
	I0729 19:46:06.193109 1120970 logs.go:276] 0 containers: []
	W0729 19:46:06.193117 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:46:06.193125 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:46:06.193188 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:46:06.226239 1120970 cri.go:89] found id: ""
	I0729 19:46:06.226276 1120970 logs.go:276] 0 containers: []
	W0729 19:46:06.226285 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:46:06.226292 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:46:06.226346 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:46:06.262648 1120970 cri.go:89] found id: ""
	I0729 19:46:06.262686 1120970 logs.go:276] 0 containers: []
	W0729 19:46:06.262697 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:46:06.262703 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:46:06.262769 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:46:06.304018 1120970 cri.go:89] found id: ""
	I0729 19:46:06.304047 1120970 logs.go:276] 0 containers: []
	W0729 19:46:06.304056 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:46:06.304066 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:46:06.304078 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:46:06.345240 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:46:06.345269 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:46:06.399728 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:46:06.399768 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:46:06.415271 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:46:06.415312 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:46:06.492320 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:46:06.492342 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:46:06.492361 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:46:09.070966 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:46:09.084876 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:46:09.084957 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:46:09.123177 1120970 cri.go:89] found id: ""
	I0729 19:46:09.123209 1120970 logs.go:276] 0 containers: []
	W0729 19:46:09.123220 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:46:09.123227 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:46:09.123300 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:46:09.162546 1120970 cri.go:89] found id: ""
	I0729 19:46:09.162593 1120970 logs.go:276] 0 containers: []
	W0729 19:46:09.162605 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:46:09.162614 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:46:09.162682 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:46:09.198047 1120970 cri.go:89] found id: ""
	I0729 19:46:09.198075 1120970 logs.go:276] 0 containers: []
	W0729 19:46:09.198084 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:46:09.198091 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:46:09.198165 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:46:09.231929 1120970 cri.go:89] found id: ""
	I0729 19:46:09.231962 1120970 logs.go:276] 0 containers: []
	W0729 19:46:09.231973 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:46:09.231982 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:46:09.232051 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:46:09.269543 1120970 cri.go:89] found id: ""
	I0729 19:46:09.269574 1120970 logs.go:276] 0 containers: []
	W0729 19:46:09.269585 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:46:09.269593 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:46:09.269665 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:46:09.304012 1120970 cri.go:89] found id: ""
	I0729 19:46:09.304042 1120970 logs.go:276] 0 containers: []
	W0729 19:46:09.304051 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:46:09.304057 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:46:09.304110 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:46:09.340266 1120970 cri.go:89] found id: ""
	I0729 19:46:09.340302 1120970 logs.go:276] 0 containers: []
	W0729 19:46:09.340315 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:46:09.340323 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:46:09.340402 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:46:09.373855 1120970 cri.go:89] found id: ""
	I0729 19:46:09.373884 1120970 logs.go:276] 0 containers: []
	W0729 19:46:09.373892 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:46:09.373902 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:46:09.373916 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:46:09.434007 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:46:09.434047 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:46:09.448138 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:46:09.448168 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:46:09.523836 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:46:09.523866 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:46:09.523884 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:46:09.605562 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:46:09.605602 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:46:12.147573 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:46:12.162219 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:46:12.162307 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:46:12.197420 1120970 cri.go:89] found id: ""
	I0729 19:46:12.197446 1120970 logs.go:276] 0 containers: []
	W0729 19:46:12.197454 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:46:12.197460 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:46:12.197511 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:46:12.236008 1120970 cri.go:89] found id: ""
	I0729 19:46:12.236042 1120970 logs.go:276] 0 containers: []
	W0729 19:46:12.236052 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:46:12.236058 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:46:12.236125 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:46:12.279184 1120970 cri.go:89] found id: ""
	I0729 19:46:12.279208 1120970 logs.go:276] 0 containers: []
	W0729 19:46:12.279216 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:46:12.279222 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:46:12.279273 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:46:12.319020 1120970 cri.go:89] found id: ""
	I0729 19:46:12.319061 1120970 logs.go:276] 0 containers: []
	W0729 19:46:12.319072 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:46:12.319083 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:46:12.319140 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:46:12.354552 1120970 cri.go:89] found id: ""
	I0729 19:46:12.354591 1120970 logs.go:276] 0 containers: []
	W0729 19:46:12.354600 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:46:12.354606 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:46:12.354664 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:46:12.389196 1120970 cri.go:89] found id: ""
	I0729 19:46:12.389232 1120970 logs.go:276] 0 containers: []
	W0729 19:46:12.389242 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:46:12.389251 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:46:12.389351 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:46:12.425713 1120970 cri.go:89] found id: ""
	I0729 19:46:12.425751 1120970 logs.go:276] 0 containers: []
	W0729 19:46:12.425767 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:46:12.425776 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:46:12.425851 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:46:12.461092 1120970 cri.go:89] found id: ""
	I0729 19:46:12.461123 1120970 logs.go:276] 0 containers: []
	W0729 19:46:12.461132 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:46:12.461142 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:46:12.461162 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:46:12.537550 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:46:12.537594 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:46:12.578558 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:46:12.578597 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:46:12.629269 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:46:12.629310 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:46:12.644176 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:46:12.644202 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:46:12.717070 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:46:15.218239 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:46:15.232163 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:46:15.232236 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:46:15.268490 1120970 cri.go:89] found id: ""
	I0729 19:46:15.268520 1120970 logs.go:276] 0 containers: []
	W0729 19:46:15.268532 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:46:15.268539 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:46:15.268621 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:46:15.303437 1120970 cri.go:89] found id: ""
	I0729 19:46:15.303473 1120970 logs.go:276] 0 containers: []
	W0729 19:46:15.303485 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:46:15.303493 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:46:15.303557 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:46:15.340676 1120970 cri.go:89] found id: ""
	I0729 19:46:15.340706 1120970 logs.go:276] 0 containers: []
	W0729 19:46:15.340717 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:46:15.340725 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:46:15.340798 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:46:15.376731 1120970 cri.go:89] found id: ""
	I0729 19:46:15.376764 1120970 logs.go:276] 0 containers: []
	W0729 19:46:15.376775 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:46:15.376783 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:46:15.376854 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:46:15.412493 1120970 cri.go:89] found id: ""
	I0729 19:46:15.412524 1120970 logs.go:276] 0 containers: []
	W0729 19:46:15.412533 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:46:15.412541 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:46:15.412614 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:46:15.448795 1120970 cri.go:89] found id: ""
	I0729 19:46:15.448830 1120970 logs.go:276] 0 containers: []
	W0729 19:46:15.448842 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:46:15.448850 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:46:15.448923 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:46:15.484048 1120970 cri.go:89] found id: ""
	I0729 19:46:15.484082 1120970 logs.go:276] 0 containers: []
	W0729 19:46:15.484100 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:46:15.484108 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:46:15.484172 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:46:15.520340 1120970 cri.go:89] found id: ""
	I0729 19:46:15.520370 1120970 logs.go:276] 0 containers: []
	W0729 19:46:15.520380 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:46:15.520389 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:46:15.520408 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:46:15.568837 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:46:15.568877 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:46:15.582958 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:46:15.582993 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:46:15.653880 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:46:15.653901 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:46:15.653920 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:46:15.732652 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:46:15.732691 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:46:18.273795 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:46:18.288991 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:46:18.289066 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:46:18.327583 1120970 cri.go:89] found id: ""
	I0729 19:46:18.327619 1120970 logs.go:276] 0 containers: []
	W0729 19:46:18.327631 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:46:18.327639 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:46:18.327716 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:46:18.361476 1120970 cri.go:89] found id: ""
	I0729 19:46:18.361504 1120970 logs.go:276] 0 containers: []
	W0729 19:46:18.361515 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:46:18.361523 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:46:18.361590 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:46:18.401842 1120970 cri.go:89] found id: ""
	I0729 19:46:18.401873 1120970 logs.go:276] 0 containers: []
	W0729 19:46:18.401884 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:46:18.401892 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:46:18.401965 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:46:18.439870 1120970 cri.go:89] found id: ""
	I0729 19:46:18.439905 1120970 logs.go:276] 0 containers: []
	W0729 19:46:18.439920 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:46:18.439929 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:46:18.440015 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:46:18.474916 1120970 cri.go:89] found id: ""
	I0729 19:46:18.474944 1120970 logs.go:276] 0 containers: []
	W0729 19:46:18.474953 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:46:18.474960 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:46:18.475033 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:46:18.509957 1120970 cri.go:89] found id: ""
	I0729 19:46:18.509984 1120970 logs.go:276] 0 containers: []
	W0729 19:46:18.509993 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:46:18.509999 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:46:18.510064 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:46:18.545521 1120970 cri.go:89] found id: ""
	I0729 19:46:18.545551 1120970 logs.go:276] 0 containers: []
	W0729 19:46:18.545564 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:46:18.545573 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:46:18.545646 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:46:18.579041 1120970 cri.go:89] found id: ""
	I0729 19:46:18.579072 1120970 logs.go:276] 0 containers: []
	W0729 19:46:18.579080 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:46:18.579091 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:46:18.579104 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:46:18.653041 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:46:18.653063 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:46:18.653077 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:46:18.732969 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:46:18.733035 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:46:18.773700 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:46:18.773735 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:46:18.826511 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:46:18.826553 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:46:21.340974 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:46:21.354608 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:46:21.354671 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:46:21.388765 1120970 cri.go:89] found id: ""
	I0729 19:46:21.388795 1120970 logs.go:276] 0 containers: []
	W0729 19:46:21.388806 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:46:21.388814 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:46:21.388909 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:46:21.426734 1120970 cri.go:89] found id: ""
	I0729 19:46:21.426764 1120970 logs.go:276] 0 containers: []
	W0729 19:46:21.426776 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:46:21.426784 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:46:21.426861 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:46:21.462965 1120970 cri.go:89] found id: ""
	I0729 19:46:21.462999 1120970 logs.go:276] 0 containers: []
	W0729 19:46:21.463010 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:46:21.463018 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:46:21.463087 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:46:21.496933 1120970 cri.go:89] found id: ""
	I0729 19:46:21.496961 1120970 logs.go:276] 0 containers: []
	W0729 19:46:21.496972 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:46:21.496980 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:46:21.497043 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:46:21.532648 1120970 cri.go:89] found id: ""
	I0729 19:46:21.532682 1120970 logs.go:276] 0 containers: []
	W0729 19:46:21.532695 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:46:21.532703 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:46:21.532777 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:46:21.566507 1120970 cri.go:89] found id: ""
	I0729 19:46:21.566545 1120970 logs.go:276] 0 containers: []
	W0729 19:46:21.566556 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:46:21.566567 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:46:21.566652 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:46:21.605591 1120970 cri.go:89] found id: ""
	I0729 19:46:21.605624 1120970 logs.go:276] 0 containers: []
	W0729 19:46:21.605635 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:46:21.605644 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:46:21.605711 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:46:21.639979 1120970 cri.go:89] found id: ""
	I0729 19:46:21.640004 1120970 logs.go:276] 0 containers: []
	W0729 19:46:21.640012 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:46:21.640020 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:46:21.640035 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:46:21.694405 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:46:21.694450 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:46:21.708616 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:46:21.708647 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:46:21.778528 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:46:21.778567 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:46:21.778583 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:46:21.859626 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:46:21.859661 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:46:24.397520 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:46:24.412579 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:46:24.412673 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:46:24.452586 1120970 cri.go:89] found id: ""
	I0729 19:46:24.452621 1120970 logs.go:276] 0 containers: []
	W0729 19:46:24.452633 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:46:24.452640 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:46:24.452856 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:46:24.487706 1120970 cri.go:89] found id: ""
	I0729 19:46:24.487739 1120970 logs.go:276] 0 containers: []
	W0729 19:46:24.487750 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:46:24.487758 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:46:24.487828 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:46:24.528798 1120970 cri.go:89] found id: ""
	I0729 19:46:24.528832 1120970 logs.go:276] 0 containers: []
	W0729 19:46:24.528844 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:46:24.528852 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:46:24.528926 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:46:24.566429 1120970 cri.go:89] found id: ""
	I0729 19:46:24.566464 1120970 logs.go:276] 0 containers: []
	W0729 19:46:24.566484 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:46:24.566497 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:46:24.566561 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:46:24.601216 1120970 cri.go:89] found id: ""
	I0729 19:46:24.601242 1120970 logs.go:276] 0 containers: []
	W0729 19:46:24.601249 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:46:24.601255 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:46:24.601318 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:46:24.635591 1120970 cri.go:89] found id: ""
	I0729 19:46:24.635636 1120970 logs.go:276] 0 containers: []
	W0729 19:46:24.635648 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:46:24.635655 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:46:24.635723 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:46:24.670674 1120970 cri.go:89] found id: ""
	I0729 19:46:24.670705 1120970 logs.go:276] 0 containers: []
	W0729 19:46:24.670717 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:46:24.670724 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:46:24.670795 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:46:24.704820 1120970 cri.go:89] found id: ""
	I0729 19:46:24.704850 1120970 logs.go:276] 0 containers: []
	W0729 19:46:24.704861 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:46:24.704873 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:46:24.704889 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:46:24.787954 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:46:24.787989 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:46:24.849396 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:46:24.849433 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:46:24.900920 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:46:24.900956 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:46:24.915540 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:46:24.915572 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:46:24.986146 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:46:27.487069 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:46:27.500718 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:46:27.500802 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:46:27.535156 1120970 cri.go:89] found id: ""
	I0729 19:46:27.535188 1120970 logs.go:276] 0 containers: []
	W0729 19:46:27.535199 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:46:27.535206 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:46:27.535272 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:46:27.570613 1120970 cri.go:89] found id: ""
	I0729 19:46:27.570647 1120970 logs.go:276] 0 containers: []
	W0729 19:46:27.570658 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:46:27.570666 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:46:27.570726 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:46:27.605503 1120970 cri.go:89] found id: ""
	I0729 19:46:27.605540 1120970 logs.go:276] 0 containers: []
	W0729 19:46:27.605552 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:46:27.605560 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:46:27.605628 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:46:27.638179 1120970 cri.go:89] found id: ""
	I0729 19:46:27.638202 1120970 logs.go:276] 0 containers: []
	W0729 19:46:27.638209 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:46:27.638215 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:46:27.638265 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:46:27.671019 1120970 cri.go:89] found id: ""
	I0729 19:46:27.671049 1120970 logs.go:276] 0 containers: []
	W0729 19:46:27.671059 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:46:27.671067 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:46:27.671133 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:46:27.704126 1120970 cri.go:89] found id: ""
	I0729 19:46:27.704148 1120970 logs.go:276] 0 containers: []
	W0729 19:46:27.704155 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:46:27.704161 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:46:27.704217 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:46:27.736106 1120970 cri.go:89] found id: ""
	I0729 19:46:27.736137 1120970 logs.go:276] 0 containers: []
	W0729 19:46:27.736148 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:46:27.736162 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:46:27.736234 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:46:27.775615 1120970 cri.go:89] found id: ""
	I0729 19:46:27.775644 1120970 logs.go:276] 0 containers: []
	W0729 19:46:27.775655 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:46:27.775666 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:46:27.775681 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:46:27.817852 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:46:27.817882 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:46:27.867280 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:46:27.867319 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:46:27.880533 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:46:27.880558 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:46:27.952098 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:46:27.952120 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:46:27.952138 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:46:30.534052 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:46:30.560617 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:46:30.560704 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:46:30.594317 1120970 cri.go:89] found id: ""
	I0729 19:46:30.594354 1120970 logs.go:276] 0 containers: []
	W0729 19:46:30.594365 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:46:30.594372 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:46:30.594438 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:46:30.629175 1120970 cri.go:89] found id: ""
	I0729 19:46:30.629202 1120970 logs.go:276] 0 containers: []
	W0729 19:46:30.629213 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:46:30.629278 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:46:30.629358 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:46:30.663173 1120970 cri.go:89] found id: ""
	I0729 19:46:30.663199 1120970 logs.go:276] 0 containers: []
	W0729 19:46:30.663207 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:46:30.663212 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:46:30.663271 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:46:30.695709 1120970 cri.go:89] found id: ""
	I0729 19:46:30.695729 1120970 logs.go:276] 0 containers: []
	W0729 19:46:30.695738 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:46:30.695745 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:46:30.695808 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:46:30.726555 1120970 cri.go:89] found id: ""
	I0729 19:46:30.726582 1120970 logs.go:276] 0 containers: []
	W0729 19:46:30.726589 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:46:30.726597 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:46:30.726658 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:46:30.759818 1120970 cri.go:89] found id: ""
	I0729 19:46:30.759847 1120970 logs.go:276] 0 containers: []
	W0729 19:46:30.759859 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:46:30.759865 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:46:30.759928 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:46:30.794006 1120970 cri.go:89] found id: ""
	I0729 19:46:30.794038 1120970 logs.go:276] 0 containers: []
	W0729 19:46:30.794051 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:46:30.794058 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:46:30.794127 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:46:30.825707 1120970 cri.go:89] found id: ""
	I0729 19:46:30.825735 1120970 logs.go:276] 0 containers: []
	W0729 19:46:30.825744 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:46:30.825753 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:46:30.825767 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:46:30.877517 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:46:30.877553 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:46:30.890777 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:46:30.890811 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:46:30.956702 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:46:30.956732 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:46:30.956747 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:46:31.039080 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:46:31.039118 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:46:33.580120 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:46:33.595087 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:46:33.595152 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:46:33.636347 1120970 cri.go:89] found id: ""
	I0729 19:46:33.636374 1120970 logs.go:276] 0 containers: []
	W0729 19:46:33.636385 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:46:33.636392 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:46:33.636451 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:46:33.674180 1120970 cri.go:89] found id: ""
	I0729 19:46:33.674207 1120970 logs.go:276] 0 containers: []
	W0729 19:46:33.674215 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:46:33.674222 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:46:33.674281 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:46:33.709549 1120970 cri.go:89] found id: ""
	I0729 19:46:33.709572 1120970 logs.go:276] 0 containers: []
	W0729 19:46:33.709581 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:46:33.709593 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:46:33.709651 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:46:33.742803 1120970 cri.go:89] found id: ""
	I0729 19:46:33.742833 1120970 logs.go:276] 0 containers: []
	W0729 19:46:33.742854 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:46:33.742863 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:46:33.742931 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:46:33.776301 1120970 cri.go:89] found id: ""
	I0729 19:46:33.776329 1120970 logs.go:276] 0 containers: []
	W0729 19:46:33.776336 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:46:33.776342 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:46:33.776412 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:46:33.818972 1120970 cri.go:89] found id: ""
	I0729 19:46:33.819001 1120970 logs.go:276] 0 containers: []
	W0729 19:46:33.819009 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:46:33.819016 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:46:33.819084 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:46:33.857970 1120970 cri.go:89] found id: ""
	I0729 19:46:33.858002 1120970 logs.go:276] 0 containers: []
	W0729 19:46:33.858022 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:46:33.858028 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:46:33.858113 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:46:33.896207 1120970 cri.go:89] found id: ""
	I0729 19:46:33.896237 1120970 logs.go:276] 0 containers: []
	W0729 19:46:33.896248 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:46:33.896261 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:46:33.896276 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:46:33.976843 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:46:33.976879 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:46:34.015642 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:46:34.015671 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:46:34.066095 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:46:34.066133 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:46:34.079616 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:46:34.079649 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:46:34.150666 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
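The cycle above repeats for the remainder of this log: minikube probes for each control-plane container by name with crictl, finds none, and falls back to gathering kubelet, dmesg, describe-nodes, CRI-O, and container-status output before retrying. A minimal standalone sketch of that probe, assuming crictl and journalctl are available on the node (a hypothetical script, not part of the test run), is:

    #!/bin/bash
    # Probe for control-plane containers by name, as the log above does.
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet kubernetes-dashboard; do
      ids=$(sudo crictl ps -a --quiet --name="$name")
      if [ -z "$ids" ]; then
        echo "no container found matching \"$name\""
      fi
    done
    # When nothing is running, fall back to the unit logs gathered above.
    sudo journalctl -u kubelet -n 400
    sudo journalctl -u crio -n 400
    sudo dmesg --level warn,err,crit,alert,emerg | tail -n 400

In this stretch of the log every probe returns an empty list and each describe-nodes attempt is refused on localhost:8443, consistent with the apiserver never having started.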
	I0729 19:46:36.651722 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:46:36.665599 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:46:36.665673 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:46:36.702807 1120970 cri.go:89] found id: ""
	I0729 19:46:36.702872 1120970 logs.go:276] 0 containers: []
	W0729 19:46:36.702897 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:46:36.702907 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:46:36.702978 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:46:36.739552 1120970 cri.go:89] found id: ""
	I0729 19:46:36.739576 1120970 logs.go:276] 0 containers: []
	W0729 19:46:36.739585 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:46:36.739591 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:46:36.739643 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:46:36.774989 1120970 cri.go:89] found id: ""
	I0729 19:46:36.775017 1120970 logs.go:276] 0 containers: []
	W0729 19:46:36.775028 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:46:36.775036 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:46:36.775108 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:46:36.814984 1120970 cri.go:89] found id: ""
	I0729 19:46:36.815017 1120970 logs.go:276] 0 containers: []
	W0729 19:46:36.815034 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:46:36.815044 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:46:36.815113 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:46:36.848075 1120970 cri.go:89] found id: ""
	I0729 19:46:36.848116 1120970 logs.go:276] 0 containers: []
	W0729 19:46:36.848127 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:46:36.848136 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:46:36.848206 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:46:36.880504 1120970 cri.go:89] found id: ""
	I0729 19:46:36.880535 1120970 logs.go:276] 0 containers: []
	W0729 19:46:36.880544 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:46:36.880557 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:46:36.880615 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:46:36.914716 1120970 cri.go:89] found id: ""
	I0729 19:46:36.914744 1120970 logs.go:276] 0 containers: []
	W0729 19:46:36.914755 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:46:36.914763 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:46:36.914831 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:46:36.958975 1120970 cri.go:89] found id: ""
	I0729 19:46:36.959005 1120970 logs.go:276] 0 containers: []
	W0729 19:46:36.959016 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:46:36.959029 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:46:36.959046 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:46:37.018208 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:46:37.018244 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:46:37.042496 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:46:37.042537 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:46:37.112833 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:46:37.112861 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:46:37.112877 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:46:37.191572 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:46:37.191616 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:46:39.736044 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:46:39.749645 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:46:39.749719 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:46:39.786131 1120970 cri.go:89] found id: ""
	I0729 19:46:39.786155 1120970 logs.go:276] 0 containers: []
	W0729 19:46:39.786166 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:46:39.786174 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:46:39.786237 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:46:39.820470 1120970 cri.go:89] found id: ""
	I0729 19:46:39.820499 1120970 logs.go:276] 0 containers: []
	W0729 19:46:39.820509 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:46:39.820516 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:46:39.820583 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:46:39.854119 1120970 cri.go:89] found id: ""
	I0729 19:46:39.854148 1120970 logs.go:276] 0 containers: []
	W0729 19:46:39.854157 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:46:39.854163 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:46:39.854218 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:46:39.894676 1120970 cri.go:89] found id: ""
	I0729 19:46:39.894707 1120970 logs.go:276] 0 containers: []
	W0729 19:46:39.894714 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:46:39.894721 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:46:39.894789 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:46:39.932651 1120970 cri.go:89] found id: ""
	I0729 19:46:39.932685 1120970 logs.go:276] 0 containers: []
	W0729 19:46:39.932697 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:46:39.932705 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:46:39.932776 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:46:39.968119 1120970 cri.go:89] found id: ""
	I0729 19:46:39.968153 1120970 logs.go:276] 0 containers: []
	W0729 19:46:39.968165 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:46:39.968174 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:46:39.968242 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:46:40.004137 1120970 cri.go:89] found id: ""
	I0729 19:46:40.004167 1120970 logs.go:276] 0 containers: []
	W0729 19:46:40.004175 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:46:40.004181 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:46:40.004252 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:46:40.042519 1120970 cri.go:89] found id: ""
	I0729 19:46:40.042552 1120970 logs.go:276] 0 containers: []
	W0729 19:46:40.042563 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:46:40.042577 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:46:40.042601 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:46:40.118691 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:46:40.118720 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:46:40.118733 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:46:40.198249 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:46:40.198279 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:46:40.236828 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:46:40.236861 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:46:40.290890 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:46:40.290920 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:46:42.804834 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:46:42.818516 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:46:42.818608 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:46:42.855519 1120970 cri.go:89] found id: ""
	I0729 19:46:42.855553 1120970 logs.go:276] 0 containers: []
	W0729 19:46:42.855565 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:46:42.855573 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:46:42.855634 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:46:42.891795 1120970 cri.go:89] found id: ""
	I0729 19:46:42.891827 1120970 logs.go:276] 0 containers: []
	W0729 19:46:42.891837 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:46:42.891845 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:46:42.891912 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:46:42.925308 1120970 cri.go:89] found id: ""
	I0729 19:46:42.925341 1120970 logs.go:276] 0 containers: []
	W0729 19:46:42.925352 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:46:42.925359 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:46:42.925428 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:46:42.961943 1120970 cri.go:89] found id: ""
	I0729 19:46:42.961968 1120970 logs.go:276] 0 containers: []
	W0729 19:46:42.961976 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:46:42.961984 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:46:42.962034 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:46:42.994246 1120970 cri.go:89] found id: ""
	I0729 19:46:42.994276 1120970 logs.go:276] 0 containers: []
	W0729 19:46:42.994284 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:46:42.994290 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:46:42.994406 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:46:43.027914 1120970 cri.go:89] found id: ""
	I0729 19:46:43.027943 1120970 logs.go:276] 0 containers: []
	W0729 19:46:43.027953 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:46:43.027962 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:46:43.028029 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:46:43.064274 1120970 cri.go:89] found id: ""
	I0729 19:46:43.064308 1120970 logs.go:276] 0 containers: []
	W0729 19:46:43.064319 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:46:43.064328 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:46:43.064402 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:46:43.104273 1120970 cri.go:89] found id: ""
	I0729 19:46:43.104303 1120970 logs.go:276] 0 containers: []
	W0729 19:46:43.104313 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:46:43.104324 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:46:43.104342 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:46:43.175951 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:46:43.175978 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:46:43.175995 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:46:43.253386 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:46:43.253421 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:46:43.293276 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:46:43.293304 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:46:43.345865 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:46:43.345896 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:46:45.861099 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:46:45.875854 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:46:45.875925 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:46:45.914780 1120970 cri.go:89] found id: ""
	I0729 19:46:45.914815 1120970 logs.go:276] 0 containers: []
	W0729 19:46:45.914827 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:46:45.914837 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:46:45.914925 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:46:45.952575 1120970 cri.go:89] found id: ""
	I0729 19:46:45.952607 1120970 logs.go:276] 0 containers: []
	W0729 19:46:45.952616 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:46:45.952622 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:46:45.952676 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:46:45.993298 1120970 cri.go:89] found id: ""
	I0729 19:46:45.993331 1120970 logs.go:276] 0 containers: []
	W0729 19:46:45.993338 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:46:45.993344 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:46:45.993400 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:46:46.033190 1120970 cri.go:89] found id: ""
	I0729 19:46:46.033216 1120970 logs.go:276] 0 containers: []
	W0729 19:46:46.033225 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:46:46.033230 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:46:46.033283 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:46:46.068694 1120970 cri.go:89] found id: ""
	I0729 19:46:46.068728 1120970 logs.go:276] 0 containers: []
	W0729 19:46:46.068737 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:46:46.068743 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:46:46.068796 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:46:46.101678 1120970 cri.go:89] found id: ""
	I0729 19:46:46.101716 1120970 logs.go:276] 0 containers: []
	W0729 19:46:46.101726 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:46:46.101733 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:46:46.101788 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:46:46.141669 1120970 cri.go:89] found id: ""
	I0729 19:46:46.141702 1120970 logs.go:276] 0 containers: []
	W0729 19:46:46.141713 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:46:46.141721 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:46:46.141780 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:46:46.173182 1120970 cri.go:89] found id: ""
	I0729 19:46:46.173213 1120970 logs.go:276] 0 containers: []
	W0729 19:46:46.173224 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:46:46.173235 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:46:46.173252 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:46:46.224615 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:46:46.224660 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:46:46.237889 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:46:46.237915 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:46:46.312446 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:46:46.312473 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:46:46.312489 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:46:46.389168 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:46:46.389206 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:46:48.930620 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:46:48.944038 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:46:48.944101 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:46:48.979672 1120970 cri.go:89] found id: ""
	I0729 19:46:48.979710 1120970 logs.go:276] 0 containers: []
	W0729 19:46:48.979722 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:46:48.979730 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:46:48.979804 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:46:49.014931 1120970 cri.go:89] found id: ""
	I0729 19:46:49.014967 1120970 logs.go:276] 0 containers: []
	W0729 19:46:49.014980 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:46:49.015006 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:46:49.015078 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:46:49.050867 1120970 cri.go:89] found id: ""
	I0729 19:46:49.050903 1120970 logs.go:276] 0 containers: []
	W0729 19:46:49.050916 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:46:49.050924 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:46:49.050992 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:46:49.085479 1120970 cri.go:89] found id: ""
	I0729 19:46:49.085514 1120970 logs.go:276] 0 containers: []
	W0729 19:46:49.085521 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:46:49.085529 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:46:49.085604 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:46:49.118570 1120970 cri.go:89] found id: ""
	I0729 19:46:49.118597 1120970 logs.go:276] 0 containers: []
	W0729 19:46:49.118605 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:46:49.118611 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:46:49.118664 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:46:49.153581 1120970 cri.go:89] found id: ""
	I0729 19:46:49.153612 1120970 logs.go:276] 0 containers: []
	W0729 19:46:49.153624 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:46:49.153632 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:46:49.153702 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:46:49.187178 1120970 cri.go:89] found id: ""
	I0729 19:46:49.187207 1120970 logs.go:276] 0 containers: []
	W0729 19:46:49.187215 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:46:49.187221 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:46:49.187280 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:46:49.223132 1120970 cri.go:89] found id: ""
	I0729 19:46:49.223163 1120970 logs.go:276] 0 containers: []
	W0729 19:46:49.223173 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:46:49.223185 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:46:49.223200 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:46:49.274160 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:46:49.274189 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:46:49.288399 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:46:49.288431 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:46:49.358452 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:46:49.358478 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:46:49.358496 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:46:49.436711 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:46:49.436745 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:46:51.977377 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:46:51.991042 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:46:51.991102 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:46:52.031425 1120970 cri.go:89] found id: ""
	I0729 19:46:52.031467 1120970 logs.go:276] 0 containers: []
	W0729 19:46:52.031477 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:46:52.031482 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:46:52.031557 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:46:52.069014 1120970 cri.go:89] found id: ""
	I0729 19:46:52.069045 1120970 logs.go:276] 0 containers: []
	W0729 19:46:52.069056 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:46:52.069064 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:46:52.069137 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:46:52.101974 1120970 cri.go:89] found id: ""
	I0729 19:46:52.102000 1120970 logs.go:276] 0 containers: []
	W0729 19:46:52.102008 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:46:52.102014 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:46:52.102079 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:46:52.136232 1120970 cri.go:89] found id: ""
	I0729 19:46:52.136261 1120970 logs.go:276] 0 containers: []
	W0729 19:46:52.136271 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:46:52.136280 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:46:52.136344 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:46:52.173555 1120970 cri.go:89] found id: ""
	I0729 19:46:52.173585 1120970 logs.go:276] 0 containers: []
	W0729 19:46:52.173602 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:46:52.173611 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:46:52.173675 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:46:52.208764 1120970 cri.go:89] found id: ""
	I0729 19:46:52.208791 1120970 logs.go:276] 0 containers: []
	W0729 19:46:52.208799 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:46:52.208805 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:46:52.208863 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:46:52.241514 1120970 cri.go:89] found id: ""
	I0729 19:46:52.241541 1120970 logs.go:276] 0 containers: []
	W0729 19:46:52.241557 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:46:52.241564 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:46:52.241639 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:46:52.277726 1120970 cri.go:89] found id: ""
	I0729 19:46:52.277753 1120970 logs.go:276] 0 containers: []
	W0729 19:46:52.277764 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:46:52.277775 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:46:52.277789 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:46:52.344894 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:46:52.344916 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:46:52.344931 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:46:52.421492 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:46:52.421527 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:46:52.460896 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:46:52.460934 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:46:52.509921 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:46:52.509960 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:46:55.024046 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:46:55.037609 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:46:55.037681 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:46:55.071059 1120970 cri.go:89] found id: ""
	I0729 19:46:55.071086 1120970 logs.go:276] 0 containers: []
	W0729 19:46:55.071094 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:46:55.071102 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:46:55.071162 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:46:55.106634 1120970 cri.go:89] found id: ""
	I0729 19:46:55.106660 1120970 logs.go:276] 0 containers: []
	W0729 19:46:55.106669 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:46:55.106675 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:46:55.106737 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:46:55.138821 1120970 cri.go:89] found id: ""
	I0729 19:46:55.138858 1120970 logs.go:276] 0 containers: []
	W0729 19:46:55.138870 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:46:55.138878 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:46:55.138941 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:46:55.173846 1120970 cri.go:89] found id: ""
	I0729 19:46:55.173893 1120970 logs.go:276] 0 containers: []
	W0729 19:46:55.173904 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:46:55.173913 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:46:55.173978 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:46:55.211853 1120970 cri.go:89] found id: ""
	I0729 19:46:55.211878 1120970 logs.go:276] 0 containers: []
	W0729 19:46:55.211885 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:46:55.211891 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:46:55.211941 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:46:55.245432 1120970 cri.go:89] found id: ""
	I0729 19:46:55.245470 1120970 logs.go:276] 0 containers: []
	W0729 19:46:55.245481 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:46:55.245489 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:46:55.245557 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:46:55.286752 1120970 cri.go:89] found id: ""
	I0729 19:46:55.286777 1120970 logs.go:276] 0 containers: []
	W0729 19:46:55.286785 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:46:55.286791 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:46:55.286841 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:46:55.328070 1120970 cri.go:89] found id: ""
	I0729 19:46:55.328100 1120970 logs.go:276] 0 containers: []
	W0729 19:46:55.328119 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:46:55.328133 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:46:55.328151 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:46:55.341257 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:46:55.341285 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:46:55.410966 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:46:55.410989 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:46:55.411008 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:46:55.486615 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:46:55.486653 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:46:55.523615 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:46:55.523653 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:46:58.074596 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:46:58.088302 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:46:58.088396 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:46:58.124557 1120970 cri.go:89] found id: ""
	I0729 19:46:58.124589 1120970 logs.go:276] 0 containers: []
	W0729 19:46:58.124600 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:46:58.124608 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:46:58.124680 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:46:58.160107 1120970 cri.go:89] found id: ""
	I0729 19:46:58.160142 1120970 logs.go:276] 0 containers: []
	W0729 19:46:58.160151 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:46:58.160157 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:46:58.160214 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:46:58.195522 1120970 cri.go:89] found id: ""
	I0729 19:46:58.195553 1120970 logs.go:276] 0 containers: []
	W0729 19:46:58.195564 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:46:58.195572 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:46:58.195637 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:46:58.232307 1120970 cri.go:89] found id: ""
	I0729 19:46:58.232338 1120970 logs.go:276] 0 containers: []
	W0729 19:46:58.232348 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:46:58.232355 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:46:58.232419 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:46:58.271551 1120970 cri.go:89] found id: ""
	I0729 19:46:58.271602 1120970 logs.go:276] 0 containers: []
	W0729 19:46:58.271614 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:46:58.271622 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:46:58.271701 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:46:58.307833 1120970 cri.go:89] found id: ""
	I0729 19:46:58.307864 1120970 logs.go:276] 0 containers: []
	W0729 19:46:58.307875 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:46:58.307884 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:46:58.307951 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:46:58.341961 1120970 cri.go:89] found id: ""
	I0729 19:46:58.341989 1120970 logs.go:276] 0 containers: []
	W0729 19:46:58.341998 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:46:58.342004 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:46:58.342058 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:46:58.379923 1120970 cri.go:89] found id: ""
	I0729 19:46:58.379962 1120970 logs.go:276] 0 containers: []
	W0729 19:46:58.379972 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:46:58.379982 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:46:58.379997 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:46:58.423276 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:46:58.423310 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:46:58.479021 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:46:58.479063 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:46:58.493544 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:46:58.493578 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:46:58.562634 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:46:58.562663 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:46:58.562684 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:47:01.145327 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:47:01.158997 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:47:01.159060 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:47:01.196272 1120970 cri.go:89] found id: ""
	I0729 19:47:01.196298 1120970 logs.go:276] 0 containers: []
	W0729 19:47:01.196306 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:47:01.196312 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:47:01.196364 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:47:01.238138 1120970 cri.go:89] found id: ""
	I0729 19:47:01.238167 1120970 logs.go:276] 0 containers: []
	W0729 19:47:01.238177 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:47:01.238185 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:47:01.238249 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:47:01.276497 1120970 cri.go:89] found id: ""
	I0729 19:47:01.276525 1120970 logs.go:276] 0 containers: []
	W0729 19:47:01.276535 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:47:01.276543 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:47:01.276607 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:47:01.309092 1120970 cri.go:89] found id: ""
	I0729 19:47:01.309121 1120970 logs.go:276] 0 containers: []
	W0729 19:47:01.309130 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:47:01.309135 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:47:01.309189 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:47:01.340172 1120970 cri.go:89] found id: ""
	I0729 19:47:01.340202 1120970 logs.go:276] 0 containers: []
	W0729 19:47:01.340211 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:47:01.340217 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:47:01.340277 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:47:01.377905 1120970 cri.go:89] found id: ""
	I0729 19:47:01.377941 1120970 logs.go:276] 0 containers: []
	W0729 19:47:01.377953 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:47:01.377961 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:47:01.378034 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:47:01.414735 1120970 cri.go:89] found id: ""
	I0729 19:47:01.414767 1120970 logs.go:276] 0 containers: []
	W0729 19:47:01.414780 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:47:01.414789 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:47:01.414880 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:47:01.455743 1120970 cri.go:89] found id: ""
	I0729 19:47:01.455768 1120970 logs.go:276] 0 containers: []
	W0729 19:47:01.455776 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:47:01.455786 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:47:01.455799 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:47:01.507105 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:47:01.507141 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:47:01.520437 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:47:01.520465 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:47:01.590724 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:47:01.590746 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:47:01.590763 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:47:01.675343 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:47:01.675378 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:47:04.219800 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:47:04.234604 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:47:04.234684 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:47:04.267782 1120970 cri.go:89] found id: ""
	I0729 19:47:04.267810 1120970 logs.go:276] 0 containers: []
	W0729 19:47:04.267822 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:47:04.267830 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:47:04.267897 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:47:04.302373 1120970 cri.go:89] found id: ""
	I0729 19:47:04.302402 1120970 logs.go:276] 0 containers: []
	W0729 19:47:04.302413 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:47:04.302420 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:47:04.302484 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:47:04.334998 1120970 cri.go:89] found id: ""
	I0729 19:47:04.335030 1120970 logs.go:276] 0 containers: []
	W0729 19:47:04.335041 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:47:04.335049 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:47:04.335105 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:47:04.370596 1120970 cri.go:89] found id: ""
	I0729 19:47:04.370624 1120970 logs.go:276] 0 containers: []
	W0729 19:47:04.370631 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:47:04.370638 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:47:04.370695 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:47:04.405912 1120970 cri.go:89] found id: ""
	I0729 19:47:04.405945 1120970 logs.go:276] 0 containers: []
	W0729 19:47:04.405957 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:47:04.405966 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:47:04.406044 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:47:04.439856 1120970 cri.go:89] found id: ""
	I0729 19:47:04.439881 1120970 logs.go:276] 0 containers: []
	W0729 19:47:04.439898 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:47:04.439905 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:47:04.439976 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:47:04.473561 1120970 cri.go:89] found id: ""
	I0729 19:47:04.473587 1120970 logs.go:276] 0 containers: []
	W0729 19:47:04.473595 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:47:04.473601 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:47:04.473662 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:47:04.510181 1120970 cri.go:89] found id: ""
	I0729 19:47:04.510207 1120970 logs.go:276] 0 containers: []
	W0729 19:47:04.510217 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:47:04.510226 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:47:04.510239 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:47:04.559448 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:47:04.559485 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:47:04.573752 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:47:04.573782 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:47:04.641008 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:47:04.641030 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:47:04.641046 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:47:04.725252 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:47:04.725293 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:47:07.266379 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:47:07.280725 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:47:07.280801 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:47:07.321856 1120970 cri.go:89] found id: ""
	I0729 19:47:07.321886 1120970 logs.go:276] 0 containers: []
	W0729 19:47:07.321894 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:47:07.321900 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:47:07.321986 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:47:07.355102 1120970 cri.go:89] found id: ""
	I0729 19:47:07.355130 1120970 logs.go:276] 0 containers: []
	W0729 19:47:07.355138 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:47:07.355144 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:47:07.355203 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:47:07.394720 1120970 cri.go:89] found id: ""
	I0729 19:47:07.394749 1120970 logs.go:276] 0 containers: []
	W0729 19:47:07.394762 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:47:07.394771 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:47:07.394829 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:47:07.431002 1120970 cri.go:89] found id: ""
	I0729 19:47:07.431042 1120970 logs.go:276] 0 containers: []
	W0729 19:47:07.431055 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:47:07.431063 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:47:07.431132 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:47:07.467818 1120970 cri.go:89] found id: ""
	I0729 19:47:07.467855 1120970 logs.go:276] 0 containers: []
	W0729 19:47:07.467864 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:47:07.467873 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:47:07.467942 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:47:07.504285 1120970 cri.go:89] found id: ""
	I0729 19:47:07.504316 1120970 logs.go:276] 0 containers: []
	W0729 19:47:07.504327 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:47:07.504335 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:47:07.504411 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:47:07.538246 1120970 cri.go:89] found id: ""
	I0729 19:47:07.538276 1120970 logs.go:276] 0 containers: []
	W0729 19:47:07.538284 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:47:07.538291 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:47:07.538351 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:47:07.573911 1120970 cri.go:89] found id: ""
	I0729 19:47:07.573939 1120970 logs.go:276] 0 containers: []
	W0729 19:47:07.573948 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:47:07.573957 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:47:07.573970 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:47:07.588083 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:47:07.588129 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:47:07.656169 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:47:07.656198 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:47:07.656216 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:47:07.740230 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:47:07.740264 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:47:07.780822 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:47:07.780856 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:47:10.336208 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:47:10.350233 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:47:10.350307 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:47:10.389155 1120970 cri.go:89] found id: ""
	I0729 19:47:10.389190 1120970 logs.go:276] 0 containers: []
	W0729 19:47:10.389202 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:47:10.389210 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:47:10.389277 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:47:10.421432 1120970 cri.go:89] found id: ""
	I0729 19:47:10.421466 1120970 logs.go:276] 0 containers: []
	W0729 19:47:10.421476 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:47:10.421482 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:47:10.421552 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:47:10.462530 1120970 cri.go:89] found id: ""
	I0729 19:47:10.462563 1120970 logs.go:276] 0 containers: []
	W0729 19:47:10.462572 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:47:10.462577 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:47:10.462640 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:47:10.499899 1120970 cri.go:89] found id: ""
	I0729 19:47:10.499927 1120970 logs.go:276] 0 containers: []
	W0729 19:47:10.499935 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:47:10.499945 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:47:10.500007 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:47:10.534022 1120970 cri.go:89] found id: ""
	I0729 19:47:10.534051 1120970 logs.go:276] 0 containers: []
	W0729 19:47:10.534060 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:47:10.534066 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:47:10.534119 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:47:10.568136 1120970 cri.go:89] found id: ""
	I0729 19:47:10.568166 1120970 logs.go:276] 0 containers: []
	W0729 19:47:10.568174 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:47:10.568181 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:47:10.568246 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:47:10.603887 1120970 cri.go:89] found id: ""
	I0729 19:47:10.603919 1120970 logs.go:276] 0 containers: []
	W0729 19:47:10.603930 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:47:10.603938 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:47:10.604005 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:47:10.639947 1120970 cri.go:89] found id: ""
	I0729 19:47:10.639974 1120970 logs.go:276] 0 containers: []
	W0729 19:47:10.639981 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:47:10.639989 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:47:10.640001 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:47:10.693113 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:47:10.693146 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:47:10.708099 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:47:10.708138 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:47:10.777587 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:47:10.777618 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:47:10.777634 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:47:10.872453 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:47:10.872499 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:47:13.412398 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:47:13.426246 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:47:13.426308 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:47:13.463170 1120970 cri.go:89] found id: ""
	I0729 19:47:13.463202 1120970 logs.go:276] 0 containers: []
	W0729 19:47:13.463213 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:47:13.463220 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:47:13.463287 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:47:13.499102 1120970 cri.go:89] found id: ""
	I0729 19:47:13.499137 1120970 logs.go:276] 0 containers: []
	W0729 19:47:13.499146 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:47:13.499151 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:47:13.499235 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:47:13.531462 1120970 cri.go:89] found id: ""
	I0729 19:47:13.531514 1120970 logs.go:276] 0 containers: []
	W0729 19:47:13.531526 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:47:13.531534 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:47:13.531606 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:47:13.564632 1120970 cri.go:89] found id: ""
	I0729 19:47:13.564670 1120970 logs.go:276] 0 containers: []
	W0729 19:47:13.564681 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:47:13.564689 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:47:13.564745 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:47:13.596564 1120970 cri.go:89] found id: ""
	I0729 19:47:13.596591 1120970 logs.go:276] 0 containers: []
	W0729 19:47:13.596602 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:47:13.596610 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:47:13.596686 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:47:13.629682 1120970 cri.go:89] found id: ""
	I0729 19:47:13.629711 1120970 logs.go:276] 0 containers: []
	W0729 19:47:13.629721 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:47:13.629729 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:47:13.629791 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:47:13.664666 1120970 cri.go:89] found id: ""
	I0729 19:47:13.664693 1120970 logs.go:276] 0 containers: []
	W0729 19:47:13.664701 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:47:13.664708 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:47:13.664777 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:47:13.699238 1120970 cri.go:89] found id: ""
	I0729 19:47:13.699267 1120970 logs.go:276] 0 containers: []
	W0729 19:47:13.699277 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:47:13.699289 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:47:13.699304 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:47:13.751555 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:47:13.751588 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:47:13.766769 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:47:13.766801 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:47:13.834898 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:47:13.834918 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:47:13.834932 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:47:13.913907 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:47:13.913944 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:47:16.457229 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:47:16.470138 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:47:16.470222 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:47:16.504643 1120970 cri.go:89] found id: ""
	I0729 19:47:16.504679 1120970 logs.go:276] 0 containers: []
	W0729 19:47:16.504688 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:47:16.504693 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:47:16.504763 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:47:16.539328 1120970 cri.go:89] found id: ""
	I0729 19:47:16.539368 1120970 logs.go:276] 0 containers: []
	W0729 19:47:16.539379 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:47:16.539385 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:47:16.539446 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:47:16.597867 1120970 cri.go:89] found id: ""
	I0729 19:47:16.597893 1120970 logs.go:276] 0 containers: []
	W0729 19:47:16.597904 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:47:16.597911 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:47:16.597976 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:47:16.631728 1120970 cri.go:89] found id: ""
	I0729 19:47:16.631755 1120970 logs.go:276] 0 containers: []
	W0729 19:47:16.631768 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:47:16.631780 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:47:16.631842 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:47:16.668337 1120970 cri.go:89] found id: ""
	I0729 19:47:16.668377 1120970 logs.go:276] 0 containers: []
	W0729 19:47:16.668387 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:47:16.668395 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:47:16.668461 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:47:16.704808 1120970 cri.go:89] found id: ""
	I0729 19:47:16.704834 1120970 logs.go:276] 0 containers: []
	W0729 19:47:16.704844 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:47:16.704851 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:47:16.704911 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:47:16.743919 1120970 cri.go:89] found id: ""
	I0729 19:47:16.743948 1120970 logs.go:276] 0 containers: []
	W0729 19:47:16.743955 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:47:16.743961 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:47:16.744022 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:47:16.785240 1120970 cri.go:89] found id: ""
	I0729 19:47:16.785268 1120970 logs.go:276] 0 containers: []
	W0729 19:47:16.785279 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:47:16.785290 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:47:16.785306 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:47:16.838247 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:47:16.838288 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:47:16.851766 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:47:16.851797 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:47:16.928960 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:47:16.928986 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:47:16.929002 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:47:17.008260 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:47:17.008296 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:47:19.555108 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:47:19.569838 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:47:19.569917 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:47:19.608358 1120970 cri.go:89] found id: ""
	I0729 19:47:19.608393 1120970 logs.go:276] 0 containers: []
	W0729 19:47:19.608405 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:47:19.608414 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:47:19.608475 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:47:19.644144 1120970 cri.go:89] found id: ""
	I0729 19:47:19.644173 1120970 logs.go:276] 0 containers: []
	W0729 19:47:19.644183 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:47:19.644191 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:47:19.644259 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:47:19.686316 1120970 cri.go:89] found id: ""
	I0729 19:47:19.686342 1120970 logs.go:276] 0 containers: []
	W0729 19:47:19.686353 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:47:19.686359 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:47:19.686419 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:47:19.722006 1120970 cri.go:89] found id: ""
	I0729 19:47:19.722034 1120970 logs.go:276] 0 containers: []
	W0729 19:47:19.722044 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:47:19.722052 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:47:19.722127 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:47:19.762767 1120970 cri.go:89] found id: ""
	I0729 19:47:19.762799 1120970 logs.go:276] 0 containers: []
	W0729 19:47:19.762811 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:47:19.762818 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:47:19.762904 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:47:19.802185 1120970 cri.go:89] found id: ""
	I0729 19:47:19.802217 1120970 logs.go:276] 0 containers: []
	W0729 19:47:19.802228 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:47:19.802238 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:47:19.802311 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:47:19.840001 1120970 cri.go:89] found id: ""
	I0729 19:47:19.840036 1120970 logs.go:276] 0 containers: []
	W0729 19:47:19.840048 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:47:19.840056 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:47:19.840117 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:47:19.877627 1120970 cri.go:89] found id: ""
	I0729 19:47:19.877657 1120970 logs.go:276] 0 containers: []
	W0729 19:47:19.877668 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:47:19.877681 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:47:19.877698 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:47:19.920673 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:47:19.920708 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:47:19.980004 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:47:19.980045 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:47:19.994679 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:47:19.994714 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:47:20.064864 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:47:20.064892 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:47:20.064910 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:47:22.650763 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:47:22.664998 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:47:22.665079 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:47:22.701576 1120970 cri.go:89] found id: ""
	I0729 19:47:22.701611 1120970 logs.go:276] 0 containers: []
	W0729 19:47:22.701620 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:47:22.701630 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:47:22.701689 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:47:22.744238 1120970 cri.go:89] found id: ""
	I0729 19:47:22.744268 1120970 logs.go:276] 0 containers: []
	W0729 19:47:22.744275 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:47:22.744287 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:47:22.744358 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:47:22.785947 1120970 cri.go:89] found id: ""
	I0729 19:47:22.785974 1120970 logs.go:276] 0 containers: []
	W0729 19:47:22.785982 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:47:22.785988 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:47:22.786047 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:47:22.823352 1120970 cri.go:89] found id: ""
	I0729 19:47:22.823379 1120970 logs.go:276] 0 containers: []
	W0729 19:47:22.823387 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:47:22.823394 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:47:22.823462 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:47:22.855676 1120970 cri.go:89] found id: ""
	I0729 19:47:22.855704 1120970 logs.go:276] 0 containers: []
	W0729 19:47:22.855710 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:47:22.855716 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:47:22.855773 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:47:22.891910 1120970 cri.go:89] found id: ""
	I0729 19:47:22.891943 1120970 logs.go:276] 0 containers: []
	W0729 19:47:22.891956 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:47:22.891964 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:47:22.892025 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:47:22.928605 1120970 cri.go:89] found id: ""
	I0729 19:47:22.928638 1120970 logs.go:276] 0 containers: []
	W0729 19:47:22.928648 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:47:22.928658 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:47:22.928728 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:47:22.985022 1120970 cri.go:89] found id: ""
	I0729 19:47:22.985059 1120970 logs.go:276] 0 containers: []
	W0729 19:47:22.985068 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:47:22.985088 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:47:22.985101 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:47:23.073062 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:47:23.073098 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:47:23.114995 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:47:23.115024 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:47:23.171536 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:47:23.171570 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:47:23.185192 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:47:23.185219 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:47:23.259355 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:47:25.760046 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:47:25.774159 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:47:25.774245 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:47:25.808374 1120970 cri.go:89] found id: ""
	I0729 19:47:25.808406 1120970 logs.go:276] 0 containers: []
	W0729 19:47:25.808417 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:47:25.808424 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:47:25.808486 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:47:25.843623 1120970 cri.go:89] found id: ""
	I0729 19:47:25.843655 1120970 logs.go:276] 0 containers: []
	W0729 19:47:25.843666 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:47:25.843673 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:47:25.843774 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:47:25.880200 1120970 cri.go:89] found id: ""
	I0729 19:47:25.880233 1120970 logs.go:276] 0 containers: []
	W0729 19:47:25.880243 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:47:25.880250 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:47:25.880323 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:47:25.915349 1120970 cri.go:89] found id: ""
	I0729 19:47:25.915374 1120970 logs.go:276] 0 containers: []
	W0729 19:47:25.915381 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:47:25.915391 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:47:25.915444 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:47:25.948092 1120970 cri.go:89] found id: ""
	I0729 19:47:25.948134 1120970 logs.go:276] 0 containers: []
	W0729 19:47:25.948145 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:47:25.948153 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:47:25.948220 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:47:25.981836 1120970 cri.go:89] found id: ""
	I0729 19:47:25.981864 1120970 logs.go:276] 0 containers: []
	W0729 19:47:25.981874 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:47:25.981882 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:47:25.981967 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:47:26.014464 1120970 cri.go:89] found id: ""
	I0729 19:47:26.014494 1120970 logs.go:276] 0 containers: []
	W0729 19:47:26.014502 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:47:26.014515 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:47:26.014580 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:47:26.048607 1120970 cri.go:89] found id: ""
	I0729 19:47:26.048635 1120970 logs.go:276] 0 containers: []
	W0729 19:47:26.048646 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:47:26.048667 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:47:26.048683 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:47:26.100962 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:47:26.101002 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:47:26.116404 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:47:26.116434 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:47:26.183714 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:47:26.183734 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:47:26.183747 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:47:26.260308 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:47:26.260347 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:47:28.802593 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:47:28.815317 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:47:28.815380 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:47:28.849448 1120970 cri.go:89] found id: ""
	I0729 19:47:28.849473 1120970 logs.go:276] 0 containers: []
	W0729 19:47:28.849480 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:47:28.849486 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:47:28.849544 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:47:28.888305 1120970 cri.go:89] found id: ""
	I0729 19:47:28.888342 1120970 logs.go:276] 0 containers: []
	W0729 19:47:28.888353 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:47:28.888360 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:47:28.888421 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:47:28.921000 1120970 cri.go:89] found id: ""
	I0729 19:47:28.921034 1120970 logs.go:276] 0 containers: []
	W0729 19:47:28.921045 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:47:28.921054 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:47:28.921116 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:47:28.953546 1120970 cri.go:89] found id: ""
	I0729 19:47:28.953574 1120970 logs.go:276] 0 containers: []
	W0729 19:47:28.953583 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:47:28.953589 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:47:28.953652 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:47:28.991203 1120970 cri.go:89] found id: ""
	I0729 19:47:28.991236 1120970 logs.go:276] 0 containers: []
	W0729 19:47:28.991248 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:47:28.991256 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:47:28.991329 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:47:29.026151 1120970 cri.go:89] found id: ""
	I0729 19:47:29.026183 1120970 logs.go:276] 0 containers: []
	W0729 19:47:29.026195 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:47:29.026203 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:47:29.026271 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:47:29.059654 1120970 cri.go:89] found id: ""
	I0729 19:47:29.059687 1120970 logs.go:276] 0 containers: []
	W0729 19:47:29.059695 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:47:29.059702 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:47:29.059756 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:47:29.091952 1120970 cri.go:89] found id: ""
	I0729 19:47:29.092001 1120970 logs.go:276] 0 containers: []
	W0729 19:47:29.092012 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:47:29.092024 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:47:29.092043 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:47:29.143511 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:47:29.143543 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:47:29.157752 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:47:29.157781 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:47:29.225599 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:47:29.225621 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:47:29.225634 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:47:29.311329 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:47:29.311370 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:47:31.850921 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:47:31.864594 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:47:31.864675 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:47:31.898580 1120970 cri.go:89] found id: ""
	I0729 19:47:31.898622 1120970 logs.go:276] 0 containers: []
	W0729 19:47:31.898631 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:47:31.898638 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:47:31.898693 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:47:31.932481 1120970 cri.go:89] found id: ""
	I0729 19:47:31.932514 1120970 logs.go:276] 0 containers: []
	W0729 19:47:31.932525 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:47:31.932533 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:47:31.932595 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:47:31.964820 1120970 cri.go:89] found id: ""
	I0729 19:47:31.964857 1120970 logs.go:276] 0 containers: []
	W0729 19:47:31.964868 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:47:31.964876 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:47:31.964957 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:47:31.996854 1120970 cri.go:89] found id: ""
	I0729 19:47:31.996889 1120970 logs.go:276] 0 containers: []
	W0729 19:47:31.996900 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:47:31.996908 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:47:31.996975 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:47:32.031808 1120970 cri.go:89] found id: ""
	I0729 19:47:32.031843 1120970 logs.go:276] 0 containers: []
	W0729 19:47:32.031854 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:47:32.031864 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:47:32.031934 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:47:32.064563 1120970 cri.go:89] found id: ""
	I0729 19:47:32.064593 1120970 logs.go:276] 0 containers: []
	W0729 19:47:32.064608 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:47:32.064615 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:47:32.064677 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:47:32.102811 1120970 cri.go:89] found id: ""
	I0729 19:47:32.102859 1120970 logs.go:276] 0 containers: []
	W0729 19:47:32.102871 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:47:32.102879 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:47:32.102952 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:47:32.136770 1120970 cri.go:89] found id: ""
	I0729 19:47:32.136798 1120970 logs.go:276] 0 containers: []
	W0729 19:47:32.136808 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:47:32.136819 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:47:32.136838 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:47:32.189334 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:47:32.189371 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:47:32.204039 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:47:32.204076 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:47:32.274139 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:47:32.274172 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:47:32.274187 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:47:32.350191 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:47:32.350228 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:47:34.889718 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:47:34.903796 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:47:34.903877 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:47:34.938860 1120970 cri.go:89] found id: ""
	I0729 19:47:34.938893 1120970 logs.go:276] 0 containers: []
	W0729 19:47:34.938904 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:47:34.938912 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:47:34.938980 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:47:34.970501 1120970 cri.go:89] found id: ""
	I0729 19:47:34.970544 1120970 logs.go:276] 0 containers: []
	W0729 19:47:34.970553 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:47:34.970559 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:47:34.970626 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:47:35.006915 1120970 cri.go:89] found id: ""
	I0729 19:47:35.006943 1120970 logs.go:276] 0 containers: []
	W0729 19:47:35.006950 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:47:35.006957 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:47:35.007020 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:47:35.040827 1120970 cri.go:89] found id: ""
	I0729 19:47:35.040855 1120970 logs.go:276] 0 containers: []
	W0729 19:47:35.040862 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:47:35.040869 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:47:35.040918 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:47:35.075497 1120970 cri.go:89] found id: ""
	I0729 19:47:35.075527 1120970 logs.go:276] 0 containers: []
	W0729 19:47:35.075537 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:47:35.075544 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:47:35.075598 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:47:35.111265 1120970 cri.go:89] found id: ""
	I0729 19:47:35.111293 1120970 logs.go:276] 0 containers: []
	W0729 19:47:35.111302 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:47:35.111308 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:47:35.111363 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:47:35.145728 1120970 cri.go:89] found id: ""
	I0729 19:47:35.145756 1120970 logs.go:276] 0 containers: []
	W0729 19:47:35.145763 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:47:35.145769 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:47:35.145821 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:47:35.185050 1120970 cri.go:89] found id: ""
	I0729 19:47:35.185078 1120970 logs.go:276] 0 containers: []
	W0729 19:47:35.185088 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:47:35.185100 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:47:35.185117 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:47:35.236835 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:47:35.236867 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:47:35.251263 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:47:35.251290 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:47:35.325888 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:47:35.325912 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:47:35.325925 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:47:35.404779 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:47:35.404819 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:47:37.944941 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:47:37.960885 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:47:37.960954 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:47:38.007612 1120970 cri.go:89] found id: ""
	I0729 19:47:38.007639 1120970 logs.go:276] 0 containers: []
	W0729 19:47:38.007648 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:47:38.007655 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:47:38.007721 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:47:38.044568 1120970 cri.go:89] found id: ""
	I0729 19:47:38.044610 1120970 logs.go:276] 0 containers: []
	W0729 19:47:38.044621 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:47:38.044628 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:47:38.044698 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:47:38.085186 1120970 cri.go:89] found id: ""
	I0729 19:47:38.085217 1120970 logs.go:276] 0 containers: []
	W0729 19:47:38.085227 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:47:38.085235 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:47:38.085303 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:47:38.123039 1120970 cri.go:89] found id: ""
	I0729 19:47:38.123070 1120970 logs.go:276] 0 containers: []
	W0729 19:47:38.123082 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:47:38.123090 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:47:38.123158 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:47:38.166191 1120970 cri.go:89] found id: ""
	I0729 19:47:38.166220 1120970 logs.go:276] 0 containers: []
	W0729 19:47:38.166229 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:47:38.166237 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:47:38.166301 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:47:38.204138 1120970 cri.go:89] found id: ""
	I0729 19:47:38.204170 1120970 logs.go:276] 0 containers: []
	W0729 19:47:38.204179 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:47:38.204186 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:47:38.204286 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:47:38.241599 1120970 cri.go:89] found id: ""
	I0729 19:47:38.241629 1120970 logs.go:276] 0 containers: []
	W0729 19:47:38.241638 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:47:38.241643 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:47:38.241695 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:47:38.276986 1120970 cri.go:89] found id: ""
	I0729 19:47:38.277013 1120970 logs.go:276] 0 containers: []
	W0729 19:47:38.277021 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:47:38.277030 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:47:38.277042 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:47:38.330925 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:47:38.330971 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:47:38.345416 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:47:38.345455 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:47:38.420010 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:47:38.420041 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:47:38.420059 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:47:38.506198 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:47:38.506243 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:47:41.048957 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:47:41.062950 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:47:41.063027 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:47:41.108956 1120970 cri.go:89] found id: ""
	I0729 19:47:41.108987 1120970 logs.go:276] 0 containers: []
	W0729 19:47:41.108995 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:47:41.109002 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:47:41.109068 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:47:41.146952 1120970 cri.go:89] found id: ""
	I0729 19:47:41.146984 1120970 logs.go:276] 0 containers: []
	W0729 19:47:41.146994 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:47:41.147002 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:47:41.147068 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:47:41.190277 1120970 cri.go:89] found id: ""
	I0729 19:47:41.190310 1120970 logs.go:276] 0 containers: []
	W0729 19:47:41.190321 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:47:41.190329 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:47:41.190410 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:47:41.226733 1120970 cri.go:89] found id: ""
	I0729 19:47:41.226762 1120970 logs.go:276] 0 containers: []
	W0729 19:47:41.226770 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:47:41.226777 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:47:41.226862 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:47:41.260761 1120970 cri.go:89] found id: ""
	I0729 19:47:41.260790 1120970 logs.go:276] 0 containers: []
	W0729 19:47:41.260798 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:47:41.260804 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:47:41.260871 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:47:41.296325 1120970 cri.go:89] found id: ""
	I0729 19:47:41.296356 1120970 logs.go:276] 0 containers: []
	W0729 19:47:41.296367 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:47:41.296376 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:47:41.296435 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:47:41.329613 1120970 cri.go:89] found id: ""
	I0729 19:47:41.329642 1120970 logs.go:276] 0 containers: []
	W0729 19:47:41.329651 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:47:41.329657 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:47:41.329717 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:47:41.365182 1120970 cri.go:89] found id: ""
	I0729 19:47:41.365212 1120970 logs.go:276] 0 containers: []
	W0729 19:47:41.365220 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:47:41.365229 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:47:41.365243 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:47:41.416107 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:47:41.416143 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:47:41.429529 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:47:41.429562 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:47:41.499546 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:47:41.499568 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:47:41.499582 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:47:41.582010 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:47:41.582049 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:47:44.122162 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:47:44.136767 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:47:44.136850 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:47:44.171574 1120970 cri.go:89] found id: ""
	I0729 19:47:44.171610 1120970 logs.go:276] 0 containers: []
	W0729 19:47:44.171621 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:47:44.171629 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:47:44.171699 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:47:44.206974 1120970 cri.go:89] found id: ""
	I0729 19:47:44.207004 1120970 logs.go:276] 0 containers: []
	W0729 19:47:44.207013 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:47:44.207019 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:47:44.207068 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:47:44.240412 1120970 cri.go:89] found id: ""
	I0729 19:47:44.240438 1120970 logs.go:276] 0 containers: []
	W0729 19:47:44.240449 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:47:44.240457 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:47:44.240521 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:47:44.274434 1120970 cri.go:89] found id: ""
	I0729 19:47:44.274464 1120970 logs.go:276] 0 containers: []
	W0729 19:47:44.274475 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:47:44.274482 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:47:44.274553 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:47:44.313302 1120970 cri.go:89] found id: ""
	I0729 19:47:44.313330 1120970 logs.go:276] 0 containers: []
	W0729 19:47:44.313339 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:47:44.313354 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:47:44.313426 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:47:44.344853 1120970 cri.go:89] found id: ""
	I0729 19:47:44.344885 1120970 logs.go:276] 0 containers: []
	W0729 19:47:44.344895 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:47:44.344903 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:47:44.344970 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:47:44.378055 1120970 cri.go:89] found id: ""
	I0729 19:47:44.378089 1120970 logs.go:276] 0 containers: []
	W0729 19:47:44.378101 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:47:44.378109 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:47:44.378176 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:47:44.412734 1120970 cri.go:89] found id: ""
	I0729 19:47:44.412762 1120970 logs.go:276] 0 containers: []
	W0729 19:47:44.412772 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:47:44.412782 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:47:44.412795 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:47:44.468125 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:47:44.468157 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:47:44.482896 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:47:44.482923 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:47:44.551222 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:47:44.551249 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:47:44.551270 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:47:44.630413 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:47:44.630455 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:47:47.172322 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:47:47.186383 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:47:47.186463 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:47:47.221577 1120970 cri.go:89] found id: ""
	I0729 19:47:47.221610 1120970 logs.go:276] 0 containers: []
	W0729 19:47:47.221617 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:47:47.221623 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:47:47.221686 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:47:47.260164 1120970 cri.go:89] found id: ""
	I0729 19:47:47.260207 1120970 logs.go:276] 0 containers: []
	W0729 19:47:47.260227 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:47:47.260235 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:47:47.260303 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:47:47.297101 1120970 cri.go:89] found id: ""
	I0729 19:47:47.297130 1120970 logs.go:276] 0 containers: []
	W0729 19:47:47.297139 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:47:47.297148 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:47:47.297211 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:47:47.332429 1120970 cri.go:89] found id: ""
	I0729 19:47:47.332464 1120970 logs.go:276] 0 containers: []
	W0729 19:47:47.332474 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:47:47.332484 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:47:47.332538 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:47:47.366021 1120970 cri.go:89] found id: ""
	I0729 19:47:47.366055 1120970 logs.go:276] 0 containers: []
	W0729 19:47:47.366065 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:47:47.366074 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:47:47.366144 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:47:47.401278 1120970 cri.go:89] found id: ""
	I0729 19:47:47.401307 1120970 logs.go:276] 0 containers: []
	W0729 19:47:47.401315 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:47:47.401321 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:47:47.401395 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:47:47.435717 1120970 cri.go:89] found id: ""
	I0729 19:47:47.435748 1120970 logs.go:276] 0 containers: []
	W0729 19:47:47.435756 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:47:47.435770 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:47:47.435835 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:47:47.472120 1120970 cri.go:89] found id: ""
	I0729 19:47:47.472149 1120970 logs.go:276] 0 containers: []
	W0729 19:47:47.472157 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:47:47.472167 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:47:47.472181 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:47:47.529466 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:47:47.529503 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:47:47.544072 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:47:47.544102 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:47:47.614456 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:47:47.614478 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:47:47.614499 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:47:47.693271 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:47:47.693305 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:47:50.232417 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:47:50.246080 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:47:50.246154 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:47:50.285256 1120970 cri.go:89] found id: ""
	I0729 19:47:50.285284 1120970 logs.go:276] 0 containers: []
	W0729 19:47:50.285294 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:47:50.285302 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:47:50.285364 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:47:50.319443 1120970 cri.go:89] found id: ""
	I0729 19:47:50.319469 1120970 logs.go:276] 0 containers: []
	W0729 19:47:50.319476 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:47:50.319482 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:47:50.319555 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:47:50.356465 1120970 cri.go:89] found id: ""
	I0729 19:47:50.356495 1120970 logs.go:276] 0 containers: []
	W0729 19:47:50.356505 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:47:50.356512 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:47:50.356578 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:47:50.393920 1120970 cri.go:89] found id: ""
	I0729 19:47:50.393954 1120970 logs.go:276] 0 containers: []
	W0729 19:47:50.393965 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:47:50.393973 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:47:50.394052 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:47:50.430287 1120970 cri.go:89] found id: ""
	I0729 19:47:50.430320 1120970 logs.go:276] 0 containers: []
	W0729 19:47:50.430333 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:47:50.430341 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:47:50.430411 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:47:50.465501 1120970 cri.go:89] found id: ""
	I0729 19:47:50.465528 1120970 logs.go:276] 0 containers: []
	W0729 19:47:50.465536 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:47:50.465542 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:47:50.465595 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:47:50.504012 1120970 cri.go:89] found id: ""
	I0729 19:47:50.504042 1120970 logs.go:276] 0 containers: []
	W0729 19:47:50.504051 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:47:50.504063 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:47:50.504122 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:47:50.545117 1120970 cri.go:89] found id: ""
	I0729 19:47:50.545151 1120970 logs.go:276] 0 containers: []
	W0729 19:47:50.545163 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:47:50.545175 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:47:50.545198 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:47:50.618183 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:47:50.618213 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:47:50.618232 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:47:50.697577 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:47:50.697611 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:47:50.745910 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:47:50.745949 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:47:50.797458 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:47:50.797501 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:47:53.311907 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:47:53.326666 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:47:53.326734 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:47:53.361564 1120970 cri.go:89] found id: ""
	I0729 19:47:53.361596 1120970 logs.go:276] 0 containers: []
	W0729 19:47:53.361614 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:47:53.361621 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:47:53.361685 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:47:53.397867 1120970 cri.go:89] found id: ""
	I0729 19:47:53.397899 1120970 logs.go:276] 0 containers: []
	W0729 19:47:53.397910 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:47:53.397918 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:47:53.398023 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:47:53.438721 1120970 cri.go:89] found id: ""
	I0729 19:47:53.438752 1120970 logs.go:276] 0 containers: []
	W0729 19:47:53.438764 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:47:53.438771 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:47:53.438840 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:47:53.477746 1120970 cri.go:89] found id: ""
	I0729 19:47:53.477776 1120970 logs.go:276] 0 containers: []
	W0729 19:47:53.477787 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:47:53.477794 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:47:53.477863 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:47:53.510899 1120970 cri.go:89] found id: ""
	I0729 19:47:53.510928 1120970 logs.go:276] 0 containers: []
	W0729 19:47:53.510936 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:47:53.510941 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:47:53.510994 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:47:53.545749 1120970 cri.go:89] found id: ""
	I0729 19:47:53.545786 1120970 logs.go:276] 0 containers: []
	W0729 19:47:53.545799 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:47:53.545807 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:47:53.545883 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:47:53.585542 1120970 cri.go:89] found id: ""
	I0729 19:47:53.585575 1120970 logs.go:276] 0 containers: []
	W0729 19:47:53.585586 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:47:53.585593 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:47:53.585666 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:47:53.617974 1120970 cri.go:89] found id: ""
	I0729 19:47:53.618006 1120970 logs.go:276] 0 containers: []
	W0729 19:47:53.618014 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:47:53.618024 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:47:53.618036 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:47:53.670860 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:47:53.670897 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:47:53.685089 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:47:53.685120 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:47:53.760570 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:47:53.760598 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:47:53.760611 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:47:53.848973 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:47:53.849018 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:47:56.394206 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:47:56.409087 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:47:56.409167 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:47:56.447553 1120970 cri.go:89] found id: ""
	I0729 19:47:56.447589 1120970 logs.go:276] 0 containers: []
	W0729 19:47:56.447607 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:47:56.447615 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:47:56.447694 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:47:56.485948 1120970 cri.go:89] found id: ""
	I0729 19:47:56.485978 1120970 logs.go:276] 0 containers: []
	W0729 19:47:56.485986 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:47:56.485992 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:47:56.486061 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:47:56.521722 1120970 cri.go:89] found id: ""
	I0729 19:47:56.521762 1120970 logs.go:276] 0 containers: []
	W0729 19:47:56.521784 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:47:56.521792 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:47:56.521855 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:47:56.557379 1120970 cri.go:89] found id: ""
	I0729 19:47:56.557414 1120970 logs.go:276] 0 containers: []
	W0729 19:47:56.557425 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:47:56.557433 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:47:56.557488 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:47:56.595198 1120970 cri.go:89] found id: ""
	I0729 19:47:56.595225 1120970 logs.go:276] 0 containers: []
	W0729 19:47:56.595233 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:47:56.595240 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:47:56.595306 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:47:56.629298 1120970 cri.go:89] found id: ""
	I0729 19:47:56.629330 1120970 logs.go:276] 0 containers: []
	W0729 19:47:56.629337 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:47:56.629344 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:47:56.629410 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:47:56.663401 1120970 cri.go:89] found id: ""
	I0729 19:47:56.663434 1120970 logs.go:276] 0 containers: []
	W0729 19:47:56.663445 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:47:56.663453 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:47:56.663519 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:47:56.699622 1120970 cri.go:89] found id: ""
	I0729 19:47:56.699651 1120970 logs.go:276] 0 containers: []
	W0729 19:47:56.699661 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:47:56.699672 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:47:56.699688 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:47:56.739680 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:47:56.739713 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:47:56.794605 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:47:56.794647 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:47:56.824479 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:47:56.824510 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:47:56.889186 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:47:56.889209 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:47:56.889224 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:47:59.472943 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:47:59.488574 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:47:59.488657 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:47:59.528870 1120970 cri.go:89] found id: ""
	I0729 19:47:59.528910 1120970 logs.go:276] 0 containers: []
	W0729 19:47:59.528921 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:47:59.528930 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:47:59.529001 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:47:59.565299 1120970 cri.go:89] found id: ""
	I0729 19:47:59.565331 1120970 logs.go:276] 0 containers: []
	W0729 19:47:59.565343 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:47:59.565351 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:47:59.565419 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:47:59.604951 1120970 cri.go:89] found id: ""
	I0729 19:47:59.604985 1120970 logs.go:276] 0 containers: []
	W0729 19:47:59.604996 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:47:59.605005 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:47:59.605076 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:47:59.639094 1120970 cri.go:89] found id: ""
	I0729 19:47:59.639121 1120970 logs.go:276] 0 containers: []
	W0729 19:47:59.639130 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:47:59.639138 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:47:59.639205 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:47:59.674360 1120970 cri.go:89] found id: ""
	I0729 19:47:59.674392 1120970 logs.go:276] 0 containers: []
	W0729 19:47:59.674401 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:47:59.674407 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:47:59.674462 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:47:59.712926 1120970 cri.go:89] found id: ""
	I0729 19:47:59.712950 1120970 logs.go:276] 0 containers: []
	W0729 19:47:59.712959 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:47:59.712965 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:47:59.713026 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:47:59.750493 1120970 cri.go:89] found id: ""
	I0729 19:47:59.750524 1120970 logs.go:276] 0 containers: []
	W0729 19:47:59.750532 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:47:59.750539 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:47:59.750603 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:47:59.790635 1120970 cri.go:89] found id: ""
	I0729 19:47:59.790663 1120970 logs.go:276] 0 containers: []
	W0729 19:47:59.790672 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:47:59.790687 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:47:59.790703 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:47:59.844160 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:47:59.844194 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:47:59.858123 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:47:59.858152 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:47:59.931561 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:47:59.931592 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:47:59.931609 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:48:00.014902 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:48:00.014947 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:48:02.555856 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:48:02.572781 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:48:02.572852 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:48:02.611005 1120970 cri.go:89] found id: ""
	I0729 19:48:02.611033 1120970 logs.go:276] 0 containers: []
	W0729 19:48:02.611043 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:48:02.611049 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:48:02.611101 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:48:02.652844 1120970 cri.go:89] found id: ""
	I0729 19:48:02.652870 1120970 logs.go:276] 0 containers: []
	W0729 19:48:02.652876 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:48:02.652883 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:48:02.652937 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:48:02.694690 1120970 cri.go:89] found id: ""
	I0729 19:48:02.694719 1120970 logs.go:276] 0 containers: []
	W0729 19:48:02.694729 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:48:02.694738 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:48:02.694799 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:48:02.729527 1120970 cri.go:89] found id: ""
	I0729 19:48:02.729558 1120970 logs.go:276] 0 containers: []
	W0729 19:48:02.729569 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:48:02.729576 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:48:02.729649 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:48:02.763460 1120970 cri.go:89] found id: ""
	I0729 19:48:02.763488 1120970 logs.go:276] 0 containers: []
	W0729 19:48:02.763497 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:48:02.763503 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:48:02.763556 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:48:02.798268 1120970 cri.go:89] found id: ""
	I0729 19:48:02.798294 1120970 logs.go:276] 0 containers: []
	W0729 19:48:02.798302 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:48:02.798309 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:48:02.798360 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:48:02.837540 1120970 cri.go:89] found id: ""
	I0729 19:48:02.837579 1120970 logs.go:276] 0 containers: []
	W0729 19:48:02.837591 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:48:02.837605 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:48:02.837672 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:48:02.873574 1120970 cri.go:89] found id: ""
	I0729 19:48:02.873612 1120970 logs.go:276] 0 containers: []
	W0729 19:48:02.873624 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:48:02.873646 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:48:02.873663 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:48:02.926260 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:48:02.926296 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:48:02.940593 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:48:02.940618 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:48:03.015778 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:48:03.015800 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:48:03.015818 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:48:03.099824 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:48:03.099859 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:48:05.639291 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:48:05.652370 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:48:05.652431 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:48:05.686594 1120970 cri.go:89] found id: ""
	I0729 19:48:05.686624 1120970 logs.go:276] 0 containers: []
	W0729 19:48:05.686633 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:48:05.686640 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:48:05.686701 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:48:05.722162 1120970 cri.go:89] found id: ""
	I0729 19:48:05.722192 1120970 logs.go:276] 0 containers: []
	W0729 19:48:05.722209 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:48:05.722216 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:48:05.722284 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:48:05.754309 1120970 cri.go:89] found id: ""
	I0729 19:48:05.754338 1120970 logs.go:276] 0 containers: []
	W0729 19:48:05.754349 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:48:05.754357 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:48:05.754449 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:48:05.786934 1120970 cri.go:89] found id: ""
	I0729 19:48:05.786962 1120970 logs.go:276] 0 containers: []
	W0729 19:48:05.786968 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:48:05.786974 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:48:05.787032 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:48:05.821454 1120970 cri.go:89] found id: ""
	I0729 19:48:05.821487 1120970 logs.go:276] 0 containers: []
	W0729 19:48:05.821498 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:48:05.821506 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:48:05.821575 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:48:05.855436 1120970 cri.go:89] found id: ""
	I0729 19:48:05.855467 1120970 logs.go:276] 0 containers: []
	W0729 19:48:05.855478 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:48:05.855486 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:48:05.855551 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:48:05.887414 1120970 cri.go:89] found id: ""
	I0729 19:48:05.887447 1120970 logs.go:276] 0 containers: []
	W0729 19:48:05.887466 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:48:05.887477 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:48:05.887549 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:48:05.924173 1120970 cri.go:89] found id: ""
	I0729 19:48:05.924200 1120970 logs.go:276] 0 containers: []
	W0729 19:48:05.924208 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:48:05.924218 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:48:05.924231 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:48:05.977839 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:48:05.977872 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:48:05.991324 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:48:05.991359 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:48:06.065904 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:48:06.065931 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:48:06.065949 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:48:06.149225 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:48:06.149258 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:48:08.689901 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:48:08.705008 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:48:08.705073 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:48:08.746191 1120970 cri.go:89] found id: ""
	I0729 19:48:08.746222 1120970 logs.go:276] 0 containers: []
	W0729 19:48:08.746232 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:48:08.746240 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:48:08.746306 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:48:08.792092 1120970 cri.go:89] found id: ""
	I0729 19:48:08.792120 1120970 logs.go:276] 0 containers: []
	W0729 19:48:08.792130 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:48:08.792137 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:48:08.792196 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:48:08.831535 1120970 cri.go:89] found id: ""
	I0729 19:48:08.831567 1120970 logs.go:276] 0 containers: []
	W0729 19:48:08.831577 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:48:08.831585 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:48:08.831650 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:48:08.871544 1120970 cri.go:89] found id: ""
	I0729 19:48:08.871576 1120970 logs.go:276] 0 containers: []
	W0729 19:48:08.871587 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:48:08.871594 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:48:08.871661 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:48:08.909562 1120970 cri.go:89] found id: ""
	I0729 19:48:08.909594 1120970 logs.go:276] 0 containers: []
	W0729 19:48:08.909611 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:48:08.909621 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:48:08.909698 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:48:08.953074 1120970 cri.go:89] found id: ""
	I0729 19:48:08.953109 1120970 logs.go:276] 0 containers: []
	W0729 19:48:08.953121 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:48:08.953130 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:48:08.953202 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:48:08.992361 1120970 cri.go:89] found id: ""
	I0729 19:48:08.992400 1120970 logs.go:276] 0 containers: []
	W0729 19:48:08.992412 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:48:08.992421 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:48:08.992488 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:48:09.046065 1120970 cri.go:89] found id: ""
	I0729 19:48:09.046093 1120970 logs.go:276] 0 containers: []
	W0729 19:48:09.046101 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:48:09.046113 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:48:09.046134 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:48:09.103453 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:48:09.103494 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:48:09.117220 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:48:09.117245 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:48:09.188222 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:48:09.188252 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:48:09.188270 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:48:09.271640 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:48:09.271677 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:48:11.812430 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:48:11.827291 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:48:11.827387 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:48:11.865062 1120970 cri.go:89] found id: ""
	I0729 19:48:11.865099 1120970 logs.go:276] 0 containers: []
	W0729 19:48:11.865111 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:48:11.865120 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:48:11.865212 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:48:11.899431 1120970 cri.go:89] found id: ""
	I0729 19:48:11.899465 1120970 logs.go:276] 0 containers: []
	W0729 19:48:11.899475 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:48:11.899483 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:48:11.899547 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:48:11.933796 1120970 cri.go:89] found id: ""
	I0729 19:48:11.933831 1120970 logs.go:276] 0 containers: []
	W0729 19:48:11.933843 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:48:11.933851 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:48:11.933920 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:48:11.976911 1120970 cri.go:89] found id: ""
	I0729 19:48:11.976941 1120970 logs.go:276] 0 containers: []
	W0729 19:48:11.976951 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:48:11.976958 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:48:11.977020 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:48:12.012692 1120970 cri.go:89] found id: ""
	I0729 19:48:12.012723 1120970 logs.go:276] 0 containers: []
	W0729 19:48:12.012732 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:48:12.012738 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:48:12.012801 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:48:12.049648 1120970 cri.go:89] found id: ""
	I0729 19:48:12.049684 1120970 logs.go:276] 0 containers: []
	W0729 19:48:12.049695 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:48:12.049704 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:48:12.049771 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:48:12.093629 1120970 cri.go:89] found id: ""
	I0729 19:48:12.093662 1120970 logs.go:276] 0 containers: []
	W0729 19:48:12.093673 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:48:12.093682 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:48:12.093752 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:48:12.130835 1120970 cri.go:89] found id: ""
	I0729 19:48:12.130887 1120970 logs.go:276] 0 containers: []
	W0729 19:48:12.130899 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:48:12.130912 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:48:12.130930 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:48:12.168464 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:48:12.168494 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:48:12.224722 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:48:12.224767 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:48:12.238454 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:48:12.238491 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:48:12.309122 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:48:12.309156 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:48:12.309171 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:48:14.892160 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:48:14.906036 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:48:14.906105 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:48:14.939106 1120970 cri.go:89] found id: ""
	I0729 19:48:14.939136 1120970 logs.go:276] 0 containers: []
	W0729 19:48:14.939144 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:48:14.939151 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:48:14.939218 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:48:14.973776 1120970 cri.go:89] found id: ""
	I0729 19:48:14.973806 1120970 logs.go:276] 0 containers: []
	W0729 19:48:14.973817 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:48:14.973825 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:48:14.973887 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:48:15.004448 1120970 cri.go:89] found id: ""
	I0729 19:48:15.004475 1120970 logs.go:276] 0 containers: []
	W0729 19:48:15.004483 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:48:15.004489 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:48:15.004556 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:48:15.038066 1120970 cri.go:89] found id: ""
	I0729 19:48:15.038093 1120970 logs.go:276] 0 containers: []
	W0729 19:48:15.038101 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:48:15.038110 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:48:15.038174 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:48:15.070539 1120970 cri.go:89] found id: ""
	I0729 19:48:15.070568 1120970 logs.go:276] 0 containers: []
	W0729 19:48:15.070577 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:48:15.070585 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:48:15.070646 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:48:15.103880 1120970 cri.go:89] found id: ""
	I0729 19:48:15.103922 1120970 logs.go:276] 0 containers: []
	W0729 19:48:15.103934 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:48:15.103943 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:48:15.104013 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:48:15.140762 1120970 cri.go:89] found id: ""
	I0729 19:48:15.140785 1120970 logs.go:276] 0 containers: []
	W0729 19:48:15.140792 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:48:15.140798 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:48:15.140850 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:48:15.174376 1120970 cri.go:89] found id: ""
	I0729 19:48:15.174411 1120970 logs.go:276] 0 containers: []
	W0729 19:48:15.174422 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:48:15.174434 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:48:15.174457 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:48:15.231283 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:48:15.231319 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:48:15.245103 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:48:15.245131 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:48:15.317664 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:48:15.317685 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:48:15.317701 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:48:15.404545 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:48:15.404600 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:48:17.949406 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:48:17.963001 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:48:17.963084 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:48:18.003227 1120970 cri.go:89] found id: ""
	I0729 19:48:18.003263 1120970 logs.go:276] 0 containers: []
	W0729 19:48:18.003274 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:48:18.003284 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:48:18.003363 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:48:18.037680 1120970 cri.go:89] found id: ""
	I0729 19:48:18.037716 1120970 logs.go:276] 0 containers: []
	W0729 19:48:18.037727 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:48:18.037736 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:48:18.037804 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:48:18.081360 1120970 cri.go:89] found id: ""
	I0729 19:48:18.081393 1120970 logs.go:276] 0 containers: []
	W0729 19:48:18.081403 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:48:18.081412 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:48:18.081479 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:48:18.115582 1120970 cri.go:89] found id: ""
	I0729 19:48:18.115619 1120970 logs.go:276] 0 containers: []
	W0729 19:48:18.115630 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:48:18.115639 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:48:18.115708 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:48:18.159771 1120970 cri.go:89] found id: ""
	I0729 19:48:18.159807 1120970 logs.go:276] 0 containers: []
	W0729 19:48:18.159818 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:48:18.159826 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:48:18.159899 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:48:18.206073 1120970 cri.go:89] found id: ""
	I0729 19:48:18.206100 1120970 logs.go:276] 0 containers: []
	W0729 19:48:18.206107 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:48:18.206113 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:48:18.206173 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:48:18.241841 1120970 cri.go:89] found id: ""
	I0729 19:48:18.241880 1120970 logs.go:276] 0 containers: []
	W0729 19:48:18.241892 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:48:18.241900 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:48:18.241969 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:48:18.280068 1120970 cri.go:89] found id: ""
	I0729 19:48:18.280099 1120970 logs.go:276] 0 containers: []
	W0729 19:48:18.280110 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:48:18.280123 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:48:18.280143 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:48:18.360236 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:48:18.360268 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:48:18.360285 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:48:18.447648 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:48:18.447693 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:48:18.489625 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:48:18.489663 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:48:18.543428 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:48:18.543476 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:48:21.058220 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:48:21.073079 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:48:21.073168 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:48:21.111334 1120970 cri.go:89] found id: ""
	I0729 19:48:21.111377 1120970 logs.go:276] 0 containers: []
	W0729 19:48:21.111389 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:48:21.111398 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:48:21.111462 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:48:21.144757 1120970 cri.go:89] found id: ""
	I0729 19:48:21.144788 1120970 logs.go:276] 0 containers: []
	W0729 19:48:21.144798 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:48:21.144806 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:48:21.144872 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:48:21.178887 1120970 cri.go:89] found id: ""
	I0729 19:48:21.178919 1120970 logs.go:276] 0 containers: []
	W0729 19:48:21.178927 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:48:21.178934 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:48:21.179000 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:48:21.216561 1120970 cri.go:89] found id: ""
	I0729 19:48:21.216589 1120970 logs.go:276] 0 containers: []
	W0729 19:48:21.216605 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:48:21.216612 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:48:21.216679 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:48:21.252564 1120970 cri.go:89] found id: ""
	I0729 19:48:21.252601 1120970 logs.go:276] 0 containers: []
	W0729 19:48:21.252612 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:48:21.252621 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:48:21.252692 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:48:21.287372 1120970 cri.go:89] found id: ""
	I0729 19:48:21.287399 1120970 logs.go:276] 0 containers: []
	W0729 19:48:21.287410 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:48:21.287418 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:48:21.287482 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:48:21.325121 1120970 cri.go:89] found id: ""
	I0729 19:48:21.325159 1120970 logs.go:276] 0 containers: []
	W0729 19:48:21.325169 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:48:21.325177 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:48:21.325248 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:48:21.359113 1120970 cri.go:89] found id: ""
	I0729 19:48:21.359145 1120970 logs.go:276] 0 containers: []
	W0729 19:48:21.359156 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:48:21.359169 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:48:21.359185 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:48:21.416196 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:48:21.416233 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:48:21.430635 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:48:21.430668 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:48:21.498436 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:48:21.498461 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:48:21.498478 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:48:21.578602 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:48:21.578643 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:48:24.117802 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:48:24.132716 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:48:24.132796 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:48:24.168658 1120970 cri.go:89] found id: ""
	I0729 19:48:24.168689 1120970 logs.go:276] 0 containers: []
	W0729 19:48:24.168698 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:48:24.168703 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:48:24.168763 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:48:24.211499 1120970 cri.go:89] found id: ""
	I0729 19:48:24.211533 1120970 logs.go:276] 0 containers: []
	W0729 19:48:24.211543 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:48:24.211551 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:48:24.211622 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:48:24.244579 1120970 cri.go:89] found id: ""
	I0729 19:48:24.244607 1120970 logs.go:276] 0 containers: []
	W0729 19:48:24.244616 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:48:24.244622 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:48:24.244680 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:48:24.278356 1120970 cri.go:89] found id: ""
	I0729 19:48:24.278386 1120970 logs.go:276] 0 containers: []
	W0729 19:48:24.278396 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:48:24.278404 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:48:24.278469 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:48:24.314725 1120970 cri.go:89] found id: ""
	I0729 19:48:24.314760 1120970 logs.go:276] 0 containers: []
	W0729 19:48:24.314771 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:48:24.314779 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:48:24.314870 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:48:24.349743 1120970 cri.go:89] found id: ""
	I0729 19:48:24.349772 1120970 logs.go:276] 0 containers: []
	W0729 19:48:24.349781 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:48:24.349788 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:48:24.349863 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:48:24.382484 1120970 cri.go:89] found id: ""
	I0729 19:48:24.382511 1120970 logs.go:276] 0 containers: []
	W0729 19:48:24.382521 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:48:24.382529 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:48:24.382606 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:48:24.418986 1120970 cri.go:89] found id: ""
	I0729 19:48:24.419013 1120970 logs.go:276] 0 containers: []
	W0729 19:48:24.419020 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:48:24.419030 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:48:24.419052 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:48:24.456725 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:48:24.456762 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:48:24.508592 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:48:24.508628 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:48:24.521610 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:48:24.521642 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:48:24.591015 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:48:24.591041 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:48:24.591058 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:48:27.170099 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:48:27.183543 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:48:27.183619 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:48:27.218044 1120970 cri.go:89] found id: ""
	I0729 19:48:27.218075 1120970 logs.go:276] 0 containers: []
	W0729 19:48:27.218083 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:48:27.218090 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:48:27.218154 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:48:27.251613 1120970 cri.go:89] found id: ""
	I0729 19:48:27.251638 1120970 logs.go:276] 0 containers: []
	W0729 19:48:27.251646 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:48:27.251651 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:48:27.251707 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:48:27.291540 1120970 cri.go:89] found id: ""
	I0729 19:48:27.291569 1120970 logs.go:276] 0 containers: []
	W0729 19:48:27.291578 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:48:27.291586 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:48:27.291650 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:48:27.322921 1120970 cri.go:89] found id: ""
	I0729 19:48:27.322956 1120970 logs.go:276] 0 containers: []
	W0729 19:48:27.322965 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:48:27.322973 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:48:27.323042 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:48:27.360337 1120970 cri.go:89] found id: ""
	I0729 19:48:27.360370 1120970 logs.go:276] 0 containers: []
	W0729 19:48:27.360381 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:48:27.360389 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:48:27.360448 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:48:27.398445 1120970 cri.go:89] found id: ""
	I0729 19:48:27.398490 1120970 logs.go:276] 0 containers: []
	W0729 19:48:27.398502 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:48:27.398510 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:48:27.398577 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:48:27.432147 1120970 cri.go:89] found id: ""
	I0729 19:48:27.432176 1120970 logs.go:276] 0 containers: []
	W0729 19:48:27.432184 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:48:27.432191 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:48:27.432260 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:48:27.471347 1120970 cri.go:89] found id: ""
	I0729 19:48:27.471380 1120970 logs.go:276] 0 containers: []
	W0729 19:48:27.471392 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:48:27.471404 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:48:27.471421 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:48:27.526997 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:48:27.527032 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:48:27.541189 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:48:27.541219 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:48:27.612270 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:48:27.612293 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:48:27.612310 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:48:27.688940 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:48:27.688979 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:48:30.228578 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:48:30.241827 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:48:30.241896 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:48:30.275201 1120970 cri.go:89] found id: ""
	I0729 19:48:30.275230 1120970 logs.go:276] 0 containers: []
	W0729 19:48:30.275241 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:48:30.275249 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:48:30.275305 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:48:30.313499 1120970 cri.go:89] found id: ""
	I0729 19:48:30.313526 1120970 logs.go:276] 0 containers: []
	W0729 19:48:30.313534 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:48:30.313540 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:48:30.313593 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:48:30.348036 1120970 cri.go:89] found id: ""
	I0729 19:48:30.348063 1120970 logs.go:276] 0 containers: []
	W0729 19:48:30.348072 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:48:30.348078 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:48:30.348148 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:48:30.383104 1120970 cri.go:89] found id: ""
	I0729 19:48:30.383135 1120970 logs.go:276] 0 containers: []
	W0729 19:48:30.383147 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:48:30.383155 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:48:30.383244 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:48:30.421367 1120970 cri.go:89] found id: ""
	I0729 19:48:30.421395 1120970 logs.go:276] 0 containers: []
	W0729 19:48:30.421404 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:48:30.421418 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:48:30.421484 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:48:30.460712 1120970 cri.go:89] found id: ""
	I0729 19:48:30.460746 1120970 logs.go:276] 0 containers: []
	W0729 19:48:30.460758 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:48:30.460767 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:48:30.460832 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:48:30.503728 1120970 cri.go:89] found id: ""
	I0729 19:48:30.503757 1120970 logs.go:276] 0 containers: []
	W0729 19:48:30.503769 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:48:30.503777 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:48:30.503842 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:48:30.544605 1120970 cri.go:89] found id: ""
	I0729 19:48:30.544639 1120970 logs.go:276] 0 containers: []
	W0729 19:48:30.544651 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:48:30.544663 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:48:30.544680 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:48:30.559616 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:48:30.559652 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:48:30.634554 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:48:30.634578 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:48:30.634599 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:48:30.717930 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:48:30.717968 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:48:30.759109 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:48:30.759140 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:48:33.313550 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:48:33.327425 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:48:33.327483 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:48:33.369009 1120970 cri.go:89] found id: ""
	I0729 19:48:33.369037 1120970 logs.go:276] 0 containers: []
	W0729 19:48:33.369047 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:48:33.369054 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:48:33.369121 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:48:33.406459 1120970 cri.go:89] found id: ""
	I0729 19:48:33.406491 1120970 logs.go:276] 0 containers: []
	W0729 19:48:33.406501 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:48:33.406509 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:48:33.406579 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:48:33.444176 1120970 cri.go:89] found id: ""
	I0729 19:48:33.444210 1120970 logs.go:276] 0 containers: []
	W0729 19:48:33.444222 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:48:33.444230 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:48:33.444297 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:48:33.482882 1120970 cri.go:89] found id: ""
	I0729 19:48:33.482977 1120970 logs.go:276] 0 containers: []
	W0729 19:48:33.482994 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:48:33.483002 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:48:33.483070 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:48:33.516972 1120970 cri.go:89] found id: ""
	I0729 19:48:33.516999 1120970 logs.go:276] 0 containers: []
	W0729 19:48:33.517009 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:48:33.517015 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:48:33.517077 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:48:33.557559 1120970 cri.go:89] found id: ""
	I0729 19:48:33.557598 1120970 logs.go:276] 0 containers: []
	W0729 19:48:33.557620 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:48:33.557629 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:48:33.557699 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:48:33.592756 1120970 cri.go:89] found id: ""
	I0729 19:48:33.592786 1120970 logs.go:276] 0 containers: []
	W0729 19:48:33.592793 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:48:33.592799 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:48:33.592858 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:48:33.626104 1120970 cri.go:89] found id: ""
	I0729 19:48:33.626136 1120970 logs.go:276] 0 containers: []
	W0729 19:48:33.626147 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:48:33.626158 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:48:33.626175 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:48:33.680456 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:48:33.680498 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:48:33.694700 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:48:33.694732 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:48:33.770833 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:48:33.770863 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:48:33.770881 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:48:33.847537 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:48:33.847571 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:48:36.390251 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:48:36.403265 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:48:36.403377 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:48:36.437189 1120970 cri.go:89] found id: ""
	I0729 19:48:36.437216 1120970 logs.go:276] 0 containers: []
	W0729 19:48:36.437227 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:48:36.437235 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:48:36.437296 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:48:36.471025 1120970 cri.go:89] found id: ""
	I0729 19:48:36.471056 1120970 logs.go:276] 0 containers: []
	W0729 19:48:36.471067 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:48:36.471083 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:48:36.471143 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:48:36.504736 1120970 cri.go:89] found id: ""
	I0729 19:48:36.504767 1120970 logs.go:276] 0 containers: []
	W0729 19:48:36.504779 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:48:36.504787 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:48:36.504852 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:48:36.537866 1120970 cri.go:89] found id: ""
	I0729 19:48:36.537893 1120970 logs.go:276] 0 containers: []
	W0729 19:48:36.537903 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:48:36.537911 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:48:36.537974 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:48:36.574083 1120970 cri.go:89] found id: ""
	I0729 19:48:36.574116 1120970 logs.go:276] 0 containers: []
	W0729 19:48:36.574127 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:48:36.574136 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:48:36.574199 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:48:36.613130 1120970 cri.go:89] found id: ""
	I0729 19:48:36.613160 1120970 logs.go:276] 0 containers: []
	W0729 19:48:36.613172 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:48:36.613179 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:48:36.613244 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:48:36.649617 1120970 cri.go:89] found id: ""
	I0729 19:48:36.649644 1120970 logs.go:276] 0 containers: []
	W0729 19:48:36.649655 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:48:36.649663 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:48:36.649731 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:48:36.688729 1120970 cri.go:89] found id: ""
	I0729 19:48:36.688765 1120970 logs.go:276] 0 containers: []
	W0729 19:48:36.688777 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:48:36.688790 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:48:36.688807 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:48:36.741483 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:48:36.741524 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:48:36.759730 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:48:36.759777 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:48:36.847102 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:48:36.847129 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:48:36.847148 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:48:36.928364 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:48:36.928403 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:48:39.468501 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:48:39.482102 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:48:39.482180 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:48:39.522722 1120970 cri.go:89] found id: ""
	I0729 19:48:39.522754 1120970 logs.go:276] 0 containers: []
	W0729 19:48:39.522763 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:48:39.522769 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:48:39.522824 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:48:39.561057 1120970 cri.go:89] found id: ""
	I0729 19:48:39.561088 1120970 logs.go:276] 0 containers: []
	W0729 19:48:39.561098 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:48:39.561106 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:48:39.561185 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:48:39.599802 1120970 cri.go:89] found id: ""
	I0729 19:48:39.599831 1120970 logs.go:276] 0 containers: []
	W0729 19:48:39.599840 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:48:39.599848 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:48:39.599920 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:48:39.634935 1120970 cri.go:89] found id: ""
	I0729 19:48:39.634966 1120970 logs.go:276] 0 containers: []
	W0729 19:48:39.634978 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:48:39.634986 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:48:39.635054 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:48:39.670682 1120970 cri.go:89] found id: ""
	I0729 19:48:39.670713 1120970 logs.go:276] 0 containers: []
	W0729 19:48:39.670721 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:48:39.670728 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:48:39.670798 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:48:39.705988 1120970 cri.go:89] found id: ""
	I0729 19:48:39.706024 1120970 logs.go:276] 0 containers: []
	W0729 19:48:39.706034 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:48:39.706042 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:48:39.706112 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:48:39.743886 1120970 cri.go:89] found id: ""
	I0729 19:48:39.743919 1120970 logs.go:276] 0 containers: []
	W0729 19:48:39.743931 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:48:39.743938 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:48:39.744007 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:48:39.781966 1120970 cri.go:89] found id: ""
	I0729 19:48:39.782000 1120970 logs.go:276] 0 containers: []
	W0729 19:48:39.782011 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:48:39.782023 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:48:39.782040 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:48:39.836034 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:48:39.836074 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:48:39.849330 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:48:39.849365 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:48:39.922803 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:48:39.922832 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:48:39.922860 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:48:40.006015 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:48:40.006061 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:48:42.556277 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:48:42.569657 1120970 kubeadm.go:597] duration metric: took 4m2.867642237s to restartPrimaryControlPlane
	W0729 19:48:42.569742 1120970 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0729 19:48:42.569773 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0729 19:48:43.033878 1120970 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 19:48:43.048499 1120970 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 19:48:43.058936 1120970 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 19:48:43.070746 1120970 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 19:48:43.070766 1120970 kubeadm.go:157] found existing configuration files:
	
	I0729 19:48:43.070814 1120970 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 19:48:43.079568 1120970 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 19:48:43.079631 1120970 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 19:48:43.088576 1120970 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 19:48:43.097654 1120970 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 19:48:43.097723 1120970 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 19:48:43.107155 1120970 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 19:48:43.117105 1120970 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 19:48:43.117152 1120970 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 19:48:43.126933 1120970 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 19:48:43.136114 1120970 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 19:48:43.136162 1120970 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 19:48:43.145196 1120970 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 19:48:43.365894 1120970 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 19:50:40.036000 1120970 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0729 19:50:40.036324 1120970 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0729 19:50:40.038447 1120970 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0729 19:50:40.038603 1120970 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 19:50:40.038790 1120970 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 19:50:40.039225 1120970 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 19:50:40.039617 1120970 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0729 19:50:40.039731 1120970 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 19:50:40.041420 1120970 out.go:204]   - Generating certificates and keys ...
	I0729 19:50:40.041522 1120970 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 19:50:40.041589 1120970 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 19:50:40.041712 1120970 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0729 19:50:40.041810 1120970 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0729 19:50:40.041935 1120970 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0729 19:50:40.042019 1120970 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0729 19:50:40.042111 1120970 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0729 19:50:40.042190 1120970 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0729 19:50:40.042285 1120970 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0729 19:50:40.042401 1120970 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0729 19:50:40.042465 1120970 kubeadm.go:310] [certs] Using the existing "sa" key
	I0729 19:50:40.042535 1120970 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 19:50:40.042581 1120970 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 19:50:40.042628 1120970 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 19:50:40.042698 1120970 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 19:50:40.042781 1120970 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 19:50:40.042934 1120970 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 19:50:40.043061 1120970 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 19:50:40.043128 1120970 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 19:50:40.043208 1120970 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 19:50:40.044637 1120970 out.go:204]   - Booting up control plane ...
	I0729 19:50:40.044750 1120970 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 19:50:40.044847 1120970 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 19:50:40.044908 1120970 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 19:50:40.044976 1120970 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 19:50:40.045145 1120970 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0729 19:50:40.045212 1120970 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0729 19:50:40.045276 1120970 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 19:50:40.045442 1120970 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 19:50:40.045511 1120970 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 19:50:40.045697 1120970 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 19:50:40.045797 1120970 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 19:50:40.046043 1120970 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 19:50:40.046153 1120970 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 19:50:40.046441 1120970 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 19:50:40.046567 1120970 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 19:50:40.046878 1120970 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 19:50:40.046894 1120970 kubeadm.go:310] 
	I0729 19:50:40.046945 1120970 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0729 19:50:40.047019 1120970 kubeadm.go:310] 		timed out waiting for the condition
	I0729 19:50:40.047039 1120970 kubeadm.go:310] 
	I0729 19:50:40.047104 1120970 kubeadm.go:310] 	This error is likely caused by:
	I0729 19:50:40.047158 1120970 kubeadm.go:310] 		- The kubelet is not running
	I0729 19:50:40.047301 1120970 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0729 19:50:40.047312 1120970 kubeadm.go:310] 
	I0729 19:50:40.047465 1120970 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0729 19:50:40.047513 1120970 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0729 19:50:40.047558 1120970 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0729 19:50:40.047567 1120970 kubeadm.go:310] 
	I0729 19:50:40.047728 1120970 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0729 19:50:40.047859 1120970 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0729 19:50:40.047870 1120970 kubeadm.go:310] 
	I0729 19:50:40.048028 1120970 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0729 19:50:40.048161 1120970 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0729 19:50:40.048274 1120970 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0729 19:50:40.048387 1120970 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0729 19:50:40.048422 1120970 kubeadm.go:310] 
	W0729 19:50:40.048546 1120970 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0729 19:50:40.048632 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0729 19:50:40.512123 1120970 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 19:50:40.526973 1120970 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 19:50:40.540285 1120970 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 19:50:40.540322 1120970 kubeadm.go:157] found existing configuration files:
	
	I0729 19:50:40.540390 1120970 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 19:50:40.550130 1120970 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 19:50:40.550188 1120970 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 19:50:40.560312 1120970 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 19:50:40.570460 1120970 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 19:50:40.570513 1120970 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 19:50:40.579979 1120970 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 19:50:40.589806 1120970 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 19:50:40.589848 1120970 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 19:50:40.599351 1120970 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 19:50:40.609134 1120970 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 19:50:40.609190 1120970 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 19:50:40.618767 1120970 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 19:50:40.686644 1120970 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0729 19:50:40.686775 1120970 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 19:50:40.844131 1120970 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 19:50:40.844252 1120970 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 19:50:40.844357 1120970 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0729 19:50:41.018497 1120970 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 19:50:41.020295 1120970 out.go:204]   - Generating certificates and keys ...
	I0729 19:50:41.020404 1120970 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 19:50:41.020471 1120970 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 19:50:41.020559 1120970 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0729 19:50:41.020614 1120970 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0729 19:50:41.020675 1120970 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0729 19:50:41.020720 1120970 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0729 19:50:41.021041 1120970 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0729 19:50:41.021463 1120970 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0729 19:50:41.021868 1120970 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0729 19:50:41.022329 1120970 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0729 19:50:41.022411 1120970 kubeadm.go:310] [certs] Using the existing "sa" key
	I0729 19:50:41.022503 1120970 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 19:50:41.204952 1120970 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 19:50:41.438572 1120970 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 19:50:41.878587 1120970 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 19:50:42.428806 1120970 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 19:50:42.447931 1120970 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 19:50:42.448990 1120970 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 19:50:42.449131 1120970 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 19:50:42.580942 1120970 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 19:50:42.582493 1120970 out.go:204]   - Booting up control plane ...
	I0729 19:50:42.582600 1120970 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 19:50:42.589862 1120970 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 19:50:42.590833 1120970 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 19:50:42.591685 1120970 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 19:50:42.594079 1120970 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0729 19:51:22.596326 1120970 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0729 19:51:22.596639 1120970 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 19:51:22.596846 1120970 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 19:51:27.597439 1120970 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 19:51:27.597671 1120970 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 19:51:37.598638 1120970 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 19:51:37.598811 1120970 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 19:51:57.599401 1120970 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 19:51:57.599704 1120970 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 19:52:37.597710 1120970 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 19:52:37.597992 1120970 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 19:52:37.598034 1120970 kubeadm.go:310] 
	I0729 19:52:37.598090 1120970 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0729 19:52:37.598166 1120970 kubeadm.go:310] 		timed out waiting for the condition
	I0729 19:52:37.598179 1120970 kubeadm.go:310] 
	I0729 19:52:37.598228 1120970 kubeadm.go:310] 	This error is likely caused by:
	I0729 19:52:37.598326 1120970 kubeadm.go:310] 		- The kubelet is not running
	I0729 19:52:37.598515 1120970 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0729 19:52:37.598528 1120970 kubeadm.go:310] 
	I0729 19:52:37.598671 1120970 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0729 19:52:37.598715 1120970 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0729 19:52:37.598777 1120970 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0729 19:52:37.598806 1120970 kubeadm.go:310] 
	I0729 19:52:37.598984 1120970 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0729 19:52:37.599100 1120970 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0729 19:52:37.599114 1120970 kubeadm.go:310] 
	I0729 19:52:37.599266 1120970 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0729 19:52:37.599393 1120970 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0729 19:52:37.599499 1120970 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0729 19:52:37.599617 1120970 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0729 19:52:37.599637 1120970 kubeadm.go:310] 
	I0729 19:52:37.600349 1120970 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 19:52:37.600471 1120970 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0729 19:52:37.600641 1120970 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0729 19:52:37.600707 1120970 kubeadm.go:394] duration metric: took 7m57.951835284s to StartCluster
	I0729 19:52:37.600799 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:52:37.600929 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:52:37.643870 1120970 cri.go:89] found id: ""
	I0729 19:52:37.643913 1120970 logs.go:276] 0 containers: []
	W0729 19:52:37.643921 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:52:37.643928 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:52:37.643993 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:52:37.679484 1120970 cri.go:89] found id: ""
	I0729 19:52:37.679519 1120970 logs.go:276] 0 containers: []
	W0729 19:52:37.679529 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:52:37.679535 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:52:37.679602 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:52:37.716326 1120970 cri.go:89] found id: ""
	I0729 19:52:37.716358 1120970 logs.go:276] 0 containers: []
	W0729 19:52:37.716366 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:52:37.716372 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:52:37.716427 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:52:37.751441 1120970 cri.go:89] found id: ""
	I0729 19:52:37.751468 1120970 logs.go:276] 0 containers: []
	W0729 19:52:37.751477 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:52:37.751483 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:52:37.751555 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:52:37.791309 1120970 cri.go:89] found id: ""
	I0729 19:52:37.791334 1120970 logs.go:276] 0 containers: []
	W0729 19:52:37.791343 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:52:37.791354 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:52:37.791409 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:52:37.824637 1120970 cri.go:89] found id: ""
	I0729 19:52:37.824664 1120970 logs.go:276] 0 containers: []
	W0729 19:52:37.824674 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:52:37.824682 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:52:37.824749 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:52:37.863031 1120970 cri.go:89] found id: ""
	I0729 19:52:37.863060 1120970 logs.go:276] 0 containers: []
	W0729 19:52:37.863068 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:52:37.863074 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:52:37.863134 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:52:37.905864 1120970 cri.go:89] found id: ""
	I0729 19:52:37.905918 1120970 logs.go:276] 0 containers: []
	W0729 19:52:37.905931 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:52:37.905945 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:52:37.905965 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:52:37.958561 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:52:37.958601 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:52:37.983602 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:52:37.983635 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:52:38.080775 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:52:38.080810 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:52:38.080827 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:52:38.185475 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:52:38.185512 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0729 19:52:38.227581 1120970 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0729 19:52:38.227653 1120970 out.go:239] * 
	W0729 19:52:38.227722 1120970 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0729 19:52:38.227748 1120970 out.go:239] * 
	W0729 19:52:38.228777 1120970 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 19:52:38.231684 1120970 out.go:177] 
	W0729 19:52:38.232618 1120970 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0729 19:52:38.232683 1120970 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0729 19:52:38.232710 1120970 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0729 19:52:38.234472 1120970 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p old-k8s-version-021528 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
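The suggestion in the captured log above maps onto this failing invocation roughly as follows; an illustrative retry assuming the kubelet cgroup-driver hint applies, not a verified fix for this run:

	out/minikube-linux-amd64 start -p old-k8s-version-021528 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd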
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-021528 -n old-k8s-version-021528
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-021528 -n old-k8s-version-021528: exit status 2 (233.821871ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-021528 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-021528 logs -n 25: (1.504742956s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-184620 sudo cat                              | bridge-184620                | jenkins | v1.33.1 | 29 Jul 24 19:36 UTC | 29 Jul 24 19:36 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p bridge-184620 sudo                                  | bridge-184620                | jenkins | v1.33.1 | 29 Jul 24 19:36 UTC | 29 Jul 24 19:36 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p bridge-184620 sudo                                  | bridge-184620                | jenkins | v1.33.1 | 29 Jul 24 19:36 UTC | 29 Jul 24 19:36 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-184620 sudo                                  | bridge-184620                | jenkins | v1.33.1 | 29 Jul 24 19:36 UTC | 29 Jul 24 19:36 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-184620 sudo find                             | bridge-184620                | jenkins | v1.33.1 | 29 Jul 24 19:36 UTC | 29 Jul 24 19:36 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-184620 sudo crio                             | bridge-184620                | jenkins | v1.33.1 | 29 Jul 24 19:36 UTC | 29 Jul 24 19:36 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-184620                                       | bridge-184620                | jenkins | v1.33.1 | 29 Jul 24 19:36 UTC | 29 Jul 24 19:36 UTC |
	| delete  | -p                                                     | disable-driver-mounts-251895 | jenkins | v1.33.1 | 29 Jul 24 19:36 UTC | 29 Jul 24 19:36 UTC |
	|         | disable-driver-mounts-251895                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-024652 | jenkins | v1.33.1 | 29 Jul 24 19:36 UTC | 29 Jul 24 19:37 UTC |
	|         | default-k8s-diff-port-024652                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-843792             | no-preload-843792            | jenkins | v1.33.1 | 29 Jul 24 19:36 UTC | 29 Jul 24 19:36 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-843792                                   | no-preload-843792            | jenkins | v1.33.1 | 29 Jul 24 19:36 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-358053            | embed-certs-358053           | jenkins | v1.33.1 | 29 Jul 24 19:36 UTC | 29 Jul 24 19:36 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-358053                                  | embed-certs-358053           | jenkins | v1.33.1 | 29 Jul 24 19:36 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-024652  | default-k8s-diff-port-024652 | jenkins | v1.33.1 | 29 Jul 24 19:37 UTC | 29 Jul 24 19:37 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-024652 | jenkins | v1.33.1 | 29 Jul 24 19:37 UTC |                     |
	|         | default-k8s-diff-port-024652                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-843792                  | no-preload-843792            | jenkins | v1.33.1 | 29 Jul 24 19:38 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-843792 --memory=2200                     | no-preload-843792            | jenkins | v1.33.1 | 29 Jul 24 19:38 UTC | 29 Jul 24 19:50 UTC |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-021528        | old-k8s-version-021528       | jenkins | v1.33.1 | 29 Jul 24 19:39 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-358053                 | embed-certs-358053           | jenkins | v1.33.1 | 29 Jul 24 19:39 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-358053                                  | embed-certs-358053           | jenkins | v1.33.1 | 29 Jul 24 19:39 UTC | 29 Jul 24 19:49 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-024652       | default-k8s-diff-port-024652 | jenkins | v1.33.1 | 29 Jul 24 19:39 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-024652 | jenkins | v1.33.1 | 29 Jul 24 19:39 UTC | 29 Jul 24 19:49 UTC |
	|         | default-k8s-diff-port-024652                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-021528                              | old-k8s-version-021528       | jenkins | v1.33.1 | 29 Jul 24 19:40 UTC | 29 Jul 24 19:40 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-021528             | old-k8s-version-021528       | jenkins | v1.33.1 | 29 Jul 24 19:40 UTC | 29 Jul 24 19:40 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-021528                              | old-k8s-version-021528       | jenkins | v1.33.1 | 29 Jul 24 19:40 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 19:40:57
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 19:40:57.978681 1120970 out.go:291] Setting OutFile to fd 1 ...
	I0729 19:40:57.978791 1120970 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 19:40:57.978802 1120970 out.go:304] Setting ErrFile to fd 2...
	I0729 19:40:57.978806 1120970 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 19:40:57.979026 1120970 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19312-1055011/.minikube/bin
	I0729 19:40:57.979596 1120970 out.go:298] Setting JSON to false
	I0729 19:40:57.980589 1120970 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":12210,"bootTime":1722269848,"procs":198,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 19:40:57.980644 1120970 start.go:139] virtualization: kvm guest
	I0729 19:40:57.982865 1120970 out.go:177] * [old-k8s-version-021528] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 19:40:57.984265 1120970 out.go:177]   - MINIKUBE_LOCATION=19312
	I0729 19:40:57.984290 1120970 notify.go:220] Checking for updates...
	I0729 19:40:57.986747 1120970 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 19:40:57.987926 1120970 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19312-1055011/kubeconfig
	I0729 19:40:57.989034 1120970 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19312-1055011/.minikube
	I0729 19:40:57.990155 1120970 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 19:40:57.991151 1120970 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 19:40:57.992788 1120970 config.go:182] Loaded profile config "old-k8s-version-021528": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0729 19:40:57.993431 1120970 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:40:57.993513 1120970 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:40:58.008423 1120970 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35781
	I0729 19:40:58.008809 1120970 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:40:58.009278 1120970 main.go:141] libmachine: Using API Version  1
	I0729 19:40:58.009298 1120970 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:40:58.009623 1120970 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:40:58.009801 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .DriverName
	I0729 19:40:58.011523 1120970 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0729 19:40:58.012638 1120970 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 19:40:58.012915 1120970 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:40:58.012949 1120970 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:40:58.027302 1120970 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38245
	I0729 19:40:58.027641 1120970 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:40:58.028112 1120970 main.go:141] libmachine: Using API Version  1
	I0729 19:40:58.028144 1120970 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:40:58.028470 1120970 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:40:58.028677 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .DriverName
	I0729 19:40:58.062833 1120970 out.go:177] * Using the kvm2 driver based on existing profile
	I0729 19:40:58.064034 1120970 start.go:297] selected driver: kvm2
	I0729 19:40:58.064048 1120970 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-021528 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-021528 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.65 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 19:40:58.064180 1120970 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 19:40:58.065210 1120970 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 19:40:58.065308 1120970 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19312-1055011/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 19:40:58.079987 1120970 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0729 19:40:58.080369 1120970 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 19:40:58.080432 1120970 cni.go:84] Creating CNI manager for ""
	I0729 19:40:58.080446 1120970 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 19:40:58.080487 1120970 start.go:340] cluster config:
	{Name:old-k8s-version-021528 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-021528 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.65 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 19:40:58.080598 1120970 iso.go:125] acquiring lock: {Name:mk0af61c0fec1fd47930e548d03010a532c687b7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 19:40:58.082281 1120970 out.go:177] * Starting "old-k8s-version-021528" primary control-plane node in "old-k8s-version-021528" cluster
	I0729 19:40:58.083538 1120970 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0729 19:40:58.083567 1120970 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0729 19:40:58.083574 1120970 cache.go:56] Caching tarball of preloaded images
	I0729 19:40:58.083648 1120970 preload.go:172] Found /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 19:40:58.083657 1120970 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0729 19:40:58.083744 1120970 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/old-k8s-version-021528/config.json ...
	I0729 19:40:58.083909 1120970 start.go:360] acquireMachinesLock for old-k8s-version-021528: {Name:mk0d8d947666df844b5fc2c0e0eebbfed69b4140 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 19:40:58.743070 1119948 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.248:22: connect: no route to host
	I0729 19:41:01.815162 1119948 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.248:22: connect: no route to host
	I0729 19:41:07.895109 1119948 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.248:22: connect: no route to host
	I0729 19:41:10.967163 1119948 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.248:22: connect: no route to host
	I0729 19:41:17.047104 1119948 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.248:22: connect: no route to host
	I0729 19:41:20.119110 1119948 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.248:22: connect: no route to host
	I0729 19:41:26.199071 1119948 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.248:22: connect: no route to host
	I0729 19:41:29.271169 1119948 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.248:22: connect: no route to host
	I0729 19:41:35.351112 1119948 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.248:22: connect: no route to host
	I0729 19:41:38.423168 1119948 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.248:22: connect: no route to host
	I0729 19:41:44.503138 1119948 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.248:22: connect: no route to host
	I0729 19:41:47.575152 1119948 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.248:22: connect: no route to host
	I0729 19:41:53.655149 1119948 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.248:22: connect: no route to host
	I0729 19:41:56.727131 1119948 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.248:22: connect: no route to host
	I0729 19:42:02.807132 1119948 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.248:22: connect: no route to host
	I0729 19:42:05.879122 1119948 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.248:22: connect: no route to host
	I0729 19:42:11.959162 1119948 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.248:22: connect: no route to host
	I0729 19:42:15.031086 1119948 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.248:22: connect: no route to host
	I0729 19:42:21.111136 1119948 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.248:22: connect: no route to host
	I0729 19:42:24.183135 1119948 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.248:22: connect: no route to host
	I0729 19:42:30.263164 1119948 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.248:22: connect: no route to host
	I0729 19:42:33.335133 1119948 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.248:22: connect: no route to host
	I0729 19:42:39.415119 1119948 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.248:22: connect: no route to host
	I0729 19:42:42.487148 1119948 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.248:22: connect: no route to host
	I0729 19:42:48.567136 1119948 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.248:22: connect: no route to host
	I0729 19:42:51.639137 1119948 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.248:22: connect: no route to host
	I0729 19:42:57.719135 1119948 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.248:22: connect: no route to host
	I0729 19:43:00.791072 1119948 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.248:22: connect: no route to host
	I0729 19:43:06.871163 1119948 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.248:22: connect: no route to host
	I0729 19:43:09.943159 1119948 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.248:22: connect: no route to host
	I0729 19:43:16.023117 1119948 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.248:22: connect: no route to host
	I0729 19:43:19.095170 1119948 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.248:22: connect: no route to host
	I0729 19:43:25.175078 1119948 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.248:22: connect: no route to host
	I0729 19:43:28.247100 1119948 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.248:22: connect: no route to host
	I0729 19:43:31.250338 1120280 start.go:364] duration metric: took 4m11.087175718s to acquireMachinesLock for "embed-certs-358053"
	I0729 19:43:31.250404 1120280 start.go:96] Skipping create...Using existing machine configuration
	I0729 19:43:31.250411 1120280 fix.go:54] fixHost starting: 
	I0729 19:43:31.250743 1120280 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:43:31.250772 1120280 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:43:31.266386 1120280 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36427
	I0729 19:43:31.266811 1120280 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:43:31.267264 1120280 main.go:141] libmachine: Using API Version  1
	I0729 19:43:31.267290 1120280 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:43:31.267606 1120280 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:43:31.267776 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .DriverName
	I0729 19:43:31.267930 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetState
	I0729 19:43:31.269434 1120280 fix.go:112] recreateIfNeeded on embed-certs-358053: state=Stopped err=<nil>
	I0729 19:43:31.269469 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .DriverName
	W0729 19:43:31.269649 1120280 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 19:43:31.271498 1120280 out.go:177] * Restarting existing kvm2 VM for "embed-certs-358053" ...
	I0729 19:43:31.248030 1119948 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 19:43:31.248063 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetMachineName
	I0729 19:43:31.248357 1119948 buildroot.go:166] provisioning hostname "no-preload-843792"
	I0729 19:43:31.248385 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetMachineName
	I0729 19:43:31.248542 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHHostname
	I0729 19:43:31.250201 1119948 machine.go:97] duration metric: took 4m37.426219796s to provisionDockerMachine
	I0729 19:43:31.250243 1119948 fix.go:56] duration metric: took 4m37.44720731s for fixHost
	I0729 19:43:31.250251 1119948 start.go:83] releasing machines lock for "no-preload-843792", held for 4m37.4472306s
	W0729 19:43:31.250275 1119948 start.go:714] error starting host: provision: host is not running
	W0729 19:43:31.250399 1119948 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0729 19:43:31.250411 1119948 start.go:729] Will try again in 5 seconds ...
	I0729 19:43:31.272835 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .Start
	I0729 19:43:31.272957 1120280 main.go:141] libmachine: (embed-certs-358053) Ensuring networks are active...
	I0729 19:43:31.273784 1120280 main.go:141] libmachine: (embed-certs-358053) Ensuring network default is active
	I0729 19:43:31.274173 1120280 main.go:141] libmachine: (embed-certs-358053) Ensuring network mk-embed-certs-358053 is active
	I0729 19:43:31.274533 1120280 main.go:141] libmachine: (embed-certs-358053) Getting domain xml...
	I0729 19:43:31.275353 1120280 main.go:141] libmachine: (embed-certs-358053) Creating domain...
	I0729 19:43:32.452915 1120280 main.go:141] libmachine: (embed-certs-358053) Waiting to get IP...
	I0729 19:43:32.453981 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:32.454389 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | unable to find current IP address of domain embed-certs-358053 in network mk-embed-certs-358053
	I0729 19:43:32.454483 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | I0729 19:43:32.454365 1121493 retry.go:31] will retry after 241.453693ms: waiting for machine to come up
	I0729 19:43:32.697915 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:32.698300 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | unable to find current IP address of domain embed-certs-358053 in network mk-embed-certs-358053
	I0729 19:43:32.698331 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | I0729 19:43:32.698251 1121493 retry.go:31] will retry after 239.33532ms: waiting for machine to come up
	I0729 19:43:32.939708 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:32.940293 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | unable to find current IP address of domain embed-certs-358053 in network mk-embed-certs-358053
	I0729 19:43:32.940318 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | I0729 19:43:32.940236 1121493 retry.go:31] will retry after 446.993297ms: waiting for machine to come up
	I0729 19:43:33.388724 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:33.389127 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | unable to find current IP address of domain embed-certs-358053 in network mk-embed-certs-358053
	I0729 19:43:33.389158 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | I0729 19:43:33.389070 1121493 retry.go:31] will retry after 422.446887ms: waiting for machine to come up
	I0729 19:43:33.812596 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:33.813022 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | unable to find current IP address of domain embed-certs-358053 in network mk-embed-certs-358053
	I0729 19:43:33.813051 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | I0729 19:43:33.812969 1121493 retry.go:31] will retry after 539.971993ms: waiting for machine to come up
	I0729 19:43:34.354683 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:34.355036 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | unable to find current IP address of domain embed-certs-358053 in network mk-embed-certs-358053
	I0729 19:43:34.355070 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | I0729 19:43:34.354984 1121493 retry.go:31] will retry after 804.005911ms: waiting for machine to come up
	I0729 19:43:36.252290 1119948 start.go:360] acquireMachinesLock for no-preload-843792: {Name:mk0d8d947666df844b5fc2c0e0eebbfed69b4140 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 19:43:35.161115 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:35.161468 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | unable to find current IP address of domain embed-certs-358053 in network mk-embed-certs-358053
	I0729 19:43:35.161505 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | I0729 19:43:35.161430 1121493 retry.go:31] will retry after 1.057061094s: waiting for machine to come up
	I0729 19:43:36.220062 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:36.220425 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | unable to find current IP address of domain embed-certs-358053 in network mk-embed-certs-358053
	I0729 19:43:36.220450 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | I0729 19:43:36.220375 1121493 retry.go:31] will retry after 1.460606435s: waiting for machine to come up
	I0729 19:43:37.683178 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:37.683636 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | unable to find current IP address of domain embed-certs-358053 in network mk-embed-certs-358053
	I0729 19:43:37.683655 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | I0729 19:43:37.683597 1121493 retry.go:31] will retry after 1.732527981s: waiting for machine to come up
	I0729 19:43:39.418519 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:39.418954 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | unable to find current IP address of domain embed-certs-358053 in network mk-embed-certs-358053
	I0729 19:43:39.418977 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | I0729 19:43:39.418904 1121493 retry.go:31] will retry after 2.125686576s: waiting for machine to come up
	I0729 19:43:41.547132 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:41.547733 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | unable to find current IP address of domain embed-certs-358053 in network mk-embed-certs-358053
	I0729 19:43:41.547761 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | I0729 19:43:41.547675 1121493 retry.go:31] will retry after 2.335461887s: waiting for machine to come up
	I0729 19:43:43.884901 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:43.885306 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | unable to find current IP address of domain embed-certs-358053 in network mk-embed-certs-358053
	I0729 19:43:43.885329 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | I0729 19:43:43.885251 1121493 retry.go:31] will retry after 2.493920061s: waiting for machine to come up
	I0729 19:43:46.380895 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:46.381249 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | unable to find current IP address of domain embed-certs-358053 in network mk-embed-certs-358053
	I0729 19:43:46.381283 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | I0729 19:43:46.381209 1121493 retry.go:31] will retry after 4.001159351s: waiting for machine to come up
	I0729 19:43:51.915678 1120587 start.go:364] duration metric: took 3m55.652628622s to acquireMachinesLock for "default-k8s-diff-port-024652"
	I0729 19:43:51.915763 1120587 start.go:96] Skipping create...Using existing machine configuration
	I0729 19:43:51.915773 1120587 fix.go:54] fixHost starting: 
	I0729 19:43:51.916253 1120587 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:43:51.916303 1120587 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:43:51.933248 1120587 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36959
	I0729 19:43:51.933631 1120587 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:43:51.934146 1120587 main.go:141] libmachine: Using API Version  1
	I0729 19:43:51.934178 1120587 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:43:51.934512 1120587 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:43:51.934710 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .DriverName
	I0729 19:43:51.934882 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetState
	I0729 19:43:51.936266 1120587 fix.go:112] recreateIfNeeded on default-k8s-diff-port-024652: state=Stopped err=<nil>
	I0729 19:43:51.936294 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .DriverName
	W0729 19:43:51.936471 1120587 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 19:43:51.938542 1120587 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-024652" ...
	I0729 19:43:50.387313 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:50.387631 1120280 main.go:141] libmachine: (embed-certs-358053) Found IP for machine: 192.168.61.201
	I0729 19:43:50.387649 1120280 main.go:141] libmachine: (embed-certs-358053) Reserving static IP address...
	I0729 19:43:50.387673 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has current primary IP address 192.168.61.201 and MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:50.388059 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | found host DHCP lease matching {name: "embed-certs-358053", mac: "52:54:00:b7:9e:78", ip: "192.168.61.201"} in network mk-embed-certs-358053: {Iface:virbr3 ExpiryTime:2024-07-29 20:43:42 +0000 UTC Type:0 Mac:52:54:00:b7:9e:78 Iaid: IPaddr:192.168.61.201 Prefix:24 Hostname:embed-certs-358053 Clientid:01:52:54:00:b7:9e:78}
	I0729 19:43:50.388088 1120280 main.go:141] libmachine: (embed-certs-358053) Reserved static IP address: 192.168.61.201
	I0729 19:43:50.388122 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | skip adding static IP to network mk-embed-certs-358053 - found existing host DHCP lease matching {name: "embed-certs-358053", mac: "52:54:00:b7:9e:78", ip: "192.168.61.201"}
	I0729 19:43:50.388140 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | Getting to WaitForSSH function...
	I0729 19:43:50.388153 1120280 main.go:141] libmachine: (embed-certs-358053) Waiting for SSH to be available...
	I0729 19:43:50.389891 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:50.390221 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:9e:78", ip: ""} in network mk-embed-certs-358053: {Iface:virbr3 ExpiryTime:2024-07-29 20:43:42 +0000 UTC Type:0 Mac:52:54:00:b7:9e:78 Iaid: IPaddr:192.168.61.201 Prefix:24 Hostname:embed-certs-358053 Clientid:01:52:54:00:b7:9e:78}
	I0729 19:43:50.390251 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined IP address 192.168.61.201 and MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:50.390327 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | Using SSH client type: external
	I0729 19:43:50.390358 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | Using SSH private key: /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/embed-certs-358053/id_rsa (-rw-------)
	I0729 19:43:50.390384 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.201 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/embed-certs-358053/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 19:43:50.390394 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | About to run SSH command:
	I0729 19:43:50.390403 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | exit 0
	I0729 19:43:50.519000 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | SSH cmd err, output: <nil>: 
	I0729 19:43:50.519409 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetConfigRaw
	I0729 19:43:50.520046 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetIP
	I0729 19:43:50.522297 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:50.522663 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:9e:78", ip: ""} in network mk-embed-certs-358053: {Iface:virbr3 ExpiryTime:2024-07-29 20:43:42 +0000 UTC Type:0 Mac:52:54:00:b7:9e:78 Iaid: IPaddr:192.168.61.201 Prefix:24 Hostname:embed-certs-358053 Clientid:01:52:54:00:b7:9e:78}
	I0729 19:43:50.522692 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined IP address 192.168.61.201 and MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:50.522946 1120280 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/embed-certs-358053/config.json ...
	I0729 19:43:50.523145 1120280 machine.go:94] provisionDockerMachine start ...
	I0729 19:43:50.523164 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .DriverName
	I0729 19:43:50.523346 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHHostname
	I0729 19:43:50.525235 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:50.525608 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:9e:78", ip: ""} in network mk-embed-certs-358053: {Iface:virbr3 ExpiryTime:2024-07-29 20:43:42 +0000 UTC Type:0 Mac:52:54:00:b7:9e:78 Iaid: IPaddr:192.168.61.201 Prefix:24 Hostname:embed-certs-358053 Clientid:01:52:54:00:b7:9e:78}
	I0729 19:43:50.525625 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined IP address 192.168.61.201 and MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:50.525729 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHPort
	I0729 19:43:50.525897 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHKeyPath
	I0729 19:43:50.526188 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHKeyPath
	I0729 19:43:50.526332 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHUsername
	I0729 19:43:50.526523 1120280 main.go:141] libmachine: Using SSH client type: native
	I0729 19:43:50.526751 1120280 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.201 22 <nil> <nil>}
	I0729 19:43:50.526765 1120280 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 19:43:50.639176 1120280 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0729 19:43:50.639206 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetMachineName
	I0729 19:43:50.639463 1120280 buildroot.go:166] provisioning hostname "embed-certs-358053"
	I0729 19:43:50.639489 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetMachineName
	I0729 19:43:50.639652 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHHostname
	I0729 19:43:50.642218 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:50.642546 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:9e:78", ip: ""} in network mk-embed-certs-358053: {Iface:virbr3 ExpiryTime:2024-07-29 20:43:42 +0000 UTC Type:0 Mac:52:54:00:b7:9e:78 Iaid: IPaddr:192.168.61.201 Prefix:24 Hostname:embed-certs-358053 Clientid:01:52:54:00:b7:9e:78}
	I0729 19:43:50.642573 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined IP address 192.168.61.201 and MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:50.642704 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHPort
	I0729 19:43:50.642896 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHKeyPath
	I0729 19:43:50.643034 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHKeyPath
	I0729 19:43:50.643188 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHUsername
	I0729 19:43:50.643396 1120280 main.go:141] libmachine: Using SSH client type: native
	I0729 19:43:50.643599 1120280 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.201 22 <nil> <nil>}
	I0729 19:43:50.643615 1120280 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-358053 && echo "embed-certs-358053" | sudo tee /etc/hostname
	I0729 19:43:50.775163 1120280 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-358053
	
	I0729 19:43:50.775200 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHHostname
	I0729 19:43:50.777834 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:50.778140 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:9e:78", ip: ""} in network mk-embed-certs-358053: {Iface:virbr3 ExpiryTime:2024-07-29 20:43:42 +0000 UTC Type:0 Mac:52:54:00:b7:9e:78 Iaid: IPaddr:192.168.61.201 Prefix:24 Hostname:embed-certs-358053 Clientid:01:52:54:00:b7:9e:78}
	I0729 19:43:50.778166 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined IP address 192.168.61.201 and MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:50.778337 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHPort
	I0729 19:43:50.778536 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHKeyPath
	I0729 19:43:50.778687 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHKeyPath
	I0729 19:43:50.778818 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHUsername
	I0729 19:43:50.778984 1120280 main.go:141] libmachine: Using SSH client type: native
	I0729 19:43:50.779150 1120280 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.201 22 <nil> <nil>}
	I0729 19:43:50.779164 1120280 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-358053' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-358053/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-358053' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 19:43:50.899709 1120280 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 19:43:50.899756 1120280 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19312-1055011/.minikube CaCertPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19312-1055011/.minikube}
	I0729 19:43:50.899791 1120280 buildroot.go:174] setting up certificates
	I0729 19:43:50.899806 1120280 provision.go:84] configureAuth start
	I0729 19:43:50.899821 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetMachineName
	I0729 19:43:50.900090 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetIP
	I0729 19:43:50.902304 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:50.902663 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:9e:78", ip: ""} in network mk-embed-certs-358053: {Iface:virbr3 ExpiryTime:2024-07-29 20:43:42 +0000 UTC Type:0 Mac:52:54:00:b7:9e:78 Iaid: IPaddr:192.168.61.201 Prefix:24 Hostname:embed-certs-358053 Clientid:01:52:54:00:b7:9e:78}
	I0729 19:43:50.902695 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined IP address 192.168.61.201 and MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:50.902787 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHHostname
	I0729 19:43:50.904815 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:50.905150 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:9e:78", ip: ""} in network mk-embed-certs-358053: {Iface:virbr3 ExpiryTime:2024-07-29 20:43:42 +0000 UTC Type:0 Mac:52:54:00:b7:9e:78 Iaid: IPaddr:192.168.61.201 Prefix:24 Hostname:embed-certs-358053 Clientid:01:52:54:00:b7:9e:78}
	I0729 19:43:50.905170 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined IP address 192.168.61.201 and MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:50.905279 1120280 provision.go:143] copyHostCerts
	I0729 19:43:50.905350 1120280 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.pem, removing ...
	I0729 19:43:50.905366 1120280 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.pem
	I0729 19:43:50.905446 1120280 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.pem (1082 bytes)
	I0729 19:43:50.905561 1120280 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-1055011/.minikube/cert.pem, removing ...
	I0729 19:43:50.905573 1120280 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-1055011/.minikube/cert.pem
	I0729 19:43:50.905626 1120280 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19312-1055011/.minikube/cert.pem (1123 bytes)
	I0729 19:43:50.905704 1120280 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-1055011/.minikube/key.pem, removing ...
	I0729 19:43:50.905713 1120280 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-1055011/.minikube/key.pem
	I0729 19:43:50.905746 1120280 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19312-1055011/.minikube/key.pem (1679 bytes)
	I0729 19:43:50.905815 1120280 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca-key.pem org=jenkins.embed-certs-358053 san=[127.0.0.1 192.168.61.201 embed-certs-358053 localhost minikube]
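The step above builds a server certificate whose SANs cover the node's hostnames and IPs. Below is a rough, self-signed sketch of generating such a certificate with Go's crypto/x509; the real provisioning step signs with the existing minikube CA rather than self-signing, and the org/SAN values are simply copied from the log line above for illustration.

// Illustrative sketch, not part of the captured log.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-358053"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"embed-certs-358053", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.61.201")},
	}
	// Template doubles as parent, so the certificate is self-signed in this sketch.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}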
	I0729 19:43:51.198616 1120280 provision.go:177] copyRemoteCerts
	I0729 19:43:51.198692 1120280 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 19:43:51.198734 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHHostname
	I0729 19:43:51.201272 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:51.201527 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:9e:78", ip: ""} in network mk-embed-certs-358053: {Iface:virbr3 ExpiryTime:2024-07-29 20:43:42 +0000 UTC Type:0 Mac:52:54:00:b7:9e:78 Iaid: IPaddr:192.168.61.201 Prefix:24 Hostname:embed-certs-358053 Clientid:01:52:54:00:b7:9e:78}
	I0729 19:43:51.201556 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined IP address 192.168.61.201 and MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:51.201681 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHPort
	I0729 19:43:51.201876 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHKeyPath
	I0729 19:43:51.202054 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHUsername
	I0729 19:43:51.202170 1120280 sshutil.go:53] new ssh client: &{IP:192.168.61.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/embed-certs-358053/id_rsa Username:docker}
	I0729 19:43:51.290007 1120280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0729 19:43:51.316649 1120280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0729 19:43:51.340617 1120280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0729 19:43:51.363465 1120280 provision.go:87] duration metric: took 463.642377ms to configureAuth
	I0729 19:43:51.363495 1120280 buildroot.go:189] setting minikube options for container-runtime
	I0729 19:43:51.363700 1120280 config.go:182] Loaded profile config "embed-certs-358053": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 19:43:51.363813 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHHostname
	I0729 19:43:51.366478 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:51.366931 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:9e:78", ip: ""} in network mk-embed-certs-358053: {Iface:virbr3 ExpiryTime:2024-07-29 20:43:42 +0000 UTC Type:0 Mac:52:54:00:b7:9e:78 Iaid: IPaddr:192.168.61.201 Prefix:24 Hostname:embed-certs-358053 Clientid:01:52:54:00:b7:9e:78}
	I0729 19:43:51.366973 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined IP address 192.168.61.201 and MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:51.367080 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHPort
	I0729 19:43:51.367280 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHKeyPath
	I0729 19:43:51.367445 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHKeyPath
	I0729 19:43:51.367619 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHUsername
	I0729 19:43:51.367818 1120280 main.go:141] libmachine: Using SSH client type: native
	I0729 19:43:51.368013 1120280 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.201 22 <nil> <nil>}
	I0729 19:43:51.368034 1120280 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 19:43:51.670667 1120280 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 19:43:51.670700 1120280 machine.go:97] duration metric: took 1.147540887s to provisionDockerMachine
	I0729 19:43:51.670716 1120280 start.go:293] postStartSetup for "embed-certs-358053" (driver="kvm2")
	I0729 19:43:51.670728 1120280 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 19:43:51.670746 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .DriverName
	I0729 19:43:51.671114 1120280 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 19:43:51.671146 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHHostname
	I0729 19:43:51.673820 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:51.674154 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:9e:78", ip: ""} in network mk-embed-certs-358053: {Iface:virbr3 ExpiryTime:2024-07-29 20:43:42 +0000 UTC Type:0 Mac:52:54:00:b7:9e:78 Iaid: IPaddr:192.168.61.201 Prefix:24 Hostname:embed-certs-358053 Clientid:01:52:54:00:b7:9e:78}
	I0729 19:43:51.674218 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined IP address 192.168.61.201 and MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:51.674406 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHPort
	I0729 19:43:51.674602 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHKeyPath
	I0729 19:43:51.674761 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHUsername
	I0729 19:43:51.674918 1120280 sshutil.go:53] new ssh client: &{IP:192.168.61.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/embed-certs-358053/id_rsa Username:docker}
	I0729 19:43:51.762013 1120280 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 19:43:51.766211 1120280 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 19:43:51.766238 1120280 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-1055011/.minikube/addons for local assets ...
	I0729 19:43:51.766308 1120280 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-1055011/.minikube/files for local assets ...
	I0729 19:43:51.766408 1120280 filesync.go:149] local asset: /home/jenkins/minikube-integration/19312-1055011/.minikube/files/etc/ssl/certs/10622722.pem -> 10622722.pem in /etc/ssl/certs
	I0729 19:43:51.766506 1120280 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 19:43:51.776086 1120280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/files/etc/ssl/certs/10622722.pem --> /etc/ssl/certs/10622722.pem (1708 bytes)
	I0729 19:43:51.800248 1120280 start.go:296] duration metric: took 129.516946ms for postStartSetup
	I0729 19:43:51.800288 1120280 fix.go:56] duration metric: took 20.54987709s for fixHost
	I0729 19:43:51.800332 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHHostname
	I0729 19:43:51.802828 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:51.803134 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:9e:78", ip: ""} in network mk-embed-certs-358053: {Iface:virbr3 ExpiryTime:2024-07-29 20:43:42 +0000 UTC Type:0 Mac:52:54:00:b7:9e:78 Iaid: IPaddr:192.168.61.201 Prefix:24 Hostname:embed-certs-358053 Clientid:01:52:54:00:b7:9e:78}
	I0729 19:43:51.803155 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined IP address 192.168.61.201 and MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:51.803324 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHPort
	I0729 19:43:51.803552 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHKeyPath
	I0729 19:43:51.803729 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHKeyPath
	I0729 19:43:51.803867 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHUsername
	I0729 19:43:51.804024 1120280 main.go:141] libmachine: Using SSH client type: native
	I0729 19:43:51.804205 1120280 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.201 22 <nil> <nil>}
	I0729 19:43:51.804216 1120280 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0729 19:43:51.915515 1120280 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722282231.873780587
	
	I0729 19:43:51.915538 1120280 fix.go:216] guest clock: 1722282231.873780587
	I0729 19:43:51.915546 1120280 fix.go:229] Guest: 2024-07-29 19:43:51.873780587 +0000 UTC Remote: 2024-07-29 19:43:51.800292219 +0000 UTC m=+271.768915474 (delta=73.488368ms)
	I0729 19:43:51.915567 1120280 fix.go:200] guest clock delta is within tolerance: 73.488368ms
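The fix.go lines above read the guest clock over SSH (`date +%s.%N`), compare it with the host clock, and skip resynchronisation when the delta is small. A minimal sketch of that comparison; the tolerance value and function name are assumptions chosen for illustration.

// Illustrative sketch, not part of the captured log.
package main

import (
	"fmt"
	"time"
)

// withinTolerance reports whether guest and host clocks differ by no more
// than the given tolerance, returning the absolute delta as well.
func withinTolerance(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tolerance
}

func main() {
	guest := time.Unix(0, 1722282231873780587) // parsed from `date +%s.%N` output above
	host := time.Now()
	delta, ok := withinTolerance(guest, host, 2*time.Second)
	fmt.Printf("guest clock delta %v, within tolerance: %v\n", delta, ok)
}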
	I0729 19:43:51.915573 1120280 start.go:83] releasing machines lock for "embed-certs-358053", held for 20.665188917s
	I0729 19:43:51.915605 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .DriverName
	I0729 19:43:51.915924 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetIP
	I0729 19:43:51.918637 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:51.919022 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:9e:78", ip: ""} in network mk-embed-certs-358053: {Iface:virbr3 ExpiryTime:2024-07-29 20:43:42 +0000 UTC Type:0 Mac:52:54:00:b7:9e:78 Iaid: IPaddr:192.168.61.201 Prefix:24 Hostname:embed-certs-358053 Clientid:01:52:54:00:b7:9e:78}
	I0729 19:43:51.919050 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined IP address 192.168.61.201 and MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:51.919227 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .DriverName
	I0729 19:43:51.919791 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .DriverName
	I0729 19:43:51.920007 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .DriverName
	I0729 19:43:51.920098 1120280 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 19:43:51.920165 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHHostname
	I0729 19:43:51.920246 1120280 ssh_runner.go:195] Run: cat /version.json
	I0729 19:43:51.920267 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHHostname
	I0729 19:43:51.922800 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:51.923102 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:51.923134 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:9e:78", ip: ""} in network mk-embed-certs-358053: {Iface:virbr3 ExpiryTime:2024-07-29 20:43:42 +0000 UTC Type:0 Mac:52:54:00:b7:9e:78 Iaid: IPaddr:192.168.61.201 Prefix:24 Hostname:embed-certs-358053 Clientid:01:52:54:00:b7:9e:78}
	I0729 19:43:51.923173 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined IP address 192.168.61.201 and MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:51.923250 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHPort
	I0729 19:43:51.923437 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHKeyPath
	I0729 19:43:51.923595 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:9e:78", ip: ""} in network mk-embed-certs-358053: {Iface:virbr3 ExpiryTime:2024-07-29 20:43:42 +0000 UTC Type:0 Mac:52:54:00:b7:9e:78 Iaid: IPaddr:192.168.61.201 Prefix:24 Hostname:embed-certs-358053 Clientid:01:52:54:00:b7:9e:78}
	I0729 19:43:51.923615 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined IP address 192.168.61.201 and MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:51.923720 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHUsername
	I0729 19:43:51.923798 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHPort
	I0729 19:43:51.923873 1120280 sshutil.go:53] new ssh client: &{IP:192.168.61.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/embed-certs-358053/id_rsa Username:docker}
	I0729 19:43:51.923942 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHKeyPath
	I0729 19:43:51.924064 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHUsername
	I0729 19:43:51.924215 1120280 sshutil.go:53] new ssh client: &{IP:192.168.61.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/embed-certs-358053/id_rsa Username:docker}
	I0729 19:43:52.004661 1120280 ssh_runner.go:195] Run: systemctl --version
	I0729 19:43:52.032553 1120280 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 19:43:52.185919 1120280 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 19:43:52.191975 1120280 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 19:43:52.192059 1120280 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 19:43:52.210254 1120280 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 19:43:52.210276 1120280 start.go:495] detecting cgroup driver to use...
	I0729 19:43:52.210351 1120280 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 19:43:52.225580 1120280 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 19:43:52.238434 1120280 docker.go:217] disabling cri-docker service (if available) ...
	I0729 19:43:52.238501 1120280 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 19:43:52.252395 1120280 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 19:43:52.265503 1120280 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 19:43:52.376377 1120280 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 19:43:52.561796 1120280 docker.go:233] disabling docker service ...
	I0729 19:43:52.561859 1120280 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 19:43:52.579022 1120280 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 19:43:52.594679 1120280 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 19:43:52.734891 1120280 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 19:43:52.870161 1120280 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 19:43:52.884258 1120280 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 19:43:52.903923 1120280 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0729 19:43:52.903986 1120280 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:43:52.914530 1120280 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 19:43:52.914598 1120280 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:43:52.925740 1120280 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:43:52.936722 1120280 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:43:52.947290 1120280 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 19:43:52.959757 1120280 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:43:52.971452 1120280 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:43:52.990080 1120280 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:43:53.000701 1120280 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 19:43:53.010165 1120280 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 19:43:53.010271 1120280 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 19:43:53.023594 1120280 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 19:43:53.034500 1120280 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 19:43:53.173490 1120280 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 19:43:53.327789 1120280 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 19:43:53.327894 1120280 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 19:43:53.332682 1120280 start.go:563] Will wait 60s for crictl version
	I0729 19:43:53.332738 1120280 ssh_runner.go:195] Run: which crictl
	I0729 19:43:53.337397 1120280 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 19:43:53.387722 1120280 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 19:43:53.387824 1120280 ssh_runner.go:195] Run: crio --version
	I0729 19:43:53.416029 1120280 ssh_runner.go:195] Run: crio --version
	I0729 19:43:53.447686 1120280 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0729 19:43:53.448960 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetIP
	I0729 19:43:53.451993 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:53.452334 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:9e:78", ip: ""} in network mk-embed-certs-358053: {Iface:virbr3 ExpiryTime:2024-07-29 20:43:42 +0000 UTC Type:0 Mac:52:54:00:b7:9e:78 Iaid: IPaddr:192.168.61.201 Prefix:24 Hostname:embed-certs-358053 Clientid:01:52:54:00:b7:9e:78}
	I0729 19:43:53.452360 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined IP address 192.168.61.201 and MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:53.452626 1120280 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0729 19:43:53.456620 1120280 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 19:43:53.469521 1120280 kubeadm.go:883] updating cluster {Name:embed-certs-358053 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-358053 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.201 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 19:43:53.469668 1120280 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 19:43:53.469726 1120280 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 19:43:53.510724 1120280 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0729 19:43:53.510793 1120280 ssh_runner.go:195] Run: which lz4
	I0729 19:43:53.515039 1120280 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0729 19:43:53.519349 1120280 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0729 19:43:53.519386 1120280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0729 19:43:54.962294 1120280 crio.go:462] duration metric: took 1.447300807s to copy over tarball
	I0729 19:43:54.962368 1120280 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0729 19:43:51.939977 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .Start
	I0729 19:43:51.940180 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Ensuring networks are active...
	I0729 19:43:51.940939 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Ensuring network default is active
	I0729 19:43:51.941232 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Ensuring network mk-default-k8s-diff-port-024652 is active
	I0729 19:43:51.941605 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Getting domain xml...
	I0729 19:43:51.942289 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Creating domain...
	I0729 19:43:53.197317 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Waiting to get IP...
	I0729 19:43:53.198285 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:43:53.198646 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | unable to find current IP address of domain default-k8s-diff-port-024652 in network mk-default-k8s-diff-port-024652
	I0729 19:43:53.198704 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | I0729 19:43:53.198613 1121645 retry.go:31] will retry after 305.319923ms: waiting for machine to come up
	I0729 19:43:53.505183 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:43:53.505680 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | unable to find current IP address of domain default-k8s-diff-port-024652 in network mk-default-k8s-diff-port-024652
	I0729 19:43:53.505711 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | I0729 19:43:53.505645 1121645 retry.go:31] will retry after 271.282913ms: waiting for machine to come up
	I0729 19:43:53.778388 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:43:53.778870 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | unable to find current IP address of domain default-k8s-diff-port-024652 in network mk-default-k8s-diff-port-024652
	I0729 19:43:53.778902 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | I0729 19:43:53.778815 1121645 retry.go:31] will retry after 407.395474ms: waiting for machine to come up
	I0729 19:43:54.187668 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:43:54.188110 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | unable to find current IP address of domain default-k8s-diff-port-024652 in network mk-default-k8s-diff-port-024652
	I0729 19:43:54.188135 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | I0729 19:43:54.188063 1121645 retry.go:31] will retry after 515.272845ms: waiting for machine to come up
	I0729 19:43:54.704843 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:43:54.705358 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | unable to find current IP address of domain default-k8s-diff-port-024652 in network mk-default-k8s-diff-port-024652
	I0729 19:43:54.705386 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | I0729 19:43:54.705310 1121645 retry.go:31] will retry after 509.684919ms: waiting for machine to come up
	I0729 19:43:55.217156 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:43:55.217667 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | unable to find current IP address of domain default-k8s-diff-port-024652 in network mk-default-k8s-diff-port-024652
	I0729 19:43:55.217698 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | I0729 19:43:55.217604 1121645 retry.go:31] will retry after 728.323851ms: waiting for machine to come up
	I0729 19:43:55.947597 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:43:55.948121 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | unable to find current IP address of domain default-k8s-diff-port-024652 in network mk-default-k8s-diff-port-024652
	I0729 19:43:55.948155 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | I0729 19:43:55.948059 1121645 retry.go:31] will retry after 957.165998ms: waiting for machine to come up
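The retry.go messages above poll the hypervisor for the domain's DHCP lease, sleeping for a growing delay between attempts. A small, self-contained sketch of that pattern; the lookup function and the returned address are placeholders, not the real libvirt query.

// Illustrative sketch, not part of the captured log.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

var errNoIP = errors.New("no IP address yet")

// lookupIP stands in for querying the hypervisor for the domain's DHCP lease.
func lookupIP(attempt int) (string, error) {
	if attempt < 5 {
		return "", errNoIP
	}
	return "192.168.72.100", nil // placeholder address
}

func main() {
	delay := 300 * time.Millisecond
	for attempt := 1; ; attempt++ {
		ip, err := lookupIP(attempt)
		if err == nil {
			fmt.Println("machine is up at", ip)
			return
		}
		// Add jitter and grow the base delay, roughly matching the increasing waits in the log.
		wait := delay + time.Duration(rand.Int63n(int64(delay/2)))
		fmt.Printf("attempt %d: %v, retrying after %v\n", attempt, err, wait)
		time.Sleep(wait)
		delay += delay / 2
	}
}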
	I0729 19:43:57.178620 1120280 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.216195072s)
	I0729 19:43:57.178653 1120280 crio.go:469] duration metric: took 2.216329763s to extract the tarball
	I0729 19:43:57.178660 1120280 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0729 19:43:57.216574 1120280 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 19:43:57.258341 1120280 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 19:43:57.258366 1120280 cache_images.go:84] Images are preloaded, skipping loading
	I0729 19:43:57.258376 1120280 kubeadm.go:934] updating node { 192.168.61.201 8443 v1.30.3 crio true true} ...
	I0729 19:43:57.258500 1120280 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-358053 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.201
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:embed-certs-358053 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 19:43:57.258563 1120280 ssh_runner.go:195] Run: crio config
	I0729 19:43:57.304755 1120280 cni.go:84] Creating CNI manager for ""
	I0729 19:43:57.304779 1120280 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 19:43:57.304793 1120280 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 19:43:57.304818 1120280 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.201 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-358053 NodeName:embed-certs-358053 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.201"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.201 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 19:43:57.304975 1120280 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.201
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-358053"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.201
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.201"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0729 19:43:57.305058 1120280 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0729 19:43:57.314803 1120280 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 19:43:57.314914 1120280 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 19:43:57.324133 1120280 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0729 19:43:57.339975 1120280 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 19:43:57.355571 1120280 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0729 19:43:57.371806 1120280 ssh_runner.go:195] Run: grep 192.168.61.201	control-plane.minikube.internal$ /etc/hosts
	I0729 19:43:57.375459 1120280 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.201	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 19:43:57.386809 1120280 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 19:43:57.520182 1120280 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 19:43:57.536218 1120280 certs.go:68] Setting up /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/embed-certs-358053 for IP: 192.168.61.201
	I0729 19:43:57.536243 1120280 certs.go:194] generating shared ca certs ...
	I0729 19:43:57.536266 1120280 certs.go:226] acquiring lock for ca certs: {Name:mkd1f0b3d7e82ac23e713dd6b75409e103935b02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 19:43:57.536463 1120280 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.key
	I0729 19:43:57.536525 1120280 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/proxy-client-ca.key
	I0729 19:43:57.536539 1120280 certs.go:256] generating profile certs ...
	I0729 19:43:57.536702 1120280 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/embed-certs-358053/client.key
	I0729 19:43:57.536777 1120280 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/embed-certs-358053/apiserver.key.05ccddd9
	I0729 19:43:57.536836 1120280 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/embed-certs-358053/proxy-client.key
	I0729 19:43:57.537011 1120280 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/1062272.pem (1338 bytes)
	W0729 19:43:57.537060 1120280 certs.go:480] ignoring /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/1062272_empty.pem, impossibly tiny 0 bytes
	I0729 19:43:57.537074 1120280 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 19:43:57.537109 1120280 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem (1082 bytes)
	I0729 19:43:57.537147 1120280 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/cert.pem (1123 bytes)
	I0729 19:43:57.537184 1120280 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/key.pem (1679 bytes)
	I0729 19:43:57.537257 1120280 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/files/etc/ssl/certs/10622722.pem (1708 bytes)
	I0729 19:43:57.538120 1120280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 19:43:57.579679 1120280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0729 19:43:57.610390 1120280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 19:43:57.646234 1120280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0729 19:43:57.680120 1120280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/embed-certs-358053/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0729 19:43:57.709780 1120280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/embed-certs-358053/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0729 19:43:57.737251 1120280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/embed-certs-358053/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 19:43:57.760519 1120280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/embed-certs-358053/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0729 19:43:57.782760 1120280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/1062272.pem --> /usr/share/ca-certificates/1062272.pem (1338 bytes)
	I0729 19:43:57.806628 1120280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/files/etc/ssl/certs/10622722.pem --> /usr/share/ca-certificates/10622722.pem (1708 bytes)
	I0729 19:43:57.831360 1120280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 19:43:57.855485 1120280 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 19:43:57.873493 1120280 ssh_runner.go:195] Run: openssl version
	I0729 19:43:57.879376 1120280 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 19:43:57.891126 1120280 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 19:43:57.895458 1120280 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 18:17 /usr/share/ca-certificates/minikubeCA.pem
	I0729 19:43:57.895501 1120280 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 19:43:57.901015 1120280 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 19:43:57.911165 1120280 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1062272.pem && ln -fs /usr/share/ca-certificates/1062272.pem /etc/ssl/certs/1062272.pem"
	I0729 19:43:57.921336 1120280 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1062272.pem
	I0729 19:43:57.925539 1120280 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 18:30 /usr/share/ca-certificates/1062272.pem
	I0729 19:43:57.925601 1120280 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1062272.pem
	I0729 19:43:57.930932 1120280 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1062272.pem /etc/ssl/certs/51391683.0"
	I0729 19:43:57.941138 1120280 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10622722.pem && ln -fs /usr/share/ca-certificates/10622722.pem /etc/ssl/certs/10622722.pem"
	I0729 19:43:57.951312 1120280 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10622722.pem
	I0729 19:43:57.955655 1120280 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 18:30 /usr/share/ca-certificates/10622722.pem
	I0729 19:43:57.955699 1120280 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10622722.pem
	I0729 19:43:57.961057 1120280 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10622722.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 19:43:57.972742 1120280 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 19:43:57.977115 1120280 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 19:43:57.982921 1120280 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 19:43:57.988708 1120280 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 19:43:57.994618 1120280 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 19:43:58.000330 1120280 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 19:43:58.006024 1120280 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
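The `openssl x509 -checkend 86400` invocations above ask whether each certificate expires within the next 24 hours. The same check can be expressed with Go's crypto/x509; the path below is simply one of the files listed above, and the helper name is an illustration.

// Illustrative sketch, not part of the captured log.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}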
	I0729 19:43:58.011547 1120280 kubeadm.go:392] StartCluster: {Name:embed-certs-358053 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-358053 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.201 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 19:43:58.011676 1120280 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 19:43:58.011740 1120280 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 19:43:58.053520 1120280 cri.go:89] found id: ""
	I0729 19:43:58.053606 1120280 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 19:43:58.063799 1120280 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0729 19:43:58.063820 1120280 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0729 19:43:58.063881 1120280 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0729 19:43:58.073374 1120280 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0729 19:43:58.074705 1120280 kubeconfig.go:125] found "embed-certs-358053" server: "https://192.168.61.201:8443"
	I0729 19:43:58.077590 1120280 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0729 19:43:58.086714 1120280 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.201
	I0729 19:43:58.086751 1120280 kubeadm.go:1160] stopping kube-system containers ...
	I0729 19:43:58.086761 1120280 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0729 19:43:58.086809 1120280 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 19:43:58.119740 1120280 cri.go:89] found id: ""
	I0729 19:43:58.119800 1120280 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0729 19:43:58.136919 1120280 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 19:43:58.146634 1120280 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 19:43:58.146655 1120280 kubeadm.go:157] found existing configuration files:
	
	I0729 19:43:58.146732 1120280 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 19:43:58.155526 1120280 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 19:43:58.155590 1120280 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 19:43:58.165016 1120280 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 19:43:58.173988 1120280 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 19:43:58.174042 1120280 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 19:43:58.183138 1120280 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 19:43:58.191680 1120280 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 19:43:58.191733 1120280 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 19:43:58.200557 1120280 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 19:43:58.209338 1120280 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 19:43:58.209390 1120280 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 19:43:58.218439 1120280 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 19:43:58.227653 1120280 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 19:43:58.340033 1120280 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 19:43:59.181947 1120280 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0729 19:43:59.381372 1120280 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 19:43:59.452293 1120280 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
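The commands above re-run individual `kubeadm init` phases against the generated /var/tmp/minikube/kubeadm.yaml. A bare-bones sketch of driving those phases from Go with os/exec; it assumes kubeadm is on PATH and the caller has root, and the inline config body is a stand-in rather than the full file shown earlier.

// Illustrative sketch, not part of the captured log.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	cfg := []byte("apiVersion: kubeadm.k8s.io/v1beta3\nkind: ClusterConfiguration\nkubernetesVersion: v1.30.3\n")
	path := "/var/tmp/minikube/kubeadm.yaml"
	if err := os.WriteFile(path, cfg, 0o644); err != nil {
		fmt.Fprintln(os.Stderr, "write config:", err)
		os.Exit(1)
	}
	// Run one phase at a time, mirroring the restart sequence in the log above.
	for _, phase := range []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"} {
		cmd := exec.Command("sh", "-c", "kubeadm init phase "+phase+" --config "+path)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			fmt.Fprintln(os.Stderr, "phase", phase, "failed:", err)
			os.Exit(1)
		}
	}
}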
	I0729 19:43:59.570731 1120280 api_server.go:52] waiting for apiserver process to appear ...
	I0729 19:43:59.570823 1120280 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:43:56.907408 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:43:56.907923 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | unable to find current IP address of domain default-k8s-diff-port-024652 in network mk-default-k8s-diff-port-024652
	I0729 19:43:56.907953 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | I0729 19:43:56.907850 1121645 retry.go:31] will retry after 1.254959813s: waiting for machine to come up
	I0729 19:43:58.163969 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:43:58.164402 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | unable to find current IP address of domain default-k8s-diff-port-024652 in network mk-default-k8s-diff-port-024652
	I0729 19:43:58.164435 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | I0729 19:43:58.164335 1121645 retry.go:31] will retry after 1.194411522s: waiting for machine to come up
	I0729 19:43:59.360034 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:43:59.360409 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | unable to find current IP address of domain default-k8s-diff-port-024652 in network mk-default-k8s-diff-port-024652
	I0729 19:43:59.360444 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | I0729 19:43:59.360350 1121645 retry.go:31] will retry after 1.691293374s: waiting for machine to come up
	I0729 19:44:01.054480 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:44:01.054922 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | unable to find current IP address of domain default-k8s-diff-port-024652 in network mk-default-k8s-diff-port-024652
	I0729 19:44:01.054993 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | I0729 19:44:01.054899 1121645 retry.go:31] will retry after 2.655959151s: waiting for machine to come up
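The repeated "unable to find current IP address ... will retry after ..." lines come from a backoff loop that keeps asking the hypervisor for the domain's DHCP-assigned IP. A minimal sketch of that pattern, assuming a hypothetical lookupIP callback rather than minikube's real retry package or libvirt bindings:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitForIP keeps asking lookupIP for the machine's address until it gets one
// or the deadline passes, sleeping a growing, jittered interval between tries,
// which is what produces the "will retry after ..." lines above.
func waitForIP(lookupIP func() (string, error), timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	wait := 200 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(); err == nil && ip != "" {
			return ip, nil
		}
		sleep := wait + time.Duration(rand.Int63n(int64(wait))) // add jitter
		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		wait *= 2 // back off so the hypervisor is not polled too aggressively
	}
	return "", errors.New("timed out waiting for machine to come up")
}

func main() {
	// Stand-in lookup that never finds an address; the real caller inspects
	// the libvirt DHCP leases for the domain's MAC.
	_, err := waitForIP(func() (string, error) { return "", nil }, 2*time.Second)
	fmt.Println(err)
}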
	I0729 19:44:00.071291 1120280 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:00.571192 1120280 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:01.071004 1120280 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:01.086646 1120280 api_server.go:72] duration metric: took 1.515912855s to wait for apiserver process to appear ...
	I0729 19:44:01.086683 1120280 api_server.go:88] waiting for apiserver healthz status ...
	I0729 19:44:01.086713 1120280 api_server.go:253] Checking apiserver healthz at https://192.168.61.201:8443/healthz ...
	I0729 19:44:01.087274 1120280 api_server.go:269] stopped: https://192.168.61.201:8443/healthz: Get "https://192.168.61.201:8443/healthz": dial tcp 192.168.61.201:8443: connect: connection refused
	I0729 19:44:01.587598 1120280 api_server.go:253] Checking apiserver healthz at https://192.168.61.201:8443/healthz ...
	I0729 19:44:03.986744 1120280 api_server.go:279] https://192.168.61.201:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 19:44:03.986799 1120280 api_server.go:103] status: https://192.168.61.201:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 19:44:03.986814 1120280 api_server.go:253] Checking apiserver healthz at https://192.168.61.201:8443/healthz ...
	I0729 19:44:04.029552 1120280 api_server.go:279] https://192.168.61.201:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 19:44:04.029601 1120280 api_server.go:103] status: https://192.168.61.201:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 19:44:04.087847 1120280 api_server.go:253] Checking apiserver healthz at https://192.168.61.201:8443/healthz ...
	I0729 19:44:04.093457 1120280 api_server.go:279] https://192.168.61.201:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 19:44:04.093489 1120280 api_server.go:103] status: https://192.168.61.201:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 19:44:04.586941 1120280 api_server.go:253] Checking apiserver healthz at https://192.168.61.201:8443/healthz ...
	I0729 19:44:04.609655 1120280 api_server.go:279] https://192.168.61.201:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 19:44:04.609700 1120280 api_server.go:103] status: https://192.168.61.201:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 19:44:05.087081 1120280 api_server.go:253] Checking apiserver healthz at https://192.168.61.201:8443/healthz ...
	I0729 19:44:05.095282 1120280 api_server.go:279] https://192.168.61.201:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 19:44:05.095311 1120280 api_server.go:103] status: https://192.168.61.201:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 19:44:05.587782 1120280 api_server.go:253] Checking apiserver healthz at https://192.168.61.201:8443/healthz ...
	I0729 19:44:05.593073 1120280 api_server.go:279] https://192.168.61.201:8443/healthz returned 200:
	ok
	I0729 19:44:05.599042 1120280 api_server.go:141] control plane version: v1.30.3
	I0729 19:44:05.599067 1120280 api_server.go:131] duration metric: took 4.512376511s to wait for apiserver health ...
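The healthz wait above issues a GET roughly every half second, treating the 403 (anonymous user) and 500 (post-start hooks still failing) responses as "not ready yet" until the endpoint returns 200 "ok". A minimal sketch of that polling loop; the insecure TLS client is only for illustration, as the real check would trust the cluster CA:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver's /healthz endpoint until it returns
// HTTP 200, mirroring the 403 -> 500 -> 200 progression in the log above.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // apiserver reported "ok"
			}
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond) // same cadence as the half-second checks above
	}
	return fmt.Errorf("apiserver never became healthy at %s", url)
}

func main() {
	if err := waitForHealthz("https://192.168.61.201:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}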
	I0729 19:44:05.599076 1120280 cni.go:84] Creating CNI manager for ""
	I0729 19:44:05.599082 1120280 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 19:44:05.600932 1120280 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 19:44:03.713856 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:44:03.714306 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | unable to find current IP address of domain default-k8s-diff-port-024652 in network mk-default-k8s-diff-port-024652
	I0729 19:44:03.714363 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | I0729 19:44:03.714249 1121645 retry.go:31] will retry after 2.793831058s: waiting for machine to come up
	I0729 19:44:05.602066 1120280 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 19:44:05.612274 1120280 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0729 19:44:05.633293 1120280 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 19:44:05.646103 1120280 system_pods.go:59] 8 kube-system pods found
	I0729 19:44:05.646143 1120280 system_pods.go:61] "coredns-7db6d8ff4d-q6jm9" [a0770baf-766d-4903-a21f-6a4c1b74fb9e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0729 19:44:05.646153 1120280 system_pods.go:61] "etcd-embed-certs-358053" [cc03bfb3-c1d6-480a-b169-599b7599a5d1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0729 19:44:05.646163 1120280 system_pods.go:61] "kube-apiserver-embed-certs-358053" [8c45c66a-c954-4a84-9639-68210ad51a53] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0729 19:44:05.646174 1120280 system_pods.go:61] "kube-controller-manager-embed-certs-358053" [70266c42-fa7c-4936-b256-1eea65c57669] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0729 19:44:05.646181 1120280 system_pods.go:61] "kube-proxy-lb7hb" [e542b623-3db2-4be0-adf1-669932e6ac3d] Running
	I0729 19:44:05.646193 1120280 system_pods.go:61] "kube-scheduler-embed-certs-358053" [be79c03d-1e5a-46f5-a43a-671c37dea7d0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0729 19:44:05.646201 1120280 system_pods.go:61] "metrics-server-569cc877fc-jsvnd" [0494cc85-12fa-4afa-ab39-5c1fafcc45f8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 19:44:05.646209 1120280 system_pods.go:61] "storage-provisioner" [493de5d9-e761-49cb-b5f0-17d116b1a985] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0729 19:44:05.646221 1120280 system_pods.go:74] duration metric: took 12.906683ms to wait for pod list to return data ...
	I0729 19:44:05.646231 1120280 node_conditions.go:102] verifying NodePressure condition ...
	I0729 19:44:05.653103 1120280 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 19:44:05.653131 1120280 node_conditions.go:123] node cpu capacity is 2
	I0729 19:44:05.653161 1120280 node_conditions.go:105] duration metric: took 6.923325ms to run NodePressure ...
	I0729 19:44:05.653187 1120280 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 19:44:05.916138 1120280 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0729 19:44:05.920383 1120280 kubeadm.go:739] kubelet initialised
	I0729 19:44:05.920402 1120280 kubeadm.go:740] duration metric: took 4.239377ms waiting for restarted kubelet to initialise ...
	I0729 19:44:05.920410 1120280 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 19:44:05.925752 1120280 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-q6jm9" in "kube-system" namespace to be "Ready" ...
	I0729 19:44:07.932667 1120280 pod_ready.go:102] pod "coredns-7db6d8ff4d-q6jm9" in "kube-system" namespace has status "Ready":"False"
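The pod_ready lines above are waiting for the coredns pod's Ready condition to flip from False to True. A minimal sketch of that condition check using the Kubernetes core/v1 types; the pod object here is a hypothetical stand-in, since the real code fetches it from the kube-system namespace with a client:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// isPodReady reports whether a pod's Ready condition is True, which is the
// status the "Ready":"False" log lines above are waiting to see change.
func isPodReady(pod *corev1.Pod) bool {
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Hypothetical pod object for illustration only.
	pod := &corev1.Pod{
		Status: corev1.PodStatus{
			Conditions: []corev1.PodCondition{{Type: corev1.PodReady, Status: corev1.ConditionFalse}},
		},
	}
	fmt.Println("pod ready:", isPodReady(pod))
}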
	I0729 19:44:06.511186 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:44:06.511552 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | unable to find current IP address of domain default-k8s-diff-port-024652 in network mk-default-k8s-diff-port-024652
	I0729 19:44:06.511583 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | I0729 19:44:06.511497 1121645 retry.go:31] will retry after 3.610819354s: waiting for machine to come up
	I0729 19:44:10.126488 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:44:10.126889 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Found IP for machine: 192.168.72.100
	I0729 19:44:10.126914 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Reserving static IP address...
	I0729 19:44:10.126927 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has current primary IP address 192.168.72.100 and MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:44:10.127289 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Reserved static IP address: 192.168.72.100
	I0729 19:44:10.127313 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Waiting for SSH to be available...
	I0729 19:44:10.127342 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-024652", mac: "52:54:00:4c:73:cb", ip: "192.168.72.100"} in network mk-default-k8s-diff-port-024652: {Iface:virbr4 ExpiryTime:2024-07-29 20:44:03 +0000 UTC Type:0 Mac:52:54:00:4c:73:cb Iaid: IPaddr:192.168.72.100 Prefix:24 Hostname:default-k8s-diff-port-024652 Clientid:01:52:54:00:4c:73:cb}
	I0729 19:44:10.127390 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | skip adding static IP to network mk-default-k8s-diff-port-024652 - found existing host DHCP lease matching {name: "default-k8s-diff-port-024652", mac: "52:54:00:4c:73:cb", ip: "192.168.72.100"}
	I0729 19:44:10.127406 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | Getting to WaitForSSH function...
	I0729 19:44:10.129180 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:44:10.129499 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:73:cb", ip: ""} in network mk-default-k8s-diff-port-024652: {Iface:virbr4 ExpiryTime:2024-07-29 20:44:03 +0000 UTC Type:0 Mac:52:54:00:4c:73:cb Iaid: IPaddr:192.168.72.100 Prefix:24 Hostname:default-k8s-diff-port-024652 Clientid:01:52:54:00:4c:73:cb}
	I0729 19:44:10.129528 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined IP address 192.168.72.100 and MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:44:10.129613 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | Using SSH client type: external
	I0729 19:44:10.129633 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | Using SSH private key: /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/default-k8s-diff-port-024652/id_rsa (-rw-------)
	I0729 19:44:10.129676 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.100 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/default-k8s-diff-port-024652/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 19:44:10.129688 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | About to run SSH command:
	I0729 19:44:10.129700 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | exit 0
	I0729 19:44:10.254662 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | SSH cmd err, output: <nil>: 
	I0729 19:44:10.255021 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetConfigRaw
	I0729 19:44:10.255656 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetIP
	I0729 19:44:10.257855 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:44:10.258219 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:73:cb", ip: ""} in network mk-default-k8s-diff-port-024652: {Iface:virbr4 ExpiryTime:2024-07-29 20:44:03 +0000 UTC Type:0 Mac:52:54:00:4c:73:cb Iaid: IPaddr:192.168.72.100 Prefix:24 Hostname:default-k8s-diff-port-024652 Clientid:01:52:54:00:4c:73:cb}
	I0729 19:44:10.258251 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined IP address 192.168.72.100 and MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:44:10.258526 1120587 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/default-k8s-diff-port-024652/config.json ...
	I0729 19:44:10.258713 1120587 machine.go:94] provisionDockerMachine start ...
	I0729 19:44:10.258733 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .DriverName
	I0729 19:44:10.258968 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHHostname
	I0729 19:44:10.260864 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:44:10.261120 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:73:cb", ip: ""} in network mk-default-k8s-diff-port-024652: {Iface:virbr4 ExpiryTime:2024-07-29 20:44:03 +0000 UTC Type:0 Mac:52:54:00:4c:73:cb Iaid: IPaddr:192.168.72.100 Prefix:24 Hostname:default-k8s-diff-port-024652 Clientid:01:52:54:00:4c:73:cb}
	I0729 19:44:10.261149 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined IP address 192.168.72.100 and MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:44:10.261275 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHPort
	I0729 19:44:10.261460 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHKeyPath
	I0729 19:44:10.261635 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHKeyPath
	I0729 19:44:10.261778 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHUsername
	I0729 19:44:10.261932 1120587 main.go:141] libmachine: Using SSH client type: native
	I0729 19:44:10.262111 1120587 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.100 22 <nil> <nil>}
	I0729 19:44:10.262121 1120587 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 19:44:10.371225 1120587 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0729 19:44:10.371261 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetMachineName
	I0729 19:44:10.371516 1120587 buildroot.go:166] provisioning hostname "default-k8s-diff-port-024652"
	I0729 19:44:10.371545 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetMachineName
	I0729 19:44:10.371756 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHHostname
	I0729 19:44:10.374071 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:44:10.374356 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:73:cb", ip: ""} in network mk-default-k8s-diff-port-024652: {Iface:virbr4 ExpiryTime:2024-07-29 20:44:03 +0000 UTC Type:0 Mac:52:54:00:4c:73:cb Iaid: IPaddr:192.168.72.100 Prefix:24 Hostname:default-k8s-diff-port-024652 Clientid:01:52:54:00:4c:73:cb}
	I0729 19:44:10.374391 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined IP address 192.168.72.100 and MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:44:10.374479 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHPort
	I0729 19:44:10.374654 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHKeyPath
	I0729 19:44:10.374808 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHKeyPath
	I0729 19:44:10.374933 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHUsername
	I0729 19:44:10.375126 1120587 main.go:141] libmachine: Using SSH client type: native
	I0729 19:44:10.375324 1120587 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.100 22 <nil> <nil>}
	I0729 19:44:10.375338 1120587 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-024652 && echo "default-k8s-diff-port-024652" | sudo tee /etc/hostname
	I0729 19:44:10.499041 1120587 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-024652
	
	I0729 19:44:10.499075 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHHostname
	I0729 19:44:10.501635 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:44:10.501943 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:73:cb", ip: ""} in network mk-default-k8s-diff-port-024652: {Iface:virbr4 ExpiryTime:2024-07-29 20:44:03 +0000 UTC Type:0 Mac:52:54:00:4c:73:cb Iaid: IPaddr:192.168.72.100 Prefix:24 Hostname:default-k8s-diff-port-024652 Clientid:01:52:54:00:4c:73:cb}
	I0729 19:44:10.501973 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined IP address 192.168.72.100 and MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:44:10.502136 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHPort
	I0729 19:44:10.502318 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHKeyPath
	I0729 19:44:10.502494 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHKeyPath
	I0729 19:44:10.502669 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHUsername
	I0729 19:44:10.502826 1120587 main.go:141] libmachine: Using SSH client type: native
	I0729 19:44:10.503019 1120587 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.100 22 <nil> <nil>}
	I0729 19:44:10.503042 1120587 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-024652' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-024652/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-024652' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 19:44:10.619637 1120587 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 19:44:10.619673 1120587 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19312-1055011/.minikube CaCertPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19312-1055011/.minikube}
	I0729 19:44:10.619708 1120587 buildroot.go:174] setting up certificates
	I0729 19:44:10.619719 1120587 provision.go:84] configureAuth start
	I0729 19:44:10.619728 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetMachineName
	I0729 19:44:10.620036 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetIP
	I0729 19:44:10.622502 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:44:10.622810 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:73:cb", ip: ""} in network mk-default-k8s-diff-port-024652: {Iface:virbr4 ExpiryTime:2024-07-29 20:44:03 +0000 UTC Type:0 Mac:52:54:00:4c:73:cb Iaid: IPaddr:192.168.72.100 Prefix:24 Hostname:default-k8s-diff-port-024652 Clientid:01:52:54:00:4c:73:cb}
	I0729 19:44:10.622841 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined IP address 192.168.72.100 and MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:44:10.622932 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHHostname
	I0729 19:44:10.625181 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:44:10.625508 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:73:cb", ip: ""} in network mk-default-k8s-diff-port-024652: {Iface:virbr4 ExpiryTime:2024-07-29 20:44:03 +0000 UTC Type:0 Mac:52:54:00:4c:73:cb Iaid: IPaddr:192.168.72.100 Prefix:24 Hostname:default-k8s-diff-port-024652 Clientid:01:52:54:00:4c:73:cb}
	I0729 19:44:10.625531 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined IP address 192.168.72.100 and MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:44:10.625681 1120587 provision.go:143] copyHostCerts
	I0729 19:44:10.625743 1120587 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.pem, removing ...
	I0729 19:44:10.625755 1120587 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.pem
	I0729 19:44:10.625825 1120587 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.pem (1082 bytes)
	I0729 19:44:10.625929 1120587 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-1055011/.minikube/cert.pem, removing ...
	I0729 19:44:10.625937 1120587 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-1055011/.minikube/cert.pem
	I0729 19:44:10.625960 1120587 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19312-1055011/.minikube/cert.pem (1123 bytes)
	I0729 19:44:10.626015 1120587 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-1055011/.minikube/key.pem, removing ...
	I0729 19:44:10.626021 1120587 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-1055011/.minikube/key.pem
	I0729 19:44:10.626042 1120587 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19312-1055011/.minikube/key.pem (1679 bytes)
	I0729 19:44:10.626089 1120587 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-024652 san=[127.0.0.1 192.168.72.100 default-k8s-diff-port-024652 localhost minikube]
	I0729 19:44:10.750576 1120587 provision.go:177] copyRemoteCerts
	I0729 19:44:10.750651 1120587 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 19:44:10.750713 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHHostname
	I0729 19:44:10.753390 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:44:10.753745 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:73:cb", ip: ""} in network mk-default-k8s-diff-port-024652: {Iface:virbr4 ExpiryTime:2024-07-29 20:44:03 +0000 UTC Type:0 Mac:52:54:00:4c:73:cb Iaid: IPaddr:192.168.72.100 Prefix:24 Hostname:default-k8s-diff-port-024652 Clientid:01:52:54:00:4c:73:cb}
	I0729 19:44:10.753791 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined IP address 192.168.72.100 and MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:44:10.753942 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHPort
	I0729 19:44:10.754149 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHKeyPath
	I0729 19:44:10.754330 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHUsername
	I0729 19:44:10.754514 1120587 sshutil.go:53] new ssh client: &{IP:192.168.72.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/default-k8s-diff-port-024652/id_rsa Username:docker}
	I0729 19:44:10.836524 1120587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0729 19:44:10.861913 1120587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0729 19:44:10.885539 1120587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0729 19:44:10.909851 1120587 provision.go:87] duration metric: took 290.118473ms to configureAuth
	I0729 19:44:10.909880 1120587 buildroot.go:189] setting minikube options for container-runtime
	I0729 19:44:10.910051 1120587 config.go:182] Loaded profile config "default-k8s-diff-port-024652": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 19:44:10.910127 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHHostname
	I0729 19:44:10.912662 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:44:10.912962 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:73:cb", ip: ""} in network mk-default-k8s-diff-port-024652: {Iface:virbr4 ExpiryTime:2024-07-29 20:44:03 +0000 UTC Type:0 Mac:52:54:00:4c:73:cb Iaid: IPaddr:192.168.72.100 Prefix:24 Hostname:default-k8s-diff-port-024652 Clientid:01:52:54:00:4c:73:cb}
	I0729 19:44:10.912993 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined IP address 192.168.72.100 and MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:44:10.913224 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHPort
	I0729 19:44:10.913429 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHKeyPath
	I0729 19:44:10.913601 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHKeyPath
	I0729 19:44:10.913744 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHUsername
	I0729 19:44:10.913882 1120587 main.go:141] libmachine: Using SSH client type: native
	I0729 19:44:10.914096 1120587 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.100 22 <nil> <nil>}
	I0729 19:44:10.914112 1120587 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 19:44:11.419483 1120970 start.go:364] duration metric: took 3m13.335541366s to acquireMachinesLock for "old-k8s-version-021528"
	I0729 19:44:11.419549 1120970 start.go:96] Skipping create...Using existing machine configuration
	I0729 19:44:11.419560 1120970 fix.go:54] fixHost starting: 
	I0729 19:44:11.419981 1120970 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:44:11.420020 1120970 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:44:11.437552 1120970 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44419
	I0729 19:44:11.437927 1120970 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:44:11.438424 1120970 main.go:141] libmachine: Using API Version  1
	I0729 19:44:11.438449 1120970 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:44:11.438787 1120970 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:44:11.438995 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .DriverName
	I0729 19:44:11.439201 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetState
	I0729 19:44:11.440476 1120970 fix.go:112] recreateIfNeeded on old-k8s-version-021528: state=Stopped err=<nil>
	I0729 19:44:11.440514 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .DriverName
	W0729 19:44:11.440692 1120970 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 19:44:11.442528 1120970 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-021528" ...
	I0729 19:44:11.181850 1120587 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 19:44:11.181877 1120587 machine.go:97] duration metric: took 923.15162ms to provisionDockerMachine
	I0729 19:44:11.181889 1120587 start.go:293] postStartSetup for "default-k8s-diff-port-024652" (driver="kvm2")
	I0729 19:44:11.181899 1120587 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 19:44:11.181914 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .DriverName
	I0729 19:44:11.182289 1120587 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 19:44:11.182322 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHHostname
	I0729 19:44:11.185275 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:44:11.185761 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:73:cb", ip: ""} in network mk-default-k8s-diff-port-024652: {Iface:virbr4 ExpiryTime:2024-07-29 20:44:03 +0000 UTC Type:0 Mac:52:54:00:4c:73:cb Iaid: IPaddr:192.168.72.100 Prefix:24 Hostname:default-k8s-diff-port-024652 Clientid:01:52:54:00:4c:73:cb}
	I0729 19:44:11.185791 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined IP address 192.168.72.100 and MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:44:11.186002 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHPort
	I0729 19:44:11.186282 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHKeyPath
	I0729 19:44:11.186467 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHUsername
	I0729 19:44:11.186620 1120587 sshutil.go:53] new ssh client: &{IP:192.168.72.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/default-k8s-diff-port-024652/id_rsa Username:docker}
	I0729 19:44:11.268993 1120587 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 19:44:11.273072 1120587 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 19:44:11.273093 1120587 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-1055011/.minikube/addons for local assets ...
	I0729 19:44:11.273161 1120587 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-1055011/.minikube/files for local assets ...
	I0729 19:44:11.273244 1120587 filesync.go:149] local asset: /home/jenkins/minikube-integration/19312-1055011/.minikube/files/etc/ssl/certs/10622722.pem -> 10622722.pem in /etc/ssl/certs
	I0729 19:44:11.273353 1120587 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 19:44:11.282258 1120587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/files/etc/ssl/certs/10622722.pem --> /etc/ssl/certs/10622722.pem (1708 bytes)
	I0729 19:44:11.305957 1120587 start.go:296] duration metric: took 124.053991ms for postStartSetup
	I0729 19:44:11.305998 1120587 fix.go:56] duration metric: took 19.39022657s for fixHost
	I0729 19:44:11.306024 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHHostname
	I0729 19:44:11.308452 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:44:11.308881 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:73:cb", ip: ""} in network mk-default-k8s-diff-port-024652: {Iface:virbr4 ExpiryTime:2024-07-29 20:44:03 +0000 UTC Type:0 Mac:52:54:00:4c:73:cb Iaid: IPaddr:192.168.72.100 Prefix:24 Hostname:default-k8s-diff-port-024652 Clientid:01:52:54:00:4c:73:cb}
	I0729 19:44:11.308902 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined IP address 192.168.72.100 and MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:44:11.309099 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHPort
	I0729 19:44:11.309321 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHKeyPath
	I0729 19:44:11.309507 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHKeyPath
	I0729 19:44:11.309646 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHUsername
	I0729 19:44:11.309836 1120587 main.go:141] libmachine: Using SSH client type: native
	I0729 19:44:11.310009 1120587 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.100 22 <nil> <nil>}
	I0729 19:44:11.310021 1120587 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0729 19:44:11.419338 1120587 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722282251.371238734
	
	I0729 19:44:11.419359 1120587 fix.go:216] guest clock: 1722282251.371238734
	I0729 19:44:11.419366 1120587 fix.go:229] Guest: 2024-07-29 19:44:11.371238734 +0000 UTC Remote: 2024-07-29 19:44:11.306004097 +0000 UTC m=+255.178971379 (delta=65.234637ms)
	I0729 19:44:11.419386 1120587 fix.go:200] guest clock delta is within tolerance: 65.234637ms
	I0729 19:44:11.419394 1120587 start.go:83] releasing machines lock for "default-k8s-diff-port-024652", held for 19.503660828s
	I0729 19:44:11.419418 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .DriverName
	I0729 19:44:11.419749 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetIP
	I0729 19:44:11.422054 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:44:11.422377 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:73:cb", ip: ""} in network mk-default-k8s-diff-port-024652: {Iface:virbr4 ExpiryTime:2024-07-29 20:44:03 +0000 UTC Type:0 Mac:52:54:00:4c:73:cb Iaid: IPaddr:192.168.72.100 Prefix:24 Hostname:default-k8s-diff-port-024652 Clientid:01:52:54:00:4c:73:cb}
	I0729 19:44:11.422421 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined IP address 192.168.72.100 and MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:44:11.422552 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .DriverName
	I0729 19:44:11.423087 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .DriverName
	I0729 19:44:11.423284 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .DriverName
	I0729 19:44:11.423358 1120587 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 19:44:11.423410 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHHostname
	I0729 19:44:11.423511 1120587 ssh_runner.go:195] Run: cat /version.json
	I0729 19:44:11.423540 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHHostname
	I0729 19:44:11.426070 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:44:11.426323 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:44:11.426440 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:73:cb", ip: ""} in network mk-default-k8s-diff-port-024652: {Iface:virbr4 ExpiryTime:2024-07-29 20:44:03 +0000 UTC Type:0 Mac:52:54:00:4c:73:cb Iaid: IPaddr:192.168.72.100 Prefix:24 Hostname:default-k8s-diff-port-024652 Clientid:01:52:54:00:4c:73:cb}
	I0729 19:44:11.426471 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined IP address 192.168.72.100 and MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:44:11.426579 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHPort
	I0729 19:44:11.426774 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHKeyPath
	I0729 19:44:11.426918 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHUsername
	I0729 19:44:11.426957 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:73:cb", ip: ""} in network mk-default-k8s-diff-port-024652: {Iface:virbr4 ExpiryTime:2024-07-29 20:44:03 +0000 UTC Type:0 Mac:52:54:00:4c:73:cb Iaid: IPaddr:192.168.72.100 Prefix:24 Hostname:default-k8s-diff-port-024652 Clientid:01:52:54:00:4c:73:cb}
	I0729 19:44:11.426981 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined IP address 192.168.72.100 and MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:44:11.427069 1120587 sshutil.go:53] new ssh client: &{IP:192.168.72.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/default-k8s-diff-port-024652/id_rsa Username:docker}
	I0729 19:44:11.427176 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHPort
	I0729 19:44:11.427343 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHKeyPath
	I0729 19:44:11.427534 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHUsername
	I0729 19:44:11.427700 1120587 sshutil.go:53] new ssh client: &{IP:192.168.72.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/default-k8s-diff-port-024652/id_rsa Username:docker}
	I0729 19:44:11.536440 1120587 ssh_runner.go:195] Run: systemctl --version
	I0729 19:44:11.542493 1120587 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 19:44:11.688795 1120587 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 19:44:11.696783 1120587 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 19:44:11.696855 1120587 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 19:44:11.717067 1120587 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 19:44:11.717091 1120587 start.go:495] detecting cgroup driver to use...
	I0729 19:44:11.717157 1120587 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 19:44:11.735056 1120587 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 19:44:11.748999 1120587 docker.go:217] disabling cri-docker service (if available) ...
	I0729 19:44:11.749061 1120587 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 19:44:11.764244 1120587 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 19:44:11.778072 1120587 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 19:44:11.893008 1120587 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 19:44:12.053939 1120587 docker.go:233] disabling docker service ...
	I0729 19:44:12.054035 1120587 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 19:44:12.068666 1120587 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 19:44:12.085766 1120587 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 19:44:12.232278 1120587 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 19:44:12.356403 1120587 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 19:44:12.370085 1120587 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 19:44:12.388817 1120587 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0729 19:44:12.388879 1120587 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:44:12.399945 1120587 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 19:44:12.400017 1120587 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:44:12.410117 1120587 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:44:12.422162 1120587 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:44:12.433170 1120587 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 19:44:12.444386 1120587 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:44:12.455009 1120587 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:44:12.472279 1120587 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:44:12.482431 1120587 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 19:44:12.492028 1120587 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 19:44:12.492097 1120587 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 19:44:12.505966 1120587 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 19:44:12.515505 1120587 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 19:44:12.639691 1120587 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 19:44:12.781358 1120587 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 19:44:12.781427 1120587 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 19:44:12.786218 1120587 start.go:563] Will wait 60s for crictl version
	I0729 19:44:12.786312 1120587 ssh_runner.go:195] Run: which crictl
	I0729 19:44:12.790056 1120587 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 19:44:12.830355 1120587 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 19:44:12.830451 1120587 ssh_runner.go:195] Run: crio --version
	I0729 19:44:12.859119 1120587 ssh_runner.go:195] Run: crio --version
	I0729 19:44:12.892473 1120587 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0729 19:44:11.443772 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .Start
	I0729 19:44:11.443926 1120970 main.go:141] libmachine: (old-k8s-version-021528) Ensuring networks are active...
	I0729 19:44:11.444570 1120970 main.go:141] libmachine: (old-k8s-version-021528) Ensuring network default is active
	I0729 19:44:11.444890 1120970 main.go:141] libmachine: (old-k8s-version-021528) Ensuring network mk-old-k8s-version-021528 is active
	I0729 19:44:11.445234 1120970 main.go:141] libmachine: (old-k8s-version-021528) Getting domain xml...
	I0729 19:44:11.445994 1120970 main.go:141] libmachine: (old-k8s-version-021528) Creating domain...
	I0729 19:44:12.696734 1120970 main.go:141] libmachine: (old-k8s-version-021528) Waiting to get IP...
	I0729 19:44:12.697599 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:12.697967 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | unable to find current IP address of domain old-k8s-version-021528 in network mk-old-k8s-version-021528
	I0729 19:44:12.698075 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | I0729 19:44:12.697953 1121841 retry.go:31] will retry after 228.228482ms: waiting for machine to come up
	I0729 19:44:12.927713 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:12.928250 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | unable to find current IP address of domain old-k8s-version-021528 in network mk-old-k8s-version-021528
	I0729 19:44:12.928280 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | I0729 19:44:12.928204 1121841 retry.go:31] will retry after 241.659418ms: waiting for machine to come up
	I0729 19:44:10.432255 1120280 pod_ready.go:102] pod "coredns-7db6d8ff4d-q6jm9" in "kube-system" namespace has status "Ready":"False"
	I0729 19:44:12.932761 1120280 pod_ready.go:102] pod "coredns-7db6d8ff4d-q6jm9" in "kube-system" namespace has status "Ready":"False"
	I0729 19:44:14.934282 1120280 pod_ready.go:102] pod "coredns-7db6d8ff4d-q6jm9" in "kube-system" namespace has status "Ready":"False"
	I0729 19:44:12.893725 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetIP
	I0729 19:44:12.897014 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:44:12.897401 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:73:cb", ip: ""} in network mk-default-k8s-diff-port-024652: {Iface:virbr4 ExpiryTime:2024-07-29 20:44:03 +0000 UTC Type:0 Mac:52:54:00:4c:73:cb Iaid: IPaddr:192.168.72.100 Prefix:24 Hostname:default-k8s-diff-port-024652 Clientid:01:52:54:00:4c:73:cb}
	I0729 19:44:12.897431 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined IP address 192.168.72.100 and MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:44:12.897621 1120587 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0729 19:44:12.902155 1120587 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 19:44:12.915460 1120587 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-024652 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.30.3 ClusterName:default-k8s-diff-port-024652 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.100 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirat
ion:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 19:44:12.915581 1120587 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 19:44:12.915631 1120587 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 19:44:12.956377 1120587 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0729 19:44:12.956444 1120587 ssh_runner.go:195] Run: which lz4
	I0729 19:44:12.960415 1120587 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0729 19:44:12.964785 1120587 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0729 19:44:12.964819 1120587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0729 19:44:14.422427 1120587 crio.go:462] duration metric: took 1.462052598s to copy over tarball
	I0729 19:44:14.422514 1120587 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0729 19:44:13.171713 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:13.172206 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | unable to find current IP address of domain old-k8s-version-021528 in network mk-old-k8s-version-021528
	I0729 19:44:13.172234 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | I0729 19:44:13.172165 1121841 retry.go:31] will retry after 475.69466ms: waiting for machine to come up
	I0729 19:44:13.649741 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:13.650180 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | unable to find current IP address of domain old-k8s-version-021528 in network mk-old-k8s-version-021528
	I0729 19:44:13.650210 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | I0729 19:44:13.650126 1121841 retry.go:31] will retry after 556.03832ms: waiting for machine to come up
	I0729 19:44:14.207549 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:14.208045 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | unable to find current IP address of domain old-k8s-version-021528 in network mk-old-k8s-version-021528
	I0729 19:44:14.208080 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | I0729 19:44:14.207996 1121841 retry.go:31] will retry after 699.802636ms: waiting for machine to come up
	I0729 19:44:14.909153 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:14.909708 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | unable to find current IP address of domain old-k8s-version-021528 in network mk-old-k8s-version-021528
	I0729 19:44:14.909736 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | I0729 19:44:14.909677 1121841 retry.go:31] will retry after 756.053302ms: waiting for machine to come up
	I0729 19:44:15.667015 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:15.667487 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | unable to find current IP address of domain old-k8s-version-021528 in network mk-old-k8s-version-021528
	I0729 19:44:15.667518 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | I0729 19:44:15.667434 1121841 retry.go:31] will retry after 729.442111ms: waiting for machine to come up
	I0729 19:44:16.398540 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:16.399139 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | unable to find current IP address of domain old-k8s-version-021528 in network mk-old-k8s-version-021528
	I0729 19:44:16.399191 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | I0729 19:44:16.399060 1121841 retry.go:31] will retry after 1.131574034s: waiting for machine to come up
	I0729 19:44:17.531966 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:17.532448 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | unable to find current IP address of domain old-k8s-version-021528 in network mk-old-k8s-version-021528
	I0729 19:44:17.532480 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | I0729 19:44:17.532380 1121841 retry.go:31] will retry after 1.546547994s: waiting for machine to come up
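The old-k8s-version-021528 start above loops on "unable to find current IP address", retrying with a growing delay until the libvirt DHCP lease for the domain's MAC appears. A minimal Go sketch of that retry-with-backoff pattern; the lookupIP helper and the delay growth are illustrative stand-ins, not minikube's actual retry.go code:

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// lookupIP is a stand-in for querying the libvirt DHCP leases for the
// domain's MAC address; it returns an error until a lease shows up.
func lookupIP(mac string) (string, error) {
	return "", errors.New("unable to find current IP address")
}

// waitForIP polls lookupIP with a growing delay, giving up after maxWait.
func waitForIP(mac string, maxWait time.Duration) (string, error) {
	deadline := time.Now().Add(maxWait)
	delay := 200 * time.Millisecond
	for time.Now().Before(deadline) {
		ip, err := lookupIP(mac)
		if err == nil {
			return ip, nil
		}
		fmt.Printf("will retry after %v: waiting for machine to come up\n", delay)
		time.Sleep(delay)
		delay += delay / 2 // grow the delay, roughly like the jittered waits in the log
	}
	return "", fmt.Errorf("machine %s did not get an IP within %v", mac, maxWait)
}

func main() {
	if ip, err := waitForIP("52:54:00:12:c7:d2", 3*time.Second); err != nil {
		fmt.Println(err)
	} else {
		fmt.Println("machine IP:", ip)
	}
}
```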
	I0729 19:44:15.433310 1120280 pod_ready.go:92] pod "coredns-7db6d8ff4d-q6jm9" in "kube-system" namespace has status "Ready":"True"
	I0729 19:44:15.433336 1120280 pod_ready.go:81] duration metric: took 9.507558167s for pod "coredns-7db6d8ff4d-q6jm9" in "kube-system" namespace to be "Ready" ...
	I0729 19:44:15.433353 1120280 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-358053" in "kube-system" namespace to be "Ready" ...
	I0729 19:44:15.438725 1120280 pod_ready.go:92] pod "etcd-embed-certs-358053" in "kube-system" namespace has status "Ready":"True"
	I0729 19:44:15.438747 1120280 pod_ready.go:81] duration metric: took 5.385786ms for pod "etcd-embed-certs-358053" in "kube-system" namespace to be "Ready" ...
	I0729 19:44:15.438758 1120280 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-358053" in "kube-system" namespace to be "Ready" ...
	I0729 19:44:15.444196 1120280 pod_ready.go:92] pod "kube-apiserver-embed-certs-358053" in "kube-system" namespace has status "Ready":"True"
	I0729 19:44:15.444214 1120280 pod_ready.go:81] duration metric: took 5.447798ms for pod "kube-apiserver-embed-certs-358053" in "kube-system" namespace to be "Ready" ...
	I0729 19:44:15.444228 1120280 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-358053" in "kube-system" namespace to be "Ready" ...
	I0729 19:44:16.452748 1120280 pod_ready.go:92] pod "kube-controller-manager-embed-certs-358053" in "kube-system" namespace has status "Ready":"True"
	I0729 19:44:16.452772 1120280 pod_ready.go:81] duration metric: took 1.00853566s for pod "kube-controller-manager-embed-certs-358053" in "kube-system" namespace to be "Ready" ...
	I0729 19:44:16.452784 1120280 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-lb7hb" in "kube-system" namespace to be "Ready" ...
	I0729 19:44:16.458635 1120280 pod_ready.go:92] pod "kube-proxy-lb7hb" in "kube-system" namespace has status "Ready":"True"
	I0729 19:44:16.458653 1120280 pod_ready.go:81] duration metric: took 5.862242ms for pod "kube-proxy-lb7hb" in "kube-system" namespace to be "Ready" ...
	I0729 19:44:16.458662 1120280 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-358053" in "kube-system" namespace to be "Ready" ...
	I0729 19:44:16.631200 1120280 pod_ready.go:92] pod "kube-scheduler-embed-certs-358053" in "kube-system" namespace has status "Ready":"True"
	I0729 19:44:16.631229 1120280 pod_ready.go:81] duration metric: took 172.559322ms for pod "kube-scheduler-embed-certs-358053" in "kube-system" namespace to be "Ready" ...
	I0729 19:44:16.631242 1120280 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace to be "Ready" ...
	I0729 19:44:18.638680 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
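The pod_ready lines above wait up to 4m0s per pod for the Ready condition to become True, and metrics-server-569cc877fc-jsvnd never gets there. A rough client-go sketch of that readiness poll, assuming a kubeconfig path and using the pod name from the log; this is a simplified illustration, not minikube's pod_ready.go:

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's Ready condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Hypothetical kubeconfig path for the illustration.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(),
			"metrics-server-569cc877fc-jsvnd", metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to be Ready")
}
```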
	I0729 19:44:16.739626 1120587 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.317075688s)
	I0729 19:44:16.739689 1120587 crio.go:469] duration metric: took 2.317215237s to extract the tarball
	I0729 19:44:16.739702 1120587 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0729 19:44:16.777698 1120587 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 19:44:16.825740 1120587 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 19:44:16.825768 1120587 cache_images.go:84] Images are preloaded, skipping loading
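The preload step above checks whether /preloaded.tar.lz4 already exists on the guest, copies the cached preload tarball over when it does not, unpacks it into /var, deletes the tarball, and re-lists images to confirm they are now present. A short Go sketch of that check-then-extract sequence using os/exec; the paths and tar flags mirror the logged commands, while the error handling is trimmed for illustration:

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

const tarball = "/preloaded.tar.lz4"

func main() {
	// Existence check, equivalent to the logged `stat` call.
	if _, err := os.Stat(tarball); err != nil {
		fmt.Println("preload tarball missing, it would be copied over first:", err)
		return
	}
	// Unpack into /var, matching the logged tar invocation.
	cmd := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", tarball)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		fmt.Println("extract failed:", err)
		return
	}
	// Cleanup, matching the logged rm of the tarball.
	_ = os.Remove(tarball)
	fmt.Println("preloaded images extracted")
}
```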
	I0729 19:44:16.825777 1120587 kubeadm.go:934] updating node { 192.168.72.100 8444 v1.30.3 crio true true} ...
	I0729 19:44:16.825933 1120587 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-024652 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.100
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-024652 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 19:44:16.826030 1120587 ssh_runner.go:195] Run: crio config
	I0729 19:44:16.873727 1120587 cni.go:84] Creating CNI manager for ""
	I0729 19:44:16.873752 1120587 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 19:44:16.873764 1120587 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 19:44:16.873791 1120587 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.100 APIServerPort:8444 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-024652 NodeName:default-k8s-diff-port-024652 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.100"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.100 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/cer
ts/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 19:44:16.873929 1120587 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.100
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-024652"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.100
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.100"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0729 19:44:16.873990 1120587 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0729 19:44:16.884036 1120587 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 19:44:16.884126 1120587 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 19:44:16.893332 1120587 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0729 19:44:16.911950 1120587 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 19:44:16.930305 1120587 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
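The kubeadm config printed above is rendered with the node-specific values (advertise address 192.168.72.100, bind port 8444, node name default-k8s-diff-port-024652) filled in before being written to /var/tmp/minikube/kubeadm.yaml.new. A small Go sketch of rendering such a fragment with text/template; the template text and field names here are an assumption for illustration, not the template minikube itself uses:

```go
package main

import (
	"os"
	"text/template"
)

// A trimmed fragment of the InitConfiguration above; only the node-specific
// fields are templated.
const initCfg = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.NodeIP}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  criSocket: unix:///var/run/crio/crio.sock
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.NodeIP}}
  taints: []
`

func main() {
	params := struct {
		NodeIP        string
		APIServerPort int
		NodeName      string
	}{"192.168.72.100", 8444, "default-k8s-diff-port-024652"}
	t := template.Must(template.New("kubeadm").Parse(initCfg))
	// Printing to stdout here; the rendered file would then be compared
	// against the existing kubeadm.yaml and copied into place.
	if err := t.Execute(os.Stdout, params); err != nil {
		panic(err)
	}
}
```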
	I0729 19:44:16.948353 1120587 ssh_runner.go:195] Run: grep 192.168.72.100	control-plane.minikube.internal$ /etc/hosts
	I0729 19:44:16.952431 1120587 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.100	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 19:44:16.964743 1120587 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 19:44:17.072244 1120587 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 19:44:17.088224 1120587 certs.go:68] Setting up /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/default-k8s-diff-port-024652 for IP: 192.168.72.100
	I0729 19:44:17.088256 1120587 certs.go:194] generating shared ca certs ...
	I0729 19:44:17.088280 1120587 certs.go:226] acquiring lock for ca certs: {Name:mkd1f0b3d7e82ac23e713dd6b75409e103935b02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 19:44:17.088482 1120587 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.key
	I0729 19:44:17.088563 1120587 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/proxy-client-ca.key
	I0729 19:44:17.088579 1120587 certs.go:256] generating profile certs ...
	I0729 19:44:17.088738 1120587 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/default-k8s-diff-port-024652/client.key
	I0729 19:44:17.088823 1120587 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/default-k8s-diff-port-024652/apiserver.key.4c9c937f
	I0729 19:44:17.088876 1120587 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/default-k8s-diff-port-024652/proxy-client.key
	I0729 19:44:17.089049 1120587 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/1062272.pem (1338 bytes)
	W0729 19:44:17.089093 1120587 certs.go:480] ignoring /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/1062272_empty.pem, impossibly tiny 0 bytes
	I0729 19:44:17.089109 1120587 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 19:44:17.089135 1120587 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem (1082 bytes)
	I0729 19:44:17.089156 1120587 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/cert.pem (1123 bytes)
	I0729 19:44:17.089180 1120587 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/key.pem (1679 bytes)
	I0729 19:44:17.089218 1120587 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/files/etc/ssl/certs/10622722.pem (1708 bytes)
	I0729 19:44:17.089954 1120587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 19:44:17.144094 1120587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0729 19:44:17.191515 1120587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 19:44:17.220210 1120587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0729 19:44:17.252381 1120587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/default-k8s-diff-port-024652/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0729 19:44:17.291881 1120587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/default-k8s-diff-port-024652/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0729 19:44:17.334114 1120587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/default-k8s-diff-port-024652/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 19:44:17.363726 1120587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/default-k8s-diff-port-024652/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0729 19:44:17.389190 1120587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 19:44:17.413683 1120587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/1062272.pem --> /usr/share/ca-certificates/1062272.pem (1338 bytes)
	I0729 19:44:17.441739 1120587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/files/etc/ssl/certs/10622722.pem --> /usr/share/ca-certificates/10622722.pem (1708 bytes)
	I0729 19:44:17.472609 1120587 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 19:44:17.489059 1120587 ssh_runner.go:195] Run: openssl version
	I0729 19:44:17.495020 1120587 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 19:44:17.507133 1120587 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 19:44:17.511759 1120587 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 18:17 /usr/share/ca-certificates/minikubeCA.pem
	I0729 19:44:17.511850 1120587 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 19:44:17.518120 1120587 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 19:44:17.528867 1120587 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1062272.pem && ln -fs /usr/share/ca-certificates/1062272.pem /etc/ssl/certs/1062272.pem"
	I0729 19:44:17.539695 1120587 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1062272.pem
	I0729 19:44:17.544063 1120587 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 18:30 /usr/share/ca-certificates/1062272.pem
	I0729 19:44:17.544113 1120587 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1062272.pem
	I0729 19:44:17.549785 1120587 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1062272.pem /etc/ssl/certs/51391683.0"
	I0729 19:44:17.560562 1120587 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10622722.pem && ln -fs /usr/share/ca-certificates/10622722.pem /etc/ssl/certs/10622722.pem"
	I0729 19:44:17.573597 1120587 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10622722.pem
	I0729 19:44:17.578089 1120587 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 18:30 /usr/share/ca-certificates/10622722.pem
	I0729 19:44:17.578137 1120587 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10622722.pem
	I0729 19:44:17.583614 1120587 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10622722.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 19:44:17.594903 1120587 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 19:44:17.599449 1120587 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 19:44:17.605325 1120587 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 19:44:17.611495 1120587 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 19:44:17.617663 1120587 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 19:44:17.623715 1120587 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 19:44:17.629845 1120587 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
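Each `openssl x509 -noout -in ... -checkend 86400` call above asks whether the named certificate expires within the next 86400 seconds (24 hours), which decides whether minikube regenerates it. A minimal Go equivalent of that check using crypto/x509; the certificate paths are the same ones probed in the log, and the helper name is only illustrative:

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in the PEM file
// expires within the given window, the same question `-checkend` answers.
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	for _, p := range []string{
		"/var/lib/minikube/certs/apiserver-etcd-client.crt",
		"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
		"/var/lib/minikube/certs/front-proxy-client.crt",
	} {
		soon, err := expiresWithin(p, 24*time.Hour)
		if err != nil {
			fmt.Println(p, "error:", err)
			continue
		}
		fmt.Printf("%s expires within 24h: %v\n", p, soon)
	}
}
```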
	I0729 19:44:17.637607 1120587 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-024652 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.30.3 ClusterName:default-k8s-diff-port-024652 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.100 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration
:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 19:44:17.637725 1120587 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 19:44:17.637778 1120587 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 19:44:17.685777 1120587 cri.go:89] found id: ""
	I0729 19:44:17.685877 1120587 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 19:44:17.703296 1120587 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0729 19:44:17.703320 1120587 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0729 19:44:17.703387 1120587 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0729 19:44:17.715928 1120587 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0729 19:44:17.717371 1120587 kubeconfig.go:125] found "default-k8s-diff-port-024652" server: "https://192.168.72.100:8444"
	I0729 19:44:17.720536 1120587 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0729 19:44:17.732125 1120587 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.100
	I0729 19:44:17.732165 1120587 kubeadm.go:1160] stopping kube-system containers ...
	I0729 19:44:17.732207 1120587 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0729 19:44:17.732284 1120587 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 19:44:17.786419 1120587 cri.go:89] found id: ""
	I0729 19:44:17.786502 1120587 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0729 19:44:17.804866 1120587 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 19:44:17.815092 1120587 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 19:44:17.815113 1120587 kubeadm.go:157] found existing configuration files:
	
	I0729 19:44:17.815189 1120587 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0729 19:44:17.824963 1120587 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 19:44:17.825020 1120587 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 19:44:17.835349 1120587 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0729 19:44:17.846227 1120587 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 19:44:17.846290 1120587 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 19:44:17.859231 1120587 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0729 19:44:17.870794 1120587 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 19:44:17.870883 1120587 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 19:44:17.882317 1120587 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0729 19:44:17.891702 1120587 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 19:44:17.891757 1120587 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
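The stale-config cleanup above greps each /etc/kubernetes/*.conf for the expected control-plane endpoint (https://control-plane.minikube.internal:8444) and removes any file that does not mention it, so kubeadm can regenerate a consistent set. A compact Go sketch of that grep-then-remove step; the function name is illustrative and missing files are treated as already clean, matching the "No such file or directory" results in the log:

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// removeIfStale deletes a kubeconfig-style file when it does not mention the
// expected control-plane endpoint.
func removeIfStale(path, endpoint string) error {
	data, err := os.ReadFile(path)
	if os.IsNotExist(err) {
		return nil // nothing to clean up
	}
	if err != nil {
		return err
	}
	if strings.Contains(string(data), endpoint) {
		return nil // already points at the right endpoint, keep it
	}
	return os.Remove(path)
}

func main() {
	endpoint := "https://control-plane.minikube.internal:8444"
	for _, f := range []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		if err := removeIfStale(f, endpoint); err != nil {
			fmt.Println("cleanup failed for", f, ":", err)
		}
	}
}
```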
	I0729 19:44:17.901153 1120587 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 19:44:17.911253 1120587 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 19:44:18.040695 1120587 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 19:44:19.054689 1120587 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.013955991s)
	I0729 19:44:19.054724 1120587 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0729 19:44:19.255112 1120587 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 19:44:19.346186 1120587 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0729 19:44:19.462795 1120587 api_server.go:52] waiting for apiserver process to appear ...
	I0729 19:44:19.462938 1120587 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:19.963927 1120587 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:20.463691 1120587 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:20.504478 1120587 api_server.go:72] duration metric: took 1.041683096s to wait for apiserver process to appear ...
	I0729 19:44:20.504523 1120587 api_server.go:88] waiting for apiserver healthz status ...
	I0729 19:44:20.504552 1120587 api_server.go:253] Checking apiserver healthz at https://192.168.72.100:8444/healthz ...
	I0729 19:44:20.505202 1120587 api_server.go:269] stopped: https://192.168.72.100:8444/healthz: Get "https://192.168.72.100:8444/healthz": dial tcp 192.168.72.100:8444: connect: connection refused
	I0729 19:44:21.004771 1120587 api_server.go:253] Checking apiserver healthz at https://192.168.72.100:8444/healthz ...
	I0729 19:44:19.081196 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:19.081719 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | unable to find current IP address of domain old-k8s-version-021528 in network mk-old-k8s-version-021528
	I0729 19:44:19.081749 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | I0729 19:44:19.081668 1121841 retry.go:31] will retry after 2.079913941s: waiting for machine to come up
	I0729 19:44:21.163461 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:21.163980 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | unable to find current IP address of domain old-k8s-version-021528 in network mk-old-k8s-version-021528
	I0729 19:44:21.164066 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | I0729 19:44:21.163965 1121841 retry.go:31] will retry after 2.355802923s: waiting for machine to come up
	I0729 19:44:20.638745 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:44:22.638835 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:44:23.789983 1120587 api_server.go:279] https://192.168.72.100:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 19:44:23.790018 1120587 api_server.go:103] status: https://192.168.72.100:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 19:44:23.790033 1120587 api_server.go:253] Checking apiserver healthz at https://192.168.72.100:8444/healthz ...
	I0729 19:44:23.843047 1120587 api_server.go:279] https://192.168.72.100:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 19:44:23.843090 1120587 api_server.go:103] status: https://192.168.72.100:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 19:44:24.005370 1120587 api_server.go:253] Checking apiserver healthz at https://192.168.72.100:8444/healthz ...
	I0729 19:44:24.009941 1120587 api_server.go:279] https://192.168.72.100:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 19:44:24.009973 1120587 api_server.go:103] status: https://192.168.72.100:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 19:44:24.505118 1120587 api_server.go:253] Checking apiserver healthz at https://192.168.72.100:8444/healthz ...
	I0729 19:44:24.512838 1120587 api_server.go:279] https://192.168.72.100:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 19:44:24.512874 1120587 api_server.go:103] status: https://192.168.72.100:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 19:44:25.005014 1120587 api_server.go:253] Checking apiserver healthz at https://192.168.72.100:8444/healthz ...
	I0729 19:44:25.023222 1120587 api_server.go:279] https://192.168.72.100:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 19:44:25.023264 1120587 api_server.go:103] status: https://192.168.72.100:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 19:44:25.504748 1120587 api_server.go:253] Checking apiserver healthz at https://192.168.72.100:8444/healthz ...
	I0729 19:44:25.511449 1120587 api_server.go:279] https://192.168.72.100:8444/healthz returned 200:
	ok
	I0729 19:44:25.521987 1120587 api_server.go:141] control plane version: v1.30.3
	I0729 19:44:25.522018 1120587 api_server.go:131] duration metric: took 5.017487159s to wait for apiserver health ...
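The healthz sequence above is a polling loop: a refused connection while the apiserver starts, then 403 and 500 responses while RBAC bootstrap roles and priority classes are still being created, and finally 200 "ok" after about five seconds. A small Go sketch of such a probe loop against the same endpoint; skipping TLS verification and the timeouts chosen here are assumptions for the illustration:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// The apiserver's serving cert is not trusted by this host, so the
	// health probe skips verification.
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}
	url := "https://192.168.72.100:8444/healthz"
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			// 403 and 500 responses like the ones above mean the process is up
			// but still bootstrapping; keep polling until it returns 200 "ok".
			if resp.StatusCode == http.StatusOK {
				fmt.Println("healthz:", string(body))
				return
			}
			fmt.Println("healthz not ready yet:", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for apiserver healthz")
}
```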
	I0729 19:44:25.522029 1120587 cni.go:84] Creating CNI manager for ""
	I0729 19:44:25.522038 1120587 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 19:44:25.523778 1120587 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 19:44:25.524925 1120587 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 19:44:25.541108 1120587 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0729 19:44:25.564225 1120587 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 19:44:25.574600 1120587 system_pods.go:59] 8 kube-system pods found
	I0729 19:44:25.574643 1120587 system_pods.go:61] "coredns-7db6d8ff4d-8mccr" [ce2eb102-1016-4a2d-8dee-561920c01b5a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0729 19:44:25.574664 1120587 system_pods.go:61] "etcd-default-k8s-diff-port-024652" [f3c68e2f-7cef-4afc-bd26-3705afd16f01] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0729 19:44:25.574676 1120587 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-024652" [656786e6-4ca6-45dc-9274-89ca8540c707] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0729 19:44:25.574697 1120587 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-024652" [10b805dd-238a-49a8-8c3f-1c31004d56dd] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0729 19:44:25.574710 1120587 system_pods.go:61] "kube-proxy-l4g78" [c24c5bc0-131b-4d02-a0f1-d398723292eb] Running
	I0729 19:44:25.574717 1120587 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-024652" [5bb2daf3-9a22-4f80-95b6-ded3c31e872e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0729 19:44:25.574725 1120587 system_pods.go:61] "metrics-server-569cc877fc-bvkv6" [247c5a96-5bb3-4174-9219-a96591f53cbb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 19:44:25.574734 1120587 system_pods.go:61] "storage-provisioner" [a4f216b0-055a-4305-a93f-910a9a10e725] Running
	I0729 19:44:25.574744 1120587 system_pods.go:74] duration metric: took 10.494475ms to wait for pod list to return data ...
	I0729 19:44:25.574757 1120587 node_conditions.go:102] verifying NodePressure condition ...
	I0729 19:44:25.577735 1120587 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 19:44:25.577757 1120587 node_conditions.go:123] node cpu capacity is 2
	I0729 19:44:25.577778 1120587 node_conditions.go:105] duration metric: took 3.012688ms to run NodePressure ...
	I0729 19:44:25.577795 1120587 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 19:44:25.851094 1120587 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0729 19:44:25.860023 1120587 kubeadm.go:739] kubelet initialised
	I0729 19:44:25.860050 1120587 kubeadm.go:740] duration metric: took 8.921765ms waiting for restarted kubelet to initialise ...
	I0729 19:44:25.860062 1120587 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 19:44:25.867130 1120587 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-8mccr" in "kube-system" namespace to be "Ready" ...
	I0729 19:44:23.523186 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:23.523741 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | unable to find current IP address of domain old-k8s-version-021528 in network mk-old-k8s-version-021528
	I0729 19:44:23.523783 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | I0729 19:44:23.523684 1121841 retry.go:31] will retry after 2.899059572s: waiting for machine to come up
	I0729 19:44:26.426805 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:26.427211 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | unable to find current IP address of domain old-k8s-version-021528 in network mk-old-k8s-version-021528
	I0729 19:44:26.427267 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | I0729 19:44:26.427152 1121841 retry.go:31] will retry after 3.723478189s: waiting for machine to come up
	I0729 19:44:25.138056 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:44:27.139419 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:44:29.638107 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:44:27.872221 1120587 pod_ready.go:102] pod "coredns-7db6d8ff4d-8mccr" in "kube-system" namespace has status "Ready":"False"
	I0729 19:44:29.873611 1120587 pod_ready.go:102] pod "coredns-7db6d8ff4d-8mccr" in "kube-system" namespace has status "Ready":"False"
	I0729 19:44:31.571895 1119948 start.go:364] duration metric: took 55.319517148s to acquireMachinesLock for "no-preload-843792"
	I0729 19:44:31.571969 1119948 start.go:96] Skipping create...Using existing machine configuration
	I0729 19:44:31.571988 1119948 fix.go:54] fixHost starting: 
	I0729 19:44:31.572421 1119948 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:44:31.572460 1119948 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:44:31.589868 1119948 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43017
	I0729 19:44:31.590253 1119948 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:44:31.590725 1119948 main.go:141] libmachine: Using API Version  1
	I0729 19:44:31.590752 1119948 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:44:31.591088 1119948 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:44:31.591274 1119948 main.go:141] libmachine: (no-preload-843792) Calling .DriverName
	I0729 19:44:31.591398 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetState
	I0729 19:44:31.592878 1119948 fix.go:112] recreateIfNeeded on no-preload-843792: state=Stopped err=<nil>
	I0729 19:44:31.592905 1119948 main.go:141] libmachine: (no-preload-843792) Calling .DriverName
	W0729 19:44:31.593054 1119948 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 19:44:31.594713 1119948 out.go:177] * Restarting existing kvm2 VM for "no-preload-843792" ...
	I0729 19:44:30.152545 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:30.153061 1120970 main.go:141] libmachine: (old-k8s-version-021528) Found IP for machine: 192.168.39.65
	I0729 19:44:30.153088 1120970 main.go:141] libmachine: (old-k8s-version-021528) Reserving static IP address...
	I0729 19:44:30.153101 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has current primary IP address 192.168.39.65 and MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:30.153518 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | found host DHCP lease matching {name: "old-k8s-version-021528", mac: "52:54:00:12:c7:d2", ip: "192.168.39.65"} in network mk-old-k8s-version-021528: {Iface:virbr1 ExpiryTime:2024-07-29 20:44:23 +0000 UTC Type:0 Mac:52:54:00:12:c7:d2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:old-k8s-version-021528 Clientid:01:52:54:00:12:c7:d2}
	I0729 19:44:30.153547 1120970 main.go:141] libmachine: (old-k8s-version-021528) Reserved static IP address: 192.168.39.65
	I0729 19:44:30.153567 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | skip adding static IP to network mk-old-k8s-version-021528 - found existing host DHCP lease matching {name: "old-k8s-version-021528", mac: "52:54:00:12:c7:d2", ip: "192.168.39.65"}
	I0729 19:44:30.153606 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | Getting to WaitForSSH function...
	I0729 19:44:30.153646 1120970 main.go:141] libmachine: (old-k8s-version-021528) Waiting for SSH to be available...
	I0729 19:44:30.155687 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:30.155938 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:c7:d2", ip: ""} in network mk-old-k8s-version-021528: {Iface:virbr1 ExpiryTime:2024-07-29 20:44:23 +0000 UTC Type:0 Mac:52:54:00:12:c7:d2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:old-k8s-version-021528 Clientid:01:52:54:00:12:c7:d2}
	I0729 19:44:30.155968 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined IP address 192.168.39.65 and MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:30.156104 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | Using SSH client type: external
	I0729 19:44:30.156126 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | Using SSH private key: /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/old-k8s-version-021528/id_rsa (-rw-------)
	I0729 19:44:30.156157 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.65 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/old-k8s-version-021528/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 19:44:30.156170 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | About to run SSH command:
	I0729 19:44:30.156179 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | exit 0
	I0729 19:44:30.286787 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | SSH cmd err, output: <nil>: 
	I0729 19:44:30.287161 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetConfigRaw
	I0729 19:44:30.287816 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetIP
	I0729 19:44:30.290268 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:30.290614 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:c7:d2", ip: ""} in network mk-old-k8s-version-021528: {Iface:virbr1 ExpiryTime:2024-07-29 20:44:23 +0000 UTC Type:0 Mac:52:54:00:12:c7:d2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:old-k8s-version-021528 Clientid:01:52:54:00:12:c7:d2}
	I0729 19:44:30.290645 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined IP address 192.168.39.65 and MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:30.290866 1120970 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/old-k8s-version-021528/config.json ...
	I0729 19:44:30.291054 1120970 machine.go:94] provisionDockerMachine start ...
	I0729 19:44:30.291074 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .DriverName
	I0729 19:44:30.291307 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHHostname
	I0729 19:44:30.293399 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:30.293729 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:c7:d2", ip: ""} in network mk-old-k8s-version-021528: {Iface:virbr1 ExpiryTime:2024-07-29 20:44:23 +0000 UTC Type:0 Mac:52:54:00:12:c7:d2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:old-k8s-version-021528 Clientid:01:52:54:00:12:c7:d2}
	I0729 19:44:30.293759 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined IP address 192.168.39.65 and MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:30.293872 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHPort
	I0729 19:44:30.294064 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHKeyPath
	I0729 19:44:30.294228 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHKeyPath
	I0729 19:44:30.294362 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHUsername
	I0729 19:44:30.294510 1120970 main.go:141] libmachine: Using SSH client type: native
	I0729 19:44:30.294729 1120970 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.65 22 <nil> <nil>}
	I0729 19:44:30.294741 1120970 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 19:44:30.406918 1120970 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0729 19:44:30.406947 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetMachineName
	I0729 19:44:30.407214 1120970 buildroot.go:166] provisioning hostname "old-k8s-version-021528"
	I0729 19:44:30.407256 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetMachineName
	I0729 19:44:30.407478 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHHostname
	I0729 19:44:30.410077 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:30.410396 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:c7:d2", ip: ""} in network mk-old-k8s-version-021528: {Iface:virbr1 ExpiryTime:2024-07-29 20:44:23 +0000 UTC Type:0 Mac:52:54:00:12:c7:d2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:old-k8s-version-021528 Clientid:01:52:54:00:12:c7:d2}
	I0729 19:44:30.410421 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined IP address 192.168.39.65 and MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:30.410586 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHPort
	I0729 19:44:30.410766 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHKeyPath
	I0729 19:44:30.410932 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHKeyPath
	I0729 19:44:30.411068 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHUsername
	I0729 19:44:30.411245 1120970 main.go:141] libmachine: Using SSH client type: native
	I0729 19:44:30.411488 1120970 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.65 22 <nil> <nil>}
	I0729 19:44:30.411503 1120970 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-021528 && echo "old-k8s-version-021528" | sudo tee /etc/hostname
	I0729 19:44:30.541004 1120970 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-021528
	
	I0729 19:44:30.541037 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHHostname
	I0729 19:44:30.543946 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:30.544343 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:c7:d2", ip: ""} in network mk-old-k8s-version-021528: {Iface:virbr1 ExpiryTime:2024-07-29 20:44:23 +0000 UTC Type:0 Mac:52:54:00:12:c7:d2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:old-k8s-version-021528 Clientid:01:52:54:00:12:c7:d2}
	I0729 19:44:30.544372 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined IP address 192.168.39.65 and MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:30.544503 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHPort
	I0729 19:44:30.544694 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHKeyPath
	I0729 19:44:30.544856 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHKeyPath
	I0729 19:44:30.545032 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHUsername
	I0729 19:44:30.545233 1120970 main.go:141] libmachine: Using SSH client type: native
	I0729 19:44:30.545409 1120970 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.65 22 <nil> <nil>}
	I0729 19:44:30.545424 1120970 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-021528' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-021528/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-021528' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 19:44:30.665246 1120970 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 19:44:30.665281 1120970 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19312-1055011/.minikube CaCertPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19312-1055011/.minikube}
	I0729 19:44:30.665317 1120970 buildroot.go:174] setting up certificates
	I0729 19:44:30.665328 1120970 provision.go:84] configureAuth start
	I0729 19:44:30.665339 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetMachineName
	I0729 19:44:30.665621 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetIP
	I0729 19:44:30.668162 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:30.668540 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:c7:d2", ip: ""} in network mk-old-k8s-version-021528: {Iface:virbr1 ExpiryTime:2024-07-29 20:44:23 +0000 UTC Type:0 Mac:52:54:00:12:c7:d2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:old-k8s-version-021528 Clientid:01:52:54:00:12:c7:d2}
	I0729 19:44:30.668568 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined IP address 192.168.39.65 and MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:30.668743 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHHostname
	I0729 19:44:30.670898 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:30.671447 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:c7:d2", ip: ""} in network mk-old-k8s-version-021528: {Iface:virbr1 ExpiryTime:2024-07-29 20:44:23 +0000 UTC Type:0 Mac:52:54:00:12:c7:d2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:old-k8s-version-021528 Clientid:01:52:54:00:12:c7:d2}
	I0729 19:44:30.671471 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined IP address 192.168.39.65 and MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:30.671618 1120970 provision.go:143] copyHostCerts
	I0729 19:44:30.671691 1120970 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-1055011/.minikube/cert.pem, removing ...
	I0729 19:44:30.671710 1120970 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-1055011/.minikube/cert.pem
	I0729 19:44:30.671790 1120970 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19312-1055011/.minikube/cert.pem (1123 bytes)
	I0729 19:44:30.671907 1120970 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-1055011/.minikube/key.pem, removing ...
	I0729 19:44:30.671917 1120970 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-1055011/.minikube/key.pem
	I0729 19:44:30.671953 1120970 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19312-1055011/.minikube/key.pem (1679 bytes)
	I0729 19:44:30.672043 1120970 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.pem, removing ...
	I0729 19:44:30.672052 1120970 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.pem
	I0729 19:44:30.672085 1120970 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.pem (1082 bytes)
	I0729 19:44:30.672166 1120970 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-021528 san=[127.0.0.1 192.168.39.65 localhost minikube old-k8s-version-021528]
	I0729 19:44:30.888016 1120970 provision.go:177] copyRemoteCerts
	I0729 19:44:30.888072 1120970 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 19:44:30.888115 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHHostname
	I0729 19:44:30.890739 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:30.891115 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:c7:d2", ip: ""} in network mk-old-k8s-version-021528: {Iface:virbr1 ExpiryTime:2024-07-29 20:44:23 +0000 UTC Type:0 Mac:52:54:00:12:c7:d2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:old-k8s-version-021528 Clientid:01:52:54:00:12:c7:d2}
	I0729 19:44:30.891148 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined IP address 192.168.39.65 and MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:30.891288 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHPort
	I0729 19:44:30.891499 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHKeyPath
	I0729 19:44:30.891689 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHUsername
	I0729 19:44:30.891862 1120970 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/old-k8s-version-021528/id_rsa Username:docker}
	I0729 19:44:30.976898 1120970 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0729 19:44:31.000793 1120970 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0729 19:44:31.024837 1120970 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0729 19:44:31.048325 1120970 provision.go:87] duration metric: took 382.981006ms to configureAuth
	I0729 19:44:31.048358 1120970 buildroot.go:189] setting minikube options for container-runtime
	I0729 19:44:31.048560 1120970 config.go:182] Loaded profile config "old-k8s-version-021528": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0729 19:44:31.048640 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHHostname
	I0729 19:44:31.051230 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:31.051576 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:c7:d2", ip: ""} in network mk-old-k8s-version-021528: {Iface:virbr1 ExpiryTime:2024-07-29 20:44:23 +0000 UTC Type:0 Mac:52:54:00:12:c7:d2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:old-k8s-version-021528 Clientid:01:52:54:00:12:c7:d2}
	I0729 19:44:31.051605 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined IP address 192.168.39.65 and MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:31.051754 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHPort
	I0729 19:44:31.051994 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHKeyPath
	I0729 19:44:31.052191 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHKeyPath
	I0729 19:44:31.052368 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHUsername
	I0729 19:44:31.052568 1120970 main.go:141] libmachine: Using SSH client type: native
	I0729 19:44:31.052828 1120970 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.65 22 <nil> <nil>}
	I0729 19:44:31.052853 1120970 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 19:44:31.320227 1120970 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 19:44:31.320259 1120970 machine.go:97] duration metric: took 1.0291903s to provisionDockerMachine
	I0729 19:44:31.320276 1120970 start.go:293] postStartSetup for "old-k8s-version-021528" (driver="kvm2")
	I0729 19:44:31.320297 1120970 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 19:44:31.320335 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .DriverName
	I0729 19:44:31.320669 1120970 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 19:44:31.320702 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHHostname
	I0729 19:44:31.323379 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:31.323774 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:c7:d2", ip: ""} in network mk-old-k8s-version-021528: {Iface:virbr1 ExpiryTime:2024-07-29 20:44:23 +0000 UTC Type:0 Mac:52:54:00:12:c7:d2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:old-k8s-version-021528 Clientid:01:52:54:00:12:c7:d2}
	I0729 19:44:31.323807 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined IP address 192.168.39.65 and MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:31.323903 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHPort
	I0729 19:44:31.324112 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHKeyPath
	I0729 19:44:31.324291 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHUsername
	I0729 19:44:31.324431 1120970 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/old-k8s-version-021528/id_rsa Username:docker}
	I0729 19:44:31.415208 1120970 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 19:44:31.419884 1120970 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 19:44:31.419911 1120970 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-1055011/.minikube/addons for local assets ...
	I0729 19:44:31.419981 1120970 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-1055011/.minikube/files for local assets ...
	I0729 19:44:31.420093 1120970 filesync.go:149] local asset: /home/jenkins/minikube-integration/19312-1055011/.minikube/files/etc/ssl/certs/10622722.pem -> 10622722.pem in /etc/ssl/certs
	I0729 19:44:31.420214 1120970 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 19:44:31.431055 1120970 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/files/etc/ssl/certs/10622722.pem --> /etc/ssl/certs/10622722.pem (1708 bytes)
	I0729 19:44:31.454082 1120970 start.go:296] duration metric: took 133.793908ms for postStartSetup
	I0729 19:44:31.454120 1120970 fix.go:56] duration metric: took 20.034560069s for fixHost
	I0729 19:44:31.454147 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHHostname
	I0729 19:44:31.456757 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:31.457097 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:c7:d2", ip: ""} in network mk-old-k8s-version-021528: {Iface:virbr1 ExpiryTime:2024-07-29 20:44:23 +0000 UTC Type:0 Mac:52:54:00:12:c7:d2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:old-k8s-version-021528 Clientid:01:52:54:00:12:c7:d2}
	I0729 19:44:31.457130 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined IP address 192.168.39.65 and MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:31.457284 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHPort
	I0729 19:44:31.457528 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHKeyPath
	I0729 19:44:31.457737 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHKeyPath
	I0729 19:44:31.457853 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHUsername
	I0729 19:44:31.458027 1120970 main.go:141] libmachine: Using SSH client type: native
	I0729 19:44:31.458189 1120970 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.65 22 <nil> <nil>}
	I0729 19:44:31.458199 1120970 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0729 19:44:31.571713 1120970 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722282271.544930204
	
	I0729 19:44:31.571744 1120970 fix.go:216] guest clock: 1722282271.544930204
	I0729 19:44:31.571758 1120970 fix.go:229] Guest: 2024-07-29 19:44:31.544930204 +0000 UTC Remote: 2024-07-29 19:44:31.454125155 +0000 UTC m=+213.509073295 (delta=90.805049ms)
	I0729 19:44:31.571785 1120970 fix.go:200] guest clock delta is within tolerance: 90.805049ms
	I0729 19:44:31.571791 1120970 start.go:83] releasing machines lock for "old-k8s-version-021528", held for 20.152267504s
	I0729 19:44:31.571817 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .DriverName
	I0729 19:44:31.572102 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetIP
	I0729 19:44:31.575385 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:31.575790 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:c7:d2", ip: ""} in network mk-old-k8s-version-021528: {Iface:virbr1 ExpiryTime:2024-07-29 20:44:23 +0000 UTC Type:0 Mac:52:54:00:12:c7:d2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:old-k8s-version-021528 Clientid:01:52:54:00:12:c7:d2}
	I0729 19:44:31.575815 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined IP address 192.168.39.65 and MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:31.576012 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .DriverName
	I0729 19:44:31.576508 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .DriverName
	I0729 19:44:31.576692 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .DriverName
	I0729 19:44:31.576786 1120970 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 19:44:31.576828 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHHostname
	I0729 19:44:31.576918 1120970 ssh_runner.go:195] Run: cat /version.json
	I0729 19:44:31.576940 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHHostname
	I0729 19:44:31.579737 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:31.579994 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:31.580091 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:c7:d2", ip: ""} in network mk-old-k8s-version-021528: {Iface:virbr1 ExpiryTime:2024-07-29 20:44:23 +0000 UTC Type:0 Mac:52:54:00:12:c7:d2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:old-k8s-version-021528 Clientid:01:52:54:00:12:c7:d2}
	I0729 19:44:31.580130 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined IP address 192.168.39.65 and MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:31.580379 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:c7:d2", ip: ""} in network mk-old-k8s-version-021528: {Iface:virbr1 ExpiryTime:2024-07-29 20:44:23 +0000 UTC Type:0 Mac:52:54:00:12:c7:d2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:old-k8s-version-021528 Clientid:01:52:54:00:12:c7:d2}
	I0729 19:44:31.580409 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined IP address 192.168.39.65 and MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:31.580418 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHPort
	I0729 19:44:31.580577 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHPort
	I0729 19:44:31.580661 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHKeyPath
	I0729 19:44:31.580838 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHKeyPath
	I0729 19:44:31.580879 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHUsername
	I0729 19:44:31.581025 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHUsername
	I0729 19:44:31.581021 1120970 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/old-k8s-version-021528/id_rsa Username:docker}
	I0729 19:44:31.581164 1120970 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/old-k8s-version-021528/id_rsa Username:docker}
	I0729 19:44:31.682902 1120970 ssh_runner.go:195] Run: systemctl --version
	I0729 19:44:31.688675 1120970 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 19:44:31.836374 1120970 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 19:44:31.844215 1120970 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 19:44:31.844275 1120970 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 19:44:31.864647 1120970 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 19:44:31.864671 1120970 start.go:495] detecting cgroup driver to use...
	I0729 19:44:31.864744 1120970 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 19:44:31.881197 1120970 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 19:44:31.895022 1120970 docker.go:217] disabling cri-docker service (if available) ...
	I0729 19:44:31.895085 1120970 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 19:44:31.908584 1120970 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 19:44:31.922321 1120970 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 19:44:32.039427 1120970 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 19:44:32.203236 1120970 docker.go:233] disabling docker service ...
	I0729 19:44:32.203335 1120970 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 19:44:32.217523 1120970 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 19:44:32.236065 1120970 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 19:44:32.355769 1120970 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 19:44:32.473160 1120970 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 19:44:32.486314 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 19:44:32.504270 1120970 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0729 19:44:32.504359 1120970 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:44:32.514928 1120970 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 19:44:32.514995 1120970 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:44:32.528822 1120970 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:44:32.543599 1120970 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:44:32.555853 1120970 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 19:44:32.568184 1120970 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 19:44:32.577443 1120970 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 19:44:32.577580 1120970 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 19:44:32.590636 1120970 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 19:44:32.600995 1120970 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 19:44:32.739544 1120970 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 19:44:32.886433 1120970 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 19:44:32.886507 1120970 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 19:44:32.892072 1120970 start.go:563] Will wait 60s for crictl version
	I0729 19:44:32.892137 1120970 ssh_runner.go:195] Run: which crictl
	I0729 19:44:32.896003 1120970 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 19:44:32.939843 1120970 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 19:44:32.939934 1120970 ssh_runner.go:195] Run: crio --version
	I0729 19:44:32.968301 1120970 ssh_runner.go:195] Run: crio --version
	I0729 19:44:32.995612 1120970 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0729 19:44:31.595855 1119948 main.go:141] libmachine: (no-preload-843792) Calling .Start
	I0729 19:44:31.596024 1119948 main.go:141] libmachine: (no-preload-843792) Ensuring networks are active...
	I0729 19:44:31.596802 1119948 main.go:141] libmachine: (no-preload-843792) Ensuring network default is active
	I0729 19:44:31.597159 1119948 main.go:141] libmachine: (no-preload-843792) Ensuring network mk-no-preload-843792 is active
	I0729 19:44:31.597570 1119948 main.go:141] libmachine: (no-preload-843792) Getting domain xml...
	I0729 19:44:31.598244 1119948 main.go:141] libmachine: (no-preload-843792) Creating domain...
	I0729 19:44:32.903649 1119948 main.go:141] libmachine: (no-preload-843792) Waiting to get IP...
	I0729 19:44:32.904535 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:32.905024 1119948 main.go:141] libmachine: (no-preload-843792) DBG | unable to find current IP address of domain no-preload-843792 in network mk-no-preload-843792
	I0729 19:44:32.905113 1119948 main.go:141] libmachine: (no-preload-843792) DBG | I0729 19:44:32.904992 1122027 retry.go:31] will retry after 213.578895ms: waiting for machine to come up
	I0729 19:44:33.120474 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:33.120922 1119948 main.go:141] libmachine: (no-preload-843792) DBG | unable to find current IP address of domain no-preload-843792 in network mk-no-preload-843792
	I0729 19:44:33.121007 1119948 main.go:141] libmachine: (no-preload-843792) DBG | I0729 19:44:33.120907 1122027 retry.go:31] will retry after 265.999253ms: waiting for machine to come up
	I0729 19:44:33.388577 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:33.389007 1119948 main.go:141] libmachine: (no-preload-843792) DBG | unable to find current IP address of domain no-preload-843792 in network mk-no-preload-843792
	I0729 19:44:33.389026 1119948 main.go:141] libmachine: (no-preload-843792) DBG | I0729 19:44:33.388967 1122027 retry.go:31] will retry after 393.491378ms: waiting for machine to come up
	I0729 19:44:31.639857 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:44:34.139327 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:44:31.874661 1120587 pod_ready.go:102] pod "coredns-7db6d8ff4d-8mccr" in "kube-system" namespace has status "Ready":"False"
	I0729 19:44:33.875758 1120587 pod_ready.go:102] pod "coredns-7db6d8ff4d-8mccr" in "kube-system" namespace has status "Ready":"False"
	I0729 19:44:35.875952 1120587 pod_ready.go:102] pod "coredns-7db6d8ff4d-8mccr" in "kube-system" namespace has status "Ready":"False"
	I0729 19:44:32.996971 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetIP
	I0729 19:44:33.000232 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:33.000668 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:c7:d2", ip: ""} in network mk-old-k8s-version-021528: {Iface:virbr1 ExpiryTime:2024-07-29 20:44:23 +0000 UTC Type:0 Mac:52:54:00:12:c7:d2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:old-k8s-version-021528 Clientid:01:52:54:00:12:c7:d2}
	I0729 19:44:33.000694 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined IP address 192.168.39.65 and MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:33.000856 1120970 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0729 19:44:33.005258 1120970 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 19:44:33.018698 1120970 kubeadm.go:883] updating cluster {Name:old-k8s-version-021528 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-021528 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.65 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 19:44:33.018840 1120970 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0729 19:44:33.018934 1120970 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 19:44:33.089122 1120970 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0729 19:44:33.089197 1120970 ssh_runner.go:195] Run: which lz4
	I0729 19:44:33.093346 1120970 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0729 19:44:33.097766 1120970 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0729 19:44:33.097802 1120970 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0729 19:44:34.739542 1120970 crio.go:462] duration metric: took 1.646235601s to copy over tarball
	I0729 19:44:34.739647 1120970 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0729 19:44:37.734665 1120970 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.994963407s)
	I0729 19:44:37.734702 1120970 crio.go:469] duration metric: took 2.995126134s to extract the tarball
	I0729 19:44:37.734712 1120970 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0729 19:44:37.781443 1120970 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 19:44:37.820392 1120970 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0729 19:44:37.820426 1120970 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0729 19:44:37.820531 1120970 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 19:44:37.820610 1120970 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0729 19:44:37.820708 1120970 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0729 19:44:37.820536 1120970 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 19:44:37.820560 1120970 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0729 19:44:37.820541 1120970 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0729 19:44:37.820573 1120970 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0729 19:44:37.820587 1120970 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0729 19:44:37.822301 1120970 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 19:44:37.822309 1120970 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0729 19:44:37.822313 1120970 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0729 19:44:37.822326 1120970 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0729 19:44:37.822397 1120970 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0729 19:44:37.822432 1120970 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0729 19:44:37.822438 1120970 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0729 19:44:37.822301 1120970 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 19:44:33.785078 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:33.785626 1119948 main.go:141] libmachine: (no-preload-843792) DBG | unable to find current IP address of domain no-preload-843792 in network mk-no-preload-843792
	I0729 19:44:33.785654 1119948 main.go:141] libmachine: (no-preload-843792) DBG | I0729 19:44:33.785530 1122027 retry.go:31] will retry after 411.274676ms: waiting for machine to come up
	I0729 19:44:34.198884 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:34.199471 1119948 main.go:141] libmachine: (no-preload-843792) DBG | unable to find current IP address of domain no-preload-843792 in network mk-no-preload-843792
	I0729 19:44:34.199512 1119948 main.go:141] libmachine: (no-preload-843792) DBG | I0729 19:44:34.199421 1122027 retry.go:31] will retry after 600.076128ms: waiting for machine to come up
	I0729 19:44:34.801378 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:34.801839 1119948 main.go:141] libmachine: (no-preload-843792) DBG | unable to find current IP address of domain no-preload-843792 in network mk-no-preload-843792
	I0729 19:44:34.801869 1119948 main.go:141] libmachine: (no-preload-843792) DBG | I0729 19:44:34.801792 1122027 retry.go:31] will retry after 948.350912ms: waiting for machine to come up
	I0729 19:44:35.751533 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:35.752085 1119948 main.go:141] libmachine: (no-preload-843792) DBG | unable to find current IP address of domain no-preload-843792 in network mk-no-preload-843792
	I0729 19:44:35.752110 1119948 main.go:141] libmachine: (no-preload-843792) DBG | I0729 19:44:35.752021 1122027 retry.go:31] will retry after 1.166250352s: waiting for machine to come up
	I0729 19:44:36.919771 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:36.920240 1119948 main.go:141] libmachine: (no-preload-843792) DBG | unable to find current IP address of domain no-preload-843792 in network mk-no-preload-843792
	I0729 19:44:36.920271 1119948 main.go:141] libmachine: (no-preload-843792) DBG | I0729 19:44:36.920184 1122027 retry.go:31] will retry after 1.061620812s: waiting for machine to come up
	I0729 19:44:37.983051 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:37.983501 1119948 main.go:141] libmachine: (no-preload-843792) DBG | unable to find current IP address of domain no-preload-843792 in network mk-no-preload-843792
	I0729 19:44:37.983528 1119948 main.go:141] libmachine: (no-preload-843792) DBG | I0729 19:44:37.983453 1122027 retry.go:31] will retry after 1.814167152s: waiting for machine to come up
	I0729 19:44:36.140059 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:44:38.642436 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:44:37.873768 1120587 pod_ready.go:92] pod "coredns-7db6d8ff4d-8mccr" in "kube-system" namespace has status "Ready":"True"
	I0729 19:44:37.873792 1120587 pod_ready.go:81] duration metric: took 12.006637701s for pod "coredns-7db6d8ff4d-8mccr" in "kube-system" namespace to be "Ready" ...
	I0729 19:44:37.873804 1120587 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-024652" in "kube-system" namespace to be "Ready" ...
	I0729 19:44:37.879758 1120587 pod_ready.go:92] pod "etcd-default-k8s-diff-port-024652" in "kube-system" namespace has status "Ready":"True"
	I0729 19:44:37.879787 1120587 pod_ready.go:81] duration metric: took 5.974837ms for pod "etcd-default-k8s-diff-port-024652" in "kube-system" namespace to be "Ready" ...
	I0729 19:44:37.879799 1120587 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-024652" in "kube-system" namespace to be "Ready" ...
	I0729 19:44:37.885027 1120587 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-024652" in "kube-system" namespace has status "Ready":"True"
	I0729 19:44:37.885051 1120587 pod_ready.go:81] duration metric: took 5.244169ms for pod "kube-apiserver-default-k8s-diff-port-024652" in "kube-system" namespace to be "Ready" ...
	I0729 19:44:37.885064 1120587 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-024652" in "kube-system" namespace to be "Ready" ...
	I0729 19:44:37.890208 1120587 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-024652" in "kube-system" namespace has status "Ready":"True"
	I0729 19:44:37.890224 1120587 pod_ready.go:81] duration metric: took 5.152571ms for pod "kube-controller-manager-default-k8s-diff-port-024652" in "kube-system" namespace to be "Ready" ...
	I0729 19:44:37.890232 1120587 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-l4g78" in "kube-system" namespace to be "Ready" ...
	I0729 19:44:37.894663 1120587 pod_ready.go:92] pod "kube-proxy-l4g78" in "kube-system" namespace has status "Ready":"True"
	I0729 19:44:37.894682 1120587 pod_ready.go:81] duration metric: took 4.444758ms for pod "kube-proxy-l4g78" in "kube-system" namespace to be "Ready" ...
	I0729 19:44:37.894691 1120587 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-024652" in "kube-system" namespace to be "Ready" ...
	I0729 19:44:38.272098 1120587 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-024652" in "kube-system" namespace has status "Ready":"True"
	I0729 19:44:38.272127 1120587 pod_ready.go:81] duration metric: took 377.428879ms for pod "kube-scheduler-default-k8s-diff-port-024652" in "kube-system" namespace to be "Ready" ...
	I0729 19:44:38.272141 1120587 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace to be "Ready" ...
	I0729 19:44:40.279623 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:44:37.982782 1120970 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 19:44:37.994565 1120970 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0729 19:44:37.997227 1120970 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0729 19:44:37.997536 1120970 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0729 19:44:38.011221 1120970 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0729 19:44:38.028869 1120970 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0729 19:44:38.031221 1120970 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0729 19:44:38.054537 1120970 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0729 19:44:38.054599 1120970 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 19:44:38.054660 1120970 ssh_runner.go:195] Run: which crictl
	I0729 19:44:38.104843 1120970 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 19:44:38.182008 1120970 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0729 19:44:38.182064 1120970 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0729 19:44:38.182063 1120970 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0729 19:44:38.182113 1120970 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0729 19:44:38.182118 1120970 ssh_runner.go:195] Run: which crictl
	I0729 19:44:38.182161 1120970 ssh_runner.go:195] Run: which crictl
	I0729 19:44:38.190604 1120970 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0729 19:44:38.190629 1120970 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0729 19:44:38.190652 1120970 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0729 19:44:38.190663 1120970 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0729 19:44:38.190703 1120970 ssh_runner.go:195] Run: which crictl
	I0729 19:44:38.190710 1120970 ssh_runner.go:195] Run: which crictl
	I0729 19:44:38.197293 1120970 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0729 19:44:38.197328 1120970 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0729 19:44:38.197364 1120970 ssh_runner.go:195] Run: which crictl
	I0729 19:44:38.226035 1120970 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 19:44:38.228343 1120970 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0729 19:44:38.228420 1120970 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0729 19:44:38.228467 1120970 ssh_runner.go:195] Run: which crictl
	I0729 19:44:38.335524 1120970 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0729 19:44:38.335607 1120970 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0729 19:44:38.335627 1120970 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0729 19:44:38.335696 1120970 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0729 19:44:38.335705 1120970 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0729 19:44:38.335790 1120970 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 19:44:38.335866 1120970 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0729 19:44:38.483885 1120970 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0729 19:44:38.483976 1120970 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0729 19:44:38.483926 1120970 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0729 19:44:38.484028 1120970 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0729 19:44:38.487155 1120970 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0729 19:44:38.487223 1120970 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 19:44:38.487241 1120970 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0729 19:44:38.635433 1120970 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0729 19:44:38.649661 1120970 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0729 19:44:38.649751 1120970 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0729 19:44:38.649769 1120970 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0729 19:44:38.649831 1120970 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0729 19:44:38.649921 1120970 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0729 19:44:38.649958 1120970 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0729 19:44:38.783607 1120970 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0729 19:44:38.783694 1120970 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0729 19:44:38.783605 1120970 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0729 19:44:38.791756 1120970 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0729 19:44:38.791863 1120970 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0729 19:44:38.791892 1120970 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0729 19:44:38.791939 1120970 cache_images.go:92] duration metric: took 971.499203ms to LoadCachedImages
	W0729 19:44:38.792037 1120970 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
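	The podman inspect / crictl rmi sequence above is the cache check: if a required image tag is absent, or its ID differs from the pinned hash, the tag is removed so the image can be reloaded from minikube's on-disk cache. A minimal sketch of that check for one image (hash taken from the kube-proxy line above; minikube runs these over SSH rather than as a local script):

	IMG="registry.k8s.io/kube-proxy:v1.20.0"
	WANT="10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc"   # pinned hash from the log
	HAVE=$(sudo podman image inspect --format '{{.Id}}' "$IMG" 2>/dev/null)
	if [ "$HAVE" != "$WANT" ]; then
	  CRICTL=$(which crictl)            # the log resolves crictl first ("which crictl")
	  sudo "$CRICTL" rmi "$IMG"         # drop the stale or missing tag before loading from cache
	fi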
	I0729 19:44:38.792054 1120970 kubeadm.go:934] updating node { 192.168.39.65 8443 v1.20.0 crio true true} ...
	I0729 19:44:38.792200 1120970 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-021528 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.65
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-021528 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 19:44:38.792313 1120970 ssh_runner.go:195] Run: crio config
	I0729 19:44:38.841459 1120970 cni.go:84] Creating CNI manager for ""
	I0729 19:44:38.841484 1120970 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 19:44:38.841496 1120970 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 19:44:38.841515 1120970 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.65 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-021528 NodeName:old-k8s-version-021528 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.65"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.65 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt Stati
cPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0729 19:44:38.841678 1120970 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.65
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-021528"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.65
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.65"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0729 19:44:38.841743 1120970 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0729 19:44:38.852338 1120970 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 19:44:38.852412 1120970 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 19:44:38.862150 1120970 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0729 19:44:38.881108 1120970 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 19:44:38.899034 1120970 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0729 19:44:38.917965 1120970 ssh_runner.go:195] Run: grep 192.168.39.65	control-plane.minikube.internal$ /etc/hosts
	I0729 19:44:38.922064 1120970 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.65	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 19:44:38.935009 1120970 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 19:44:39.058886 1120970 ssh_runner.go:195] Run: sudo systemctl start kubelet
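	The three scp'd payloads above are the kubelet drop-in, the kubelet unit, and the new kubeadm.yaml; together with the /etc/hosts pin and the daemon-reload/start that follow, the effect on the node is roughly the following (a condensed sketch; the drop-in content is taken from the kubelet flags printed earlier in this log):

	sudo mkdir -p /etc/systemd/system/kubelet.service.d
	sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf >/dev/null <<'EOF'
	[Unit]
	Wants=crio.service

	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf \
	  --config=/var/lib/kubelet/config.yaml --container-runtime=remote \
	  --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-021528 \
	  --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.65
	EOF
	sudo systemctl daemon-reload
	sudo systemctl start kubelet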
	I0729 19:44:39.078830 1120970 certs.go:68] Setting up /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/old-k8s-version-021528 for IP: 192.168.39.65
	I0729 19:44:39.078902 1120970 certs.go:194] generating shared ca certs ...
	I0729 19:44:39.078943 1120970 certs.go:226] acquiring lock for ca certs: {Name:mkd1f0b3d7e82ac23e713dd6b75409e103935b02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 19:44:39.079139 1120970 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.key
	I0729 19:44:39.079228 1120970 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/proxy-client-ca.key
	I0729 19:44:39.079243 1120970 certs.go:256] generating profile certs ...
	I0729 19:44:39.079418 1120970 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/old-k8s-version-021528/client.key
	I0729 19:44:39.079517 1120970 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/old-k8s-version-021528/apiserver.key.1bfec4c5
	I0729 19:44:39.079603 1120970 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/old-k8s-version-021528/proxy-client.key
	I0729 19:44:39.079814 1120970 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/1062272.pem (1338 bytes)
	W0729 19:44:39.079899 1120970 certs.go:480] ignoring /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/1062272_empty.pem, impossibly tiny 0 bytes
	I0729 19:44:39.079924 1120970 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 19:44:39.079974 1120970 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem (1082 bytes)
	I0729 19:44:39.080079 1120970 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/cert.pem (1123 bytes)
	I0729 19:44:39.080137 1120970 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/key.pem (1679 bytes)
	I0729 19:44:39.080230 1120970 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/files/etc/ssl/certs/10622722.pem (1708 bytes)
	I0729 19:44:39.081417 1120970 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 19:44:39.117623 1120970 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0729 19:44:39.163823 1120970 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 19:44:39.198978 1120970 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0729 19:44:39.229583 1120970 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/old-k8s-version-021528/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0729 19:44:39.270285 1120970 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/old-k8s-version-021528/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0729 19:44:39.320906 1120970 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/old-k8s-version-021528/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 19:44:39.358597 1120970 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/old-k8s-version-021528/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0729 19:44:39.384152 1120970 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/1062272.pem --> /usr/share/ca-certificates/1062272.pem (1338 bytes)
	I0729 19:44:39.409176 1120970 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/files/etc/ssl/certs/10622722.pem --> /usr/share/ca-certificates/10622722.pem (1708 bytes)
	I0729 19:44:39.434095 1120970 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 19:44:39.473901 1120970 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 19:44:39.493117 1120970 ssh_runner.go:195] Run: openssl version
	I0729 19:44:39.499390 1120970 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1062272.pem && ln -fs /usr/share/ca-certificates/1062272.pem /etc/ssl/certs/1062272.pem"
	I0729 19:44:39.513884 1120970 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1062272.pem
	I0729 19:44:39.519775 1120970 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 18:30 /usr/share/ca-certificates/1062272.pem
	I0729 19:44:39.519841 1120970 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1062272.pem
	I0729 19:44:39.526146 1120970 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1062272.pem /etc/ssl/certs/51391683.0"
	I0729 19:44:39.538303 1120970 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10622722.pem && ln -fs /usr/share/ca-certificates/10622722.pem /etc/ssl/certs/10622722.pem"
	I0729 19:44:39.549569 1120970 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10622722.pem
	I0729 19:44:39.554063 1120970 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 18:30 /usr/share/ca-certificates/10622722.pem
	I0729 19:44:39.554125 1120970 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10622722.pem
	I0729 19:44:39.560167 1120970 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10622722.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 19:44:39.572332 1120970 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 19:44:39.583635 1120970 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 19:44:39.588045 1120970 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 18:17 /usr/share/ca-certificates/minikubeCA.pem
	I0729 19:44:39.588126 1120970 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 19:44:39.594105 1120970 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
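	The hashing/symlink sequence above wires each PEM into OpenSSL's CA lookup scheme: openssl x509 -hash prints the subject hash, and /etc/ssl/certs/<hash>.0 must point at the certificate. Condensed for the minikubeCA case (hash value as reported above):

	PEM=/usr/share/ca-certificates/minikubeCA.pem
	HASH=$(openssl x509 -hash -noout -in "$PEM")    # b5213941 for this CA
	sudo ln -fs "$PEM" "/etc/ssl/certs/${HASH}.0"   # OpenSSL resolves CAs by <subject-hash>.0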
	I0729 19:44:39.605557 1120970 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 19:44:39.610321 1120970 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 19:44:39.616786 1120970 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 19:44:39.622941 1120970 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 19:44:39.629109 1120970 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 19:44:39.636558 1120970 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 19:44:39.643073 1120970 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
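	Each -checkend 86400 run above asks whether the certificate remains valid for at least another 86400 seconds (24 hours); a non-zero exit status is what would trigger regeneration. For example, against one of the paths checked above:

	openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
	  && echo "valid for >= 24h" || echo "expires within 24h"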
	I0729 19:44:39.648878 1120970 kubeadm.go:392] StartCluster: {Name:old-k8s-version-021528 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.20.0 ClusterName:old-k8s-version-021528 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.65 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false
MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 19:44:39.648982 1120970 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 19:44:39.649027 1120970 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 19:44:39.690983 1120970 cri.go:89] found id: ""
	I0729 19:44:39.691075 1120970 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 19:44:39.701985 1120970 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0729 19:44:39.702004 1120970 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0729 19:44:39.702052 1120970 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0729 19:44:39.712284 1120970 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0729 19:44:39.713416 1120970 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-021528" does not appear in /home/jenkins/minikube-integration/19312-1055011/kubeconfig
	I0729 19:44:39.714247 1120970 kubeconfig.go:62] /home/jenkins/minikube-integration/19312-1055011/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-021528" cluster setting kubeconfig missing "old-k8s-version-021528" context setting]
	I0729 19:44:39.715298 1120970 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-1055011/kubeconfig: {Name:mkf834b33d9b214f3561db5b8f8958d26700afbb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 19:44:39.762122 1120970 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0729 19:44:39.773851 1120970 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.65
	I0729 19:44:39.773894 1120970 kubeadm.go:1160] stopping kube-system containers ...
	I0729 19:44:39.773910 1120970 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0729 19:44:39.773968 1120970 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 19:44:39.820190 1120970 cri.go:89] found id: ""
	I0729 19:44:39.820273 1120970 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0729 19:44:39.838497 1120970 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 19:44:39.849060 1120970 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 19:44:39.849087 1120970 kubeadm.go:157] found existing configuration files:
	
	I0729 19:44:39.849142 1120970 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 19:44:39.858834 1120970 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 19:44:39.858920 1120970 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 19:44:39.869962 1120970 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 19:44:39.879690 1120970 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 19:44:39.879754 1120970 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 19:44:39.889334 1120970 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 19:44:39.900671 1120970 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 19:44:39.900789 1120970 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 19:44:39.910365 1120970 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 19:44:39.920056 1120970 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 19:44:39.920119 1120970 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
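	The four grep/rm pairs above implement one rule: any kubeconfig under /etc/kubernetes that does not point at https://control-plane.minikube.internal:8443 is treated as stale and removed (here every grep exits 2 because the files do not exist yet). As a loop, the check is roughly:

	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
	    || sudo rm -f "/etc/kubernetes/$f"    # drop configs that point at the wrong endpoint (or are missing)
	done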
	I0729 19:44:39.929792 1120970 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 19:44:39.939719 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 19:44:40.078003 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 19:44:40.827477 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0729 19:44:41.064614 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 19:44:41.168296 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0729 19:44:41.280875 1120970 api_server.go:52] waiting for apiserver process to appear ...
	I0729 19:44:41.280964 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:41.781878 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:42.281683 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:42.781105 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:39.799833 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:39.800226 1119948 main.go:141] libmachine: (no-preload-843792) DBG | unable to find current IP address of domain no-preload-843792 in network mk-no-preload-843792
	I0729 19:44:39.800256 1119948 main.go:141] libmachine: (no-preload-843792) DBG | I0729 19:44:39.800187 1122027 retry.go:31] will retry after 1.661406441s: waiting for machine to come up
	I0729 19:44:41.464164 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:41.464664 1119948 main.go:141] libmachine: (no-preload-843792) DBG | unable to find current IP address of domain no-preload-843792 in network mk-no-preload-843792
	I0729 19:44:41.464704 1119948 main.go:141] libmachine: (no-preload-843792) DBG | I0729 19:44:41.464586 1122027 retry.go:31] will retry after 2.292148862s: waiting for machine to come up
	I0729 19:44:41.139627 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:44:43.640525 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:44:42.780035 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:44:45.278957 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:44:43.281753 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:43.781580 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:44.281856 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:44.781202 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:45.281035 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:45.781637 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:46.281414 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:46.781327 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:47.281665 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:47.782033 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:43.759566 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:43.760021 1119948 main.go:141] libmachine: (no-preload-843792) DBG | unable to find current IP address of domain no-preload-843792 in network mk-no-preload-843792
	I0729 19:44:43.760080 1119948 main.go:141] libmachine: (no-preload-843792) DBG | I0729 19:44:43.759994 1122027 retry.go:31] will retry after 3.005985721s: waiting for machine to come up
	I0729 19:44:46.767337 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:46.767822 1119948 main.go:141] libmachine: (no-preload-843792) DBG | unable to find current IP address of domain no-preload-843792 in network mk-no-preload-843792
	I0729 19:44:46.767852 1119948 main.go:141] libmachine: (no-preload-843792) DBG | I0729 19:44:46.767767 1122027 retry.go:31] will retry after 3.516453969s: waiting for machine to come up
	I0729 19:44:46.138988 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:44:48.637828 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:44:47.778809 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:44:50.278817 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:44:48.281371 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:48.781991 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:49.281260 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:49.782025 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:50.281498 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:50.781863 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:51.281653 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:51.781015 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:52.281638 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:52.782023 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:50.287884 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:50.288381 1119948 main.go:141] libmachine: (no-preload-843792) Found IP for machine: 192.168.50.248
	I0729 19:44:50.288402 1119948 main.go:141] libmachine: (no-preload-843792) Reserving static IP address...
	I0729 19:44:50.288417 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has current primary IP address 192.168.50.248 and MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:50.288858 1119948 main.go:141] libmachine: (no-preload-843792) DBG | found host DHCP lease matching {name: "no-preload-843792", mac: "52:54:00:ae:0e:8c", ip: "192.168.50.248"} in network mk-no-preload-843792: {Iface:virbr2 ExpiryTime:2024-07-29 20:44:43 +0000 UTC Type:0 Mac:52:54:00:ae:0e:8c Iaid: IPaddr:192.168.50.248 Prefix:24 Hostname:no-preload-843792 Clientid:01:52:54:00:ae:0e:8c}
	I0729 19:44:50.288891 1119948 main.go:141] libmachine: (no-preload-843792) DBG | skip adding static IP to network mk-no-preload-843792 - found existing host DHCP lease matching {name: "no-preload-843792", mac: "52:54:00:ae:0e:8c", ip: "192.168.50.248"}
	I0729 19:44:50.288905 1119948 main.go:141] libmachine: (no-preload-843792) Reserved static IP address: 192.168.50.248
	I0729 19:44:50.288921 1119948 main.go:141] libmachine: (no-preload-843792) Waiting for SSH to be available...
	I0729 19:44:50.288937 1119948 main.go:141] libmachine: (no-preload-843792) DBG | Getting to WaitForSSH function...
	I0729 19:44:50.291447 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:50.291802 1119948 main.go:141] libmachine: (no-preload-843792) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:0e:8c", ip: ""} in network mk-no-preload-843792: {Iface:virbr2 ExpiryTime:2024-07-29 20:44:43 +0000 UTC Type:0 Mac:52:54:00:ae:0e:8c Iaid: IPaddr:192.168.50.248 Prefix:24 Hostname:no-preload-843792 Clientid:01:52:54:00:ae:0e:8c}
	I0729 19:44:50.291831 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined IP address 192.168.50.248 and MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:50.291992 1119948 main.go:141] libmachine: (no-preload-843792) DBG | Using SSH client type: external
	I0729 19:44:50.292026 1119948 main.go:141] libmachine: (no-preload-843792) DBG | Using SSH private key: /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/no-preload-843792/id_rsa (-rw-------)
	I0729 19:44:50.292056 1119948 main.go:141] libmachine: (no-preload-843792) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.248 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/no-preload-843792/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 19:44:50.292075 1119948 main.go:141] libmachine: (no-preload-843792) DBG | About to run SSH command:
	I0729 19:44:50.292089 1119948 main.go:141] libmachine: (no-preload-843792) DBG | exit 0
	I0729 19:44:50.419030 1119948 main.go:141] libmachine: (no-preload-843792) DBG | SSH cmd err, output: <nil>: 
	I0729 19:44:50.419420 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetConfigRaw
	I0729 19:44:50.420149 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetIP
	I0729 19:44:50.422461 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:50.422860 1119948 main.go:141] libmachine: (no-preload-843792) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:0e:8c", ip: ""} in network mk-no-preload-843792: {Iface:virbr2 ExpiryTime:2024-07-29 20:44:43 +0000 UTC Type:0 Mac:52:54:00:ae:0e:8c Iaid: IPaddr:192.168.50.248 Prefix:24 Hostname:no-preload-843792 Clientid:01:52:54:00:ae:0e:8c}
	I0729 19:44:50.422897 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined IP address 192.168.50.248 and MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:50.423068 1119948 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/no-preload-843792/config.json ...
	I0729 19:44:50.423254 1119948 machine.go:94] provisionDockerMachine start ...
	I0729 19:44:50.423273 1119948 main.go:141] libmachine: (no-preload-843792) Calling .DriverName
	I0729 19:44:50.423513 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHHostname
	I0729 19:44:50.425759 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:50.425996 1119948 main.go:141] libmachine: (no-preload-843792) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:0e:8c", ip: ""} in network mk-no-preload-843792: {Iface:virbr2 ExpiryTime:2024-07-29 20:44:43 +0000 UTC Type:0 Mac:52:54:00:ae:0e:8c Iaid: IPaddr:192.168.50.248 Prefix:24 Hostname:no-preload-843792 Clientid:01:52:54:00:ae:0e:8c}
	I0729 19:44:50.426033 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined IP address 192.168.50.248 and MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:50.426136 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHPort
	I0729 19:44:50.426323 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHKeyPath
	I0729 19:44:50.426493 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHKeyPath
	I0729 19:44:50.426682 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHUsername
	I0729 19:44:50.426889 1119948 main.go:141] libmachine: Using SSH client type: native
	I0729 19:44:50.427107 1119948 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.248 22 <nil> <nil>}
	I0729 19:44:50.427119 1119948 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 19:44:50.539215 1119948 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0729 19:44:50.539250 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetMachineName
	I0729 19:44:50.539523 1119948 buildroot.go:166] provisioning hostname "no-preload-843792"
	I0729 19:44:50.539553 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetMachineName
	I0729 19:44:50.539755 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHHostname
	I0729 19:44:50.542621 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:50.543007 1119948 main.go:141] libmachine: (no-preload-843792) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:0e:8c", ip: ""} in network mk-no-preload-843792: {Iface:virbr2 ExpiryTime:2024-07-29 20:44:43 +0000 UTC Type:0 Mac:52:54:00:ae:0e:8c Iaid: IPaddr:192.168.50.248 Prefix:24 Hostname:no-preload-843792 Clientid:01:52:54:00:ae:0e:8c}
	I0729 19:44:50.543036 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined IP address 192.168.50.248 and MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:50.543188 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHPort
	I0729 19:44:50.543365 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHKeyPath
	I0729 19:44:50.543574 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHKeyPath
	I0729 19:44:50.543751 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHUsername
	I0729 19:44:50.543900 1119948 main.go:141] libmachine: Using SSH client type: native
	I0729 19:44:50.544060 1119948 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.248 22 <nil> <nil>}
	I0729 19:44:50.544072 1119948 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-843792 && echo "no-preload-843792" | sudo tee /etc/hostname
	I0729 19:44:50.669012 1119948 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-843792
	
	I0729 19:44:50.669054 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHHostname
	I0729 19:44:50.671768 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:50.672075 1119948 main.go:141] libmachine: (no-preload-843792) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:0e:8c", ip: ""} in network mk-no-preload-843792: {Iface:virbr2 ExpiryTime:2024-07-29 20:44:43 +0000 UTC Type:0 Mac:52:54:00:ae:0e:8c Iaid: IPaddr:192.168.50.248 Prefix:24 Hostname:no-preload-843792 Clientid:01:52:54:00:ae:0e:8c}
	I0729 19:44:50.672105 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined IP address 192.168.50.248 and MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:50.672278 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHPort
	I0729 19:44:50.672481 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHKeyPath
	I0729 19:44:50.672647 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHKeyPath
	I0729 19:44:50.672734 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHUsername
	I0729 19:44:50.672904 1119948 main.go:141] libmachine: Using SSH client type: native
	I0729 19:44:50.673077 1119948 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.248 22 <nil> <nil>}
	I0729 19:44:50.673091 1119948 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-843792' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-843792/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-843792' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 19:44:50.796568 1119948 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 19:44:50.796605 1119948 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19312-1055011/.minikube CaCertPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19312-1055011/.minikube}
	I0729 19:44:50.796625 1119948 buildroot.go:174] setting up certificates
	I0729 19:44:50.796639 1119948 provision.go:84] configureAuth start
	I0729 19:44:50.796648 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetMachineName
	I0729 19:44:50.796934 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetIP
	I0729 19:44:50.799731 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:50.800044 1119948 main.go:141] libmachine: (no-preload-843792) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:0e:8c", ip: ""} in network mk-no-preload-843792: {Iface:virbr2 ExpiryTime:2024-07-29 20:44:43 +0000 UTC Type:0 Mac:52:54:00:ae:0e:8c Iaid: IPaddr:192.168.50.248 Prefix:24 Hostname:no-preload-843792 Clientid:01:52:54:00:ae:0e:8c}
	I0729 19:44:50.800071 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined IP address 192.168.50.248 and MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:50.800263 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHHostname
	I0729 19:44:50.802572 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:50.802922 1119948 main.go:141] libmachine: (no-preload-843792) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:0e:8c", ip: ""} in network mk-no-preload-843792: {Iface:virbr2 ExpiryTime:2024-07-29 20:44:43 +0000 UTC Type:0 Mac:52:54:00:ae:0e:8c Iaid: IPaddr:192.168.50.248 Prefix:24 Hostname:no-preload-843792 Clientid:01:52:54:00:ae:0e:8c}
	I0729 19:44:50.802955 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined IP address 192.168.50.248 and MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:50.803085 1119948 provision.go:143] copyHostCerts
	I0729 19:44:50.803156 1119948 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.pem, removing ...
	I0729 19:44:50.803170 1119948 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.pem
	I0729 19:44:50.803225 1119948 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.pem (1082 bytes)
	I0729 19:44:50.803347 1119948 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-1055011/.minikube/cert.pem, removing ...
	I0729 19:44:50.803355 1119948 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-1055011/.minikube/cert.pem
	I0729 19:44:50.803379 1119948 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19312-1055011/.minikube/cert.pem (1123 bytes)
	I0729 19:44:50.803438 1119948 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-1055011/.minikube/key.pem, removing ...
	I0729 19:44:50.803445 1119948 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-1055011/.minikube/key.pem
	I0729 19:44:50.803461 1119948 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19312-1055011/.minikube/key.pem (1679 bytes)
	I0729 19:44:50.803524 1119948 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca-key.pem org=jenkins.no-preload-843792 san=[127.0.0.1 192.168.50.248 localhost minikube no-preload-843792]
	I0729 19:44:51.214202 1119948 provision.go:177] copyRemoteCerts
	I0729 19:44:51.214287 1119948 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 19:44:51.214320 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHHostname
	I0729 19:44:51.216944 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:51.217214 1119948 main.go:141] libmachine: (no-preload-843792) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:0e:8c", ip: ""} in network mk-no-preload-843792: {Iface:virbr2 ExpiryTime:2024-07-29 20:44:43 +0000 UTC Type:0 Mac:52:54:00:ae:0e:8c Iaid: IPaddr:192.168.50.248 Prefix:24 Hostname:no-preload-843792 Clientid:01:52:54:00:ae:0e:8c}
	I0729 19:44:51.217237 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined IP address 192.168.50.248 and MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:51.217360 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHPort
	I0729 19:44:51.217563 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHKeyPath
	I0729 19:44:51.217732 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHUsername
	I0729 19:44:51.217891 1119948 sshutil.go:53] new ssh client: &{IP:192.168.50.248 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/no-preload-843792/id_rsa Username:docker}
	I0729 19:44:51.301968 1119948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0729 19:44:51.328160 1119948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0729 19:44:51.353256 1119948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0729 19:44:51.378426 1119948 provision.go:87] duration metric: took 581.77356ms to configureAuth
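	configureAuth regenerates the machine's server certificate with the SAN list logged above (127.0.0.1, 192.168.50.248, localhost, minikube, no-preload-843792) before copying it to /etc/docker. minikube does this in Go; an equivalent openssl sketch, with illustrative file names, would be:

	openssl genrsa -out server-key.pem 2048
	openssl req -new -key server-key.pem -subj "/O=jenkins.no-preload-843792" -out server.csr
	openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -days 365 \
	  -extfile <(echo "subjectAltName=IP:127.0.0.1,IP:192.168.50.248,DNS:localhost,DNS:minikube,DNS:no-preload-843792") \
	  -out server.pem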
	I0729 19:44:51.378457 1119948 buildroot.go:189] setting minikube options for container-runtime
	I0729 19:44:51.378660 1119948 config.go:182] Loaded profile config "no-preload-843792": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0729 19:44:51.378746 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHHostname
	I0729 19:44:51.381760 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:51.382286 1119948 main.go:141] libmachine: (no-preload-843792) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:0e:8c", ip: ""} in network mk-no-preload-843792: {Iface:virbr2 ExpiryTime:2024-07-29 20:44:43 +0000 UTC Type:0 Mac:52:54:00:ae:0e:8c Iaid: IPaddr:192.168.50.248 Prefix:24 Hostname:no-preload-843792 Clientid:01:52:54:00:ae:0e:8c}
	I0729 19:44:51.382308 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined IP address 192.168.50.248 and MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:51.382555 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHPort
	I0729 19:44:51.382787 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHKeyPath
	I0729 19:44:51.383071 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHKeyPath
	I0729 19:44:51.383230 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHUsername
	I0729 19:44:51.383438 1119948 main.go:141] libmachine: Using SSH client type: native
	I0729 19:44:51.383649 1119948 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.248 22 <nil> <nil>}
	I0729 19:44:51.383673 1119948 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 19:44:51.650635 1119948 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 19:44:51.650669 1119948 machine.go:97] duration metric: took 1.227400866s to provisionDockerMachine
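	The SSH command above writes CRIO_MINIKUBE_OPTIONS into /etc/sysconfig/crio.minikube and restarts CRI-O so the insecure-registry range takes effect. A quick sanity check on the node (a sketch, not part of the test):

	cat /etc/sysconfig/crio.minikube     # expect CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	sudo systemctl is-active crio        # "active" once the restart succeeded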
	I0729 19:44:51.650686 1119948 start.go:293] postStartSetup for "no-preload-843792" (driver="kvm2")
	I0729 19:44:51.650704 1119948 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 19:44:51.650733 1119948 main.go:141] libmachine: (no-preload-843792) Calling .DriverName
	I0729 19:44:51.651068 1119948 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 19:44:51.651098 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHHostname
	I0729 19:44:51.653656 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:51.654044 1119948 main.go:141] libmachine: (no-preload-843792) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:0e:8c", ip: ""} in network mk-no-preload-843792: {Iface:virbr2 ExpiryTime:2024-07-29 20:44:43 +0000 UTC Type:0 Mac:52:54:00:ae:0e:8c Iaid: IPaddr:192.168.50.248 Prefix:24 Hostname:no-preload-843792 Clientid:01:52:54:00:ae:0e:8c}
	I0729 19:44:51.654075 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined IP address 192.168.50.248 and MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:51.654215 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHPort
	I0729 19:44:51.654414 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHKeyPath
	I0729 19:44:51.654603 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHUsername
	I0729 19:44:51.654783 1119948 sshutil.go:53] new ssh client: &{IP:192.168.50.248 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/no-preload-843792/id_rsa Username:docker}
	I0729 19:44:51.738250 1119948 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 19:44:51.742463 1119948 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 19:44:51.742494 1119948 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-1055011/.minikube/addons for local assets ...
	I0729 19:44:51.742575 1119948 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-1055011/.minikube/files for local assets ...
	I0729 19:44:51.742670 1119948 filesync.go:149] local asset: /home/jenkins/minikube-integration/19312-1055011/.minikube/files/etc/ssl/certs/10622722.pem -> 10622722.pem in /etc/ssl/certs
	I0729 19:44:51.742762 1119948 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 19:44:51.752428 1119948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/files/etc/ssl/certs/10622722.pem --> /etc/ssl/certs/10622722.pem (1708 bytes)
	I0729 19:44:51.778026 1119948 start.go:296] duration metric: took 127.323599ms for postStartSetup
	I0729 19:44:51.778070 1119948 fix.go:56] duration metric: took 20.206081869s for fixHost
	I0729 19:44:51.778101 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHHostname
	I0729 19:44:51.780831 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:51.781222 1119948 main.go:141] libmachine: (no-preload-843792) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:0e:8c", ip: ""} in network mk-no-preload-843792: {Iface:virbr2 ExpiryTime:2024-07-29 20:44:43 +0000 UTC Type:0 Mac:52:54:00:ae:0e:8c Iaid: IPaddr:192.168.50.248 Prefix:24 Hostname:no-preload-843792 Clientid:01:52:54:00:ae:0e:8c}
	I0729 19:44:51.781264 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined IP address 192.168.50.248 and MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:51.781433 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHPort
	I0729 19:44:51.781634 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHKeyPath
	I0729 19:44:51.781807 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHKeyPath
	I0729 19:44:51.781978 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHUsername
	I0729 19:44:51.782165 1119948 main.go:141] libmachine: Using SSH client type: native
	I0729 19:44:51.782343 1119948 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.248 22 <nil> <nil>}
	I0729 19:44:51.782354 1119948 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0729 19:44:51.891547 1119948 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722282291.842464810
	
	I0729 19:44:51.891577 1119948 fix.go:216] guest clock: 1722282291.842464810
	I0729 19:44:51.891585 1119948 fix.go:229] Guest: 2024-07-29 19:44:51.84246481 +0000 UTC Remote: 2024-07-29 19:44:51.778076789 +0000 UTC m=+358.114888914 (delta=64.388021ms)
	I0729 19:44:51.891637 1119948 fix.go:200] guest clock delta is within tolerance: 64.388021ms
	I0729 19:44:51.891648 1119948 start.go:83] releasing machines lock for "no-preload-843792", held for 20.319710656s
	I0729 19:44:51.891677 1119948 main.go:141] libmachine: (no-preload-843792) Calling .DriverName
	I0729 19:44:51.891952 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetIP
	I0729 19:44:51.894800 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:51.895181 1119948 main.go:141] libmachine: (no-preload-843792) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:0e:8c", ip: ""} in network mk-no-preload-843792: {Iface:virbr2 ExpiryTime:2024-07-29 20:44:43 +0000 UTC Type:0 Mac:52:54:00:ae:0e:8c Iaid: IPaddr:192.168.50.248 Prefix:24 Hostname:no-preload-843792 Clientid:01:52:54:00:ae:0e:8c}
	I0729 19:44:51.895216 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined IP address 192.168.50.248 and MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:51.895390 1119948 main.go:141] libmachine: (no-preload-843792) Calling .DriverName
	I0729 19:44:51.895840 1119948 main.go:141] libmachine: (no-preload-843792) Calling .DriverName
	I0729 19:44:51.896042 1119948 main.go:141] libmachine: (no-preload-843792) Calling .DriverName
	I0729 19:44:51.896139 1119948 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 19:44:51.896192 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHHostname
	I0729 19:44:51.896258 1119948 ssh_runner.go:195] Run: cat /version.json
	I0729 19:44:51.896287 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHHostname
	I0729 19:44:51.898856 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:51.899180 1119948 main.go:141] libmachine: (no-preload-843792) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:0e:8c", ip: ""} in network mk-no-preload-843792: {Iface:virbr2 ExpiryTime:2024-07-29 20:44:43 +0000 UTC Type:0 Mac:52:54:00:ae:0e:8c Iaid: IPaddr:192.168.50.248 Prefix:24 Hostname:no-preload-843792 Clientid:01:52:54:00:ae:0e:8c}
	I0729 19:44:51.899208 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined IP address 192.168.50.248 and MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:51.899261 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:51.899313 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHPort
	I0729 19:44:51.899474 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHKeyPath
	I0729 19:44:51.899638 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHUsername
	I0729 19:44:51.899716 1119948 main.go:141] libmachine: (no-preload-843792) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:0e:8c", ip: ""} in network mk-no-preload-843792: {Iface:virbr2 ExpiryTime:2024-07-29 20:44:43 +0000 UTC Type:0 Mac:52:54:00:ae:0e:8c Iaid: IPaddr:192.168.50.248 Prefix:24 Hostname:no-preload-843792 Clientid:01:52:54:00:ae:0e:8c}
	I0729 19:44:51.899742 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined IP address 192.168.50.248 and MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:51.899815 1119948 sshutil.go:53] new ssh client: &{IP:192.168.50.248 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/no-preload-843792/id_rsa Username:docker}
	I0729 19:44:51.899865 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHPort
	I0729 19:44:51.900009 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHKeyPath
	I0729 19:44:51.900149 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHUsername
	I0729 19:44:51.900317 1119948 sshutil.go:53] new ssh client: &{IP:192.168.50.248 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/no-preload-843792/id_rsa Username:docker}
	I0729 19:44:51.979915 1119948 ssh_runner.go:195] Run: systemctl --version
	I0729 19:44:52.002705 1119948 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 19:44:52.146695 1119948 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 19:44:52.152507 1119948 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 19:44:52.152566 1119948 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 19:44:52.169058 1119948 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 19:44:52.169085 1119948 start.go:495] detecting cgroup driver to use...
	I0729 19:44:52.169148 1119948 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 19:44:52.185675 1119948 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 19:44:52.204654 1119948 docker.go:217] disabling cri-docker service (if available) ...
	I0729 19:44:52.204719 1119948 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 19:44:52.221485 1119948 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 19:44:52.235452 1119948 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 19:44:52.353806 1119948 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 19:44:52.504237 1119948 docker.go:233] disabling docker service ...
	I0729 19:44:52.504314 1119948 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 19:44:52.520145 1119948 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 19:44:52.533007 1119948 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 19:44:52.662886 1119948 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 19:44:52.795773 1119948 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 19:44:52.810135 1119948 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 19:44:52.829290 1119948 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0729 19:44:52.829356 1119948 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:44:52.840657 1119948 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 19:44:52.840718 1119948 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:44:52.851174 1119948 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:44:52.861565 1119948 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:44:52.871901 1119948 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 19:44:52.882929 1119948 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:44:52.893517 1119948 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:44:52.910321 1119948 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:44:52.920773 1119948 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 19:44:52.930425 1119948 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 19:44:52.930467 1119948 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 19:44:52.943382 1119948 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 19:44:52.953528 1119948 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 19:44:53.086573 1119948 ssh_runner.go:195] Run: sudo systemctl restart crio
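Note: the sed edits above rewrite the CRI-O drop-in before the restart. Based only on the commands in this log, the settings they should leave behind can be checked with (a sketch, not part of the test output):

    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
        /etc/crio/crio.conf.d/02-crio.conf
    # expected, per the edits above:
    #   pause_image = "registry.k8s.io/pause:3.10"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",   (inside default_sysctls)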
	I0729 19:44:53.222264 1119948 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 19:44:53.222358 1119948 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 19:44:53.227019 1119948 start.go:563] Will wait 60s for crictl version
	I0729 19:44:53.227079 1119948 ssh_runner.go:195] Run: which crictl
	I0729 19:44:53.230920 1119948 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 19:44:53.271242 1119948 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 19:44:53.271338 1119948 ssh_runner.go:195] Run: crio --version
	I0729 19:44:53.301110 1119948 ssh_runner.go:195] Run: crio --version
	I0729 19:44:53.333725 1119948 out.go:177] * Preparing Kubernetes v1.31.0-beta.0 on CRI-O 1.29.1 ...
	I0729 19:44:53.334659 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetIP
	I0729 19:44:53.337115 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:53.337559 1119948 main.go:141] libmachine: (no-preload-843792) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:0e:8c", ip: ""} in network mk-no-preload-843792: {Iface:virbr2 ExpiryTime:2024-07-29 20:44:43 +0000 UTC Type:0 Mac:52:54:00:ae:0e:8c Iaid: IPaddr:192.168.50.248 Prefix:24 Hostname:no-preload-843792 Clientid:01:52:54:00:ae:0e:8c}
	I0729 19:44:53.337593 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined IP address 192.168.50.248 and MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:53.337844 1119948 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0729 19:44:53.341989 1119948 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 19:44:53.355060 1119948 kubeadm.go:883] updating cluster {Name:no-preload-843792 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.0-beta.0 ClusterName:no-preload-843792 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.248 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2628
0h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 19:44:53.355229 1119948 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0729 19:44:53.355288 1119948 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 19:44:53.388980 1119948 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0-beta.0". assuming images are not preloaded.
	I0729 19:44:53.389006 1119948 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0-beta.0 registry.k8s.io/kube-controller-manager:v1.31.0-beta.0 registry.k8s.io/kube-scheduler:v1.31.0-beta.0 registry.k8s.io/kube-proxy:v1.31.0-beta.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.14-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0729 19:44:53.389048 1119948 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 19:44:53.389101 1119948 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0729 19:44:53.389112 1119948 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0729 19:44:53.389137 1119948 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.14-0
	I0729 19:44:53.389119 1119948 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0729 19:44:53.389271 1119948 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0729 19:44:53.389350 1119948 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0729 19:44:53.389605 1119948 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0729 19:44:53.390514 1119948 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0729 19:44:53.390570 1119948 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0729 19:44:53.390602 1119948 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0729 19:44:53.390527 1119948 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 19:44:53.390706 1119948 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0729 19:44:53.390732 1119948 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0729 19:44:53.390767 1119948 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.14-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.14-0
	I0729 19:44:53.391084 1119948 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0729 19:44:53.549235 1119948 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0729 19:44:53.572353 1119948 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0729 19:44:53.579226 1119948 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0729 19:44:53.596966 1119948 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0729 19:44:53.609083 1119948 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0729 19:44:53.616167 1119948 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.14-0
	I0729 19:44:53.618946 1119948 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" does not exist at hash "63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5" in container runtime
	I0729 19:44:53.618985 1119948 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0729 19:44:53.619029 1119948 ssh_runner.go:195] Run: which crictl
	I0729 19:44:53.635187 1119948 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0729 19:44:53.670750 1119948 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0729 19:44:53.670796 1119948 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0729 19:44:53.670859 1119948 ssh_runner.go:195] Run: which crictl
	I0729 19:44:53.672585 1119948 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" does not exist at hash "d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b" in container runtime
	I0729 19:44:53.672626 1119948 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0729 19:44:53.672669 1119948 ssh_runner.go:195] Run: which crictl
	I0729 19:44:53.695596 1119948 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" does not exist at hash "f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938" in container runtime
	I0729 19:44:53.695640 1119948 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0729 19:44:53.695685 1119948 ssh_runner.go:195] Run: which crictl
	I0729 19:44:51.138015 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:44:53.638298 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:44:52.279881 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:44:54.778657 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:44:53.281345 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:53.781221 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:54.281939 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:54.781091 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:55.281282 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:55.781375 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:56.282072 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:56.781207 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:57.281436 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:57.781372 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:53.720675 1119948 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 19:44:53.840593 1119948 cache_images.go:116] "registry.k8s.io/etcd:3.5.14-0" needs transfer: "registry.k8s.io/etcd:3.5.14-0" does not exist at hash "cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa" in container runtime
	I0729 19:44:53.840643 1119948 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.14-0
	I0729 19:44:53.840672 1119948 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0729 19:44:53.840687 1119948 ssh_runner.go:195] Run: which crictl
	I0729 19:44:53.840775 1119948 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0-beta.0" does not exist at hash "c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899" in container runtime
	I0729 19:44:53.840812 1119948 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0729 19:44:53.840821 1119948 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0729 19:44:53.840857 1119948 ssh_runner.go:195] Run: which crictl
	I0729 19:44:53.840879 1119948 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0729 19:44:53.840923 1119948 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0729 19:44:53.840940 1119948 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 19:44:53.840957 1119948 ssh_runner.go:195] Run: which crictl
	I0729 19:44:53.840924 1119948 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0729 19:44:53.918733 1119948 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0729 19:44:53.918808 1119948 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.14-0
	I0729 19:44:53.918822 1119948 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0729 19:44:53.918738 1119948 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0729 19:44:53.918756 1119948 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 19:44:53.934123 1119948 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0729 19:44:53.934149 1119948 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0729 19:44:54.071240 1119948 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.14-0
	I0729 19:44:54.071240 1119948 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0729 19:44:54.071338 1119948 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0729 19:44:54.071326 1119948 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0729 19:44:54.071427 1119948 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 19:44:54.093839 1119948 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0729 19:44:54.093863 1119948 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0729 19:44:54.210655 1119948 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0
	I0729 19:44:54.210775 1119948 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0729 19:44:54.212134 1119948 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.14-0
	I0729 19:44:54.217809 1119948 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0
	I0729 19:44:54.217912 1119948 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 19:44:54.217935 1119948 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0729 19:44:54.218206 1119948 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0
	I0729 19:44:54.218301 1119948 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0729 19:44:54.260623 1119948 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0 (exists)
	I0729 19:44:54.260652 1119948 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0729 19:44:54.260652 1119948 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0729 19:44:54.260686 1119948 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0729 19:44:54.260778 1119948 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0729 19:44:54.260865 1119948 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0729 19:44:54.306379 1119948 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0729 19:44:54.306385 1119948 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0 (exists)
	I0729 19:44:54.306392 1119948 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0 (exists)
	I0729 19:44:54.306493 1119948 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0729 19:44:54.306689 1119948 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0
	I0729 19:44:54.306778 1119948 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.14-0
	I0729 19:44:56.574611 1119948 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0: (2.313899996s)
	I0729 19:44:56.574645 1119948 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 from cache
	I0729 19:44:56.574650 1119948 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1: (2.313771552s)
	I0729 19:44:56.574670 1119948 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0729 19:44:56.574611 1119948 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0-beta.0: (2.313935705s)
	I0729 19:44:56.574683 1119948 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0729 19:44:56.574705 1119948 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: (2.268197753s)
	I0729 19:44:56.574716 1119948 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0
	I0729 19:44:56.574719 1119948 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0729 19:44:56.574722 1119948 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0729 19:44:56.574739 1119948 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.14-0: (2.267948475s)
	I0729 19:44:56.574750 1119948 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.14-0 (exists)
	I0729 19:44:56.574796 1119948 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0729 19:44:58.641782 1119948 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0: (2.067036887s)
	I0729 19:44:58.641818 1119948 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 from cache
	I0729 19:44:58.641845 1119948 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0729 19:44:58.641846 1119948 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0: (2.0670173s)
	I0729 19:44:58.641878 1119948 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0 (exists)
	I0729 19:44:58.641896 1119948 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0729 19:44:56.140488 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:44:58.637284 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:44:57.279852 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:44:59.777891 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:44:58.281852 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:58.781637 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:59.281892 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:59.781645 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:00.281405 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:00.782060 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:01.281396 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:01.781327 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:02.281709 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:02.781786 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:00.096431 1119948 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0: (1.454505335s)
	I0729 19:45:00.096482 1119948 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 from cache
	I0729 19:45:00.096522 1119948 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0729 19:45:00.096568 1119948 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0729 19:45:01.962972 1119948 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.866379068s)
	I0729 19:45:01.963000 1119948 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0729 19:45:01.963026 1119948 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0729 19:45:01.963078 1119948 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0729 19:45:02.916627 1119948 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0729 19:45:02.916678 1119948 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.14-0
	I0729 19:45:02.916735 1119948 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0
	I0729 19:45:00.638676 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:03.137885 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:01.779615 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:04.279431 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:03.281567 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:03.781335 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:04.281681 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:04.781803 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:05.281115 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:05.781161 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:06.281699 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:06.781869 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:07.281182 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:07.781016 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:06.397189 1119948 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0: (3.480421154s)
	I0729 19:45:06.397236 1119948 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0 from cache
	I0729 19:45:06.397280 1119948 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0729 19:45:06.397357 1119948 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0729 19:45:08.272053 1119948 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0: (1.874662469s)
	I0729 19:45:08.272086 1119948 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 from cache
	I0729 19:45:08.272116 1119948 cache_images.go:123] Successfully loaded all cached images
	I0729 19:45:08.272123 1119948 cache_images.go:92] duration metric: took 14.883104578s to LoadCachedImages
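Note: the image-loading phase above is driven entirely by shell commands on the guest. Stripped down, the per-image sequence looks roughly like this (a sketch assuming the paths and tags shown in the log):

    # 1. ask the runtime whether the image is already present with the expected ID
    sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.14-0
    # 2. if not, drop the stale tag and load the cached tarball shipped with minikube
    sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.14-0
    sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0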
	I0729 19:45:08.272135 1119948 kubeadm.go:934] updating node { 192.168.50.248 8443 v1.31.0-beta.0 crio true true} ...
	I0729 19:45:08.272293 1119948 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-843792 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.248
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-843792 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 19:45:08.272378 1119948 ssh_runner.go:195] Run: crio config
	I0729 19:45:08.340838 1119948 cni.go:84] Creating CNI manager for ""
	I0729 19:45:08.340864 1119948 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 19:45:08.340876 1119948 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 19:45:08.340905 1119948 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.248 APIServerPort:8443 KubernetesVersion:v1.31.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-843792 NodeName:no-preload-843792 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.248"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.248 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt Stati
cPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 19:45:08.341094 1119948 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.248
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-843792"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.248
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.248"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0729 19:45:08.341175 1119948 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0-beta.0
	I0729 19:45:08.353738 1119948 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 19:45:08.353819 1119948 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 19:45:08.365340 1119948 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (324 bytes)
	I0729 19:45:08.383516 1119948 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I0729 19:45:08.401060 1119948 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2168 bytes)
	I0729 19:45:08.419420 1119948 ssh_runner.go:195] Run: grep 192.168.50.248	control-plane.minikube.internal$ /etc/hosts
	I0729 19:45:08.423355 1119948 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.248	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 19:45:08.437286 1119948 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 19:45:08.569176 1119948 ssh_runner.go:195] Run: sudo systemctl start kubelet
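Note: at this point the kubelet drop-in (10-kubeadm.conf), the kubelet.service unit, and kubeadm.yaml.new have been copied over and kubelet has been started. A sanity check of what landed on the node (not part of the test output) could be:

    systemctl cat kubelet              # unit plus the 10-kubeadm.conf drop-in written above
    ls -l /var/tmp/minikube/kubeadm.yaml.new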
	I0729 19:45:08.586925 1119948 certs.go:68] Setting up /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/no-preload-843792 for IP: 192.168.50.248
	I0729 19:45:08.586949 1119948 certs.go:194] generating shared ca certs ...
	I0729 19:45:08.586969 1119948 certs.go:226] acquiring lock for ca certs: {Name:mkd1f0b3d7e82ac23e713dd6b75409e103935b02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 19:45:08.587196 1119948 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.key
	I0729 19:45:08.587277 1119948 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/proxy-client-ca.key
	I0729 19:45:08.587294 1119948 certs.go:256] generating profile certs ...
	I0729 19:45:08.587388 1119948 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/no-preload-843792/client.key
	I0729 19:45:08.587476 1119948 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/no-preload-843792/apiserver.key.f52ec7e5
	I0729 19:45:08.587520 1119948 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/no-preload-843792/proxy-client.key
	I0729 19:45:08.587686 1119948 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/1062272.pem (1338 bytes)
	W0729 19:45:08.587731 1119948 certs.go:480] ignoring /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/1062272_empty.pem, impossibly tiny 0 bytes
	I0729 19:45:08.587741 1119948 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 19:45:08.587764 1119948 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem (1082 bytes)
	I0729 19:45:08.587788 1119948 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/cert.pem (1123 bytes)
	I0729 19:45:08.587807 1119948 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/key.pem (1679 bytes)
	I0729 19:45:08.587842 1119948 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/files/etc/ssl/certs/10622722.pem (1708 bytes)
	I0729 19:45:08.588560 1119948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 19:45:08.618457 1119948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0729 19:45:08.664632 1119948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 19:45:08.696094 1119948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0729 19:45:05.639914 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:08.138498 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:06.779766 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:08.781373 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:10.782303 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:08.281476 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:08.781100 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:09.281248 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:09.781661 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:10.281141 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:10.781357 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:11.281922 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:11.781751 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:12.281024 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:12.781942 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:08.732476 1119948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/no-preload-843792/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0729 19:45:08.761190 1119948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/no-preload-843792/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0729 19:45:08.792866 1119948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/no-preload-843792/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 19:45:08.819753 1119948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/no-preload-843792/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0729 19:45:08.844891 1119948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/1062272.pem --> /usr/share/ca-certificates/1062272.pem (1338 bytes)
	I0729 19:45:08.868688 1119948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/files/etc/ssl/certs/10622722.pem --> /usr/share/ca-certificates/10622722.pem (1708 bytes)
	I0729 19:45:08.893523 1119948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 19:45:08.917663 1119948 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 19:45:08.935488 1119948 ssh_runner.go:195] Run: openssl version
	I0729 19:45:08.941415 1119948 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1062272.pem && ln -fs /usr/share/ca-certificates/1062272.pem /etc/ssl/certs/1062272.pem"
	I0729 19:45:08.952713 1119948 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1062272.pem
	I0729 19:45:08.957226 1119948 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 18:30 /usr/share/ca-certificates/1062272.pem
	I0729 19:45:08.957288 1119948 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1062272.pem
	I0729 19:45:08.963014 1119948 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1062272.pem /etc/ssl/certs/51391683.0"
	I0729 19:45:08.974542 1119948 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10622722.pem && ln -fs /usr/share/ca-certificates/10622722.pem /etc/ssl/certs/10622722.pem"
	I0729 19:45:08.985605 1119948 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10622722.pem
	I0729 19:45:08.990121 1119948 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 18:30 /usr/share/ca-certificates/10622722.pem
	I0729 19:45:08.990170 1119948 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10622722.pem
	I0729 19:45:08.995715 1119948 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10622722.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 19:45:09.006949 1119948 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 19:45:09.018222 1119948 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 19:45:09.023160 1119948 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 18:17 /usr/share/ca-certificates/minikubeCA.pem
	I0729 19:45:09.023225 1119948 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 19:45:09.028770 1119948 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
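Note: the symlink names used above (51391683.0, 3ec20f2e.0, b5213941.0) are OpenSSL subject hashes of the respective certificates, which is what the openssl x509 -hash calls compute. For example, using the CA path from the log:

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
    # prints b5213941, matching the /etc/ssl/certs/b5213941.0 symlink created above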
	I0729 19:45:09.039653 1119948 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 19:45:09.044577 1119948 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 19:45:09.050692 1119948 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 19:45:09.057177 1119948 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 19:45:09.063464 1119948 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 19:45:09.069732 1119948 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 19:45:09.075998 1119948 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
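Note: the -checkend 86400 runs above ask OpenSSL whether each certificate remains valid for at least the next 86400 seconds (24 hours); the answer is carried in the exit status, e.g.:

    sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
        && echo "valid for >= 24h" || echo "expires within 24h"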
	I0729 19:45:09.081759 1119948 kubeadm.go:392] StartCluster: {Name:no-preload-843792 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-843792 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.248 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 19:45:09.081855 1119948 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 19:45:09.081922 1119948 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 19:45:09.121153 1119948 cri.go:89] found id: ""
	I0729 19:45:09.121242 1119948 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 19:45:09.131866 1119948 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0729 19:45:09.131892 1119948 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0729 19:45:09.131951 1119948 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0729 19:45:09.142306 1119948 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0729 19:45:09.143769 1119948 kubeconfig.go:125] found "no-preload-843792" server: "https://192.168.50.248:8443"
	I0729 19:45:09.146733 1119948 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0729 19:45:09.156058 1119948 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.248
	I0729 19:45:09.156096 1119948 kubeadm.go:1160] stopping kube-system containers ...
	I0729 19:45:09.156113 1119948 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0729 19:45:09.156171 1119948 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 19:45:09.204791 1119948 cri.go:89] found id: ""
	I0729 19:45:09.204881 1119948 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0729 19:45:09.222988 1119948 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 19:45:09.234800 1119948 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 19:45:09.234825 1119948 kubeadm.go:157] found existing configuration files:
	
	I0729 19:45:09.234898 1119948 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 19:45:09.244868 1119948 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 19:45:09.244931 1119948 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 19:45:09.255368 1119948 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 19:45:09.265442 1119948 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 19:45:09.265515 1119948 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 19:45:09.276827 1119948 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 19:45:09.287989 1119948 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 19:45:09.288057 1119948 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 19:45:09.297736 1119948 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 19:45:09.307856 1119948 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 19:45:09.307923 1119948 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 19:45:09.318101 1119948 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 19:45:09.328189 1119948 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 19:45:09.441974 1119948 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 19:45:10.593961 1119948 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.151939649s)
	I0729 19:45:10.594045 1119948 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0729 19:45:10.807397 1119948 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 19:45:10.880145 1119948 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0729 19:45:10.962104 1119948 api_server.go:52] waiting for apiserver process to appear ...
	I0729 19:45:10.962209 1119948 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:11.462937 1119948 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:11.962909 1119948 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:12.006882 1119948 api_server.go:72] duration metric: took 1.044780287s to wait for apiserver process to appear ...
	I0729 19:45:12.006918 1119948 api_server.go:88] waiting for apiserver healthz status ...
	I0729 19:45:12.006945 1119948 api_server.go:253] Checking apiserver healthz at https://192.168.50.248:8443/healthz ...
	I0729 19:45:12.007577 1119948 api_server.go:269] stopped: https://192.168.50.248:8443/healthz: Get "https://192.168.50.248:8443/healthz": dial tcp 192.168.50.248:8443: connect: connection refused
	I0729 19:45:12.507374 1119948 api_server.go:253] Checking apiserver healthz at https://192.168.50.248:8443/healthz ...
	I0729 19:45:10.637684 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:12.638011 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:14.638569 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:13.278494 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:15.778675 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:15.042675 1119948 api_server.go:279] https://192.168.50.248:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 19:45:15.042710 1119948 api_server.go:103] status: https://192.168.50.248:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 19:45:15.042731 1119948 api_server.go:253] Checking apiserver healthz at https://192.168.50.248:8443/healthz ...
	I0729 19:45:15.090118 1119948 api_server.go:279] https://192.168.50.248:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 19:45:15.090151 1119948 api_server.go:103] status: https://192.168.50.248:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 19:45:15.507702 1119948 api_server.go:253] Checking apiserver healthz at https://192.168.50.248:8443/healthz ...
	I0729 19:45:15.512794 1119948 api_server.go:279] https://192.168.50.248:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 19:45:15.512822 1119948 api_server.go:103] status: https://192.168.50.248:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 19:45:16.008064 1119948 api_server.go:253] Checking apiserver healthz at https://192.168.50.248:8443/healthz ...
	I0729 19:45:16.018543 1119948 api_server.go:279] https://192.168.50.248:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 19:45:16.018578 1119948 api_server.go:103] status: https://192.168.50.248:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 19:45:16.508055 1119948 api_server.go:253] Checking apiserver healthz at https://192.168.50.248:8443/healthz ...
	I0729 19:45:16.519925 1119948 api_server.go:279] https://192.168.50.248:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 19:45:16.519954 1119948 api_server.go:103] status: https://192.168.50.248:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 19:45:17.007959 1119948 api_server.go:253] Checking apiserver healthz at https://192.168.50.248:8443/healthz ...
	I0729 19:45:17.013159 1119948 api_server.go:279] https://192.168.50.248:8443/healthz returned 200:
	ok
	I0729 19:45:17.022691 1119948 api_server.go:141] control plane version: v1.31.0-beta.0
	I0729 19:45:17.022726 1119948 api_server.go:131] duration metric: took 5.015799715s to wait for apiserver health ...
	I0729 19:45:17.022737 1119948 cni.go:84] Creating CNI manager for ""
	I0729 19:45:17.022746 1119948 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 19:45:17.024618 1119948 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 19:45:13.281834 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:13.781128 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:14.281372 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:14.781037 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:15.281715 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:15.781353 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:16.281845 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:16.781224 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:17.281710 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:17.781353 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:17.025951 1119948 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 19:45:17.037020 1119948 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0729 19:45:17.075438 1119948 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 19:45:17.098501 1119948 system_pods.go:59] 8 kube-system pods found
	I0729 19:45:17.098541 1119948 system_pods.go:61] "coredns-5cfdc65f69-j6m2k" [1fb28c80-116d-46b7-a939-6ff4ffa80883] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0729 19:45:17.098549 1119948 system_pods.go:61] "etcd-no-preload-843792" [68470ab3-9513-4504-9d1e-dbb896b8ae6b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0729 19:45:17.098557 1119948 system_pods.go:61] "kube-apiserver-no-preload-843792" [6cc37d70-bc14-4a06-987d-320a2a11b533] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0729 19:45:17.098563 1119948 system_pods.go:61] "kube-controller-manager-no-preload-843792" [5c115624-c9e9-4019-9783-35cc825fb1df] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0729 19:45:17.098570 1119948 system_pods.go:61] "kube-proxy-6kzvz" [4f0006c3-1172-48b6-8631-643090032c58] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0729 19:45:17.098579 1119948 system_pods.go:61] "kube-scheduler-no-preload-843792" [5c2a4c59-a525-4246-9d11-50fddef53815] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0729 19:45:17.098584 1119948 system_pods.go:61] "metrics-server-78fcd8795b-pcx9w" [7d138038-71ad-4279-9562-f3864d5a0024] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 19:45:17.098591 1119948 system_pods.go:61] "storage-provisioner" [289822fa-8ed4-4abe-970e-8b6d9a9fa51e] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0729 19:45:17.098598 1119948 system_pods.go:74] duration metric: took 23.126612ms to wait for pod list to return data ...
	I0729 19:45:17.098610 1119948 node_conditions.go:102] verifying NodePressure condition ...
	I0729 19:45:17.125364 1119948 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 19:45:17.125395 1119948 node_conditions.go:123] node cpu capacity is 2
	I0729 19:45:17.125405 1119948 node_conditions.go:105] duration metric: took 26.790642ms to run NodePressure ...
	I0729 19:45:17.125425 1119948 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 19:45:17.467261 1119948 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0729 19:45:17.478831 1119948 kubeadm.go:739] kubelet initialised
	I0729 19:45:17.478871 1119948 kubeadm.go:740] duration metric: took 11.576985ms waiting for restarted kubelet to initialise ...
	I0729 19:45:17.478883 1119948 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 19:45:17.483948 1119948 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5cfdc65f69-j6m2k" in "kube-system" namespace to be "Ready" ...
	I0729 19:45:16.639536 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:18.641996 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:18.279857 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:20.779054 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:18.281504 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:18.781826 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:19.281901 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:19.782011 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:20.281384 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:20.781352 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:21.281834 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:21.781603 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:22.281152 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:22.781351 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:19.493011 1119948 pod_ready.go:102] pod "coredns-5cfdc65f69-j6m2k" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:21.992979 1119948 pod_ready.go:102] pod "coredns-5cfdc65f69-j6m2k" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:21.139438 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:23.636771 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:22.779640 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:24.780814 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:23.281111 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:23.781931 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:24.281455 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:24.781346 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:25.281633 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:25.781092 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:26.281145 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:26.781235 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:27.281327 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:27.781099 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:24.491231 1119948 pod_ready.go:102] pod "coredns-5cfdc65f69-j6m2k" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:26.991237 1119948 pod_ready.go:102] pod "coredns-5cfdc65f69-j6m2k" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:28.490384 1119948 pod_ready.go:92] pod "coredns-5cfdc65f69-j6m2k" in "kube-system" namespace has status "Ready":"True"
	I0729 19:45:28.490413 1119948 pod_ready.go:81] duration metric: took 11.006435855s for pod "coredns-5cfdc65f69-j6m2k" in "kube-system" namespace to be "Ready" ...
	I0729 19:45:28.490425 1119948 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-843792" in "kube-system" namespace to be "Ready" ...
	I0729 19:45:28.495144 1119948 pod_ready.go:92] pod "etcd-no-preload-843792" in "kube-system" namespace has status "Ready":"True"
	I0729 19:45:28.495168 1119948 pod_ready.go:81] duration metric: took 4.736893ms for pod "etcd-no-preload-843792" in "kube-system" namespace to be "Ready" ...
	I0729 19:45:28.495177 1119948 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-843792" in "kube-system" namespace to be "Ready" ...
	I0729 19:45:28.499249 1119948 pod_ready.go:92] pod "kube-apiserver-no-preload-843792" in "kube-system" namespace has status "Ready":"True"
	I0729 19:45:28.499272 1119948 pod_ready.go:81] duration metric: took 4.089379ms for pod "kube-apiserver-no-preload-843792" in "kube-system" namespace to be "Ready" ...
	I0729 19:45:28.499280 1119948 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-843792" in "kube-system" namespace to be "Ready" ...
	I0729 19:45:25.637886 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:28.138043 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:27.279850 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:29.778397 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:28.281600 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:28.781033 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:29.281086 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:29.781358 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:30.281478 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:30.781094 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:31.281816 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:31.781092 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:32.281012 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:32.781266 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:29.505726 1119948 pod_ready.go:92] pod "kube-controller-manager-no-preload-843792" in "kube-system" namespace has status "Ready":"True"
	I0729 19:45:29.505752 1119948 pod_ready.go:81] duration metric: took 1.0064644s for pod "kube-controller-manager-no-preload-843792" in "kube-system" namespace to be "Ready" ...
	I0729 19:45:29.505764 1119948 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-6kzvz" in "kube-system" namespace to be "Ready" ...
	I0729 19:45:29.510705 1119948 pod_ready.go:92] pod "kube-proxy-6kzvz" in "kube-system" namespace has status "Ready":"True"
	I0729 19:45:29.510725 1119948 pod_ready.go:81] duration metric: took 4.953497ms for pod "kube-proxy-6kzvz" in "kube-system" namespace to be "Ready" ...
	I0729 19:45:29.510735 1119948 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-843792" in "kube-system" namespace to be "Ready" ...
	I0729 19:45:29.688555 1119948 pod_ready.go:92] pod "kube-scheduler-no-preload-843792" in "kube-system" namespace has status "Ready":"True"
	I0729 19:45:29.688579 1119948 pod_ready.go:81] duration metric: took 177.837031ms for pod "kube-scheduler-no-preload-843792" in "kube-system" namespace to be "Ready" ...
	I0729 19:45:29.688593 1119948 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace to be "Ready" ...
	I0729 19:45:31.695505 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:30.637213 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:32.638747 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:31.778641 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:34.277964 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:33.281410 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:33.781923 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:34.281471 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:34.781303 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:35.281404 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:35.781727 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:36.281960 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:36.781632 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:37.281624 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:37.781232 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:34.196033 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:36.697003 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:35.137135 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:37.137857 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:39.138563 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:36.278607 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:38.278960 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:40.280428 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:38.281103 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:38.781134 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:39.281907 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:39.781863 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:40.281104 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:40.781928 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:41.281757 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:45:41.281864 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:45:41.322903 1120970 cri.go:89] found id: ""
	I0729 19:45:41.322929 1120970 logs.go:276] 0 containers: []
	W0729 19:45:41.322938 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:45:41.322945 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:45:41.323016 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:45:41.359651 1120970 cri.go:89] found id: ""
	I0729 19:45:41.359679 1120970 logs.go:276] 0 containers: []
	W0729 19:45:41.359687 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:45:41.359692 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:45:41.359744 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:45:41.402317 1120970 cri.go:89] found id: ""
	I0729 19:45:41.402358 1120970 logs.go:276] 0 containers: []
	W0729 19:45:41.402370 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:45:41.402380 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:45:41.402454 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:45:41.438796 1120970 cri.go:89] found id: ""
	I0729 19:45:41.438823 1120970 logs.go:276] 0 containers: []
	W0729 19:45:41.438833 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:45:41.438839 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:45:41.438931 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:45:41.477648 1120970 cri.go:89] found id: ""
	I0729 19:45:41.477677 1120970 logs.go:276] 0 containers: []
	W0729 19:45:41.477685 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:45:41.477692 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:45:41.477761 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:45:41.517603 1120970 cri.go:89] found id: ""
	I0729 19:45:41.517635 1120970 logs.go:276] 0 containers: []
	W0729 19:45:41.517646 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:45:41.517654 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:45:41.517727 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:45:41.553106 1120970 cri.go:89] found id: ""
	I0729 19:45:41.553140 1120970 logs.go:276] 0 containers: []
	W0729 19:45:41.553151 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:45:41.553158 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:45:41.553226 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:45:41.595007 1120970 cri.go:89] found id: ""
	I0729 19:45:41.595035 1120970 logs.go:276] 0 containers: []
	W0729 19:45:41.595044 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:45:41.595054 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:45:41.595069 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:45:41.634927 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:45:41.634966 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:45:41.685871 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:45:41.685906 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:45:41.700701 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:45:41.700735 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:45:41.816575 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:45:41.816598 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:45:41.816611 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:45:39.199863 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:41.200303 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:43.695592 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:41.637651 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:44.138141 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:42.778550 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:44.779186 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:44.396592 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:44.410567 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:45:44.410644 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:45:44.447450 1120970 cri.go:89] found id: ""
	I0729 19:45:44.447487 1120970 logs.go:276] 0 containers: []
	W0729 19:45:44.447499 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:45:44.447507 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:45:44.447579 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:45:44.487679 1120970 cri.go:89] found id: ""
	I0729 19:45:44.487714 1120970 logs.go:276] 0 containers: []
	W0729 19:45:44.487725 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:45:44.487732 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:45:44.487806 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:45:44.527170 1120970 cri.go:89] found id: ""
	I0729 19:45:44.527211 1120970 logs.go:276] 0 containers: []
	W0729 19:45:44.527219 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:45:44.527226 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:45:44.527282 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:45:44.567585 1120970 cri.go:89] found id: ""
	I0729 19:45:44.567613 1120970 logs.go:276] 0 containers: []
	W0729 19:45:44.567622 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:45:44.567629 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:45:44.567680 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:45:44.605003 1120970 cri.go:89] found id: ""
	I0729 19:45:44.605031 1120970 logs.go:276] 0 containers: []
	W0729 19:45:44.605041 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:45:44.605049 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:45:44.605121 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:45:44.643862 1120970 cri.go:89] found id: ""
	I0729 19:45:44.643887 1120970 logs.go:276] 0 containers: []
	W0729 19:45:44.643894 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:45:44.643901 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:45:44.643950 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:45:44.679814 1120970 cri.go:89] found id: ""
	I0729 19:45:44.679845 1120970 logs.go:276] 0 containers: []
	W0729 19:45:44.679855 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:45:44.679862 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:45:44.679926 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:45:44.714679 1120970 cri.go:89] found id: ""
	I0729 19:45:44.714709 1120970 logs.go:276] 0 containers: []
	W0729 19:45:44.714719 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:45:44.714729 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:45:44.714747 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:45:44.766381 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:45:44.766424 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:45:44.782337 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:45:44.782369 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:45:44.854487 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:45:44.854509 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:45:44.854522 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:45:44.935043 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:45:44.935082 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:45:47.481158 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:47.496559 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:45:47.496649 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:45:47.531949 1120970 cri.go:89] found id: ""
	I0729 19:45:47.531981 1120970 logs.go:276] 0 containers: []
	W0729 19:45:47.531990 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:45:47.531996 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:45:47.532050 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:45:47.571424 1120970 cri.go:89] found id: ""
	I0729 19:45:47.571451 1120970 logs.go:276] 0 containers: []
	W0729 19:45:47.571459 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:45:47.571465 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:45:47.571517 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:45:47.610439 1120970 cri.go:89] found id: ""
	I0729 19:45:47.610474 1120970 logs.go:276] 0 containers: []
	W0729 19:45:47.610485 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:45:47.610494 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:45:47.610561 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:45:47.648351 1120970 cri.go:89] found id: ""
	I0729 19:45:47.648380 1120970 logs.go:276] 0 containers: []
	W0729 19:45:47.648388 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:45:47.648395 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:45:47.648458 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:45:47.686610 1120970 cri.go:89] found id: ""
	I0729 19:45:47.686646 1120970 logs.go:276] 0 containers: []
	W0729 19:45:47.686658 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:45:47.686667 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:45:47.686739 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:45:47.722870 1120970 cri.go:89] found id: ""
	I0729 19:45:47.722901 1120970 logs.go:276] 0 containers: []
	W0729 19:45:47.722909 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:45:47.722916 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:45:47.722978 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:45:47.757651 1120970 cri.go:89] found id: ""
	I0729 19:45:47.757690 1120970 logs.go:276] 0 containers: []
	W0729 19:45:47.757700 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:45:47.757709 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:45:47.757787 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:45:47.792737 1120970 cri.go:89] found id: ""
	I0729 19:45:47.792767 1120970 logs.go:276] 0 containers: []
	W0729 19:45:47.792776 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:45:47.792786 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:45:47.792799 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:45:47.867707 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:45:47.867734 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:45:47.867751 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:45:47.949876 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:45:47.949918 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:45:45.696302 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:48.194324 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:46.637438 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:48.637749 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:47.279986 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:49.778293 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:47.991014 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:45:47.991053 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:45:48.041713 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:45:48.041752 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:45:50.557028 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:50.571918 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:45:50.572012 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:45:50.608752 1120970 cri.go:89] found id: ""
	I0729 19:45:50.608783 1120970 logs.go:276] 0 containers: []
	W0729 19:45:50.608791 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:45:50.608798 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:45:50.608851 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:45:50.644225 1120970 cri.go:89] found id: ""
	I0729 19:45:50.644251 1120970 logs.go:276] 0 containers: []
	W0729 19:45:50.644261 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:45:50.644269 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:45:50.644357 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:45:50.680364 1120970 cri.go:89] found id: ""
	I0729 19:45:50.680400 1120970 logs.go:276] 0 containers: []
	W0729 19:45:50.680412 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:45:50.680420 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:45:50.680487 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:45:50.724418 1120970 cri.go:89] found id: ""
	I0729 19:45:50.724443 1120970 logs.go:276] 0 containers: []
	W0729 19:45:50.724451 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:45:50.724457 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:45:50.724513 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:45:50.768891 1120970 cri.go:89] found id: ""
	I0729 19:45:50.768924 1120970 logs.go:276] 0 containers: []
	W0729 19:45:50.768935 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:45:50.768943 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:45:50.769011 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:45:50.815814 1120970 cri.go:89] found id: ""
	I0729 19:45:50.815847 1120970 logs.go:276] 0 containers: []
	W0729 19:45:50.815858 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:45:50.815866 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:45:50.815935 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:45:50.856823 1120970 cri.go:89] found id: ""
	I0729 19:45:50.856856 1120970 logs.go:276] 0 containers: []
	W0729 19:45:50.856865 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:45:50.856871 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:45:50.856935 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:45:50.890567 1120970 cri.go:89] found id: ""
	I0729 19:45:50.890618 1120970 logs.go:276] 0 containers: []
	W0729 19:45:50.890631 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:45:50.890646 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:45:50.890662 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:45:50.944060 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:45:50.944095 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:45:50.957881 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:45:50.957912 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:45:51.036005 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:45:51.036033 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:45:51.036051 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:45:51.117269 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:45:51.117311 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:45:50.195926 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:52.197099 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:50.639185 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:53.138398 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:52.278704 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:54.279094 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:53.657518 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:53.671405 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:45:53.671499 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:45:53.713703 1120970 cri.go:89] found id: ""
	I0729 19:45:53.713734 1120970 logs.go:276] 0 containers: []
	W0729 19:45:53.713747 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:45:53.713755 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:45:53.713820 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:45:53.752821 1120970 cri.go:89] found id: ""
	I0729 19:45:53.752856 1120970 logs.go:276] 0 containers: []
	W0729 19:45:53.752867 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:45:53.752875 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:45:53.752930 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:45:53.792144 1120970 cri.go:89] found id: ""
	I0729 19:45:53.792172 1120970 logs.go:276] 0 containers: []
	W0729 19:45:53.792198 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:45:53.792204 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:45:53.792264 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:45:53.831123 1120970 cri.go:89] found id: ""
	I0729 19:45:53.831151 1120970 logs.go:276] 0 containers: []
	W0729 19:45:53.831161 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:45:53.831168 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:45:53.831223 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:45:53.870716 1120970 cri.go:89] found id: ""
	I0729 19:45:53.870747 1120970 logs.go:276] 0 containers: []
	W0729 19:45:53.870758 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:45:53.870766 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:45:53.870831 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:45:53.909567 1120970 cri.go:89] found id: ""
	I0729 19:45:53.909602 1120970 logs.go:276] 0 containers: []
	W0729 19:45:53.909611 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:45:53.909619 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:45:53.909679 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:45:53.944134 1120970 cri.go:89] found id: ""
	I0729 19:45:53.944167 1120970 logs.go:276] 0 containers: []
	W0729 19:45:53.944179 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:45:53.944188 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:45:53.944249 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:45:53.979274 1120970 cri.go:89] found id: ""
	I0729 19:45:53.979307 1120970 logs.go:276] 0 containers: []
	W0729 19:45:53.979319 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:45:53.979330 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:45:53.979347 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:45:54.027783 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:45:54.027822 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:45:54.079319 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:45:54.079368 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:45:54.094387 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:45:54.094420 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:45:54.170700 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:45:54.170723 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:45:54.170737 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:45:56.756947 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:56.775456 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:45:56.775539 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:45:56.830999 1120970 cri.go:89] found id: ""
	I0729 19:45:56.831035 1120970 logs.go:276] 0 containers: []
	W0729 19:45:56.831046 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:45:56.831054 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:45:56.831144 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:45:56.868006 1120970 cri.go:89] found id: ""
	I0729 19:45:56.868039 1120970 logs.go:276] 0 containers: []
	W0729 19:45:56.868057 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:45:56.868065 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:45:56.868145 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:45:56.905275 1120970 cri.go:89] found id: ""
	I0729 19:45:56.905311 1120970 logs.go:276] 0 containers: []
	W0729 19:45:56.905322 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:45:56.905330 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:45:56.905401 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:45:56.938507 1120970 cri.go:89] found id: ""
	I0729 19:45:56.938537 1120970 logs.go:276] 0 containers: []
	W0729 19:45:56.938546 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:45:56.938553 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:45:56.938607 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:45:56.974424 1120970 cri.go:89] found id: ""
	I0729 19:45:56.974456 1120970 logs.go:276] 0 containers: []
	W0729 19:45:56.974468 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:45:56.974476 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:45:56.974543 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:45:57.008152 1120970 cri.go:89] found id: ""
	I0729 19:45:57.008191 1120970 logs.go:276] 0 containers: []
	W0729 19:45:57.008203 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:45:57.008211 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:45:57.008281 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:45:57.043904 1120970 cri.go:89] found id: ""
	I0729 19:45:57.043942 1120970 logs.go:276] 0 containers: []
	W0729 19:45:57.043953 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:45:57.043961 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:45:57.044038 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:45:57.078239 1120970 cri.go:89] found id: ""
	I0729 19:45:57.078268 1120970 logs.go:276] 0 containers: []
	W0729 19:45:57.078277 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:45:57.078286 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:45:57.078299 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:45:57.125135 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:45:57.125170 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:45:57.177926 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:45:57.177968 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:45:57.192316 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:45:57.192354 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:45:57.267034 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:45:57.267059 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:45:57.267078 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:45:54.213977 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:56.695532 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:55.637424 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:58.137534 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:56.780087 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:59.278164 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:59.849254 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:59.863328 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:45:59.863437 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:45:59.900024 1120970 cri.go:89] found id: ""
	I0729 19:45:59.900051 1120970 logs.go:276] 0 containers: []
	W0729 19:45:59.900060 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:45:59.900067 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:45:59.900128 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:45:59.935272 1120970 cri.go:89] found id: ""
	I0729 19:45:59.935308 1120970 logs.go:276] 0 containers: []
	W0729 19:45:59.935319 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:45:59.935328 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:45:59.935404 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:45:59.967684 1120970 cri.go:89] found id: ""
	I0729 19:45:59.967712 1120970 logs.go:276] 0 containers: []
	W0729 19:45:59.967725 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:45:59.967733 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:45:59.967791 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:46:00.003354 1120970 cri.go:89] found id: ""
	I0729 19:46:00.003386 1120970 logs.go:276] 0 containers: []
	W0729 19:46:00.003397 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:46:00.003404 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:46:00.003479 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:46:00.042266 1120970 cri.go:89] found id: ""
	I0729 19:46:00.042311 1120970 logs.go:276] 0 containers: []
	W0729 19:46:00.042330 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:46:00.042344 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:46:00.042419 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:46:00.081056 1120970 cri.go:89] found id: ""
	I0729 19:46:00.081085 1120970 logs.go:276] 0 containers: []
	W0729 19:46:00.081095 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:46:00.081102 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:46:00.081179 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:46:00.114102 1120970 cri.go:89] found id: ""
	I0729 19:46:00.114138 1120970 logs.go:276] 0 containers: []
	W0729 19:46:00.114153 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:46:00.114161 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:46:00.114229 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:46:00.152891 1120970 cri.go:89] found id: ""
	I0729 19:46:00.152919 1120970 logs.go:276] 0 containers: []
	W0729 19:46:00.152930 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:46:00.152942 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:46:00.152961 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:46:00.225895 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:46:00.225926 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:46:00.225944 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:46:00.306359 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:46:00.306397 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:46:00.348266 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:46:00.348305 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:46:00.401402 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:46:00.401452 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:46:02.917392 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:46:02.931221 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:46:02.931308 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:46:02.965808 1120970 cri.go:89] found id: ""
	I0729 19:46:02.965839 1120970 logs.go:276] 0 containers: []
	W0729 19:46:02.965850 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:46:02.965857 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:46:02.965924 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:45:59.195460 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:01.195742 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:03.196310 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:00.138417 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:02.637927 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:01.278771 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:03.279480 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:05.778549 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:03.003125 1120970 cri.go:89] found id: ""
	I0729 19:46:03.003152 1120970 logs.go:276] 0 containers: []
	W0729 19:46:03.003161 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:46:03.003168 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:46:03.003222 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:46:03.042782 1120970 cri.go:89] found id: ""
	I0729 19:46:03.042816 1120970 logs.go:276] 0 containers: []
	W0729 19:46:03.042827 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:46:03.042835 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:46:03.042922 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:46:03.082857 1120970 cri.go:89] found id: ""
	I0729 19:46:03.082891 1120970 logs.go:276] 0 containers: []
	W0729 19:46:03.082910 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:46:03.082918 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:46:03.082975 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:46:03.118096 1120970 cri.go:89] found id: ""
	I0729 19:46:03.118127 1120970 logs.go:276] 0 containers: []
	W0729 19:46:03.118147 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:46:03.118156 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:46:03.118228 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:46:03.155950 1120970 cri.go:89] found id: ""
	I0729 19:46:03.155983 1120970 logs.go:276] 0 containers: []
	W0729 19:46:03.155995 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:46:03.156003 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:46:03.156076 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:46:03.192698 1120970 cri.go:89] found id: ""
	I0729 19:46:03.192729 1120970 logs.go:276] 0 containers: []
	W0729 19:46:03.192741 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:46:03.192749 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:46:03.192822 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:46:03.230228 1120970 cri.go:89] found id: ""
	I0729 19:46:03.230261 1120970 logs.go:276] 0 containers: []
	W0729 19:46:03.230275 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:46:03.230292 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:46:03.230310 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:46:03.269169 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:46:03.269204 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:46:03.325724 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:46:03.325765 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:46:03.339955 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:46:03.339986 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:46:03.415795 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:46:03.415823 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:46:03.415839 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:46:06.002947 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:46:06.017334 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:46:06.017422 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:46:06.051132 1120970 cri.go:89] found id: ""
	I0729 19:46:06.051161 1120970 logs.go:276] 0 containers: []
	W0729 19:46:06.051169 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:46:06.051182 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:46:06.051248 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:46:06.085156 1120970 cri.go:89] found id: ""
	I0729 19:46:06.085185 1120970 logs.go:276] 0 containers: []
	W0729 19:46:06.085194 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:46:06.085200 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:46:06.085252 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:46:06.122263 1120970 cri.go:89] found id: ""
	I0729 19:46:06.122296 1120970 logs.go:276] 0 containers: []
	W0729 19:46:06.122303 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:46:06.122309 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:46:06.122377 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:46:06.158066 1120970 cri.go:89] found id: ""
	I0729 19:46:06.158093 1120970 logs.go:276] 0 containers: []
	W0729 19:46:06.158102 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:46:06.158109 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:46:06.158161 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:46:06.193082 1120970 cri.go:89] found id: ""
	I0729 19:46:06.193109 1120970 logs.go:276] 0 containers: []
	W0729 19:46:06.193117 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:46:06.193125 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:46:06.193188 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:46:06.226239 1120970 cri.go:89] found id: ""
	I0729 19:46:06.226276 1120970 logs.go:276] 0 containers: []
	W0729 19:46:06.226285 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:46:06.226292 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:46:06.226346 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:46:06.262648 1120970 cri.go:89] found id: ""
	I0729 19:46:06.262686 1120970 logs.go:276] 0 containers: []
	W0729 19:46:06.262697 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:46:06.262703 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:46:06.262769 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:46:06.304018 1120970 cri.go:89] found id: ""
	I0729 19:46:06.304047 1120970 logs.go:276] 0 containers: []
	W0729 19:46:06.304056 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:46:06.304066 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:46:06.304078 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:46:06.345240 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:46:06.345269 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:46:06.399728 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:46:06.399768 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:46:06.415271 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:46:06.415312 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:46:06.492320 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:46:06.492342 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:46:06.492361 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:46:05.695149 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:08.196040 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:05.136979 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:07.137588 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:09.140728 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:08.278537 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:10.278751 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:09.070966 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:46:09.084876 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:46:09.084957 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:46:09.123177 1120970 cri.go:89] found id: ""
	I0729 19:46:09.123209 1120970 logs.go:276] 0 containers: []
	W0729 19:46:09.123220 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:46:09.123227 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:46:09.123300 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:46:09.162546 1120970 cri.go:89] found id: ""
	I0729 19:46:09.162593 1120970 logs.go:276] 0 containers: []
	W0729 19:46:09.162605 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:46:09.162614 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:46:09.162682 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:46:09.198047 1120970 cri.go:89] found id: ""
	I0729 19:46:09.198075 1120970 logs.go:276] 0 containers: []
	W0729 19:46:09.198084 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:46:09.198091 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:46:09.198165 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:46:09.231929 1120970 cri.go:89] found id: ""
	I0729 19:46:09.231962 1120970 logs.go:276] 0 containers: []
	W0729 19:46:09.231973 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:46:09.231982 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:46:09.232051 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:46:09.269543 1120970 cri.go:89] found id: ""
	I0729 19:46:09.269574 1120970 logs.go:276] 0 containers: []
	W0729 19:46:09.269585 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:46:09.269593 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:46:09.269665 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:46:09.304012 1120970 cri.go:89] found id: ""
	I0729 19:46:09.304042 1120970 logs.go:276] 0 containers: []
	W0729 19:46:09.304051 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:46:09.304057 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:46:09.304110 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:46:09.340266 1120970 cri.go:89] found id: ""
	I0729 19:46:09.340302 1120970 logs.go:276] 0 containers: []
	W0729 19:46:09.340315 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:46:09.340323 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:46:09.340402 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:46:09.373855 1120970 cri.go:89] found id: ""
	I0729 19:46:09.373884 1120970 logs.go:276] 0 containers: []
	W0729 19:46:09.373892 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:46:09.373902 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:46:09.373916 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:46:09.434007 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:46:09.434047 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:46:09.448138 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:46:09.448168 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:46:09.523836 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:46:09.523866 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:46:09.523884 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:46:09.605562 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:46:09.605602 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:46:12.147573 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:46:12.162219 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:46:12.162307 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:46:12.197420 1120970 cri.go:89] found id: ""
	I0729 19:46:12.197446 1120970 logs.go:276] 0 containers: []
	W0729 19:46:12.197454 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:46:12.197460 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:46:12.197511 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:46:12.236008 1120970 cri.go:89] found id: ""
	I0729 19:46:12.236042 1120970 logs.go:276] 0 containers: []
	W0729 19:46:12.236052 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:46:12.236058 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:46:12.236125 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:46:12.279184 1120970 cri.go:89] found id: ""
	I0729 19:46:12.279208 1120970 logs.go:276] 0 containers: []
	W0729 19:46:12.279216 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:46:12.279222 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:46:12.279273 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:46:12.319020 1120970 cri.go:89] found id: ""
	I0729 19:46:12.319061 1120970 logs.go:276] 0 containers: []
	W0729 19:46:12.319072 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:46:12.319083 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:46:12.319140 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:46:12.354552 1120970 cri.go:89] found id: ""
	I0729 19:46:12.354591 1120970 logs.go:276] 0 containers: []
	W0729 19:46:12.354600 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:46:12.354606 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:46:12.354664 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:46:12.389196 1120970 cri.go:89] found id: ""
	I0729 19:46:12.389232 1120970 logs.go:276] 0 containers: []
	W0729 19:46:12.389242 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:46:12.389251 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:46:12.389351 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:46:12.425713 1120970 cri.go:89] found id: ""
	I0729 19:46:12.425751 1120970 logs.go:276] 0 containers: []
	W0729 19:46:12.425767 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:46:12.425776 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:46:12.425851 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:46:12.461092 1120970 cri.go:89] found id: ""
	I0729 19:46:12.461123 1120970 logs.go:276] 0 containers: []
	W0729 19:46:12.461132 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:46:12.461142 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:46:12.461162 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:46:12.537550 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:46:12.537594 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:46:12.578558 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:46:12.578597 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:46:12.629269 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:46:12.629310 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:46:12.644176 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:46:12.644202 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:46:12.717070 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:46:10.695776 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:12.696260 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:11.637812 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:14.137356 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:12.778309 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:15.278853 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:15.218239 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:46:15.232163 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:46:15.232236 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:46:15.268490 1120970 cri.go:89] found id: ""
	I0729 19:46:15.268520 1120970 logs.go:276] 0 containers: []
	W0729 19:46:15.268532 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:46:15.268539 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:46:15.268621 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:46:15.303437 1120970 cri.go:89] found id: ""
	I0729 19:46:15.303473 1120970 logs.go:276] 0 containers: []
	W0729 19:46:15.303485 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:46:15.303493 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:46:15.303557 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:46:15.340676 1120970 cri.go:89] found id: ""
	I0729 19:46:15.340706 1120970 logs.go:276] 0 containers: []
	W0729 19:46:15.340717 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:46:15.340725 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:46:15.340798 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:46:15.376731 1120970 cri.go:89] found id: ""
	I0729 19:46:15.376764 1120970 logs.go:276] 0 containers: []
	W0729 19:46:15.376775 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:46:15.376783 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:46:15.376854 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:46:15.412493 1120970 cri.go:89] found id: ""
	I0729 19:46:15.412524 1120970 logs.go:276] 0 containers: []
	W0729 19:46:15.412533 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:46:15.412541 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:46:15.412614 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:46:15.448795 1120970 cri.go:89] found id: ""
	I0729 19:46:15.448830 1120970 logs.go:276] 0 containers: []
	W0729 19:46:15.448842 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:46:15.448850 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:46:15.448923 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:46:15.484048 1120970 cri.go:89] found id: ""
	I0729 19:46:15.484082 1120970 logs.go:276] 0 containers: []
	W0729 19:46:15.484100 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:46:15.484108 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:46:15.484172 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:46:15.520340 1120970 cri.go:89] found id: ""
	I0729 19:46:15.520370 1120970 logs.go:276] 0 containers: []
	W0729 19:46:15.520380 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:46:15.520389 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:46:15.520408 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:46:15.568837 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:46:15.568877 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:46:15.582958 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:46:15.582993 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:46:15.653880 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:46:15.653901 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:46:15.653920 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:46:15.732652 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:46:15.732691 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:46:15.194855 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:17.196069 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:16.137961 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:18.139896 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:17.779000 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:19.779635 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:18.273795 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:46:18.288991 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:46:18.289066 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:46:18.327583 1120970 cri.go:89] found id: ""
	I0729 19:46:18.327619 1120970 logs.go:276] 0 containers: []
	W0729 19:46:18.327631 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:46:18.327639 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:46:18.327716 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:46:18.361476 1120970 cri.go:89] found id: ""
	I0729 19:46:18.361504 1120970 logs.go:276] 0 containers: []
	W0729 19:46:18.361515 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:46:18.361523 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:46:18.361590 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:46:18.401842 1120970 cri.go:89] found id: ""
	I0729 19:46:18.401873 1120970 logs.go:276] 0 containers: []
	W0729 19:46:18.401884 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:46:18.401892 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:46:18.401965 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:46:18.439870 1120970 cri.go:89] found id: ""
	I0729 19:46:18.439905 1120970 logs.go:276] 0 containers: []
	W0729 19:46:18.439920 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:46:18.439929 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:46:18.440015 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:46:18.474916 1120970 cri.go:89] found id: ""
	I0729 19:46:18.474944 1120970 logs.go:276] 0 containers: []
	W0729 19:46:18.474953 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:46:18.474960 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:46:18.475033 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:46:18.509957 1120970 cri.go:89] found id: ""
	I0729 19:46:18.509984 1120970 logs.go:276] 0 containers: []
	W0729 19:46:18.509993 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:46:18.509999 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:46:18.510064 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:46:18.545521 1120970 cri.go:89] found id: ""
	I0729 19:46:18.545551 1120970 logs.go:276] 0 containers: []
	W0729 19:46:18.545564 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:46:18.545573 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:46:18.545646 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:46:18.579041 1120970 cri.go:89] found id: ""
	I0729 19:46:18.579072 1120970 logs.go:276] 0 containers: []
	W0729 19:46:18.579080 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:46:18.579091 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:46:18.579104 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:46:18.653041 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:46:18.653063 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:46:18.653077 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:46:18.732969 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:46:18.733035 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:46:18.773700 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:46:18.773735 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:46:18.826511 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:46:18.826553 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:46:21.340974 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:46:21.354608 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:46:21.354671 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:46:21.388765 1120970 cri.go:89] found id: ""
	I0729 19:46:21.388795 1120970 logs.go:276] 0 containers: []
	W0729 19:46:21.388806 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:46:21.388814 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:46:21.388909 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:46:21.426734 1120970 cri.go:89] found id: ""
	I0729 19:46:21.426764 1120970 logs.go:276] 0 containers: []
	W0729 19:46:21.426776 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:46:21.426784 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:46:21.426861 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:46:21.462965 1120970 cri.go:89] found id: ""
	I0729 19:46:21.462999 1120970 logs.go:276] 0 containers: []
	W0729 19:46:21.463010 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:46:21.463018 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:46:21.463087 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:46:21.496933 1120970 cri.go:89] found id: ""
	I0729 19:46:21.496961 1120970 logs.go:276] 0 containers: []
	W0729 19:46:21.496972 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:46:21.496980 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:46:21.497043 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:46:21.532648 1120970 cri.go:89] found id: ""
	I0729 19:46:21.532682 1120970 logs.go:276] 0 containers: []
	W0729 19:46:21.532695 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:46:21.532703 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:46:21.532777 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:46:21.566507 1120970 cri.go:89] found id: ""
	I0729 19:46:21.566545 1120970 logs.go:276] 0 containers: []
	W0729 19:46:21.566556 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:46:21.566567 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:46:21.566652 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:46:21.605591 1120970 cri.go:89] found id: ""
	I0729 19:46:21.605624 1120970 logs.go:276] 0 containers: []
	W0729 19:46:21.605635 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:46:21.605644 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:46:21.605711 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:46:21.639979 1120970 cri.go:89] found id: ""
	I0729 19:46:21.640004 1120970 logs.go:276] 0 containers: []
	W0729 19:46:21.640012 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:46:21.640020 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:46:21.640035 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:46:21.694405 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:46:21.694450 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:46:21.708616 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:46:21.708647 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:46:21.778528 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:46:21.778567 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:46:21.778583 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:46:21.859626 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:46:21.859661 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:46:19.696385 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:22.195265 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:20.638331 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:23.138907 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:21.779848 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:24.278815 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:24.397520 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:46:24.412579 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:46:24.412673 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:46:24.452586 1120970 cri.go:89] found id: ""
	I0729 19:46:24.452621 1120970 logs.go:276] 0 containers: []
	W0729 19:46:24.452633 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:46:24.452640 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:46:24.452856 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:46:24.487706 1120970 cri.go:89] found id: ""
	I0729 19:46:24.487739 1120970 logs.go:276] 0 containers: []
	W0729 19:46:24.487750 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:46:24.487758 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:46:24.487828 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:46:24.528798 1120970 cri.go:89] found id: ""
	I0729 19:46:24.528832 1120970 logs.go:276] 0 containers: []
	W0729 19:46:24.528844 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:46:24.528852 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:46:24.528926 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:46:24.566429 1120970 cri.go:89] found id: ""
	I0729 19:46:24.566464 1120970 logs.go:276] 0 containers: []
	W0729 19:46:24.566484 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:46:24.566497 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:46:24.566561 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:46:24.601216 1120970 cri.go:89] found id: ""
	I0729 19:46:24.601242 1120970 logs.go:276] 0 containers: []
	W0729 19:46:24.601249 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:46:24.601255 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:46:24.601318 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:46:24.635591 1120970 cri.go:89] found id: ""
	I0729 19:46:24.635636 1120970 logs.go:276] 0 containers: []
	W0729 19:46:24.635648 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:46:24.635655 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:46:24.635723 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:46:24.670674 1120970 cri.go:89] found id: ""
	I0729 19:46:24.670705 1120970 logs.go:276] 0 containers: []
	W0729 19:46:24.670717 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:46:24.670724 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:46:24.670795 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:46:24.704820 1120970 cri.go:89] found id: ""
	I0729 19:46:24.704850 1120970 logs.go:276] 0 containers: []
	W0729 19:46:24.704861 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:46:24.704873 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:46:24.704889 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:46:24.787954 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:46:24.787989 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:46:24.849396 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:46:24.849433 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:46:24.900920 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:46:24.900956 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:46:24.915540 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:46:24.915572 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:46:24.986146 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:46:27.487069 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:46:27.500718 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:46:27.500802 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:46:27.535156 1120970 cri.go:89] found id: ""
	I0729 19:46:27.535188 1120970 logs.go:276] 0 containers: []
	W0729 19:46:27.535199 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:46:27.535206 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:46:27.535272 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:46:27.570613 1120970 cri.go:89] found id: ""
	I0729 19:46:27.570647 1120970 logs.go:276] 0 containers: []
	W0729 19:46:27.570658 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:46:27.570666 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:46:27.570726 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:46:27.605503 1120970 cri.go:89] found id: ""
	I0729 19:46:27.605540 1120970 logs.go:276] 0 containers: []
	W0729 19:46:27.605552 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:46:27.605560 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:46:27.605628 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:46:27.638179 1120970 cri.go:89] found id: ""
	I0729 19:46:27.638202 1120970 logs.go:276] 0 containers: []
	W0729 19:46:27.638209 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:46:27.638215 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:46:27.638265 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:46:27.671019 1120970 cri.go:89] found id: ""
	I0729 19:46:27.671049 1120970 logs.go:276] 0 containers: []
	W0729 19:46:27.671059 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:46:27.671067 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:46:27.671133 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:46:27.704126 1120970 cri.go:89] found id: ""
	I0729 19:46:27.704148 1120970 logs.go:276] 0 containers: []
	W0729 19:46:27.704155 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:46:27.704161 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:46:27.704217 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:46:27.736106 1120970 cri.go:89] found id: ""
	I0729 19:46:27.736137 1120970 logs.go:276] 0 containers: []
	W0729 19:46:27.736148 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:46:27.736162 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:46:27.736234 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:46:27.775615 1120970 cri.go:89] found id: ""
	I0729 19:46:27.775644 1120970 logs.go:276] 0 containers: []
	W0729 19:46:27.775655 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:46:27.775666 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:46:27.775681 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:46:27.817852 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:46:27.817882 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:46:27.867280 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:46:27.867319 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:46:27.880533 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:46:27.880558 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:46:27.952098 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:46:27.952120 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:46:27.952138 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:46:24.195374 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:26.696327 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:25.637615 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:28.138222 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:26.779021 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:29.279227 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:30.534052 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:46:30.560617 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:46:30.560704 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:46:30.594317 1120970 cri.go:89] found id: ""
	I0729 19:46:30.594354 1120970 logs.go:276] 0 containers: []
	W0729 19:46:30.594365 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:46:30.594372 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:46:30.594438 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:46:30.629175 1120970 cri.go:89] found id: ""
	I0729 19:46:30.629202 1120970 logs.go:276] 0 containers: []
	W0729 19:46:30.629213 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:46:30.629278 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:46:30.629358 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:46:30.663173 1120970 cri.go:89] found id: ""
	I0729 19:46:30.663199 1120970 logs.go:276] 0 containers: []
	W0729 19:46:30.663207 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:46:30.663212 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:46:30.663271 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:46:30.695709 1120970 cri.go:89] found id: ""
	I0729 19:46:30.695729 1120970 logs.go:276] 0 containers: []
	W0729 19:46:30.695738 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:46:30.695745 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:46:30.695808 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:46:30.726555 1120970 cri.go:89] found id: ""
	I0729 19:46:30.726582 1120970 logs.go:276] 0 containers: []
	W0729 19:46:30.726589 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:46:30.726597 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:46:30.726658 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:46:30.759818 1120970 cri.go:89] found id: ""
	I0729 19:46:30.759847 1120970 logs.go:276] 0 containers: []
	W0729 19:46:30.759859 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:46:30.759865 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:46:30.759928 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:46:30.794006 1120970 cri.go:89] found id: ""
	I0729 19:46:30.794038 1120970 logs.go:276] 0 containers: []
	W0729 19:46:30.794051 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:46:30.794058 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:46:30.794127 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:46:30.825707 1120970 cri.go:89] found id: ""
	I0729 19:46:30.825735 1120970 logs.go:276] 0 containers: []
	W0729 19:46:30.825744 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:46:30.825753 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:46:30.825767 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:46:30.877517 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:46:30.877553 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:46:30.890777 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:46:30.890811 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:46:30.956702 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:46:30.956732 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:46:30.956747 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:46:31.039080 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:46:31.039118 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:46:29.195305 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:31.694814 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:33.696603 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:30.638472 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:33.138085 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:31.279889 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:33.779333 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:33.580120 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:46:33.595087 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:46:33.595152 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:46:33.636347 1120970 cri.go:89] found id: ""
	I0729 19:46:33.636374 1120970 logs.go:276] 0 containers: []
	W0729 19:46:33.636385 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:46:33.636392 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:46:33.636451 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:46:33.674180 1120970 cri.go:89] found id: ""
	I0729 19:46:33.674207 1120970 logs.go:276] 0 containers: []
	W0729 19:46:33.674215 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:46:33.674222 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:46:33.674281 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:46:33.709549 1120970 cri.go:89] found id: ""
	I0729 19:46:33.709572 1120970 logs.go:276] 0 containers: []
	W0729 19:46:33.709581 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:46:33.709593 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:46:33.709651 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:46:33.742803 1120970 cri.go:89] found id: ""
	I0729 19:46:33.742833 1120970 logs.go:276] 0 containers: []
	W0729 19:46:33.742854 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:46:33.742863 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:46:33.742931 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:46:33.776301 1120970 cri.go:89] found id: ""
	I0729 19:46:33.776329 1120970 logs.go:276] 0 containers: []
	W0729 19:46:33.776336 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:46:33.776342 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:46:33.776412 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:46:33.818972 1120970 cri.go:89] found id: ""
	I0729 19:46:33.819001 1120970 logs.go:276] 0 containers: []
	W0729 19:46:33.819009 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:46:33.819016 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:46:33.819084 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:46:33.857970 1120970 cri.go:89] found id: ""
	I0729 19:46:33.858002 1120970 logs.go:276] 0 containers: []
	W0729 19:46:33.858022 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:46:33.858028 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:46:33.858113 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:46:33.896207 1120970 cri.go:89] found id: ""
	I0729 19:46:33.896237 1120970 logs.go:276] 0 containers: []
	W0729 19:46:33.896248 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:46:33.896261 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:46:33.896276 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:46:33.976843 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:46:33.976879 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:46:34.015642 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:46:34.015671 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:46:34.066095 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:46:34.066133 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:46:34.079616 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:46:34.079649 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:46:34.150666 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:46:36.651722 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:46:36.665599 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:46:36.665673 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:46:36.702807 1120970 cri.go:89] found id: ""
	I0729 19:46:36.702872 1120970 logs.go:276] 0 containers: []
	W0729 19:46:36.702897 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:46:36.702907 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:46:36.702978 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:46:36.739552 1120970 cri.go:89] found id: ""
	I0729 19:46:36.739576 1120970 logs.go:276] 0 containers: []
	W0729 19:46:36.739585 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:46:36.739591 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:46:36.739643 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:46:36.774989 1120970 cri.go:89] found id: ""
	I0729 19:46:36.775017 1120970 logs.go:276] 0 containers: []
	W0729 19:46:36.775028 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:46:36.775036 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:46:36.775108 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:46:36.814984 1120970 cri.go:89] found id: ""
	I0729 19:46:36.815017 1120970 logs.go:276] 0 containers: []
	W0729 19:46:36.815034 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:46:36.815044 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:46:36.815113 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:46:36.848075 1120970 cri.go:89] found id: ""
	I0729 19:46:36.848116 1120970 logs.go:276] 0 containers: []
	W0729 19:46:36.848127 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:46:36.848136 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:46:36.848206 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:46:36.880504 1120970 cri.go:89] found id: ""
	I0729 19:46:36.880535 1120970 logs.go:276] 0 containers: []
	W0729 19:46:36.880544 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:46:36.880557 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:46:36.880615 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:46:36.914716 1120970 cri.go:89] found id: ""
	I0729 19:46:36.914744 1120970 logs.go:276] 0 containers: []
	W0729 19:46:36.914755 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:46:36.914763 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:46:36.914831 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:46:36.958975 1120970 cri.go:89] found id: ""
	I0729 19:46:36.959005 1120970 logs.go:276] 0 containers: []
	W0729 19:46:36.959016 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:46:36.959029 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:46:36.959046 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:46:37.018208 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:46:37.018244 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:46:37.042496 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:46:37.042537 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:46:37.112833 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:46:37.112861 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:46:37.112877 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:46:37.191572 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:46:37.191616 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:46:36.195356 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:38.694730 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:35.637513 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:38.137458 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:36.278153 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:38.778586 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:39.736044 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:46:39.749645 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:46:39.749719 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:46:39.786131 1120970 cri.go:89] found id: ""
	I0729 19:46:39.786155 1120970 logs.go:276] 0 containers: []
	W0729 19:46:39.786166 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:46:39.786174 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:46:39.786237 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:46:39.820470 1120970 cri.go:89] found id: ""
	I0729 19:46:39.820499 1120970 logs.go:276] 0 containers: []
	W0729 19:46:39.820509 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:46:39.820516 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:46:39.820583 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:46:39.854119 1120970 cri.go:89] found id: ""
	I0729 19:46:39.854148 1120970 logs.go:276] 0 containers: []
	W0729 19:46:39.854157 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:46:39.854163 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:46:39.854218 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:46:39.894676 1120970 cri.go:89] found id: ""
	I0729 19:46:39.894707 1120970 logs.go:276] 0 containers: []
	W0729 19:46:39.894714 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:46:39.894721 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:46:39.894789 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:46:39.932651 1120970 cri.go:89] found id: ""
	I0729 19:46:39.932685 1120970 logs.go:276] 0 containers: []
	W0729 19:46:39.932697 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:46:39.932705 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:46:39.932776 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:46:39.968119 1120970 cri.go:89] found id: ""
	I0729 19:46:39.968153 1120970 logs.go:276] 0 containers: []
	W0729 19:46:39.968165 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:46:39.968174 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:46:39.968242 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:46:40.004137 1120970 cri.go:89] found id: ""
	I0729 19:46:40.004167 1120970 logs.go:276] 0 containers: []
	W0729 19:46:40.004175 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:46:40.004181 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:46:40.004252 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:46:40.042519 1120970 cri.go:89] found id: ""
	I0729 19:46:40.042552 1120970 logs.go:276] 0 containers: []
	W0729 19:46:40.042563 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:46:40.042577 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:46:40.042601 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:46:40.118691 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:46:40.118720 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:46:40.118733 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:46:40.198249 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:46:40.198279 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:46:40.236828 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:46:40.236861 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:46:40.290890 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:46:40.290920 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:46:42.804834 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:46:42.818516 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:46:42.818608 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:46:42.855519 1120970 cri.go:89] found id: ""
	I0729 19:46:42.855553 1120970 logs.go:276] 0 containers: []
	W0729 19:46:42.855565 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:46:42.855573 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:46:42.855634 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:46:42.891795 1120970 cri.go:89] found id: ""
	I0729 19:46:42.891827 1120970 logs.go:276] 0 containers: []
	W0729 19:46:42.891837 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:46:42.891845 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:46:42.891912 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:46:42.925308 1120970 cri.go:89] found id: ""
	I0729 19:46:42.925341 1120970 logs.go:276] 0 containers: []
	W0729 19:46:42.925352 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:46:42.925359 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:46:42.925428 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:46:42.961943 1120970 cri.go:89] found id: ""
	I0729 19:46:42.961968 1120970 logs.go:276] 0 containers: []
	W0729 19:46:42.961976 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:46:42.961984 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:46:42.962034 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:46:41.194992 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:43.195814 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:40.138881 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:42.637095 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:44.637746 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:41.278451 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:43.279669 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:45.778954 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:42.994246 1120970 cri.go:89] found id: ""
	I0729 19:46:42.994276 1120970 logs.go:276] 0 containers: []
	W0729 19:46:42.994284 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:46:42.994290 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:46:42.994406 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:46:43.027914 1120970 cri.go:89] found id: ""
	I0729 19:46:43.027943 1120970 logs.go:276] 0 containers: []
	W0729 19:46:43.027953 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:46:43.027962 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:46:43.028029 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:46:43.064274 1120970 cri.go:89] found id: ""
	I0729 19:46:43.064308 1120970 logs.go:276] 0 containers: []
	W0729 19:46:43.064319 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:46:43.064328 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:46:43.064402 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:46:43.104273 1120970 cri.go:89] found id: ""
	I0729 19:46:43.104303 1120970 logs.go:276] 0 containers: []
	W0729 19:46:43.104313 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:46:43.104324 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:46:43.104342 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:46:43.175951 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:46:43.175978 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:46:43.175995 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:46:43.253386 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:46:43.253421 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:46:43.293276 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:46:43.293304 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:46:43.345865 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:46:43.345896 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:46:45.861099 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:46:45.875854 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:46:45.875925 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:46:45.914780 1120970 cri.go:89] found id: ""
	I0729 19:46:45.914815 1120970 logs.go:276] 0 containers: []
	W0729 19:46:45.914827 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:46:45.914837 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:46:45.914925 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:46:45.952575 1120970 cri.go:89] found id: ""
	I0729 19:46:45.952607 1120970 logs.go:276] 0 containers: []
	W0729 19:46:45.952616 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:46:45.952622 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:46:45.952676 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:46:45.993298 1120970 cri.go:89] found id: ""
	I0729 19:46:45.993331 1120970 logs.go:276] 0 containers: []
	W0729 19:46:45.993338 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:46:45.993344 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:46:45.993400 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:46:46.033190 1120970 cri.go:89] found id: ""
	I0729 19:46:46.033216 1120970 logs.go:276] 0 containers: []
	W0729 19:46:46.033225 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:46:46.033230 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:46:46.033283 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:46:46.068694 1120970 cri.go:89] found id: ""
	I0729 19:46:46.068728 1120970 logs.go:276] 0 containers: []
	W0729 19:46:46.068737 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:46:46.068743 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:46:46.068796 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:46:46.101678 1120970 cri.go:89] found id: ""
	I0729 19:46:46.101716 1120970 logs.go:276] 0 containers: []
	W0729 19:46:46.101726 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:46:46.101733 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:46:46.101788 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:46:46.141669 1120970 cri.go:89] found id: ""
	I0729 19:46:46.141702 1120970 logs.go:276] 0 containers: []
	W0729 19:46:46.141713 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:46:46.141721 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:46:46.141780 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:46:46.173182 1120970 cri.go:89] found id: ""
	I0729 19:46:46.173213 1120970 logs.go:276] 0 containers: []
	W0729 19:46:46.173224 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:46:46.173235 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:46:46.173252 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:46:46.224615 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:46:46.224660 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:46:46.237889 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:46:46.237915 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:46:46.312446 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:46:46.312473 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:46:46.312489 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:46:46.389168 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:46:46.389206 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:46:45.196687 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:47.694428 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:46.638398 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:48.639437 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:48.277740 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:50.278638 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:48.930620 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:46:48.944038 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:46:48.944101 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:46:48.979672 1120970 cri.go:89] found id: ""
	I0729 19:46:48.979710 1120970 logs.go:276] 0 containers: []
	W0729 19:46:48.979722 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:46:48.979730 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:46:48.979804 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:46:49.014931 1120970 cri.go:89] found id: ""
	I0729 19:46:49.014967 1120970 logs.go:276] 0 containers: []
	W0729 19:46:49.014980 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:46:49.015006 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:46:49.015078 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:46:49.050867 1120970 cri.go:89] found id: ""
	I0729 19:46:49.050903 1120970 logs.go:276] 0 containers: []
	W0729 19:46:49.050916 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:46:49.050924 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:46:49.050992 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:46:49.085479 1120970 cri.go:89] found id: ""
	I0729 19:46:49.085514 1120970 logs.go:276] 0 containers: []
	W0729 19:46:49.085521 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:46:49.085529 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:46:49.085604 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:46:49.118570 1120970 cri.go:89] found id: ""
	I0729 19:46:49.118597 1120970 logs.go:276] 0 containers: []
	W0729 19:46:49.118605 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:46:49.118611 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:46:49.118664 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:46:49.153581 1120970 cri.go:89] found id: ""
	I0729 19:46:49.153612 1120970 logs.go:276] 0 containers: []
	W0729 19:46:49.153624 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:46:49.153632 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:46:49.153702 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:46:49.187178 1120970 cri.go:89] found id: ""
	I0729 19:46:49.187207 1120970 logs.go:276] 0 containers: []
	W0729 19:46:49.187215 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:46:49.187221 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:46:49.187280 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:46:49.223132 1120970 cri.go:89] found id: ""
	I0729 19:46:49.223163 1120970 logs.go:276] 0 containers: []
	W0729 19:46:49.223173 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:46:49.223185 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:46:49.223200 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:46:49.274160 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:46:49.274189 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:46:49.288399 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:46:49.288431 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:46:49.358452 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:46:49.358478 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:46:49.358496 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:46:49.436711 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:46:49.436745 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:46:51.977377 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:46:51.991042 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:46:51.991102 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:46:52.031425 1120970 cri.go:89] found id: ""
	I0729 19:46:52.031467 1120970 logs.go:276] 0 containers: []
	W0729 19:46:52.031477 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:46:52.031482 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:46:52.031557 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:46:52.069014 1120970 cri.go:89] found id: ""
	I0729 19:46:52.069045 1120970 logs.go:276] 0 containers: []
	W0729 19:46:52.069056 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:46:52.069064 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:46:52.069137 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:46:52.101974 1120970 cri.go:89] found id: ""
	I0729 19:46:52.102000 1120970 logs.go:276] 0 containers: []
	W0729 19:46:52.102008 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:46:52.102014 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:46:52.102079 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:46:52.136232 1120970 cri.go:89] found id: ""
	I0729 19:46:52.136261 1120970 logs.go:276] 0 containers: []
	W0729 19:46:52.136271 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:46:52.136280 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:46:52.136344 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:46:52.173555 1120970 cri.go:89] found id: ""
	I0729 19:46:52.173585 1120970 logs.go:276] 0 containers: []
	W0729 19:46:52.173602 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:46:52.173611 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:46:52.173675 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:46:52.208764 1120970 cri.go:89] found id: ""
	I0729 19:46:52.208791 1120970 logs.go:276] 0 containers: []
	W0729 19:46:52.208799 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:46:52.208805 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:46:52.208863 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:46:52.241514 1120970 cri.go:89] found id: ""
	I0729 19:46:52.241541 1120970 logs.go:276] 0 containers: []
	W0729 19:46:52.241557 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:46:52.241564 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:46:52.241639 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:46:52.277726 1120970 cri.go:89] found id: ""
	I0729 19:46:52.277753 1120970 logs.go:276] 0 containers: []
	W0729 19:46:52.277764 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:46:52.277775 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:46:52.277789 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:46:52.344894 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:46:52.344916 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:46:52.344931 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:46:52.421492 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:46:52.421527 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:46:52.460896 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:46:52.460934 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:46:52.509921 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:46:52.509960 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
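
Each "0 containers" line above comes from an empty `sudo crictl ps -a --quiet --name=<component>` probe, run once per control-plane component before the harness falls back to gathering kubelet, dmesg and CRI-O logs. A minimal standalone sketch of that same check, written here for illustration only (it is not minikube's own code, and it assumes crictl and passwordless sudo are available on the node):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainerIDs mirrors the probe seen in the log: `crictl ps -a --quiet
// --name=<component>` prints one container ID per line, or nothing at all
// when no container matches that name.
func listContainerIDs(component string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+component).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(strings.TrimSpace(string(out))), nil
}

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
	}
	for _, c := range components {
		ids, err := listContainerIDs(c)
		if err != nil {
			fmt.Printf("probe failed for %q: %v\n", c, err)
			continue
		}
		if len(ids) == 0 {
			// Corresponds to the repeated logs.go:278 warning in the output above.
			fmt.Printf("no container found matching %q\n", c)
			continue
		}
		fmt.Printf("%s: %d container(s): %v\n", c, len(ids), ids)
	}
}

An empty result for every component, as seen throughout this run, means CRI-O never started any control-plane container, which is also why the subsequent `kubectl describe nodes` attempts fail with connection refused.
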
	I0729 19:46:49.695616 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:51.696510 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:51.138012 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:53.138676 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:52.280019 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:54.778157 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:55.024046 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:46:55.037609 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:46:55.037681 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:46:55.071059 1120970 cri.go:89] found id: ""
	I0729 19:46:55.071086 1120970 logs.go:276] 0 containers: []
	W0729 19:46:55.071094 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:46:55.071102 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:46:55.071162 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:46:55.106634 1120970 cri.go:89] found id: ""
	I0729 19:46:55.106660 1120970 logs.go:276] 0 containers: []
	W0729 19:46:55.106669 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:46:55.106675 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:46:55.106737 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:46:55.138821 1120970 cri.go:89] found id: ""
	I0729 19:46:55.138858 1120970 logs.go:276] 0 containers: []
	W0729 19:46:55.138870 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:46:55.138878 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:46:55.138941 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:46:55.173846 1120970 cri.go:89] found id: ""
	I0729 19:46:55.173893 1120970 logs.go:276] 0 containers: []
	W0729 19:46:55.173904 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:46:55.173913 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:46:55.173978 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:46:55.211853 1120970 cri.go:89] found id: ""
	I0729 19:46:55.211878 1120970 logs.go:276] 0 containers: []
	W0729 19:46:55.211885 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:46:55.211891 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:46:55.211941 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:46:55.245432 1120970 cri.go:89] found id: ""
	I0729 19:46:55.245470 1120970 logs.go:276] 0 containers: []
	W0729 19:46:55.245481 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:46:55.245489 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:46:55.245557 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:46:55.286752 1120970 cri.go:89] found id: ""
	I0729 19:46:55.286777 1120970 logs.go:276] 0 containers: []
	W0729 19:46:55.286785 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:46:55.286791 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:46:55.286841 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:46:55.328070 1120970 cri.go:89] found id: ""
	I0729 19:46:55.328100 1120970 logs.go:276] 0 containers: []
	W0729 19:46:55.328119 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:46:55.328133 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:46:55.328151 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:46:55.341257 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:46:55.341285 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:46:55.410966 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:46:55.410989 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:46:55.411008 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:46:55.486615 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:46:55.486653 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:46:55.523615 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:46:55.523653 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:46:54.195887 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:56.703055 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:55.138951 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:57.638887 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:56.778215 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:59.278483 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
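
The interleaved pod_ready.go:102 lines come from three other test processes (pids 1119948, 1120280 and 1120587) polling their metrics-server pods, whose Ready condition never turns True. A small client-go sketch of the same Ready-condition check is shown below; it is not the harness's pod_ready.go, and the kubeconfig path and pod name are placeholders taken from the log:

package main

import (
	"context"
	"fmt"
	"os"
	"path/filepath"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Placeholders: the real run polls pods such as
	// metrics-server-569cc877fc-jsvnd in the kube-system namespace.
	kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
	podName := "metrics-server-569cc877fc-jsvnd"

	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		fmt.Println("load kubeconfig:", err)
		return
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		fmt.Println("build clientset:", err)
		return
	}
	pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), podName, metav1.GetOptions{})
	if err != nil {
		fmt.Println("get pod:", err)
		return
	}
	ready := false
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
			ready = true
		}
	}
	// Mirrors the pod_ready.go:102 lines: Ready stays "False" until the
	// metrics-server container is actually serving.
	fmt.Printf("pod %q Ready=%v\n", podName, ready)
}
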
	I0729 19:46:58.074596 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:46:58.088302 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:46:58.088396 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:46:58.124557 1120970 cri.go:89] found id: ""
	I0729 19:46:58.124589 1120970 logs.go:276] 0 containers: []
	W0729 19:46:58.124600 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:46:58.124608 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:46:58.124680 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:46:58.160107 1120970 cri.go:89] found id: ""
	I0729 19:46:58.160142 1120970 logs.go:276] 0 containers: []
	W0729 19:46:58.160151 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:46:58.160157 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:46:58.160214 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:46:58.195522 1120970 cri.go:89] found id: ""
	I0729 19:46:58.195553 1120970 logs.go:276] 0 containers: []
	W0729 19:46:58.195564 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:46:58.195572 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:46:58.195637 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:46:58.232307 1120970 cri.go:89] found id: ""
	I0729 19:46:58.232338 1120970 logs.go:276] 0 containers: []
	W0729 19:46:58.232348 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:46:58.232355 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:46:58.232419 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:46:58.271551 1120970 cri.go:89] found id: ""
	I0729 19:46:58.271602 1120970 logs.go:276] 0 containers: []
	W0729 19:46:58.271614 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:46:58.271622 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:46:58.271701 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:46:58.307833 1120970 cri.go:89] found id: ""
	I0729 19:46:58.307864 1120970 logs.go:276] 0 containers: []
	W0729 19:46:58.307875 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:46:58.307884 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:46:58.307951 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:46:58.341961 1120970 cri.go:89] found id: ""
	I0729 19:46:58.341989 1120970 logs.go:276] 0 containers: []
	W0729 19:46:58.341998 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:46:58.342004 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:46:58.342058 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:46:58.379923 1120970 cri.go:89] found id: ""
	I0729 19:46:58.379962 1120970 logs.go:276] 0 containers: []
	W0729 19:46:58.379972 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:46:58.379982 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:46:58.379997 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:46:58.423276 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:46:58.423310 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:46:58.479021 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:46:58.479063 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:46:58.493544 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:46:58.493578 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:46:58.562634 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:46:58.562663 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:46:58.562684 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:47:01.145327 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:47:01.158997 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:47:01.159060 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:47:01.196272 1120970 cri.go:89] found id: ""
	I0729 19:47:01.196298 1120970 logs.go:276] 0 containers: []
	W0729 19:47:01.196306 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:47:01.196312 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:47:01.196364 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:47:01.238138 1120970 cri.go:89] found id: ""
	I0729 19:47:01.238167 1120970 logs.go:276] 0 containers: []
	W0729 19:47:01.238177 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:47:01.238185 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:47:01.238249 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:47:01.276497 1120970 cri.go:89] found id: ""
	I0729 19:47:01.276525 1120970 logs.go:276] 0 containers: []
	W0729 19:47:01.276535 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:47:01.276543 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:47:01.276607 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:47:01.309092 1120970 cri.go:89] found id: ""
	I0729 19:47:01.309121 1120970 logs.go:276] 0 containers: []
	W0729 19:47:01.309130 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:47:01.309135 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:47:01.309189 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:47:01.340172 1120970 cri.go:89] found id: ""
	I0729 19:47:01.340202 1120970 logs.go:276] 0 containers: []
	W0729 19:47:01.340211 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:47:01.340217 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:47:01.340277 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:47:01.377905 1120970 cri.go:89] found id: ""
	I0729 19:47:01.377941 1120970 logs.go:276] 0 containers: []
	W0729 19:47:01.377953 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:47:01.377961 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:47:01.378034 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:47:01.414735 1120970 cri.go:89] found id: ""
	I0729 19:47:01.414767 1120970 logs.go:276] 0 containers: []
	W0729 19:47:01.414780 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:47:01.414789 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:47:01.414880 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:47:01.455743 1120970 cri.go:89] found id: ""
	I0729 19:47:01.455768 1120970 logs.go:276] 0 containers: []
	W0729 19:47:01.455776 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:47:01.455786 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:47:01.455799 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:47:01.507105 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:47:01.507141 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:47:01.520437 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:47:01.520465 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:47:01.590724 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:47:01.590746 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:47:01.590763 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:47:01.675343 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:47:01.675378 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:46:59.195744 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:01.695905 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:00.138760 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:02.139418 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:04.637243 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:01.278715 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:03.279321 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:05.778276 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
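
Every "describe nodes" attempt in this run ends with "The connection to the server localhost:8443 was refused", i.e. nothing is listening on the apiserver port at all. A plain TCP dial is enough to distinguish that case from a slow or TLS-misconfigured apiserver; the snippet below is only an illustrative probe, assuming it is run on the node itself:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// "connection refused" from kubectl maps to a failed dial here;
	// a successful dial would instead point at an HTTP/TLS-level problem.
	conn, err := net.DialTimeout("tcp", "localhost:8443", 3*time.Second)
	if err != nil {
		fmt.Println("apiserver port not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("something is listening on localhost:8443")
}
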
	I0729 19:47:04.219800 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:47:04.234604 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:47:04.234684 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:47:04.267782 1120970 cri.go:89] found id: ""
	I0729 19:47:04.267810 1120970 logs.go:276] 0 containers: []
	W0729 19:47:04.267822 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:47:04.267830 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:47:04.267897 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:47:04.302373 1120970 cri.go:89] found id: ""
	I0729 19:47:04.302402 1120970 logs.go:276] 0 containers: []
	W0729 19:47:04.302413 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:47:04.302420 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:47:04.302484 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:47:04.334998 1120970 cri.go:89] found id: ""
	I0729 19:47:04.335030 1120970 logs.go:276] 0 containers: []
	W0729 19:47:04.335041 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:47:04.335049 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:47:04.335105 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:47:04.370596 1120970 cri.go:89] found id: ""
	I0729 19:47:04.370624 1120970 logs.go:276] 0 containers: []
	W0729 19:47:04.370631 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:47:04.370638 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:47:04.370695 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:47:04.405912 1120970 cri.go:89] found id: ""
	I0729 19:47:04.405945 1120970 logs.go:276] 0 containers: []
	W0729 19:47:04.405957 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:47:04.405966 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:47:04.406044 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:47:04.439856 1120970 cri.go:89] found id: ""
	I0729 19:47:04.439881 1120970 logs.go:276] 0 containers: []
	W0729 19:47:04.439898 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:47:04.439905 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:47:04.439976 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:47:04.473561 1120970 cri.go:89] found id: ""
	I0729 19:47:04.473587 1120970 logs.go:276] 0 containers: []
	W0729 19:47:04.473595 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:47:04.473601 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:47:04.473662 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:47:04.510181 1120970 cri.go:89] found id: ""
	I0729 19:47:04.510207 1120970 logs.go:276] 0 containers: []
	W0729 19:47:04.510217 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:47:04.510226 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:47:04.510239 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:47:04.559448 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:47:04.559485 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:47:04.573752 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:47:04.573782 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:47:04.641008 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:47:04.641030 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:47:04.641046 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:47:04.725252 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:47:04.725293 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:47:07.266379 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:47:07.280725 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:47:07.280801 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:47:07.321856 1120970 cri.go:89] found id: ""
	I0729 19:47:07.321886 1120970 logs.go:276] 0 containers: []
	W0729 19:47:07.321894 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:47:07.321900 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:47:07.321986 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:47:07.355102 1120970 cri.go:89] found id: ""
	I0729 19:47:07.355130 1120970 logs.go:276] 0 containers: []
	W0729 19:47:07.355138 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:47:07.355144 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:47:07.355203 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:47:07.394720 1120970 cri.go:89] found id: ""
	I0729 19:47:07.394749 1120970 logs.go:276] 0 containers: []
	W0729 19:47:07.394762 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:47:07.394771 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:47:07.394829 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:47:07.431002 1120970 cri.go:89] found id: ""
	I0729 19:47:07.431042 1120970 logs.go:276] 0 containers: []
	W0729 19:47:07.431055 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:47:07.431063 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:47:07.431132 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:47:07.467818 1120970 cri.go:89] found id: ""
	I0729 19:47:07.467855 1120970 logs.go:276] 0 containers: []
	W0729 19:47:07.467864 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:47:07.467873 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:47:07.467942 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:47:07.504285 1120970 cri.go:89] found id: ""
	I0729 19:47:07.504316 1120970 logs.go:276] 0 containers: []
	W0729 19:47:07.504327 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:47:07.504335 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:47:07.504411 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:47:07.538246 1120970 cri.go:89] found id: ""
	I0729 19:47:07.538276 1120970 logs.go:276] 0 containers: []
	W0729 19:47:07.538284 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:47:07.538291 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:47:07.538351 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:47:07.573911 1120970 cri.go:89] found id: ""
	I0729 19:47:07.573939 1120970 logs.go:276] 0 containers: []
	W0729 19:47:07.573948 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:47:07.573957 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:47:07.573970 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:47:07.588083 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:47:07.588129 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:47:07.656169 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:47:07.656198 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:47:07.656216 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:47:07.740230 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:47:07.740264 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:47:07.780822 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:47:07.780856 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:47:04.195230 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:06.695090 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:06.637479 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:08.638410 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:08.278522 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:10.782193 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:10.336208 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:47:10.350233 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:47:10.350307 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:47:10.389155 1120970 cri.go:89] found id: ""
	I0729 19:47:10.389190 1120970 logs.go:276] 0 containers: []
	W0729 19:47:10.389202 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:47:10.389210 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:47:10.389277 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:47:10.421432 1120970 cri.go:89] found id: ""
	I0729 19:47:10.421466 1120970 logs.go:276] 0 containers: []
	W0729 19:47:10.421476 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:47:10.421482 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:47:10.421552 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:47:10.462530 1120970 cri.go:89] found id: ""
	I0729 19:47:10.462563 1120970 logs.go:276] 0 containers: []
	W0729 19:47:10.462572 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:47:10.462577 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:47:10.462640 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:47:10.499899 1120970 cri.go:89] found id: ""
	I0729 19:47:10.499927 1120970 logs.go:276] 0 containers: []
	W0729 19:47:10.499935 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:47:10.499945 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:47:10.500007 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:47:10.534022 1120970 cri.go:89] found id: ""
	I0729 19:47:10.534051 1120970 logs.go:276] 0 containers: []
	W0729 19:47:10.534060 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:47:10.534066 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:47:10.534119 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:47:10.568136 1120970 cri.go:89] found id: ""
	I0729 19:47:10.568166 1120970 logs.go:276] 0 containers: []
	W0729 19:47:10.568174 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:47:10.568181 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:47:10.568246 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:47:10.603887 1120970 cri.go:89] found id: ""
	I0729 19:47:10.603919 1120970 logs.go:276] 0 containers: []
	W0729 19:47:10.603930 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:47:10.603938 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:47:10.604005 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:47:10.639947 1120970 cri.go:89] found id: ""
	I0729 19:47:10.639974 1120970 logs.go:276] 0 containers: []
	W0729 19:47:10.639981 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:47:10.639989 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:47:10.640001 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:47:10.693113 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:47:10.693146 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:47:10.708099 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:47:10.708138 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:47:10.777587 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:47:10.777618 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:47:10.777634 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:47:10.872453 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:47:10.872499 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:47:09.195301 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:11.695021 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:13.697025 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:11.137420 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:13.137553 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:13.278601 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:15.779974 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:13.412398 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:47:13.426246 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:47:13.426308 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:47:13.463170 1120970 cri.go:89] found id: ""
	I0729 19:47:13.463202 1120970 logs.go:276] 0 containers: []
	W0729 19:47:13.463213 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:47:13.463220 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:47:13.463287 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:47:13.499102 1120970 cri.go:89] found id: ""
	I0729 19:47:13.499137 1120970 logs.go:276] 0 containers: []
	W0729 19:47:13.499146 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:47:13.499151 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:47:13.499235 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:47:13.531462 1120970 cri.go:89] found id: ""
	I0729 19:47:13.531514 1120970 logs.go:276] 0 containers: []
	W0729 19:47:13.531526 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:47:13.531534 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:47:13.531606 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:47:13.564632 1120970 cri.go:89] found id: ""
	I0729 19:47:13.564670 1120970 logs.go:276] 0 containers: []
	W0729 19:47:13.564681 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:47:13.564689 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:47:13.564745 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:47:13.596564 1120970 cri.go:89] found id: ""
	I0729 19:47:13.596591 1120970 logs.go:276] 0 containers: []
	W0729 19:47:13.596602 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:47:13.596610 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:47:13.596686 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:47:13.629682 1120970 cri.go:89] found id: ""
	I0729 19:47:13.629711 1120970 logs.go:276] 0 containers: []
	W0729 19:47:13.629721 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:47:13.629729 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:47:13.629791 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:47:13.664666 1120970 cri.go:89] found id: ""
	I0729 19:47:13.664693 1120970 logs.go:276] 0 containers: []
	W0729 19:47:13.664701 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:47:13.664708 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:47:13.664777 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:47:13.699238 1120970 cri.go:89] found id: ""
	I0729 19:47:13.699267 1120970 logs.go:276] 0 containers: []
	W0729 19:47:13.699277 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:47:13.699289 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:47:13.699304 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:47:13.751555 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:47:13.751588 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:47:13.766769 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:47:13.766801 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:47:13.834898 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:47:13.834918 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:47:13.834932 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:47:13.913907 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:47:13.913944 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:47:16.457229 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:47:16.470138 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:47:16.470222 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:47:16.504643 1120970 cri.go:89] found id: ""
	I0729 19:47:16.504679 1120970 logs.go:276] 0 containers: []
	W0729 19:47:16.504688 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:47:16.504693 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:47:16.504763 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:47:16.539328 1120970 cri.go:89] found id: ""
	I0729 19:47:16.539368 1120970 logs.go:276] 0 containers: []
	W0729 19:47:16.539379 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:47:16.539385 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:47:16.539446 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:47:16.597867 1120970 cri.go:89] found id: ""
	I0729 19:47:16.597893 1120970 logs.go:276] 0 containers: []
	W0729 19:47:16.597904 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:47:16.597911 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:47:16.597976 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:47:16.631728 1120970 cri.go:89] found id: ""
	I0729 19:47:16.631755 1120970 logs.go:276] 0 containers: []
	W0729 19:47:16.631768 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:47:16.631780 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:47:16.631842 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:47:16.668337 1120970 cri.go:89] found id: ""
	I0729 19:47:16.668377 1120970 logs.go:276] 0 containers: []
	W0729 19:47:16.668387 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:47:16.668395 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:47:16.668461 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:47:16.704808 1120970 cri.go:89] found id: ""
	I0729 19:47:16.704834 1120970 logs.go:276] 0 containers: []
	W0729 19:47:16.704844 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:47:16.704851 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:47:16.704911 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:47:16.743919 1120970 cri.go:89] found id: ""
	I0729 19:47:16.743948 1120970 logs.go:276] 0 containers: []
	W0729 19:47:16.743955 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:47:16.743961 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:47:16.744022 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:47:16.785240 1120970 cri.go:89] found id: ""
	I0729 19:47:16.785268 1120970 logs.go:276] 0 containers: []
	W0729 19:47:16.785279 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:47:16.785290 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:47:16.785306 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:47:16.838247 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:47:16.838288 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:47:16.851766 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:47:16.851797 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:47:16.928960 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:47:16.928986 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:47:16.929002 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:47:17.008260 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:47:17.008296 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:47:16.194957 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:18.196333 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:15.138916 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:17.637392 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:19.638484 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:17.781105 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:20.279439 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:19.555108 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:47:19.569838 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:47:19.569917 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:47:19.608358 1120970 cri.go:89] found id: ""
	I0729 19:47:19.608393 1120970 logs.go:276] 0 containers: []
	W0729 19:47:19.608405 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:47:19.608414 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:47:19.608475 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:47:19.644144 1120970 cri.go:89] found id: ""
	I0729 19:47:19.644173 1120970 logs.go:276] 0 containers: []
	W0729 19:47:19.644183 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:47:19.644191 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:47:19.644259 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:47:19.686316 1120970 cri.go:89] found id: ""
	I0729 19:47:19.686342 1120970 logs.go:276] 0 containers: []
	W0729 19:47:19.686353 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:47:19.686359 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:47:19.686419 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:47:19.722006 1120970 cri.go:89] found id: ""
	I0729 19:47:19.722034 1120970 logs.go:276] 0 containers: []
	W0729 19:47:19.722044 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:47:19.722052 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:47:19.722127 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:47:19.762767 1120970 cri.go:89] found id: ""
	I0729 19:47:19.762799 1120970 logs.go:276] 0 containers: []
	W0729 19:47:19.762811 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:47:19.762818 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:47:19.762904 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:47:19.802185 1120970 cri.go:89] found id: ""
	I0729 19:47:19.802217 1120970 logs.go:276] 0 containers: []
	W0729 19:47:19.802228 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:47:19.802238 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:47:19.802311 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:47:19.840001 1120970 cri.go:89] found id: ""
	I0729 19:47:19.840036 1120970 logs.go:276] 0 containers: []
	W0729 19:47:19.840048 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:47:19.840056 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:47:19.840117 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:47:19.877627 1120970 cri.go:89] found id: ""
	I0729 19:47:19.877657 1120970 logs.go:276] 0 containers: []
	W0729 19:47:19.877668 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:47:19.877681 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:47:19.877698 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:47:19.920673 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:47:19.920708 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:47:19.980004 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:47:19.980045 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:47:19.994679 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:47:19.994714 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:47:20.064864 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:47:20.064892 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:47:20.064910 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:47:22.650763 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:47:22.664998 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:47:22.665079 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:47:22.701576 1120970 cri.go:89] found id: ""
	I0729 19:47:22.701611 1120970 logs.go:276] 0 containers: []
	W0729 19:47:22.701620 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:47:22.701630 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:47:22.701689 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:47:22.744238 1120970 cri.go:89] found id: ""
	I0729 19:47:22.744268 1120970 logs.go:276] 0 containers: []
	W0729 19:47:22.744275 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:47:22.744287 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:47:22.744358 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:47:22.785947 1120970 cri.go:89] found id: ""
	I0729 19:47:22.785974 1120970 logs.go:276] 0 containers: []
	W0729 19:47:22.785982 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:47:22.785988 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:47:22.786047 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:47:22.823352 1120970 cri.go:89] found id: ""
	I0729 19:47:22.823379 1120970 logs.go:276] 0 containers: []
	W0729 19:47:22.823387 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:47:22.823394 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:47:22.823462 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:47:22.855676 1120970 cri.go:89] found id: ""
	I0729 19:47:22.855704 1120970 logs.go:276] 0 containers: []
	W0729 19:47:22.855710 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:47:22.855716 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:47:22.855773 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:47:22.891910 1120970 cri.go:89] found id: ""
	I0729 19:47:22.891943 1120970 logs.go:276] 0 containers: []
	W0729 19:47:22.891956 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:47:22.891964 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:47:22.892025 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:47:22.928605 1120970 cri.go:89] found id: ""
	I0729 19:47:22.928638 1120970 logs.go:276] 0 containers: []
	W0729 19:47:22.928648 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:47:22.928658 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:47:22.928728 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:47:20.196567 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:22.694908 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:22.137177 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:24.137629 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:22.778638 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:25.279261 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:22.985022 1120970 cri.go:89] found id: ""
	I0729 19:47:22.985059 1120970 logs.go:276] 0 containers: []
	W0729 19:47:22.985068 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:47:22.985088 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:47:22.985101 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:47:23.073062 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:47:23.073098 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:47:23.114995 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:47:23.115024 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:47:23.171536 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:47:23.171570 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:47:23.185192 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:47:23.185219 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:47:23.259355 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:47:25.760046 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:47:25.774159 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:47:25.774245 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:47:25.808374 1120970 cri.go:89] found id: ""
	I0729 19:47:25.808406 1120970 logs.go:276] 0 containers: []
	W0729 19:47:25.808417 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:47:25.808424 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:47:25.808486 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:47:25.843623 1120970 cri.go:89] found id: ""
	I0729 19:47:25.843655 1120970 logs.go:276] 0 containers: []
	W0729 19:47:25.843666 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:47:25.843673 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:47:25.843774 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:47:25.880200 1120970 cri.go:89] found id: ""
	I0729 19:47:25.880233 1120970 logs.go:276] 0 containers: []
	W0729 19:47:25.880243 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:47:25.880250 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:47:25.880323 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:47:25.915349 1120970 cri.go:89] found id: ""
	I0729 19:47:25.915374 1120970 logs.go:276] 0 containers: []
	W0729 19:47:25.915381 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:47:25.915391 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:47:25.915444 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:47:25.948092 1120970 cri.go:89] found id: ""
	I0729 19:47:25.948134 1120970 logs.go:276] 0 containers: []
	W0729 19:47:25.948145 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:47:25.948153 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:47:25.948220 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:47:25.981836 1120970 cri.go:89] found id: ""
	I0729 19:47:25.981864 1120970 logs.go:276] 0 containers: []
	W0729 19:47:25.981874 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:47:25.981882 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:47:25.981967 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:47:26.014464 1120970 cri.go:89] found id: ""
	I0729 19:47:26.014494 1120970 logs.go:276] 0 containers: []
	W0729 19:47:26.014502 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:47:26.014515 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:47:26.014580 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:47:26.048607 1120970 cri.go:89] found id: ""
	I0729 19:47:26.048635 1120970 logs.go:276] 0 containers: []
	W0729 19:47:26.048646 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:47:26.048667 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:47:26.048683 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:47:26.100962 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:47:26.101002 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:47:26.116404 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:47:26.116434 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:47:26.183714 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:47:26.183734 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:47:26.183747 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:47:26.260308 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:47:26.260347 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:47:24.695393 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:27.195561 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:26.137714 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:28.637781 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:27.778603 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:30.278476 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:28.802593 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:47:28.815317 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:47:28.815380 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:47:28.849448 1120970 cri.go:89] found id: ""
	I0729 19:47:28.849473 1120970 logs.go:276] 0 containers: []
	W0729 19:47:28.849480 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:47:28.849486 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:47:28.849544 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:47:28.888305 1120970 cri.go:89] found id: ""
	I0729 19:47:28.888342 1120970 logs.go:276] 0 containers: []
	W0729 19:47:28.888353 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:47:28.888360 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:47:28.888421 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:47:28.921000 1120970 cri.go:89] found id: ""
	I0729 19:47:28.921034 1120970 logs.go:276] 0 containers: []
	W0729 19:47:28.921045 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:47:28.921054 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:47:28.921116 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:47:28.953546 1120970 cri.go:89] found id: ""
	I0729 19:47:28.953574 1120970 logs.go:276] 0 containers: []
	W0729 19:47:28.953583 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:47:28.953589 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:47:28.953652 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:47:28.991203 1120970 cri.go:89] found id: ""
	I0729 19:47:28.991236 1120970 logs.go:276] 0 containers: []
	W0729 19:47:28.991248 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:47:28.991256 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:47:28.991329 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:47:29.026151 1120970 cri.go:89] found id: ""
	I0729 19:47:29.026183 1120970 logs.go:276] 0 containers: []
	W0729 19:47:29.026195 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:47:29.026203 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:47:29.026271 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:47:29.059654 1120970 cri.go:89] found id: ""
	I0729 19:47:29.059687 1120970 logs.go:276] 0 containers: []
	W0729 19:47:29.059695 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:47:29.059702 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:47:29.059756 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:47:29.091952 1120970 cri.go:89] found id: ""
	I0729 19:47:29.092001 1120970 logs.go:276] 0 containers: []
	W0729 19:47:29.092012 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:47:29.092024 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:47:29.092043 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:47:29.143511 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:47:29.143543 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:47:29.157752 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:47:29.157781 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:47:29.225599 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:47:29.225621 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:47:29.225634 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:47:29.311329 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:47:29.311370 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:47:31.850921 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:47:31.864594 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:47:31.864675 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:47:31.898580 1120970 cri.go:89] found id: ""
	I0729 19:47:31.898622 1120970 logs.go:276] 0 containers: []
	W0729 19:47:31.898631 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:47:31.898638 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:47:31.898693 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:47:31.932481 1120970 cri.go:89] found id: ""
	I0729 19:47:31.932514 1120970 logs.go:276] 0 containers: []
	W0729 19:47:31.932525 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:47:31.932533 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:47:31.932595 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:47:31.964820 1120970 cri.go:89] found id: ""
	I0729 19:47:31.964857 1120970 logs.go:276] 0 containers: []
	W0729 19:47:31.964868 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:47:31.964876 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:47:31.964957 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:47:31.996854 1120970 cri.go:89] found id: ""
	I0729 19:47:31.996889 1120970 logs.go:276] 0 containers: []
	W0729 19:47:31.996900 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:47:31.996908 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:47:31.996975 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:47:32.031808 1120970 cri.go:89] found id: ""
	I0729 19:47:32.031843 1120970 logs.go:276] 0 containers: []
	W0729 19:47:32.031854 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:47:32.031864 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:47:32.031934 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:47:32.064563 1120970 cri.go:89] found id: ""
	I0729 19:47:32.064593 1120970 logs.go:276] 0 containers: []
	W0729 19:47:32.064608 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:47:32.064615 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:47:32.064677 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:47:32.102811 1120970 cri.go:89] found id: ""
	I0729 19:47:32.102859 1120970 logs.go:276] 0 containers: []
	W0729 19:47:32.102871 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:47:32.102879 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:47:32.102952 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:47:32.136770 1120970 cri.go:89] found id: ""
	I0729 19:47:32.136798 1120970 logs.go:276] 0 containers: []
	W0729 19:47:32.136808 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:47:32.136819 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:47:32.136838 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:47:32.189334 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:47:32.189371 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:47:32.204039 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:47:32.204076 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:47:32.274139 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:47:32.274172 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:47:32.274187 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:47:32.350191 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:47:32.350228 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:47:29.196922 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:31.200725 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:33.695374 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:30.637898 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:32.638342 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:34.639225 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:32.279116 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:34.780505 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:34.889718 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:47:34.903796 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:47:34.903877 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:47:34.938860 1120970 cri.go:89] found id: ""
	I0729 19:47:34.938893 1120970 logs.go:276] 0 containers: []
	W0729 19:47:34.938904 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:47:34.938912 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:47:34.938980 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:47:34.970501 1120970 cri.go:89] found id: ""
	I0729 19:47:34.970544 1120970 logs.go:276] 0 containers: []
	W0729 19:47:34.970553 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:47:34.970559 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:47:34.970626 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:47:35.006915 1120970 cri.go:89] found id: ""
	I0729 19:47:35.006943 1120970 logs.go:276] 0 containers: []
	W0729 19:47:35.006950 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:47:35.006957 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:47:35.007020 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:47:35.040827 1120970 cri.go:89] found id: ""
	I0729 19:47:35.040855 1120970 logs.go:276] 0 containers: []
	W0729 19:47:35.040862 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:47:35.040869 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:47:35.040918 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:47:35.075497 1120970 cri.go:89] found id: ""
	I0729 19:47:35.075527 1120970 logs.go:276] 0 containers: []
	W0729 19:47:35.075537 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:47:35.075544 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:47:35.075598 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:47:35.111265 1120970 cri.go:89] found id: ""
	I0729 19:47:35.111293 1120970 logs.go:276] 0 containers: []
	W0729 19:47:35.111302 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:47:35.111308 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:47:35.111363 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:47:35.145728 1120970 cri.go:89] found id: ""
	I0729 19:47:35.145756 1120970 logs.go:276] 0 containers: []
	W0729 19:47:35.145763 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:47:35.145769 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:47:35.145821 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:47:35.185050 1120970 cri.go:89] found id: ""
	I0729 19:47:35.185078 1120970 logs.go:276] 0 containers: []
	W0729 19:47:35.185088 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:47:35.185100 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:47:35.185117 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:47:35.236835 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:47:35.236867 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:47:35.251263 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:47:35.251290 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:47:35.325888 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:47:35.325912 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:47:35.325925 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:47:35.404779 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:47:35.404819 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:47:37.944941 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:47:37.960885 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:47:37.960954 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:47:35.695786 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:37.696015 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:37.136815 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:39.137763 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:37.278790 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:39.779285 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:38.007612 1120970 cri.go:89] found id: ""
	I0729 19:47:38.007639 1120970 logs.go:276] 0 containers: []
	W0729 19:47:38.007648 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:47:38.007655 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:47:38.007721 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:47:38.044568 1120970 cri.go:89] found id: ""
	I0729 19:47:38.044610 1120970 logs.go:276] 0 containers: []
	W0729 19:47:38.044621 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:47:38.044628 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:47:38.044698 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:47:38.085186 1120970 cri.go:89] found id: ""
	I0729 19:47:38.085217 1120970 logs.go:276] 0 containers: []
	W0729 19:47:38.085227 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:47:38.085235 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:47:38.085303 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:47:38.123039 1120970 cri.go:89] found id: ""
	I0729 19:47:38.123070 1120970 logs.go:276] 0 containers: []
	W0729 19:47:38.123082 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:47:38.123090 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:47:38.123158 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:47:38.166191 1120970 cri.go:89] found id: ""
	I0729 19:47:38.166220 1120970 logs.go:276] 0 containers: []
	W0729 19:47:38.166229 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:47:38.166237 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:47:38.166301 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:47:38.204138 1120970 cri.go:89] found id: ""
	I0729 19:47:38.204170 1120970 logs.go:276] 0 containers: []
	W0729 19:47:38.204179 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:47:38.204186 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:47:38.204286 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:47:38.241599 1120970 cri.go:89] found id: ""
	I0729 19:47:38.241629 1120970 logs.go:276] 0 containers: []
	W0729 19:47:38.241638 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:47:38.241643 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:47:38.241695 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:47:38.276986 1120970 cri.go:89] found id: ""
	I0729 19:47:38.277013 1120970 logs.go:276] 0 containers: []
	W0729 19:47:38.277021 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:47:38.277030 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:47:38.277042 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:47:38.330925 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:47:38.330971 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:47:38.345416 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:47:38.345455 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:47:38.420010 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:47:38.420041 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:47:38.420059 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:47:38.506198 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:47:38.506243 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:47:41.048957 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:47:41.062950 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:47:41.063027 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:47:41.108956 1120970 cri.go:89] found id: ""
	I0729 19:47:41.108987 1120970 logs.go:276] 0 containers: []
	W0729 19:47:41.108995 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:47:41.109002 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:47:41.109068 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:47:41.146952 1120970 cri.go:89] found id: ""
	I0729 19:47:41.146984 1120970 logs.go:276] 0 containers: []
	W0729 19:47:41.146994 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:47:41.147002 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:47:41.147068 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:47:41.190277 1120970 cri.go:89] found id: ""
	I0729 19:47:41.190310 1120970 logs.go:276] 0 containers: []
	W0729 19:47:41.190321 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:47:41.190329 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:47:41.190410 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:47:41.226733 1120970 cri.go:89] found id: ""
	I0729 19:47:41.226762 1120970 logs.go:276] 0 containers: []
	W0729 19:47:41.226770 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:47:41.226777 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:47:41.226862 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:47:41.260761 1120970 cri.go:89] found id: ""
	I0729 19:47:41.260790 1120970 logs.go:276] 0 containers: []
	W0729 19:47:41.260798 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:47:41.260804 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:47:41.260871 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:47:41.296325 1120970 cri.go:89] found id: ""
	I0729 19:47:41.296356 1120970 logs.go:276] 0 containers: []
	W0729 19:47:41.296367 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:47:41.296376 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:47:41.296435 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:47:41.329613 1120970 cri.go:89] found id: ""
	I0729 19:47:41.329642 1120970 logs.go:276] 0 containers: []
	W0729 19:47:41.329651 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:47:41.329657 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:47:41.329717 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:47:41.365182 1120970 cri.go:89] found id: ""
	I0729 19:47:41.365212 1120970 logs.go:276] 0 containers: []
	W0729 19:47:41.365220 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:47:41.365229 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:47:41.365243 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:47:41.416107 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:47:41.416143 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:47:41.429529 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:47:41.429562 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:47:41.499546 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:47:41.499568 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:47:41.499582 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:47:41.582010 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:47:41.582049 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:47:40.195271 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:42.698072 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:41.142911 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:43.637826 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:42.278481 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:44.278595 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:44.122162 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:47:44.136767 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:47:44.136850 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:47:44.171574 1120970 cri.go:89] found id: ""
	I0729 19:47:44.171610 1120970 logs.go:276] 0 containers: []
	W0729 19:47:44.171621 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:47:44.171629 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:47:44.171699 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:47:44.206974 1120970 cri.go:89] found id: ""
	I0729 19:47:44.207004 1120970 logs.go:276] 0 containers: []
	W0729 19:47:44.207013 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:47:44.207019 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:47:44.207068 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:47:44.240412 1120970 cri.go:89] found id: ""
	I0729 19:47:44.240438 1120970 logs.go:276] 0 containers: []
	W0729 19:47:44.240449 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:47:44.240457 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:47:44.240521 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:47:44.274434 1120970 cri.go:89] found id: ""
	I0729 19:47:44.274464 1120970 logs.go:276] 0 containers: []
	W0729 19:47:44.274475 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:47:44.274482 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:47:44.274553 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:47:44.313302 1120970 cri.go:89] found id: ""
	I0729 19:47:44.313330 1120970 logs.go:276] 0 containers: []
	W0729 19:47:44.313339 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:47:44.313354 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:47:44.313426 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:47:44.344853 1120970 cri.go:89] found id: ""
	I0729 19:47:44.344885 1120970 logs.go:276] 0 containers: []
	W0729 19:47:44.344895 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:47:44.344903 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:47:44.344970 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:47:44.378055 1120970 cri.go:89] found id: ""
	I0729 19:47:44.378089 1120970 logs.go:276] 0 containers: []
	W0729 19:47:44.378101 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:47:44.378109 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:47:44.378176 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:47:44.412734 1120970 cri.go:89] found id: ""
	I0729 19:47:44.412762 1120970 logs.go:276] 0 containers: []
	W0729 19:47:44.412772 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:47:44.412782 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:47:44.412795 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:47:44.468125 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:47:44.468157 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:47:44.482896 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:47:44.482923 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:47:44.551222 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:47:44.551249 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:47:44.551270 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:47:44.630413 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:47:44.630455 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:47:47.172322 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:47:47.186383 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:47:47.186463 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:47:47.221577 1120970 cri.go:89] found id: ""
	I0729 19:47:47.221610 1120970 logs.go:276] 0 containers: []
	W0729 19:47:47.221617 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:47:47.221623 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:47:47.221686 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:47:47.260164 1120970 cri.go:89] found id: ""
	I0729 19:47:47.260207 1120970 logs.go:276] 0 containers: []
	W0729 19:47:47.260227 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:47:47.260235 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:47:47.260303 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:47:47.297101 1120970 cri.go:89] found id: ""
	I0729 19:47:47.297130 1120970 logs.go:276] 0 containers: []
	W0729 19:47:47.297139 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:47:47.297148 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:47:47.297211 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:47:47.332429 1120970 cri.go:89] found id: ""
	I0729 19:47:47.332464 1120970 logs.go:276] 0 containers: []
	W0729 19:47:47.332474 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:47:47.332484 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:47:47.332538 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:47:47.366021 1120970 cri.go:89] found id: ""
	I0729 19:47:47.366055 1120970 logs.go:276] 0 containers: []
	W0729 19:47:47.366065 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:47:47.366074 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:47:47.366144 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:47:47.401278 1120970 cri.go:89] found id: ""
	I0729 19:47:47.401307 1120970 logs.go:276] 0 containers: []
	W0729 19:47:47.401315 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:47:47.401321 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:47:47.401395 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:47:47.435717 1120970 cri.go:89] found id: ""
	I0729 19:47:47.435748 1120970 logs.go:276] 0 containers: []
	W0729 19:47:47.435756 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:47:47.435770 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:47:47.435835 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:47:47.472120 1120970 cri.go:89] found id: ""
	I0729 19:47:47.472149 1120970 logs.go:276] 0 containers: []
	W0729 19:47:47.472157 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:47:47.472167 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:47:47.472181 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:47:47.529466 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:47:47.529503 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:47:47.544072 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:47:47.544102 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:47:47.614456 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:47:47.614478 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:47:47.614499 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:47:47.693271 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:47:47.693305 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:47:45.195129 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:47.196302 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:45.638102 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:47.639278 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:46.778610 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:48.778746 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:50.232417 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:47:50.246080 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:47:50.246154 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:47:50.285256 1120970 cri.go:89] found id: ""
	I0729 19:47:50.285284 1120970 logs.go:276] 0 containers: []
	W0729 19:47:50.285294 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:47:50.285302 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:47:50.285364 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:47:50.319443 1120970 cri.go:89] found id: ""
	I0729 19:47:50.319469 1120970 logs.go:276] 0 containers: []
	W0729 19:47:50.319476 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:47:50.319482 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:47:50.319555 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:47:50.356465 1120970 cri.go:89] found id: ""
	I0729 19:47:50.356495 1120970 logs.go:276] 0 containers: []
	W0729 19:47:50.356505 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:47:50.356512 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:47:50.356578 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:47:50.393920 1120970 cri.go:89] found id: ""
	I0729 19:47:50.393954 1120970 logs.go:276] 0 containers: []
	W0729 19:47:50.393965 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:47:50.393973 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:47:50.394052 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:47:50.430287 1120970 cri.go:89] found id: ""
	I0729 19:47:50.430320 1120970 logs.go:276] 0 containers: []
	W0729 19:47:50.430333 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:47:50.430341 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:47:50.430411 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:47:50.465501 1120970 cri.go:89] found id: ""
	I0729 19:47:50.465528 1120970 logs.go:276] 0 containers: []
	W0729 19:47:50.465536 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:47:50.465542 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:47:50.465595 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:47:50.504012 1120970 cri.go:89] found id: ""
	I0729 19:47:50.504042 1120970 logs.go:276] 0 containers: []
	W0729 19:47:50.504051 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:47:50.504063 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:47:50.504122 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:47:50.545117 1120970 cri.go:89] found id: ""
	I0729 19:47:50.545151 1120970 logs.go:276] 0 containers: []
	W0729 19:47:50.545163 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:47:50.545175 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:47:50.545198 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:47:50.618183 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:47:50.618213 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:47:50.618232 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:47:50.697577 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:47:50.697611 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:47:50.745910 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:47:50.745949 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:47:50.797458 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:47:50.797501 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
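	The interleaved pod_ready lines (processes 1119948, 1120280, 1120587) come from parallel test runs, each polling a metrics-server pod that never reports Ready. A hand-run equivalent of that readiness check, sketched with a pod name taken from the log and assuming a kubeconfig pointed at the cluster under test, would be roughly:
	
	    kubectl -n kube-system get pod metrics-server-569cc877fc-jsvnd \
	      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
	    # prints "False" while the pod is unready, matching the pod_ready.go:102 messages above and below
	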
	I0729 19:47:49.694395 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:51.697714 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:50.138539 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:52.143316 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:54.637975 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:51.279127 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:53.779610 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:53.311907 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:47:53.326666 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:47:53.326734 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:47:53.361564 1120970 cri.go:89] found id: ""
	I0729 19:47:53.361596 1120970 logs.go:276] 0 containers: []
	W0729 19:47:53.361614 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:47:53.361621 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:47:53.361685 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:47:53.397867 1120970 cri.go:89] found id: ""
	I0729 19:47:53.397899 1120970 logs.go:276] 0 containers: []
	W0729 19:47:53.397910 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:47:53.397918 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:47:53.398023 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:47:53.438721 1120970 cri.go:89] found id: ""
	I0729 19:47:53.438752 1120970 logs.go:276] 0 containers: []
	W0729 19:47:53.438764 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:47:53.438771 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:47:53.438840 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:47:53.477746 1120970 cri.go:89] found id: ""
	I0729 19:47:53.477776 1120970 logs.go:276] 0 containers: []
	W0729 19:47:53.477787 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:47:53.477794 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:47:53.477863 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:47:53.510899 1120970 cri.go:89] found id: ""
	I0729 19:47:53.510928 1120970 logs.go:276] 0 containers: []
	W0729 19:47:53.510936 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:47:53.510941 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:47:53.510994 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:47:53.545749 1120970 cri.go:89] found id: ""
	I0729 19:47:53.545786 1120970 logs.go:276] 0 containers: []
	W0729 19:47:53.545799 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:47:53.545807 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:47:53.545883 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:47:53.585542 1120970 cri.go:89] found id: ""
	I0729 19:47:53.585575 1120970 logs.go:276] 0 containers: []
	W0729 19:47:53.585586 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:47:53.585593 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:47:53.585666 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:47:53.617974 1120970 cri.go:89] found id: ""
	I0729 19:47:53.618006 1120970 logs.go:276] 0 containers: []
	W0729 19:47:53.618014 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:47:53.618024 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:47:53.618036 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:47:53.670860 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:47:53.670897 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:47:53.685089 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:47:53.685120 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:47:53.760570 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:47:53.760598 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:47:53.760611 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:47:53.848973 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:47:53.849018 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:47:56.394206 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:47:56.409087 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:47:56.409167 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:47:56.447553 1120970 cri.go:89] found id: ""
	I0729 19:47:56.447589 1120970 logs.go:276] 0 containers: []
	W0729 19:47:56.447607 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:47:56.447615 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:47:56.447694 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:47:56.485948 1120970 cri.go:89] found id: ""
	I0729 19:47:56.485978 1120970 logs.go:276] 0 containers: []
	W0729 19:47:56.485986 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:47:56.485992 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:47:56.486061 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:47:56.521722 1120970 cri.go:89] found id: ""
	I0729 19:47:56.521762 1120970 logs.go:276] 0 containers: []
	W0729 19:47:56.521784 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:47:56.521792 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:47:56.521855 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:47:56.557379 1120970 cri.go:89] found id: ""
	I0729 19:47:56.557414 1120970 logs.go:276] 0 containers: []
	W0729 19:47:56.557425 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:47:56.557433 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:47:56.557488 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:47:56.595198 1120970 cri.go:89] found id: ""
	I0729 19:47:56.595225 1120970 logs.go:276] 0 containers: []
	W0729 19:47:56.595233 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:47:56.595240 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:47:56.595306 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:47:56.629298 1120970 cri.go:89] found id: ""
	I0729 19:47:56.629330 1120970 logs.go:276] 0 containers: []
	W0729 19:47:56.629337 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:47:56.629344 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:47:56.629410 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:47:56.663401 1120970 cri.go:89] found id: ""
	I0729 19:47:56.663434 1120970 logs.go:276] 0 containers: []
	W0729 19:47:56.663445 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:47:56.663453 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:47:56.663519 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:47:56.699622 1120970 cri.go:89] found id: ""
	I0729 19:47:56.699651 1120970 logs.go:276] 0 containers: []
	W0729 19:47:56.699661 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:47:56.699672 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:47:56.699688 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:47:56.739680 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:47:56.739713 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:47:56.794605 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:47:56.794647 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:47:56.824479 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:47:56.824510 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:47:56.889186 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:47:56.889209 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:47:56.889224 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:47:54.196350 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:56.696572 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:57.137366 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:59.638403 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:56.278603 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:58.280193 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:00.778204 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:59.472943 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:47:59.488574 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:47:59.488657 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:47:59.528870 1120970 cri.go:89] found id: ""
	I0729 19:47:59.528910 1120970 logs.go:276] 0 containers: []
	W0729 19:47:59.528921 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:47:59.528930 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:47:59.529001 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:47:59.565299 1120970 cri.go:89] found id: ""
	I0729 19:47:59.565331 1120970 logs.go:276] 0 containers: []
	W0729 19:47:59.565343 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:47:59.565351 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:47:59.565419 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:47:59.604951 1120970 cri.go:89] found id: ""
	I0729 19:47:59.604985 1120970 logs.go:276] 0 containers: []
	W0729 19:47:59.604996 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:47:59.605005 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:47:59.605076 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:47:59.639094 1120970 cri.go:89] found id: ""
	I0729 19:47:59.639121 1120970 logs.go:276] 0 containers: []
	W0729 19:47:59.639130 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:47:59.639138 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:47:59.639205 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:47:59.674360 1120970 cri.go:89] found id: ""
	I0729 19:47:59.674392 1120970 logs.go:276] 0 containers: []
	W0729 19:47:59.674401 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:47:59.674407 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:47:59.674462 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:47:59.712926 1120970 cri.go:89] found id: ""
	I0729 19:47:59.712950 1120970 logs.go:276] 0 containers: []
	W0729 19:47:59.712959 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:47:59.712965 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:47:59.713026 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:47:59.750493 1120970 cri.go:89] found id: ""
	I0729 19:47:59.750524 1120970 logs.go:276] 0 containers: []
	W0729 19:47:59.750532 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:47:59.750539 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:47:59.750603 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:47:59.790635 1120970 cri.go:89] found id: ""
	I0729 19:47:59.790663 1120970 logs.go:276] 0 containers: []
	W0729 19:47:59.790672 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:47:59.790687 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:47:59.790703 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:47:59.844160 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:47:59.844194 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:47:59.858123 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:47:59.858152 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:47:59.931561 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:47:59.931592 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:47:59.931609 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:48:00.014902 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:48:00.014947 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:48:02.555856 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:48:02.572781 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:48:02.572852 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:48:02.611005 1120970 cri.go:89] found id: ""
	I0729 19:48:02.611033 1120970 logs.go:276] 0 containers: []
	W0729 19:48:02.611043 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:48:02.611049 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:48:02.611101 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:48:02.652844 1120970 cri.go:89] found id: ""
	I0729 19:48:02.652870 1120970 logs.go:276] 0 containers: []
	W0729 19:48:02.652876 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:48:02.652883 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:48:02.652937 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:48:02.694690 1120970 cri.go:89] found id: ""
	I0729 19:48:02.694719 1120970 logs.go:276] 0 containers: []
	W0729 19:48:02.694729 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:48:02.694738 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:48:02.694799 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:48:02.729527 1120970 cri.go:89] found id: ""
	I0729 19:48:02.729558 1120970 logs.go:276] 0 containers: []
	W0729 19:48:02.729569 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:48:02.729576 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:48:02.729649 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:48:02.763460 1120970 cri.go:89] found id: ""
	I0729 19:48:02.763488 1120970 logs.go:276] 0 containers: []
	W0729 19:48:02.763497 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:48:02.763503 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:48:02.763556 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:48:02.798268 1120970 cri.go:89] found id: ""
	I0729 19:48:02.798294 1120970 logs.go:276] 0 containers: []
	W0729 19:48:02.798302 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:48:02.798309 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:48:02.798360 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:48:02.837540 1120970 cri.go:89] found id: ""
	I0729 19:48:02.837579 1120970 logs.go:276] 0 containers: []
	W0729 19:48:02.837591 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:48:02.837605 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:48:02.837672 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:48:02.873574 1120970 cri.go:89] found id: ""
	I0729 19:48:02.873612 1120970 logs.go:276] 0 containers: []
	W0729 19:48:02.873624 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:48:02.873646 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:48:02.873663 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:48:02.926260 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:48:02.926296 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:48:02.940593 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:48:02.940618 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 19:47:59.195148 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:01.195230 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:03.196163 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:02.139034 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:04.637691 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:02.778540 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:04.781529 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	W0729 19:48:03.015778 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:48:03.015800 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:48:03.015818 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:48:03.099824 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:48:03.099859 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:48:05.639291 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:48:05.652370 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:48:05.652431 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:48:05.686594 1120970 cri.go:89] found id: ""
	I0729 19:48:05.686624 1120970 logs.go:276] 0 containers: []
	W0729 19:48:05.686633 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:48:05.686640 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:48:05.686701 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:48:05.722162 1120970 cri.go:89] found id: ""
	I0729 19:48:05.722192 1120970 logs.go:276] 0 containers: []
	W0729 19:48:05.722209 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:48:05.722216 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:48:05.722284 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:48:05.754309 1120970 cri.go:89] found id: ""
	I0729 19:48:05.754338 1120970 logs.go:276] 0 containers: []
	W0729 19:48:05.754349 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:48:05.754357 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:48:05.754449 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:48:05.786934 1120970 cri.go:89] found id: ""
	I0729 19:48:05.786962 1120970 logs.go:276] 0 containers: []
	W0729 19:48:05.786968 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:48:05.786974 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:48:05.787032 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:48:05.821454 1120970 cri.go:89] found id: ""
	I0729 19:48:05.821487 1120970 logs.go:276] 0 containers: []
	W0729 19:48:05.821498 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:48:05.821506 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:48:05.821575 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:48:05.855436 1120970 cri.go:89] found id: ""
	I0729 19:48:05.855467 1120970 logs.go:276] 0 containers: []
	W0729 19:48:05.855478 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:48:05.855486 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:48:05.855551 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:48:05.887414 1120970 cri.go:89] found id: ""
	I0729 19:48:05.887447 1120970 logs.go:276] 0 containers: []
	W0729 19:48:05.887466 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:48:05.887477 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:48:05.887549 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:48:05.924173 1120970 cri.go:89] found id: ""
	I0729 19:48:05.924200 1120970 logs.go:276] 0 containers: []
	W0729 19:48:05.924208 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:48:05.924218 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:48:05.924231 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:48:05.977839 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:48:05.977872 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:48:05.991324 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:48:05.991359 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:48:06.065904 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:48:06.065931 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:48:06.065949 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:48:06.149225 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:48:06.149258 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:48:05.196530 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:07.695302 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:06.640464 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:09.137577 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:07.277286 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:09.278994 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:08.689901 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:48:08.705008 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:48:08.705073 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:48:08.746191 1120970 cri.go:89] found id: ""
	I0729 19:48:08.746222 1120970 logs.go:276] 0 containers: []
	W0729 19:48:08.746232 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:48:08.746240 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:48:08.746306 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:48:08.792092 1120970 cri.go:89] found id: ""
	I0729 19:48:08.792120 1120970 logs.go:276] 0 containers: []
	W0729 19:48:08.792130 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:48:08.792137 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:48:08.792196 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:48:08.831535 1120970 cri.go:89] found id: ""
	I0729 19:48:08.831567 1120970 logs.go:276] 0 containers: []
	W0729 19:48:08.831577 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:48:08.831585 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:48:08.831650 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:48:08.871544 1120970 cri.go:89] found id: ""
	I0729 19:48:08.871576 1120970 logs.go:276] 0 containers: []
	W0729 19:48:08.871587 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:48:08.871594 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:48:08.871661 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:48:08.909562 1120970 cri.go:89] found id: ""
	I0729 19:48:08.909594 1120970 logs.go:276] 0 containers: []
	W0729 19:48:08.909611 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:48:08.909621 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:48:08.909698 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:48:08.953074 1120970 cri.go:89] found id: ""
	I0729 19:48:08.953109 1120970 logs.go:276] 0 containers: []
	W0729 19:48:08.953121 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:48:08.953130 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:48:08.953202 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:48:08.992361 1120970 cri.go:89] found id: ""
	I0729 19:48:08.992400 1120970 logs.go:276] 0 containers: []
	W0729 19:48:08.992412 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:48:08.992421 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:48:08.992488 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:48:09.046065 1120970 cri.go:89] found id: ""
	I0729 19:48:09.046093 1120970 logs.go:276] 0 containers: []
	W0729 19:48:09.046101 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:48:09.046113 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:48:09.046134 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:48:09.103453 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:48:09.103494 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:48:09.117220 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:48:09.117245 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:48:09.188222 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:48:09.188252 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:48:09.188270 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:48:09.271640 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:48:09.271677 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:48:11.812430 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:48:11.827291 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:48:11.827387 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:48:11.865062 1120970 cri.go:89] found id: ""
	I0729 19:48:11.865099 1120970 logs.go:276] 0 containers: []
	W0729 19:48:11.865111 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:48:11.865120 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:48:11.865212 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:48:11.899431 1120970 cri.go:89] found id: ""
	I0729 19:48:11.899465 1120970 logs.go:276] 0 containers: []
	W0729 19:48:11.899475 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:48:11.899483 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:48:11.899547 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:48:11.933796 1120970 cri.go:89] found id: ""
	I0729 19:48:11.933831 1120970 logs.go:276] 0 containers: []
	W0729 19:48:11.933843 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:48:11.933851 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:48:11.933920 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:48:11.976911 1120970 cri.go:89] found id: ""
	I0729 19:48:11.976941 1120970 logs.go:276] 0 containers: []
	W0729 19:48:11.976951 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:48:11.976958 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:48:11.977020 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:48:12.012692 1120970 cri.go:89] found id: ""
	I0729 19:48:12.012723 1120970 logs.go:276] 0 containers: []
	W0729 19:48:12.012732 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:48:12.012738 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:48:12.012801 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:48:12.049648 1120970 cri.go:89] found id: ""
	I0729 19:48:12.049684 1120970 logs.go:276] 0 containers: []
	W0729 19:48:12.049695 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:48:12.049704 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:48:12.049771 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:48:12.093629 1120970 cri.go:89] found id: ""
	I0729 19:48:12.093662 1120970 logs.go:276] 0 containers: []
	W0729 19:48:12.093673 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:48:12.093682 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:48:12.093752 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:48:12.130835 1120970 cri.go:89] found id: ""
	I0729 19:48:12.130887 1120970 logs.go:276] 0 containers: []
	W0729 19:48:12.130899 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:48:12.130912 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:48:12.130930 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:48:12.168464 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:48:12.168494 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:48:12.224722 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:48:12.224767 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:48:12.238454 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:48:12.238491 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:48:12.309122 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:48:12.309156 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:48:12.309171 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:48:10.195555 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:12.196093 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:11.638217 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:14.137267 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:11.778922 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:13.779268 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:14.892160 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:48:14.906036 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:48:14.906105 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:48:14.939106 1120970 cri.go:89] found id: ""
	I0729 19:48:14.939136 1120970 logs.go:276] 0 containers: []
	W0729 19:48:14.939144 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:48:14.939151 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:48:14.939218 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:48:14.973776 1120970 cri.go:89] found id: ""
	I0729 19:48:14.973806 1120970 logs.go:276] 0 containers: []
	W0729 19:48:14.973817 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:48:14.973825 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:48:14.973887 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:48:15.004448 1120970 cri.go:89] found id: ""
	I0729 19:48:15.004475 1120970 logs.go:276] 0 containers: []
	W0729 19:48:15.004483 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:48:15.004489 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:48:15.004556 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:48:15.038066 1120970 cri.go:89] found id: ""
	I0729 19:48:15.038093 1120970 logs.go:276] 0 containers: []
	W0729 19:48:15.038101 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:48:15.038110 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:48:15.038174 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:48:15.070539 1120970 cri.go:89] found id: ""
	I0729 19:48:15.070568 1120970 logs.go:276] 0 containers: []
	W0729 19:48:15.070577 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:48:15.070585 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:48:15.070646 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:48:15.103880 1120970 cri.go:89] found id: ""
	I0729 19:48:15.103922 1120970 logs.go:276] 0 containers: []
	W0729 19:48:15.103934 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:48:15.103943 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:48:15.104013 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:48:15.140762 1120970 cri.go:89] found id: ""
	I0729 19:48:15.140785 1120970 logs.go:276] 0 containers: []
	W0729 19:48:15.140792 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:48:15.140798 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:48:15.140850 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:48:15.174376 1120970 cri.go:89] found id: ""
	I0729 19:48:15.174411 1120970 logs.go:276] 0 containers: []
	W0729 19:48:15.174422 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:48:15.174434 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:48:15.174457 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:48:15.231283 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:48:15.231319 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:48:15.245103 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:48:15.245131 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:48:15.317664 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:48:15.317685 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:48:15.317701 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:48:15.404545 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:48:15.404600 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:48:17.949406 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:48:17.963001 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:48:17.963084 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:48:14.697767 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:17.194300 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:16.137773 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:16.632390 1120280 pod_ready.go:81] duration metric: took 4m0.001130574s for pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace to be "Ready" ...
	E0729 19:48:16.632416 1120280 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace to be "Ready" (will not retry!)
	I0729 19:48:16.632439 1120280 pod_ready.go:38] duration metric: took 4m10.712020611s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 19:48:16.632469 1120280 kubeadm.go:597] duration metric: took 4m18.568642855s to restartPrimaryControlPlane
	W0729 19:48:16.632566 1120280 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0729 19:48:16.632597 1120280 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0729 19:48:16.279567 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:18.280676 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:20.779399 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:18.003227 1120970 cri.go:89] found id: ""
	I0729 19:48:18.003263 1120970 logs.go:276] 0 containers: []
	W0729 19:48:18.003274 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:48:18.003284 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:48:18.003363 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:48:18.037680 1120970 cri.go:89] found id: ""
	I0729 19:48:18.037716 1120970 logs.go:276] 0 containers: []
	W0729 19:48:18.037727 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:48:18.037736 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:48:18.037804 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:48:18.081360 1120970 cri.go:89] found id: ""
	I0729 19:48:18.081393 1120970 logs.go:276] 0 containers: []
	W0729 19:48:18.081403 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:48:18.081412 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:48:18.081479 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:48:18.115582 1120970 cri.go:89] found id: ""
	I0729 19:48:18.115619 1120970 logs.go:276] 0 containers: []
	W0729 19:48:18.115630 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:48:18.115639 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:48:18.115708 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:48:18.159771 1120970 cri.go:89] found id: ""
	I0729 19:48:18.159807 1120970 logs.go:276] 0 containers: []
	W0729 19:48:18.159818 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:48:18.159826 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:48:18.159899 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:48:18.206073 1120970 cri.go:89] found id: ""
	I0729 19:48:18.206100 1120970 logs.go:276] 0 containers: []
	W0729 19:48:18.206107 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:48:18.206113 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:48:18.206173 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:48:18.241841 1120970 cri.go:89] found id: ""
	I0729 19:48:18.241880 1120970 logs.go:276] 0 containers: []
	W0729 19:48:18.241892 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:48:18.241900 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:48:18.241969 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:48:18.280068 1120970 cri.go:89] found id: ""
	I0729 19:48:18.280099 1120970 logs.go:276] 0 containers: []
	W0729 19:48:18.280110 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:48:18.280123 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:48:18.280143 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:48:18.360236 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:48:18.360268 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:48:18.360285 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:48:18.447648 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:48:18.447693 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:48:18.489625 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:48:18.489663 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:48:18.543428 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:48:18.543476 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:48:21.058220 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:48:21.073079 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:48:21.073168 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:48:21.111334 1120970 cri.go:89] found id: ""
	I0729 19:48:21.111377 1120970 logs.go:276] 0 containers: []
	W0729 19:48:21.111389 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:48:21.111398 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:48:21.111462 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:48:21.144757 1120970 cri.go:89] found id: ""
	I0729 19:48:21.144788 1120970 logs.go:276] 0 containers: []
	W0729 19:48:21.144798 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:48:21.144806 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:48:21.144872 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:48:21.178887 1120970 cri.go:89] found id: ""
	I0729 19:48:21.178919 1120970 logs.go:276] 0 containers: []
	W0729 19:48:21.178927 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:48:21.178934 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:48:21.179000 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:48:21.216561 1120970 cri.go:89] found id: ""
	I0729 19:48:21.216589 1120970 logs.go:276] 0 containers: []
	W0729 19:48:21.216605 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:48:21.216612 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:48:21.216679 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:48:21.252564 1120970 cri.go:89] found id: ""
	I0729 19:48:21.252601 1120970 logs.go:276] 0 containers: []
	W0729 19:48:21.252612 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:48:21.252621 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:48:21.252692 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:48:21.287372 1120970 cri.go:89] found id: ""
	I0729 19:48:21.287399 1120970 logs.go:276] 0 containers: []
	W0729 19:48:21.287410 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:48:21.287418 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:48:21.287482 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:48:21.325121 1120970 cri.go:89] found id: ""
	I0729 19:48:21.325159 1120970 logs.go:276] 0 containers: []
	W0729 19:48:21.325169 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:48:21.325177 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:48:21.325248 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:48:21.359113 1120970 cri.go:89] found id: ""
	I0729 19:48:21.359145 1120970 logs.go:276] 0 containers: []
	W0729 19:48:21.359156 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:48:21.359169 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:48:21.359185 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:48:21.416196 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:48:21.416233 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:48:21.430635 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:48:21.430668 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:48:21.498436 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:48:21.498461 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:48:21.498478 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:48:21.578602 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:48:21.578643 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
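
Each retry cycle above follows the same pattern: probe for a running apiserver process, list CRI containers for every control-plane component, and, finding none, fall back to collecting kubelet, dmesg, describe-nodes, CRI-O and container-status output. A minimal shell sketch of that probe, assuming only the commands already shown in this log:

    # Empty output at this point: no apiserver process and no apiserver container exist yet.
    sudo pgrep -xnf 'kube-apiserver.*minikube.*' \
      || sudo crictl ps -a --quiet --name=kube-apiserver
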
	I0729 19:48:19.195857 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:21.202391 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:23.696778 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:23.278313 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:25.279270 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:24.117802 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:48:24.132716 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:48:24.132796 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:48:24.168658 1120970 cri.go:89] found id: ""
	I0729 19:48:24.168689 1120970 logs.go:276] 0 containers: []
	W0729 19:48:24.168698 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:48:24.168703 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:48:24.168763 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:48:24.211499 1120970 cri.go:89] found id: ""
	I0729 19:48:24.211533 1120970 logs.go:276] 0 containers: []
	W0729 19:48:24.211543 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:48:24.211551 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:48:24.211622 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:48:24.244579 1120970 cri.go:89] found id: ""
	I0729 19:48:24.244607 1120970 logs.go:276] 0 containers: []
	W0729 19:48:24.244616 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:48:24.244622 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:48:24.244680 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:48:24.278356 1120970 cri.go:89] found id: ""
	I0729 19:48:24.278386 1120970 logs.go:276] 0 containers: []
	W0729 19:48:24.278396 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:48:24.278404 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:48:24.278469 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:48:24.314725 1120970 cri.go:89] found id: ""
	I0729 19:48:24.314760 1120970 logs.go:276] 0 containers: []
	W0729 19:48:24.314771 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:48:24.314779 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:48:24.314870 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:48:24.349743 1120970 cri.go:89] found id: ""
	I0729 19:48:24.349772 1120970 logs.go:276] 0 containers: []
	W0729 19:48:24.349781 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:48:24.349788 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:48:24.349863 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:48:24.382484 1120970 cri.go:89] found id: ""
	I0729 19:48:24.382511 1120970 logs.go:276] 0 containers: []
	W0729 19:48:24.382521 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:48:24.382529 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:48:24.382606 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:48:24.418986 1120970 cri.go:89] found id: ""
	I0729 19:48:24.419013 1120970 logs.go:276] 0 containers: []
	W0729 19:48:24.419020 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:48:24.419030 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:48:24.419052 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:48:24.456725 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:48:24.456762 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:48:24.508592 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:48:24.508628 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:48:24.521610 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:48:24.521642 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:48:24.591015 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:48:24.591041 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:48:24.591058 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:48:27.170099 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:48:27.183543 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:48:27.183619 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:48:27.218044 1120970 cri.go:89] found id: ""
	I0729 19:48:27.218075 1120970 logs.go:276] 0 containers: []
	W0729 19:48:27.218083 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:48:27.218090 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:48:27.218154 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:48:27.251613 1120970 cri.go:89] found id: ""
	I0729 19:48:27.251638 1120970 logs.go:276] 0 containers: []
	W0729 19:48:27.251646 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:48:27.251651 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:48:27.251707 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:48:27.291540 1120970 cri.go:89] found id: ""
	I0729 19:48:27.291569 1120970 logs.go:276] 0 containers: []
	W0729 19:48:27.291578 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:48:27.291586 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:48:27.291650 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:48:27.322921 1120970 cri.go:89] found id: ""
	I0729 19:48:27.322956 1120970 logs.go:276] 0 containers: []
	W0729 19:48:27.322965 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:48:27.322973 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:48:27.323042 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:48:27.360337 1120970 cri.go:89] found id: ""
	I0729 19:48:27.360370 1120970 logs.go:276] 0 containers: []
	W0729 19:48:27.360381 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:48:27.360389 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:48:27.360448 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:48:27.398445 1120970 cri.go:89] found id: ""
	I0729 19:48:27.398490 1120970 logs.go:276] 0 containers: []
	W0729 19:48:27.398502 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:48:27.398510 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:48:27.398577 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:48:27.432147 1120970 cri.go:89] found id: ""
	I0729 19:48:27.432176 1120970 logs.go:276] 0 containers: []
	W0729 19:48:27.432184 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:48:27.432191 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:48:27.432260 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:48:27.471347 1120970 cri.go:89] found id: ""
	I0729 19:48:27.471380 1120970 logs.go:276] 0 containers: []
	W0729 19:48:27.471392 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:48:27.471404 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:48:27.471421 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:48:27.526997 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:48:27.527032 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:48:27.541189 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:48:27.541219 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:48:27.612270 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:48:27.612293 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:48:27.612310 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:48:27.688940 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:48:27.688979 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:48:26.195903 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:28.696936 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:27.778151 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:30.278900 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:30.228578 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:48:30.241827 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:48:30.241896 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:48:30.275201 1120970 cri.go:89] found id: ""
	I0729 19:48:30.275230 1120970 logs.go:276] 0 containers: []
	W0729 19:48:30.275241 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:48:30.275249 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:48:30.275305 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:48:30.313499 1120970 cri.go:89] found id: ""
	I0729 19:48:30.313526 1120970 logs.go:276] 0 containers: []
	W0729 19:48:30.313534 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:48:30.313540 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:48:30.313593 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:48:30.348036 1120970 cri.go:89] found id: ""
	I0729 19:48:30.348063 1120970 logs.go:276] 0 containers: []
	W0729 19:48:30.348072 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:48:30.348078 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:48:30.348148 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:48:30.383104 1120970 cri.go:89] found id: ""
	I0729 19:48:30.383135 1120970 logs.go:276] 0 containers: []
	W0729 19:48:30.383147 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:48:30.383155 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:48:30.383244 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:48:30.421367 1120970 cri.go:89] found id: ""
	I0729 19:48:30.421395 1120970 logs.go:276] 0 containers: []
	W0729 19:48:30.421404 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:48:30.421418 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:48:30.421484 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:48:30.460712 1120970 cri.go:89] found id: ""
	I0729 19:48:30.460746 1120970 logs.go:276] 0 containers: []
	W0729 19:48:30.460758 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:48:30.460767 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:48:30.460832 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:48:30.503728 1120970 cri.go:89] found id: ""
	I0729 19:48:30.503757 1120970 logs.go:276] 0 containers: []
	W0729 19:48:30.503769 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:48:30.503777 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:48:30.503842 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:48:30.544605 1120970 cri.go:89] found id: ""
	I0729 19:48:30.544639 1120970 logs.go:276] 0 containers: []
	W0729 19:48:30.544651 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:48:30.544663 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:48:30.544680 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:48:30.559616 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:48:30.559652 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:48:30.634554 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:48:30.634578 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:48:30.634599 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:48:30.717930 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:48:30.717968 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:48:30.759109 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:48:30.759140 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:48:31.194967 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:33.195033 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:32.777218 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:34.777917 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:33.313550 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:48:33.327425 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:48:33.327483 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:48:33.369009 1120970 cri.go:89] found id: ""
	I0729 19:48:33.369037 1120970 logs.go:276] 0 containers: []
	W0729 19:48:33.369047 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:48:33.369054 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:48:33.369121 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:48:33.406459 1120970 cri.go:89] found id: ""
	I0729 19:48:33.406491 1120970 logs.go:276] 0 containers: []
	W0729 19:48:33.406501 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:48:33.406509 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:48:33.406579 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:48:33.444176 1120970 cri.go:89] found id: ""
	I0729 19:48:33.444210 1120970 logs.go:276] 0 containers: []
	W0729 19:48:33.444222 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:48:33.444230 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:48:33.444297 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:48:33.482882 1120970 cri.go:89] found id: ""
	I0729 19:48:33.482977 1120970 logs.go:276] 0 containers: []
	W0729 19:48:33.482994 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:48:33.483002 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:48:33.483070 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:48:33.516972 1120970 cri.go:89] found id: ""
	I0729 19:48:33.516999 1120970 logs.go:276] 0 containers: []
	W0729 19:48:33.517009 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:48:33.517015 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:48:33.517077 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:48:33.557559 1120970 cri.go:89] found id: ""
	I0729 19:48:33.557598 1120970 logs.go:276] 0 containers: []
	W0729 19:48:33.557620 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:48:33.557629 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:48:33.557699 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:48:33.592756 1120970 cri.go:89] found id: ""
	I0729 19:48:33.592786 1120970 logs.go:276] 0 containers: []
	W0729 19:48:33.592793 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:48:33.592799 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:48:33.592858 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:48:33.626104 1120970 cri.go:89] found id: ""
	I0729 19:48:33.626136 1120970 logs.go:276] 0 containers: []
	W0729 19:48:33.626147 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:48:33.626158 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:48:33.626175 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:48:33.680456 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:48:33.680498 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:48:33.694700 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:48:33.694732 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:48:33.770833 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:48:33.770863 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:48:33.770881 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:48:33.847537 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:48:33.847571 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:48:36.390251 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:48:36.403265 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:48:36.403377 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:48:36.437189 1120970 cri.go:89] found id: ""
	I0729 19:48:36.437216 1120970 logs.go:276] 0 containers: []
	W0729 19:48:36.437227 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:48:36.437235 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:48:36.437296 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:48:36.471025 1120970 cri.go:89] found id: ""
	I0729 19:48:36.471056 1120970 logs.go:276] 0 containers: []
	W0729 19:48:36.471067 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:48:36.471083 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:48:36.471143 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:48:36.504736 1120970 cri.go:89] found id: ""
	I0729 19:48:36.504767 1120970 logs.go:276] 0 containers: []
	W0729 19:48:36.504779 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:48:36.504787 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:48:36.504852 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:48:36.537866 1120970 cri.go:89] found id: ""
	I0729 19:48:36.537893 1120970 logs.go:276] 0 containers: []
	W0729 19:48:36.537903 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:48:36.537911 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:48:36.537974 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:48:36.574083 1120970 cri.go:89] found id: ""
	I0729 19:48:36.574116 1120970 logs.go:276] 0 containers: []
	W0729 19:48:36.574127 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:48:36.574136 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:48:36.574199 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:48:36.613130 1120970 cri.go:89] found id: ""
	I0729 19:48:36.613160 1120970 logs.go:276] 0 containers: []
	W0729 19:48:36.613172 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:48:36.613179 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:48:36.613244 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:48:36.649617 1120970 cri.go:89] found id: ""
	I0729 19:48:36.649644 1120970 logs.go:276] 0 containers: []
	W0729 19:48:36.649655 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:48:36.649663 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:48:36.649731 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:48:36.688729 1120970 cri.go:89] found id: ""
	I0729 19:48:36.688765 1120970 logs.go:276] 0 containers: []
	W0729 19:48:36.688777 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:48:36.688790 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:48:36.688807 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:48:36.741483 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:48:36.741524 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:48:36.759730 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:48:36.759777 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:48:36.847102 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:48:36.847129 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:48:36.847148 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:48:36.928364 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:48:36.928403 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:48:35.695788 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:38.195691 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:36.780250 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:38.272543 1120587 pod_ready.go:81] duration metric: took 4m0.000382733s for pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace to be "Ready" ...
	E0729 19:48:38.272574 1120587 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace to be "Ready" (will not retry!)
	I0729 19:48:38.272595 1120587 pod_ready.go:38] duration metric: took 4m12.412522427s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 19:48:38.272622 1120587 kubeadm.go:597] duration metric: took 4m20.569295588s to restartPrimaryControlPlane
	W0729 19:48:38.272693 1120587 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0729 19:48:38.272722 1120587 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0729 19:48:39.468501 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:48:39.482102 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:48:39.482180 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:48:39.522722 1120970 cri.go:89] found id: ""
	I0729 19:48:39.522754 1120970 logs.go:276] 0 containers: []
	W0729 19:48:39.522763 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:48:39.522769 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:48:39.522824 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:48:39.561057 1120970 cri.go:89] found id: ""
	I0729 19:48:39.561088 1120970 logs.go:276] 0 containers: []
	W0729 19:48:39.561098 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:48:39.561106 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:48:39.561185 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:48:39.599802 1120970 cri.go:89] found id: ""
	I0729 19:48:39.599831 1120970 logs.go:276] 0 containers: []
	W0729 19:48:39.599840 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:48:39.599848 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:48:39.599920 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:48:39.634935 1120970 cri.go:89] found id: ""
	I0729 19:48:39.634966 1120970 logs.go:276] 0 containers: []
	W0729 19:48:39.634978 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:48:39.634986 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:48:39.635054 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:48:39.670682 1120970 cri.go:89] found id: ""
	I0729 19:48:39.670713 1120970 logs.go:276] 0 containers: []
	W0729 19:48:39.670721 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:48:39.670728 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:48:39.670798 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:48:39.705988 1120970 cri.go:89] found id: ""
	I0729 19:48:39.706024 1120970 logs.go:276] 0 containers: []
	W0729 19:48:39.706034 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:48:39.706042 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:48:39.706112 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:48:39.743886 1120970 cri.go:89] found id: ""
	I0729 19:48:39.743919 1120970 logs.go:276] 0 containers: []
	W0729 19:48:39.743931 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:48:39.743938 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:48:39.744007 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:48:39.781966 1120970 cri.go:89] found id: ""
	I0729 19:48:39.782000 1120970 logs.go:276] 0 containers: []
	W0729 19:48:39.782011 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:48:39.782023 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:48:39.782040 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:48:39.836034 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:48:39.836074 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:48:39.849330 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:48:39.849365 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:48:39.922803 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:48:39.922832 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:48:39.922860 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:48:40.006015 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:48:40.006061 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:48:42.556277 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:48:42.569657 1120970 kubeadm.go:597] duration metric: took 4m2.867642237s to restartPrimaryControlPlane
	W0729 19:48:42.569742 1120970 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0729 19:48:42.569773 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0729 19:48:40.695917 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:43.195442 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:43.033878 1120970 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 19:48:43.048499 1120970 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 19:48:43.058936 1120970 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 19:48:43.070746 1120970 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 19:48:43.070766 1120970 kubeadm.go:157] found existing configuration files:
	
	I0729 19:48:43.070814 1120970 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 19:48:43.079568 1120970 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 19:48:43.079631 1120970 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 19:48:43.088576 1120970 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 19:48:43.097654 1120970 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 19:48:43.097723 1120970 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 19:48:43.107155 1120970 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 19:48:43.117105 1120970 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 19:48:43.117152 1120970 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 19:48:43.126933 1120970 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 19:48:43.136114 1120970 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 19:48:43.136162 1120970 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 19:48:43.145196 1120970 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 19:48:43.365894 1120970 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
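
By this point the harness has given up restarting the existing v1.20.0 control plane and re-initializes it from scratch. A condensed sketch of the reset-and-init sequence it runs, assuming the paths and flags already shown above (the long --ignore-preflight-errors list is abbreviated here):

    sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" \
      kubeadm reset --cri-socket /var/run/crio/crio.sock --force
    sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" \
      kubeadm init --config /var/tmp/minikube/kubeadm.yaml \
      --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,...,Port-10250,Swap,NumCPU,Mem
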
	I0729 19:48:45.695643 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:47.696055 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:48.051556 1120280 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.418935975s)
	I0729 19:48:48.051634 1120280 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 19:48:48.066832 1120280 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 19:48:48.076768 1120280 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 19:48:48.086203 1120280 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 19:48:48.086224 1120280 kubeadm.go:157] found existing configuration files:
	
	I0729 19:48:48.086269 1120280 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 19:48:48.095286 1120280 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 19:48:48.095344 1120280 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 19:48:48.104238 1120280 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 19:48:48.113232 1120280 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 19:48:48.113287 1120280 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 19:48:48.122679 1120280 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 19:48:48.131511 1120280 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 19:48:48.131565 1120280 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 19:48:48.140110 1120280 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 19:48:48.148601 1120280 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 19:48:48.148650 1120280 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 19:48:48.157410 1120280 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 19:48:48.352715 1120280 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 19:48:50.195418 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:52.696285 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:56.332520 1120280 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0729 19:48:56.332571 1120280 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 19:48:56.332675 1120280 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 19:48:56.332770 1120280 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 19:48:56.332853 1120280 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0729 19:48:56.332967 1120280 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 19:48:56.334322 1120280 out.go:204]   - Generating certificates and keys ...
	I0729 19:48:56.334409 1120280 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 19:48:56.334490 1120280 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 19:48:56.334605 1120280 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0729 19:48:56.334688 1120280 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0729 19:48:56.334798 1120280 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0729 19:48:56.334897 1120280 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0729 19:48:56.334984 1120280 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0729 19:48:56.335060 1120280 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0729 19:48:56.335161 1120280 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0729 19:48:56.335270 1120280 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0729 19:48:56.335324 1120280 kubeadm.go:310] [certs] Using the existing "sa" key
	I0729 19:48:56.335374 1120280 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 19:48:56.335423 1120280 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 19:48:56.335473 1120280 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0729 19:48:56.335532 1120280 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 19:48:56.335614 1120280 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 19:48:56.335675 1120280 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 19:48:56.335785 1120280 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 19:48:56.335884 1120280 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 19:48:56.336979 1120280 out.go:204]   - Booting up control plane ...
	I0729 19:48:56.337065 1120280 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 19:48:56.337133 1120280 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 19:48:56.337201 1120280 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 19:48:56.337326 1120280 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 19:48:56.337427 1120280 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 19:48:56.337498 1120280 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 19:48:56.337647 1120280 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0729 19:48:56.337714 1120280 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0729 19:48:56.337762 1120280 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.952649ms
	I0729 19:48:56.337821 1120280 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0729 19:48:56.337868 1120280 kubeadm.go:310] [api-check] The API server is healthy after 5.002178003s
	I0729 19:48:56.337955 1120280 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0729 19:48:56.338084 1120280 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0729 19:48:56.338139 1120280 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0729 19:48:56.338289 1120280 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-358053 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0729 19:48:56.338342 1120280 kubeadm.go:310] [bootstrap-token] Using token: 4fomec.1511vtef88eg64ao
	I0729 19:48:56.339522 1120280 out.go:204]   - Configuring RBAC rules ...
	I0729 19:48:56.339612 1120280 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0729 19:48:56.339681 1120280 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0729 19:48:56.339857 1120280 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0729 19:48:56.339995 1120280 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0729 19:48:56.340156 1120280 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0729 19:48:56.340283 1120280 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0729 19:48:56.340438 1120280 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0729 19:48:56.340511 1120280 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0729 19:48:56.340575 1120280 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0729 19:48:56.340585 1120280 kubeadm.go:310] 
	I0729 19:48:56.340671 1120280 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0729 19:48:56.340681 1120280 kubeadm.go:310] 
	I0729 19:48:56.340762 1120280 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0729 19:48:56.340781 1120280 kubeadm.go:310] 
	I0729 19:48:56.340812 1120280 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0729 19:48:56.340861 1120280 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0729 19:48:56.340904 1120280 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0729 19:48:56.340907 1120280 kubeadm.go:310] 
	I0729 19:48:56.340972 1120280 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0729 19:48:56.340978 1120280 kubeadm.go:310] 
	I0729 19:48:56.341034 1120280 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0729 19:48:56.341038 1120280 kubeadm.go:310] 
	I0729 19:48:56.341083 1120280 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0729 19:48:56.341151 1120280 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0729 19:48:56.341209 1120280 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0729 19:48:56.341219 1120280 kubeadm.go:310] 
	I0729 19:48:56.341285 1120280 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0729 19:48:56.341369 1120280 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0729 19:48:56.341376 1120280 kubeadm.go:310] 
	I0729 19:48:56.341454 1120280 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 4fomec.1511vtef88eg64ao \
	I0729 19:48:56.341602 1120280 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:53e118302d1159bf0acd8cfe005edb508199276288ff17d6630ff7f7307f10b1 \
	I0729 19:48:56.341636 1120280 kubeadm.go:310] 	--control-plane 
	I0729 19:48:56.341642 1120280 kubeadm.go:310] 
	I0729 19:48:56.341752 1120280 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0729 19:48:56.341769 1120280 kubeadm.go:310] 
	I0729 19:48:56.341886 1120280 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 4fomec.1511vtef88eg64ao \
	I0729 19:48:56.342018 1120280 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:53e118302d1159bf0acd8cfe005edb508199276288ff17d6630ff7f7307f10b1 
	I0729 19:48:56.342034 1120280 cni.go:84] Creating CNI manager for ""
	I0729 19:48:56.342044 1120280 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 19:48:56.343241 1120280 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 19:48:55.195151 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:57.195200 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:56.344247 1120280 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 19:48:56.355941 1120280 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0729 19:48:56.377835 1120280 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0729 19:48:56.377932 1120280 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:48:56.377958 1120280 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-358053 minikube.k8s.io/updated_at=2024_07_29T19_48_56_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=ff26f50543a99741e8a988a34136ffea2b41dbf0 minikube.k8s.io/name=embed-certs-358053 minikube.k8s.io/primary=true
	I0729 19:48:56.394308 1120280 ops.go:34] apiserver oom_adj: -16
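
The repeated "kubectl get sa default" calls that follow are the harness waiting for the cluster's default service account to appear before it proceeds. An equivalent shell sketch of that wait, assuming the kubectl path shown below (the real retry loop lives in the Go harness, not in a shell script):

    until sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default \
            --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 1
    done
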
	I0729 19:48:56.575183 1120280 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:48:57.076094 1120280 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:48:57.575985 1120280 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:48:58.075805 1120280 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:48:58.576183 1120280 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:48:59.075390 1120280 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:48:59.576159 1120280 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:48:59.195343 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:49:01.696180 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:49:00.075628 1120280 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:00.575675 1120280 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:01.075529 1120280 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:01.576070 1120280 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:02.076065 1120280 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:02.575283 1120280 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:03.076139 1120280 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:03.575717 1120280 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:04.076142 1120280 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:04.575998 1120280 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:04.194697 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:49:06.195094 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:49:08.695788 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:49:05.075222 1120280 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:05.575723 1120280 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:06.075652 1120280 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:06.575680 1120280 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:07.075645 1120280 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:07.575900 1120280 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:08.075951 1120280 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:08.576178 1120280 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:09.076094 1120280 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:09.575480 1120280 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:10.075954 1120280 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:10.185328 1120280 kubeadm.go:1113] duration metric: took 13.807462033s to wait for elevateKubeSystemPrivileges
	I0729 19:49:10.185372 1120280 kubeadm.go:394] duration metric: took 5m12.173830361s to StartCluster
	I0729 19:49:10.185408 1120280 settings.go:142] acquiring lock: {Name:mk8657322241b3b1f65443d6cee1b2ccb99f315e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 19:49:10.185614 1120280 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19312-1055011/kubeconfig
	I0729 19:49:10.188419 1120280 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-1055011/kubeconfig: {Name:mkf834b33d9b214f3561db5b8f8958d26700afbb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 19:49:10.188761 1120280 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.201 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 19:49:10.188839 1120280 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0729 19:49:10.188929 1120280 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-358053"
	I0729 19:49:10.188939 1120280 config.go:182] Loaded profile config "embed-certs-358053": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 19:49:10.188968 1120280 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-358053"
	I0729 19:49:10.188957 1120280 addons.go:69] Setting default-storageclass=true in profile "embed-certs-358053"
	W0729 19:49:10.188978 1120280 addons.go:243] addon storage-provisioner should already be in state true
	I0729 19:49:10.188967 1120280 addons.go:69] Setting metrics-server=true in profile "embed-certs-358053"
	I0729 19:49:10.189017 1120280 addons.go:234] Setting addon metrics-server=true in "embed-certs-358053"
	I0729 19:49:10.189016 1120280 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-358053"
	I0729 19:49:10.189023 1120280 host.go:66] Checking if "embed-certs-358053" exists ...
	W0729 19:49:10.189026 1120280 addons.go:243] addon metrics-server should already be in state true
	I0729 19:49:10.189059 1120280 host.go:66] Checking if "embed-certs-358053" exists ...
	I0729 19:49:10.189460 1120280 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:49:10.189461 1120280 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:49:10.189493 1120280 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:49:10.189464 1120280 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:49:10.189513 1120280 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:49:10.189539 1120280 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:49:10.192359 1120280 out.go:177] * Verifying Kubernetes components...
	I0729 19:49:10.193480 1120280 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 19:49:10.210772 1120280 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43059
	I0729 19:49:10.210789 1120280 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37187
	I0729 19:49:10.210777 1120280 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43007
	I0729 19:49:10.211410 1120280 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:49:10.211444 1120280 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:49:10.211415 1120280 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:49:10.211943 1120280 main.go:141] libmachine: Using API Version  1
	I0729 19:49:10.211961 1120280 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:49:10.212067 1120280 main.go:141] libmachine: Using API Version  1
	I0729 19:49:10.212082 1120280 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:49:10.212104 1120280 main.go:141] libmachine: Using API Version  1
	I0729 19:49:10.212129 1120280 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:49:10.212485 1120280 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:49:10.212490 1120280 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:49:10.212517 1120280 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:49:10.213028 1120280 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:49:10.213061 1120280 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:49:10.213275 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetState
	I0729 19:49:10.213666 1120280 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:49:10.213693 1120280 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:49:10.217668 1120280 addons.go:234] Setting addon default-storageclass=true in "embed-certs-358053"
	W0729 19:49:10.217694 1120280 addons.go:243] addon default-storageclass should already be in state true
	I0729 19:49:10.217729 1120280 host.go:66] Checking if "embed-certs-358053" exists ...
	I0729 19:49:10.218106 1120280 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:49:10.218134 1120280 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:49:10.233308 1120280 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34717
	I0729 19:49:10.233515 1120280 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45983
	I0729 19:49:10.233923 1120280 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:49:10.234065 1120280 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:49:10.234486 1120280 main.go:141] libmachine: Using API Version  1
	I0729 19:49:10.234511 1120280 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:49:10.234622 1120280 main.go:141] libmachine: Using API Version  1
	I0729 19:49:10.234646 1120280 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:49:10.234881 1120280 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:49:10.235095 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetState
	I0729 19:49:10.235124 1120280 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:49:10.236407 1120280 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37239
	I0729 19:49:10.236417 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetState
	I0729 19:49:10.236976 1120280 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:49:10.237510 1120280 main.go:141] libmachine: Using API Version  1
	I0729 19:49:10.237529 1120280 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:49:10.237603 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .DriverName
	I0729 19:49:10.238068 1120280 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:49:10.238462 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .DriverName
	I0729 19:49:10.238685 1120280 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:49:10.238717 1120280 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:49:10.239583 1120280 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0729 19:49:10.240247 1120280 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 19:49:09.758990 1120587 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.486239671s)
	I0729 19:49:09.759083 1120587 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 19:49:09.774752 1120587 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 19:49:09.785968 1120587 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 19:49:09.796242 1120587 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 19:49:09.796267 1120587 kubeadm.go:157] found existing configuration files:
	
	I0729 19:49:09.796320 1120587 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0729 19:49:09.805373 1120587 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 19:49:09.805446 1120587 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 19:49:09.814418 1120587 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0729 19:49:09.822923 1120587 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 19:49:09.822977 1120587 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 19:49:09.831784 1120587 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0729 19:49:09.840631 1120587 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 19:49:09.840670 1120587 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 19:49:09.850149 1120587 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0729 19:49:09.858648 1120587 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 19:49:09.858685 1120587 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 19:49:09.868191 1120587 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 19:49:09.918324 1120587 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0729 19:49:09.918439 1120587 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 19:49:10.082807 1120587 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 19:49:10.082977 1120587 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 19:49:10.083133 1120587 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0729 19:49:10.346327 1120587 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 19:49:10.347784 1120587 out.go:204]   - Generating certificates and keys ...
	I0729 19:49:10.347895 1120587 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 19:49:10.347974 1120587 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 19:49:10.348065 1120587 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0729 19:49:10.348152 1120587 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0729 19:49:10.348236 1120587 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0729 19:49:10.348312 1120587 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0729 19:49:10.348395 1120587 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0729 19:49:10.348479 1120587 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0729 19:49:10.348573 1120587 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0729 19:49:10.348672 1120587 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0729 19:49:10.348726 1120587 kubeadm.go:310] [certs] Using the existing "sa" key
	I0729 19:49:10.348806 1120587 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 19:49:10.558934 1120587 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 19:49:10.733434 1120587 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0729 19:49:11.026079 1120587 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 19:49:11.159826 1120587 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 19:49:11.277696 1120587 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 19:49:11.278383 1120587 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 19:49:11.281036 1120587 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 19:49:10.240921 1120280 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0729 19:49:10.240936 1120280 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0729 19:49:10.240952 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHHostname
	I0729 19:49:10.241651 1120280 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 19:49:10.241674 1120280 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0729 19:49:10.241693 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHHostname
	I0729 19:49:10.245407 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:49:10.245440 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:49:10.245923 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:9e:78", ip: ""} in network mk-embed-certs-358053: {Iface:virbr3 ExpiryTime:2024-07-29 20:43:42 +0000 UTC Type:0 Mac:52:54:00:b7:9e:78 Iaid: IPaddr:192.168.61.201 Prefix:24 Hostname:embed-certs-358053 Clientid:01:52:54:00:b7:9e:78}
	I0729 19:49:10.245922 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:9e:78", ip: ""} in network mk-embed-certs-358053: {Iface:virbr3 ExpiryTime:2024-07-29 20:43:42 +0000 UTC Type:0 Mac:52:54:00:b7:9e:78 Iaid: IPaddr:192.168.61.201 Prefix:24 Hostname:embed-certs-358053 Clientid:01:52:54:00:b7:9e:78}
	I0729 19:49:10.245947 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined IP address 192.168.61.201 and MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:49:10.245967 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined IP address 192.168.61.201 and MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:49:10.246145 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHPort
	I0729 19:49:10.246329 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHKeyPath
	I0729 19:49:10.246372 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHPort
	I0729 19:49:10.246511 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHUsername
	I0729 19:49:10.246672 1120280 sshutil.go:53] new ssh client: &{IP:192.168.61.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/embed-certs-358053/id_rsa Username:docker}
	I0729 19:49:10.246688 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHKeyPath
	I0729 19:49:10.246866 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHUsername
	I0729 19:49:10.246988 1120280 sshutil.go:53] new ssh client: &{IP:192.168.61.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/embed-certs-358053/id_rsa Username:docker}
	I0729 19:49:10.256682 1120280 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43353
	I0729 19:49:10.257146 1120280 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:49:10.257747 1120280 main.go:141] libmachine: Using API Version  1
	I0729 19:49:10.257760 1120280 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:49:10.258021 1120280 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:49:10.258264 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetState
	I0729 19:49:10.260096 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .DriverName
	I0729 19:49:10.260305 1120280 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0729 19:49:10.260322 1120280 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0729 19:49:10.260341 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHHostname
	I0729 19:49:10.263479 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:49:10.263914 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:9e:78", ip: ""} in network mk-embed-certs-358053: {Iface:virbr3 ExpiryTime:2024-07-29 20:43:42 +0000 UTC Type:0 Mac:52:54:00:b7:9e:78 Iaid: IPaddr:192.168.61.201 Prefix:24 Hostname:embed-certs-358053 Clientid:01:52:54:00:b7:9e:78}
	I0729 19:49:10.263942 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined IP address 192.168.61.201 and MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:49:10.264099 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHPort
	I0729 19:49:10.264270 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHKeyPath
	I0729 19:49:10.264457 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHUsername
	I0729 19:49:10.264566 1120280 sshutil.go:53] new ssh client: &{IP:192.168.61.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/embed-certs-358053/id_rsa Username:docker}
	I0729 19:49:10.461598 1120280 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 19:49:10.483007 1120280 node_ready.go:35] waiting up to 6m0s for node "embed-certs-358053" to be "Ready" ...
	I0729 19:49:10.492573 1120280 node_ready.go:49] node "embed-certs-358053" has status "Ready":"True"
	I0729 19:49:10.492601 1120280 node_ready.go:38] duration metric: took 9.562848ms for node "embed-certs-358053" to be "Ready" ...
	I0729 19:49:10.492611 1120280 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 19:49:10.498908 1120280 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-62wzl" in "kube-system" namespace to be "Ready" ...
	I0729 19:49:10.574473 1120280 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0729 19:49:10.574500 1120280 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0729 19:49:10.596936 1120280 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 19:49:10.598355 1120280 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0729 19:49:10.598373 1120280 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0729 19:49:10.618403 1120280 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 19:49:10.618430 1120280 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0729 19:49:10.642761 1120280 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 19:49:10.717699 1120280 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0729 19:49:11.218300 1120280 main.go:141] libmachine: Making call to close driver server
	I0729 19:49:11.218321 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .Close
	I0729 19:49:11.218615 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | Closing plugin on server side
	I0729 19:49:11.218664 1120280 main.go:141] libmachine: Successfully made call to close driver server
	I0729 19:49:11.218676 1120280 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 19:49:11.218687 1120280 main.go:141] libmachine: Making call to close driver server
	I0729 19:49:11.218695 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .Close
	I0729 19:49:11.219043 1120280 main.go:141] libmachine: Successfully made call to close driver server
	I0729 19:49:11.219060 1120280 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 19:49:11.758222 1120280 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.115410935s)
	I0729 19:49:11.758294 1120280 main.go:141] libmachine: Making call to close driver server
	I0729 19:49:11.758311 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .Close
	I0729 19:49:11.758416 1120280 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.040630579s)
	I0729 19:49:11.758489 1120280 main.go:141] libmachine: Making call to close driver server
	I0729 19:49:11.758534 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .Close
	I0729 19:49:11.758645 1120280 main.go:141] libmachine: Successfully made call to close driver server
	I0729 19:49:11.758666 1120280 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 19:49:11.758677 1120280 main.go:141] libmachine: Making call to close driver server
	I0729 19:49:11.758684 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .Close
	I0729 19:49:11.759085 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | Closing plugin on server side
	I0729 19:49:11.759123 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | Closing plugin on server side
	I0729 19:49:11.759133 1120280 main.go:141] libmachine: Successfully made call to close driver server
	I0729 19:49:11.759140 1120280 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 19:49:11.759151 1120280 addons.go:475] Verifying addon metrics-server=true in "embed-certs-358053"
	I0729 19:49:11.759242 1120280 main.go:141] libmachine: Successfully made call to close driver server
	I0729 19:49:11.759251 1120280 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 19:49:11.759265 1120280 main.go:141] libmachine: Making call to close driver server
	I0729 19:49:11.759273 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .Close
	I0729 19:49:11.759556 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | Closing plugin on server side
	I0729 19:49:11.759551 1120280 main.go:141] libmachine: Successfully made call to close driver server
	I0729 19:49:11.759576 1120280 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 19:49:11.821869 1120280 main.go:141] libmachine: Making call to close driver server
	I0729 19:49:11.821904 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .Close
	I0729 19:49:11.822218 1120280 main.go:141] libmachine: Successfully made call to close driver server
	I0729 19:49:11.822239 1120280 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 19:49:11.822278 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | Closing plugin on server side
	I0729 19:49:11.825097 1120280 out.go:177] * Enabled addons: storage-provisioner, metrics-server, default-storageclass
	I0729 19:49:10.696468 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:49:12.696754 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:49:11.826501 1120280 addons.go:510] duration metric: took 1.63766283s for enable addons: enabled=[storage-provisioner metrics-server default-storageclass]
	I0729 19:49:12.505464 1120280 pod_ready.go:102] pod "coredns-7db6d8ff4d-62wzl" in "kube-system" namespace has status "Ready":"False"
	I0729 19:49:13.005934 1120280 pod_ready.go:92] pod "coredns-7db6d8ff4d-62wzl" in "kube-system" namespace has status "Ready":"True"
	I0729 19:49:13.005962 1120280 pod_ready.go:81] duration metric: took 2.507029118s for pod "coredns-7db6d8ff4d-62wzl" in "kube-system" namespace to be "Ready" ...
	I0729 19:49:13.005972 1120280 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-rnpqh" in "kube-system" namespace to be "Ready" ...
	I0729 19:49:13.010162 1120280 pod_ready.go:92] pod "coredns-7db6d8ff4d-rnpqh" in "kube-system" namespace has status "Ready":"True"
	I0729 19:49:13.010183 1120280 pod_ready.go:81] duration metric: took 4.204506ms for pod "coredns-7db6d8ff4d-rnpqh" in "kube-system" namespace to be "Ready" ...
	I0729 19:49:13.010191 1120280 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-358053" in "kube-system" namespace to be "Ready" ...
	I0729 19:49:13.013871 1120280 pod_ready.go:92] pod "etcd-embed-certs-358053" in "kube-system" namespace has status "Ready":"True"
	I0729 19:49:13.013888 1120280 pod_ready.go:81] duration metric: took 3.691352ms for pod "etcd-embed-certs-358053" in "kube-system" namespace to be "Ready" ...
	I0729 19:49:13.013895 1120280 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-358053" in "kube-system" namespace to be "Ready" ...
	I0729 19:49:13.017787 1120280 pod_ready.go:92] pod "kube-apiserver-embed-certs-358053" in "kube-system" namespace has status "Ready":"True"
	I0729 19:49:13.017804 1120280 pod_ready.go:81] duration metric: took 3.903153ms for pod "kube-apiserver-embed-certs-358053" in "kube-system" namespace to be "Ready" ...
	I0729 19:49:13.017812 1120280 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-358053" in "kube-system" namespace to be "Ready" ...
	I0729 19:49:13.021807 1120280 pod_ready.go:92] pod "kube-controller-manager-embed-certs-358053" in "kube-system" namespace has status "Ready":"True"
	I0729 19:49:13.021826 1120280 pod_ready.go:81] duration metric: took 4.00839ms for pod "kube-controller-manager-embed-certs-358053" in "kube-system" namespace to be "Ready" ...
	I0729 19:49:13.021834 1120280 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-phmxr" in "kube-system" namespace to be "Ready" ...
	I0729 19:49:13.404663 1120280 pod_ready.go:92] pod "kube-proxy-phmxr" in "kube-system" namespace has status "Ready":"True"
	I0729 19:49:13.404691 1120280 pod_ready.go:81] duration metric: took 382.850052ms for pod "kube-proxy-phmxr" in "kube-system" namespace to be "Ready" ...
	I0729 19:49:13.404703 1120280 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-358053" in "kube-system" namespace to be "Ready" ...
	I0729 19:49:13.803883 1120280 pod_ready.go:92] pod "kube-scheduler-embed-certs-358053" in "kube-system" namespace has status "Ready":"True"
	I0729 19:49:13.803913 1120280 pod_ready.go:81] duration metric: took 399.201369ms for pod "kube-scheduler-embed-certs-358053" in "kube-system" namespace to be "Ready" ...
	I0729 19:49:13.803924 1120280 pod_ready.go:38] duration metric: took 3.31130157s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 19:49:13.803944 1120280 api_server.go:52] waiting for apiserver process to appear ...
	I0729 19:49:13.804012 1120280 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:49:13.819097 1120280 api_server.go:72] duration metric: took 3.63029481s to wait for apiserver process to appear ...
	I0729 19:49:13.819127 1120280 api_server.go:88] waiting for apiserver healthz status ...
	I0729 19:49:13.819158 1120280 api_server.go:253] Checking apiserver healthz at https://192.168.61.201:8443/healthz ...
	I0729 19:49:13.825125 1120280 api_server.go:279] https://192.168.61.201:8443/healthz returned 200:
	ok
	I0729 19:49:13.826172 1120280 api_server.go:141] control plane version: v1.30.3
	I0729 19:49:13.826197 1120280 api_server.go:131] duration metric: took 7.062144ms to wait for apiserver health ...
	I0729 19:49:13.826206 1120280 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 19:49:14.006726 1120280 system_pods.go:59] 9 kube-system pods found
	I0729 19:49:14.006762 1120280 system_pods.go:61] "coredns-7db6d8ff4d-62wzl" [c0cf63a3-98a8-4107-8b51-3b9a39695a6c] Running
	I0729 19:49:14.006769 1120280 system_pods.go:61] "coredns-7db6d8ff4d-rnpqh" [fd0f6d7f-a55a-4556-b5e3-8ed4e555aaea] Running
	I0729 19:49:14.006774 1120280 system_pods.go:61] "etcd-embed-certs-358053" [b4e6558f-195a-449e-83fb-3ad49f1f80b0] Running
	I0729 19:49:14.006780 1120280 system_pods.go:61] "kube-apiserver-embed-certs-358053" [8ce54a21-879a-44f6-9209-699b22fe60a3] Running
	I0729 19:49:14.006786 1120280 system_pods.go:61] "kube-controller-manager-embed-certs-358053" [658a8652-2864-4825-8239-cfbe96e604ab] Running
	I0729 19:49:14.006790 1120280 system_pods.go:61] "kube-proxy-phmxr" [73020161-bb80-445c-ae4f-d1486e18a32e] Running
	I0729 19:49:14.006795 1120280 system_pods.go:61] "kube-scheduler-embed-certs-358053" [f7734e37-b41d-495a-8098-c721b9d56d7c] Running
	I0729 19:49:14.006805 1120280 system_pods.go:61] "metrics-server-569cc877fc-gpz72" [cb992ca6-11f3-4826-b701-6789d3e3e9c0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 19:49:14.006810 1120280 system_pods.go:61] "storage-provisioner" [7c484501-fa8b-4d2d-b7c7-faea3b6b0891] Running
	I0729 19:49:14.006823 1120280 system_pods.go:74] duration metric: took 180.607932ms to wait for pod list to return data ...
	I0729 19:49:14.006836 1120280 default_sa.go:34] waiting for default service account to be created ...
	I0729 19:49:14.203009 1120280 default_sa.go:45] found service account: "default"
	I0729 19:49:14.203034 1120280 default_sa.go:55] duration metric: took 196.19138ms for default service account to be created ...
	I0729 19:49:14.203043 1120280 system_pods.go:116] waiting for k8s-apps to be running ...
	I0729 19:49:14.407217 1120280 system_pods.go:86] 9 kube-system pods found
	I0729 19:49:14.407253 1120280 system_pods.go:89] "coredns-7db6d8ff4d-62wzl" [c0cf63a3-98a8-4107-8b51-3b9a39695a6c] Running
	I0729 19:49:14.407261 1120280 system_pods.go:89] "coredns-7db6d8ff4d-rnpqh" [fd0f6d7f-a55a-4556-b5e3-8ed4e555aaea] Running
	I0729 19:49:14.407267 1120280 system_pods.go:89] "etcd-embed-certs-358053" [b4e6558f-195a-449e-83fb-3ad49f1f80b0] Running
	I0729 19:49:14.407273 1120280 system_pods.go:89] "kube-apiserver-embed-certs-358053" [8ce54a21-879a-44f6-9209-699b22fe60a3] Running
	I0729 19:49:14.407279 1120280 system_pods.go:89] "kube-controller-manager-embed-certs-358053" [658a8652-2864-4825-8239-cfbe96e604ab] Running
	I0729 19:49:14.407285 1120280 system_pods.go:89] "kube-proxy-phmxr" [73020161-bb80-445c-ae4f-d1486e18a32e] Running
	I0729 19:49:14.407291 1120280 system_pods.go:89] "kube-scheduler-embed-certs-358053" [f7734e37-b41d-495a-8098-c721b9d56d7c] Running
	I0729 19:49:14.407305 1120280 system_pods.go:89] "metrics-server-569cc877fc-gpz72" [cb992ca6-11f3-4826-b701-6789d3e3e9c0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 19:49:14.407316 1120280 system_pods.go:89] "storage-provisioner" [7c484501-fa8b-4d2d-b7c7-faea3b6b0891] Running
	I0729 19:49:14.407327 1120280 system_pods.go:126] duration metric: took 204.276761ms to wait for k8s-apps to be running ...
	I0729 19:49:14.407338 1120280 system_svc.go:44] waiting for kubelet service to be running ....
	I0729 19:49:14.407396 1120280 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 19:49:14.422219 1120280 system_svc.go:56] duration metric: took 14.869175ms WaitForService to wait for kubelet
	I0729 19:49:14.422258 1120280 kubeadm.go:582] duration metric: took 4.233462765s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 19:49:14.422285 1120280 node_conditions.go:102] verifying NodePressure condition ...
	I0729 19:49:14.603042 1120280 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 19:49:14.603067 1120280 node_conditions.go:123] node cpu capacity is 2
	I0729 19:49:14.603079 1120280 node_conditions.go:105] duration metric: took 180.789494ms to run NodePressure ...
	I0729 19:49:14.603091 1120280 start.go:241] waiting for startup goroutines ...
	I0729 19:49:14.603098 1120280 start.go:246] waiting for cluster config update ...
	I0729 19:49:14.603108 1120280 start.go:255] writing updated cluster config ...
	I0729 19:49:14.603448 1120280 ssh_runner.go:195] Run: rm -f paused
	I0729 19:49:14.669359 1120280 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0729 19:49:14.671285 1120280 out.go:177] * Done! kubectl is now configured to use "embed-certs-358053" cluster and "default" namespace by default
	I0729 19:49:11.282743 1120587 out.go:204]   - Booting up control plane ...
	I0729 19:49:11.282887 1120587 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 19:49:11.283393 1120587 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 19:49:11.285899 1120587 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 19:49:11.306343 1120587 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 19:49:11.308692 1120587 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 19:49:11.308776 1120587 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 19:49:11.454703 1120587 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0729 19:49:11.454809 1120587 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0729 19:49:11.957070 1120587 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.339287ms
	I0729 19:49:11.957173 1120587 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0729 19:49:16.958829 1120587 kubeadm.go:310] [api-check] The API server is healthy after 5.001114911s
	I0729 19:49:16.975545 1120587 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0729 19:49:16.992433 1120587 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0729 19:49:17.029655 1120587 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0729 19:49:17.029911 1120587 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-024652 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0729 19:49:17.039761 1120587 kubeadm.go:310] [bootstrap-token] Using token: wivqw5.o681p65fyob7uctp
	I0729 19:49:17.040967 1120587 out.go:204]   - Configuring RBAC rules ...
	I0729 19:49:17.041098 1120587 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0729 19:49:17.047095 1120587 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0729 19:49:17.054741 1120587 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0729 19:49:17.057791 1120587 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0729 19:49:17.064906 1120587 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0729 19:49:17.068354 1120587 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0729 19:49:17.365660 1120587 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0729 19:49:17.803646 1120587 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0729 19:49:18.365942 1120587 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0729 19:49:18.367149 1120587 kubeadm.go:310] 
	I0729 19:49:18.367230 1120587 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0729 19:49:18.367239 1120587 kubeadm.go:310] 
	I0729 19:49:18.367301 1120587 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0729 19:49:18.367308 1120587 kubeadm.go:310] 
	I0729 19:49:18.367356 1120587 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0729 19:49:18.367435 1120587 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0729 19:49:18.367484 1120587 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0729 19:49:18.367490 1120587 kubeadm.go:310] 
	I0729 19:49:18.367564 1120587 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0729 19:49:18.367580 1120587 kubeadm.go:310] 
	I0729 19:49:18.367670 1120587 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0729 19:49:18.367689 1120587 kubeadm.go:310] 
	I0729 19:49:18.367767 1120587 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0729 19:49:18.367886 1120587 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0729 19:49:18.367990 1120587 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0729 19:49:18.368004 1120587 kubeadm.go:310] 
	I0729 19:49:18.368134 1120587 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0729 19:49:18.368245 1120587 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0729 19:49:18.368255 1120587 kubeadm.go:310] 
	I0729 19:49:18.368374 1120587 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token wivqw5.o681p65fyob7uctp \
	I0729 19:49:18.368509 1120587 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:53e118302d1159bf0acd8cfe005edb508199276288ff17d6630ff7f7307f10b1 \
	I0729 19:49:18.368547 1120587 kubeadm.go:310] 	--control-plane 
	I0729 19:49:18.368555 1120587 kubeadm.go:310] 
	I0729 19:49:18.368665 1120587 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0729 19:49:18.368675 1120587 kubeadm.go:310] 
	I0729 19:49:18.368786 1120587 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token wivqw5.o681p65fyob7uctp \
	I0729 19:49:18.368926 1120587 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:53e118302d1159bf0acd8cfe005edb508199276288ff17d6630ff7f7307f10b1 
	I0729 19:49:18.369333 1120587 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 19:49:18.369382 1120587 cni.go:84] Creating CNI manager for ""
	I0729 19:49:18.369398 1120587 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 19:49:18.371718 1120587 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 19:49:15.194685 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:49:17.195094 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:49:18.372851 1120587 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 19:49:18.385204 1120587 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0729 19:49:18.404504 1120587 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0729 19:49:18.404610 1120587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:18.404616 1120587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-024652 minikube.k8s.io/updated_at=2024_07_29T19_49_18_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=ff26f50543a99741e8a988a34136ffea2b41dbf0 minikube.k8s.io/name=default-k8s-diff-port-024652 minikube.k8s.io/primary=true
	I0729 19:49:18.442539 1120587 ops.go:34] apiserver oom_adj: -16
	I0729 19:49:18.580986 1120587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:19.081106 1120587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:19.581681 1120587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:20.081254 1120587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:20.581320 1120587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:21.081977 1120587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:19.195234 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:49:21.694987 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:49:23.695591 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:49:21.581543 1120587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:22.081511 1120587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:22.581732 1120587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:23.081975 1120587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:23.581374 1120587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:24.081970 1120587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:24.581928 1120587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:25.081446 1120587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:25.581218 1120587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:26.081680 1120587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:25.695771 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:49:27.698874 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:49:26.581008 1120587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:27.081974 1120587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:27.581500 1120587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:28.082002 1120587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:28.581979 1120587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:29.081223 1120587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:29.581078 1120587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:30.081834 1120587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:30.581191 1120587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:31.081737 1120587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:31.581832 1120587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:31.661893 1120587 kubeadm.go:1113] duration metric: took 13.257342088s to wait for elevateKubeSystemPrivileges
	I0729 19:49:31.661933 1120587 kubeadm.go:394] duration metric: took 5m14.024337116s to StartCluster
	I0729 19:49:31.661952 1120587 settings.go:142] acquiring lock: {Name:mk8657322241b3b1f65443d6cee1b2ccb99f315e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 19:49:31.662031 1120587 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19312-1055011/kubeconfig
	I0729 19:49:31.663828 1120587 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-1055011/kubeconfig: {Name:mkf834b33d9b214f3561db5b8f8958d26700afbb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 19:49:31.664068 1120587 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.100 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 19:49:31.664116 1120587 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0729 19:49:31.664229 1120587 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-024652"
	I0729 19:49:31.664249 1120587 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-024652"
	I0729 19:49:31.664265 1120587 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-024652"
	W0729 19:49:31.664274 1120587 addons.go:243] addon storage-provisioner should already be in state true
	I0729 19:49:31.664265 1120587 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-024652"
	I0729 19:49:31.664286 1120587 config.go:182] Loaded profile config "default-k8s-diff-port-024652": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 19:49:31.664293 1120587 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-024652"
	I0729 19:49:31.664313 1120587 host.go:66] Checking if "default-k8s-diff-port-024652" exists ...
	I0729 19:49:31.664318 1120587 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-024652"
	W0729 19:49:31.664330 1120587 addons.go:243] addon metrics-server should already be in state true
	I0729 19:49:31.664370 1120587 host.go:66] Checking if "default-k8s-diff-port-024652" exists ...
	I0729 19:49:31.664689 1120587 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:49:31.664724 1120587 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:49:31.664775 1120587 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:49:31.664778 1120587 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:49:31.664817 1120587 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:49:31.664827 1120587 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:49:31.665472 1120587 out.go:177] * Verifying Kubernetes components...
	I0729 19:49:31.666773 1120587 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 19:49:31.684886 1120587 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36885
	I0729 19:49:31.684948 1120587 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40365
	I0729 19:49:31.685049 1120587 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46525
	I0729 19:49:31.685394 1120587 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:49:31.685443 1120587 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:49:31.685506 1120587 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:49:31.685916 1120587 main.go:141] libmachine: Using API Version  1
	I0729 19:49:31.685936 1120587 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:49:31.685961 1120587 main.go:141] libmachine: Using API Version  1
	I0729 19:49:31.685982 1120587 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:49:31.686343 1120587 main.go:141] libmachine: Using API Version  1
	I0729 19:49:31.686363 1120587 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:49:31.686378 1120587 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:49:31.686367 1120587 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:49:31.686564 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetState
	I0729 19:49:31.686713 1120587 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:49:31.687028 1120587 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:49:31.687071 1120587 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:49:31.687291 1120587 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:49:31.687340 1120587 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:49:31.690159 1120587 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-024652"
	W0729 19:49:31.690177 1120587 addons.go:243] addon default-storageclass should already be in state true
	I0729 19:49:31.690208 1120587 host.go:66] Checking if "default-k8s-diff-port-024652" exists ...
	I0729 19:49:31.690543 1120587 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:49:31.690586 1120587 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:49:31.705387 1120587 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41375
	I0729 19:49:31.705778 1120587 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34099
	I0729 19:49:31.706027 1120587 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:49:31.706144 1120587 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:49:31.706207 1120587 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33381
	I0729 19:49:31.706633 1120587 main.go:141] libmachine: Using API Version  1
	I0729 19:49:31.706652 1120587 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:49:31.706730 1120587 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:49:31.706990 1120587 main.go:141] libmachine: Using API Version  1
	I0729 19:49:31.707009 1120587 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:49:31.707198 1120587 main.go:141] libmachine: Using API Version  1
	I0729 19:49:31.707218 1120587 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:49:31.707376 1120587 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:49:31.707429 1120587 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:49:31.707627 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetState
	I0729 19:49:31.707689 1120587 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:49:31.707861 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetState
	I0729 19:49:31.708016 1120587 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:49:31.708065 1120587 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:49:31.710254 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .DriverName
	I0729 19:49:31.710315 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .DriverName
	I0729 19:49:31.711981 1120587 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 19:49:31.711996 1120587 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0729 19:49:31.713155 1120587 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0729 19:49:31.713179 1120587 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0729 19:49:31.713201 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHHostname
	I0729 19:49:31.713255 1120587 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 19:49:31.713270 1120587 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0729 19:49:31.713289 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHHostname
	I0729 19:49:31.717458 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:49:31.718017 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:73:cb", ip: ""} in network mk-default-k8s-diff-port-024652: {Iface:virbr4 ExpiryTime:2024-07-29 20:44:03 +0000 UTC Type:0 Mac:52:54:00:4c:73:cb Iaid: IPaddr:192.168.72.100 Prefix:24 Hostname:default-k8s-diff-port-024652 Clientid:01:52:54:00:4c:73:cb}
	I0729 19:49:31.718042 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined IP address 192.168.72.100 and MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:49:31.718355 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHPort
	I0729 19:49:31.718503 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHKeyPath
	I0729 19:49:31.718555 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:49:31.718750 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHUsername
	I0729 19:49:31.718888 1120587 sshutil.go:53] new ssh client: &{IP:192.168.72.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/default-k8s-diff-port-024652/id_rsa Username:docker}
	I0729 19:49:31.719190 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHPort
	I0729 19:49:31.719242 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:73:cb", ip: ""} in network mk-default-k8s-diff-port-024652: {Iface:virbr4 ExpiryTime:2024-07-29 20:44:03 +0000 UTC Type:0 Mac:52:54:00:4c:73:cb Iaid: IPaddr:192.168.72.100 Prefix:24 Hostname:default-k8s-diff-port-024652 Clientid:01:52:54:00:4c:73:cb}
	I0729 19:49:31.719255 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined IP address 192.168.72.100 and MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:49:31.719400 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHKeyPath
	I0729 19:49:31.719536 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHUsername
	I0729 19:49:31.719630 1120587 sshutil.go:53] new ssh client: &{IP:192.168.72.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/default-k8s-diff-port-024652/id_rsa Username:docker}
	I0729 19:49:31.726052 1120587 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42897
	I0729 19:49:31.726530 1120587 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:49:31.727089 1120587 main.go:141] libmachine: Using API Version  1
	I0729 19:49:31.727106 1120587 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:49:31.727404 1120587 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:49:31.727585 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetState
	I0729 19:49:31.729111 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .DriverName
	I0729 19:49:31.729730 1120587 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0729 19:49:31.729832 1120587 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0729 19:49:31.729853 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHHostname
	I0729 19:49:31.733855 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:49:31.734290 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:73:cb", ip: ""} in network mk-default-k8s-diff-port-024652: {Iface:virbr4 ExpiryTime:2024-07-29 20:44:03 +0000 UTC Type:0 Mac:52:54:00:4c:73:cb Iaid: IPaddr:192.168.72.100 Prefix:24 Hostname:default-k8s-diff-port-024652 Clientid:01:52:54:00:4c:73:cb}
	I0729 19:49:31.734307 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined IP address 192.168.72.100 and MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:49:31.734528 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHPort
	I0729 19:49:31.734735 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHKeyPath
	I0729 19:49:31.734923 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHUsername
	I0729 19:49:31.735104 1120587 sshutil.go:53] new ssh client: &{IP:192.168.72.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/default-k8s-diff-port-024652/id_rsa Username:docker}
	I0729 19:49:31.896299 1120587 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 19:49:31.916363 1120587 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-024652" to be "Ready" ...
	I0729 19:49:31.946258 1120587 node_ready.go:49] node "default-k8s-diff-port-024652" has status "Ready":"True"
	I0729 19:49:31.946286 1120587 node_ready.go:38] duration metric: took 29.887552ms for node "default-k8s-diff-port-024652" to be "Ready" ...
	I0729 19:49:31.946297 1120587 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 19:49:31.986320 1120587 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0729 19:49:31.986901 1120587 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-wqbpm" in "kube-system" namespace to be "Ready" ...
	I0729 19:49:32.008401 1120587 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0729 19:49:32.008420 1120587 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0729 19:49:32.033950 1120587 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 19:49:32.060771 1120587 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0729 19:49:32.060808 1120587 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0729 19:49:32.108557 1120587 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 19:49:32.108587 1120587 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0729 19:49:32.153081 1120587 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 19:49:32.234814 1120587 main.go:141] libmachine: Making call to close driver server
	I0729 19:49:32.234854 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .Close
	I0729 19:49:32.235187 1120587 main.go:141] libmachine: Successfully made call to close driver server
	I0729 19:49:32.235247 1120587 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 19:49:32.235260 1120587 main.go:141] libmachine: Making call to close driver server
	I0729 19:49:32.235259 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | Closing plugin on server side
	I0729 19:49:32.235270 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .Close
	I0729 19:49:32.235530 1120587 main.go:141] libmachine: Successfully made call to close driver server
	I0729 19:49:32.235546 1120587 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 19:49:32.240556 1120587 main.go:141] libmachine: Making call to close driver server
	I0729 19:49:32.240572 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .Close
	I0729 19:49:32.240859 1120587 main.go:141] libmachine: Successfully made call to close driver server
	I0729 19:49:32.240880 1120587 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 19:49:32.240887 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | Closing plugin on server side
	I0729 19:49:32.510172 1120587 main.go:141] libmachine: Making call to close driver server
	I0729 19:49:32.510201 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .Close
	I0729 19:49:32.510518 1120587 main.go:141] libmachine: Successfully made call to close driver server
	I0729 19:49:32.510535 1120587 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 19:49:32.510558 1120587 main.go:141] libmachine: Making call to close driver server
	I0729 19:49:32.510566 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .Close
	I0729 19:49:32.511002 1120587 main.go:141] libmachine: Successfully made call to close driver server
	I0729 19:49:32.511031 1120587 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 19:49:32.511053 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | Closing plugin on server side
	I0729 19:49:32.755803 1120587 main.go:141] libmachine: Making call to close driver server
	I0729 19:49:32.755828 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .Close
	I0729 19:49:32.756119 1120587 main.go:141] libmachine: Successfully made call to close driver server
	I0729 19:49:32.756135 1120587 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 19:49:32.756144 1120587 main.go:141] libmachine: Making call to close driver server
	I0729 19:49:32.756151 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .Close
	I0729 19:49:32.756432 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | Closing plugin on server side
	I0729 19:49:32.756476 1120587 main.go:141] libmachine: Successfully made call to close driver server
	I0729 19:49:32.756488 1120587 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 19:49:32.756502 1120587 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-024652"
	I0729 19:49:32.758693 1120587 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0729 19:49:29.689616 1119948 pod_ready.go:81] duration metric: took 4m0.001003902s for pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace to be "Ready" ...
	E0729 19:49:29.689644 1119948 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace to be "Ready" (will not retry!)
	I0729 19:49:29.689670 1119948 pod_ready.go:38] duration metric: took 4m12.210774413s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 19:49:29.689724 1119948 kubeadm.go:597] duration metric: took 4m20.557808792s to restartPrimaryControlPlane
	W0729 19:49:29.689815 1119948 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0729 19:49:29.689855 1119948 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0729 19:49:32.759744 1120587 addons.go:510] duration metric: took 1.095628452s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0729 19:49:33.998542 1120587 pod_ready.go:102] pod "coredns-7db6d8ff4d-wqbpm" in "kube-system" namespace has status "Ready":"False"
	I0729 19:49:34.993504 1120587 pod_ready.go:92] pod "coredns-7db6d8ff4d-wqbpm" in "kube-system" namespace has status "Ready":"True"
	I0729 19:49:34.993529 1120587 pod_ready.go:81] duration metric: took 3.006601304s for pod "coredns-7db6d8ff4d-wqbpm" in "kube-system" namespace to be "Ready" ...
	I0729 19:49:34.993538 1120587 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-z8mxw" in "kube-system" namespace to be "Ready" ...
	I0729 19:49:34.999514 1120587 pod_ready.go:92] pod "coredns-7db6d8ff4d-z8mxw" in "kube-system" namespace has status "Ready":"True"
	I0729 19:49:34.999543 1120587 pod_ready.go:81] duration metric: took 5.998397ms for pod "coredns-7db6d8ff4d-z8mxw" in "kube-system" namespace to be "Ready" ...
	I0729 19:49:34.999556 1120587 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-024652" in "kube-system" namespace to be "Ready" ...
	I0729 19:49:35.004591 1120587 pod_ready.go:92] pod "etcd-default-k8s-diff-port-024652" in "kube-system" namespace has status "Ready":"True"
	I0729 19:49:35.004615 1120587 pod_ready.go:81] duration metric: took 5.050736ms for pod "etcd-default-k8s-diff-port-024652" in "kube-system" namespace to be "Ready" ...
	I0729 19:49:35.004626 1120587 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-024652" in "kube-system" namespace to be "Ready" ...
	I0729 19:49:35.009617 1120587 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-024652" in "kube-system" namespace has status "Ready":"True"
	I0729 19:49:35.009639 1120587 pod_ready.go:81] duration metric: took 5.004922ms for pod "kube-apiserver-default-k8s-diff-port-024652" in "kube-system" namespace to be "Ready" ...
	I0729 19:49:35.009649 1120587 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-024652" in "kube-system" namespace to be "Ready" ...
	I0729 19:49:35.015860 1120587 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-024652" in "kube-system" namespace has status "Ready":"True"
	I0729 19:49:35.015879 1120587 pod_ready.go:81] duration metric: took 6.221932ms for pod "kube-controller-manager-default-k8s-diff-port-024652" in "kube-system" namespace to be "Ready" ...
	I0729 19:49:35.015887 1120587 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-wfr8f" in "kube-system" namespace to be "Ready" ...
	I0729 19:49:35.392558 1120587 pod_ready.go:92] pod "kube-proxy-wfr8f" in "kube-system" namespace has status "Ready":"True"
	I0729 19:49:35.392595 1120587 pod_ready.go:81] duration metric: took 376.701757ms for pod "kube-proxy-wfr8f" in "kube-system" namespace to be "Ready" ...
	I0729 19:49:35.392604 1120587 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-024652" in "kube-system" namespace to be "Ready" ...
	I0729 19:49:35.791324 1120587 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-024652" in "kube-system" namespace has status "Ready":"True"
	I0729 19:49:35.791357 1120587 pod_ready.go:81] duration metric: took 398.744718ms for pod "kube-scheduler-default-k8s-diff-port-024652" in "kube-system" namespace to be "Ready" ...
	I0729 19:49:35.791368 1120587 pod_ready.go:38] duration metric: took 3.84505744s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 19:49:35.791389 1120587 api_server.go:52] waiting for apiserver process to appear ...
	I0729 19:49:35.791451 1120587 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:49:35.808765 1120587 api_server.go:72] duration metric: took 4.144664884s to wait for apiserver process to appear ...
	I0729 19:49:35.808795 1120587 api_server.go:88] waiting for apiserver healthz status ...
	I0729 19:49:35.808816 1120587 api_server.go:253] Checking apiserver healthz at https://192.168.72.100:8444/healthz ...
	I0729 19:49:35.813053 1120587 api_server.go:279] https://192.168.72.100:8444/healthz returned 200:
	ok
	I0729 19:49:35.814108 1120587 api_server.go:141] control plane version: v1.30.3
	I0729 19:49:35.814129 1120587 api_server.go:131] duration metric: took 5.326691ms to wait for apiserver health ...
	I0729 19:49:35.814135 1120587 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 19:49:35.994230 1120587 system_pods.go:59] 9 kube-system pods found
	I0729 19:49:35.994267 1120587 system_pods.go:61] "coredns-7db6d8ff4d-wqbpm" [96db74e9-67ca-4065-8758-a27a14b6d3d5] Running
	I0729 19:49:35.994274 1120587 system_pods.go:61] "coredns-7db6d8ff4d-z8mxw" [12aa4a13-f4af-4cda-b099-5e0e44836300] Running
	I0729 19:49:35.994280 1120587 system_pods.go:61] "etcd-default-k8s-diff-port-024652" [6c733608-bc36-40a8-a6d1-2fa10ee45ef7] Running
	I0729 19:49:35.994285 1120587 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-024652" [755ccaaa-70fc-4d21-bf24-55638ea6778a] Running
	I0729 19:49:35.994293 1120587 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-024652" [1ed4cda3-7de9-4562-be52-b2a5f3490979] Running
	I0729 19:49:35.994300 1120587 system_pods.go:61] "kube-proxy-wfr8f" [86699d3a-0843-4b82-b772-23c8f5b7c88a] Running
	I0729 19:49:35.994305 1120587 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-024652" [d51619f9-c388-4ca5-a3e7-2028f0f76d9a] Running
	I0729 19:49:35.994314 1120587 system_pods.go:61] "metrics-server-569cc877fc-rp2fk" [826ffadd-1c1c-4666-8c09-f43a82262912] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 19:49:35.994318 1120587 system_pods.go:61] "storage-provisioner" [ce612854-895f-44d4-8c33-30c3a7eff802] Running
	I0729 19:49:35.994329 1120587 system_pods.go:74] duration metric: took 180.186983ms to wait for pod list to return data ...
	I0729 19:49:35.994339 1120587 default_sa.go:34] waiting for default service account to be created ...
	I0729 19:49:36.191025 1120587 default_sa.go:45] found service account: "default"
	I0729 19:49:36.191057 1120587 default_sa.go:55] duration metric: took 196.710231ms for default service account to be created ...
	I0729 19:49:36.191066 1120587 system_pods.go:116] waiting for k8s-apps to be running ...
	I0729 19:49:36.395188 1120587 system_pods.go:86] 9 kube-system pods found
	I0729 19:49:36.395218 1120587 system_pods.go:89] "coredns-7db6d8ff4d-wqbpm" [96db74e9-67ca-4065-8758-a27a14b6d3d5] Running
	I0729 19:49:36.395224 1120587 system_pods.go:89] "coredns-7db6d8ff4d-z8mxw" [12aa4a13-f4af-4cda-b099-5e0e44836300] Running
	I0729 19:49:36.395229 1120587 system_pods.go:89] "etcd-default-k8s-diff-port-024652" [6c733608-bc36-40a8-a6d1-2fa10ee45ef7] Running
	I0729 19:49:36.395233 1120587 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-024652" [755ccaaa-70fc-4d21-bf24-55638ea6778a] Running
	I0729 19:49:36.395237 1120587 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-024652" [1ed4cda3-7de9-4562-be52-b2a5f3490979] Running
	I0729 19:49:36.395241 1120587 system_pods.go:89] "kube-proxy-wfr8f" [86699d3a-0843-4b82-b772-23c8f5b7c88a] Running
	I0729 19:49:36.395245 1120587 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-024652" [d51619f9-c388-4ca5-a3e7-2028f0f76d9a] Running
	I0729 19:49:36.395257 1120587 system_pods.go:89] "metrics-server-569cc877fc-rp2fk" [826ffadd-1c1c-4666-8c09-f43a82262912] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 19:49:36.395262 1120587 system_pods.go:89] "storage-provisioner" [ce612854-895f-44d4-8c33-30c3a7eff802] Running
	I0729 19:49:36.395272 1120587 system_pods.go:126] duration metric: took 204.199685ms to wait for k8s-apps to be running ...
	I0729 19:49:36.395280 1120587 system_svc.go:44] waiting for kubelet service to be running ....
	I0729 19:49:36.395327 1120587 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 19:49:36.414410 1120587 system_svc.go:56] duration metric: took 19.116999ms WaitForService to wait for kubelet
	I0729 19:49:36.414442 1120587 kubeadm.go:582] duration metric: took 4.750347675s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 19:49:36.414470 1120587 node_conditions.go:102] verifying NodePressure condition ...
	I0729 19:49:36.591019 1120587 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 19:49:36.591045 1120587 node_conditions.go:123] node cpu capacity is 2
	I0729 19:49:36.591058 1120587 node_conditions.go:105] duration metric: took 176.580075ms to run NodePressure ...
	I0729 19:49:36.591069 1120587 start.go:241] waiting for startup goroutines ...
	I0729 19:49:36.591076 1120587 start.go:246] waiting for cluster config update ...
	I0729 19:49:36.591086 1120587 start.go:255] writing updated cluster config ...
	I0729 19:49:36.591330 1120587 ssh_runner.go:195] Run: rm -f paused
	I0729 19:49:36.641571 1120587 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0729 19:49:36.643324 1120587 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-024652" cluster and "default" namespace by default
	I0729 19:49:55.819640 1119948 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.129754186s)
	I0729 19:49:55.819736 1119948 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 19:49:55.857245 1119948 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 19:49:55.874823 1119948 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 19:49:55.887767 1119948 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 19:49:55.887786 1119948 kubeadm.go:157] found existing configuration files:
	
	I0729 19:49:55.887826 1119948 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 19:49:55.898598 1119948 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 19:49:55.898659 1119948 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 19:49:55.919811 1119948 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 19:49:55.929490 1119948 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 19:49:55.929557 1119948 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 19:49:55.938832 1119948 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 19:49:55.952638 1119948 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 19:49:55.952698 1119948 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 19:49:55.965512 1119948 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 19:49:55.975116 1119948 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 19:49:55.975180 1119948 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 19:49:55.984448 1119948 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 19:49:56.040488 1119948 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0-beta.0
	I0729 19:49:56.040619 1119948 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 19:49:56.161648 1119948 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 19:49:56.161792 1119948 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 19:49:56.161913 1119948 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0729 19:49:56.171626 1119948 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 19:49:56.173709 1119948 out.go:204]   - Generating certificates and keys ...
	I0729 19:49:56.173830 1119948 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 19:49:56.173928 1119948 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 19:49:56.174047 1119948 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0729 19:49:56.174143 1119948 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0729 19:49:56.174232 1119948 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0729 19:49:56.174302 1119948 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0729 19:49:56.174382 1119948 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0729 19:49:56.174453 1119948 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0729 19:49:56.174572 1119948 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0729 19:49:56.174694 1119948 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0729 19:49:56.174750 1119948 kubeadm.go:310] [certs] Using the existing "sa" key
	I0729 19:49:56.174830 1119948 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 19:49:56.246122 1119948 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 19:49:56.355960 1119948 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0729 19:49:56.420777 1119948 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 19:49:56.496969 1119948 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 19:49:56.583932 1119948 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 19:49:56.584470 1119948 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 19:49:56.587115 1119948 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 19:49:56.588779 1119948 out.go:204]   - Booting up control plane ...
	I0729 19:49:56.588912 1119948 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 19:49:56.588986 1119948 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 19:49:56.589041 1119948 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 19:49:56.608126 1119948 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 19:49:56.614632 1119948 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 19:49:56.614696 1119948 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 19:49:56.754879 1119948 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0729 19:49:56.754999 1119948 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0729 19:49:57.257324 1119948 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.327954ms
	I0729 19:49:57.257465 1119948 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0729 19:50:02.762738 1119948 kubeadm.go:310] [api-check] The API server is healthy after 5.503528666s
	I0729 19:50:02.774459 1119948 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0729 19:50:02.788865 1119948 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0729 19:50:02.826192 1119948 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0729 19:50:02.826457 1119948 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-843792 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0729 19:50:02.839359 1119948 kubeadm.go:310] [bootstrap-token] Using token: yaj2k6.6nijnxczu3nl8yfv
	I0729 19:50:02.840952 1119948 out.go:204]   - Configuring RBAC rules ...
	I0729 19:50:02.841087 1119948 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0729 19:50:02.846969 1119948 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0729 19:50:02.861696 1119948 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0729 19:50:02.866680 1119948 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0729 19:50:02.871113 1119948 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0729 19:50:02.875148 1119948 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0729 19:50:03.170084 1119948 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0729 19:50:03.622188 1119948 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0729 19:50:04.170979 1119948 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0729 19:50:04.171916 1119948 kubeadm.go:310] 
	I0729 19:50:04.172017 1119948 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0729 19:50:04.172027 1119948 kubeadm.go:310] 
	I0729 19:50:04.172139 1119948 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0729 19:50:04.172149 1119948 kubeadm.go:310] 
	I0729 19:50:04.172183 1119948 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0729 19:50:04.172258 1119948 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0729 19:50:04.172337 1119948 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0729 19:50:04.172356 1119948 kubeadm.go:310] 
	I0729 19:50:04.172451 1119948 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0729 19:50:04.172480 1119948 kubeadm.go:310] 
	I0729 19:50:04.172570 1119948 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0729 19:50:04.172581 1119948 kubeadm.go:310] 
	I0729 19:50:04.172652 1119948 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0729 19:50:04.172755 1119948 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0729 19:50:04.172861 1119948 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0729 19:50:04.172876 1119948 kubeadm.go:310] 
	I0729 19:50:04.172944 1119948 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0729 19:50:04.173046 1119948 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0729 19:50:04.173056 1119948 kubeadm.go:310] 
	I0729 19:50:04.173171 1119948 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token yaj2k6.6nijnxczu3nl8yfv \
	I0729 19:50:04.173307 1119948 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:53e118302d1159bf0acd8cfe005edb508199276288ff17d6630ff7f7307f10b1 \
	I0729 19:50:04.173330 1119948 kubeadm.go:310] 	--control-plane 
	I0729 19:50:04.173334 1119948 kubeadm.go:310] 
	I0729 19:50:04.173405 1119948 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0729 19:50:04.173411 1119948 kubeadm.go:310] 
	I0729 19:50:04.173493 1119948 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token yaj2k6.6nijnxczu3nl8yfv \
	I0729 19:50:04.173666 1119948 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:53e118302d1159bf0acd8cfe005edb508199276288ff17d6630ff7f7307f10b1 
	I0729 19:50:04.175016 1119948 kubeadm.go:310] W0729 19:49:56.020841    2986 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0729 19:50:04.175395 1119948 kubeadm.go:310] W0729 19:49:56.021779    2986 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0729 19:50:04.175537 1119948 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 19:50:04.175567 1119948 cni.go:84] Creating CNI manager for ""
	I0729 19:50:04.175577 1119948 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 19:50:04.177050 1119948 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 19:50:04.178074 1119948 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 19:50:04.189753 1119948 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0729 19:50:04.212891 1119948 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0729 19:50:04.213003 1119948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:50:04.213014 1119948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-843792 minikube.k8s.io/updated_at=2024_07_29T19_50_04_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=ff26f50543a99741e8a988a34136ffea2b41dbf0 minikube.k8s.io/name=no-preload-843792 minikube.k8s.io/primary=true
	I0729 19:50:04.241948 1119948 ops.go:34] apiserver oom_adj: -16
	I0729 19:50:04.470011 1119948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:50:04.970139 1119948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:50:05.470618 1119948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:50:05.970968 1119948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:50:06.471036 1119948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:50:06.970260 1119948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:50:07.470060 1119948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:50:07.970455 1119948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:50:08.091380 1119948 kubeadm.go:1113] duration metric: took 3.878454801s to wait for elevateKubeSystemPrivileges
	I0729 19:50:08.091420 1119948 kubeadm.go:394] duration metric: took 4m59.009669918s to StartCluster
	I0729 19:50:08.091442 1119948 settings.go:142] acquiring lock: {Name:mk8657322241b3b1f65443d6cee1b2ccb99f315e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 19:50:08.091531 1119948 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19312-1055011/kubeconfig
	I0729 19:50:08.093926 1119948 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-1055011/kubeconfig: {Name:mkf834b33d9b214f3561db5b8f8958d26700afbb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 19:50:08.094254 1119948 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.248 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 19:50:08.094349 1119948 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0729 19:50:08.094445 1119948 addons.go:69] Setting storage-provisioner=true in profile "no-preload-843792"
	I0729 19:50:08.094490 1119948 addons.go:234] Setting addon storage-provisioner=true in "no-preload-843792"
	I0729 19:50:08.094489 1119948 config.go:182] Loaded profile config "no-preload-843792": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	W0729 19:50:08.094502 1119948 addons.go:243] addon storage-provisioner should already be in state true
	I0729 19:50:08.094506 1119948 addons.go:69] Setting default-storageclass=true in profile "no-preload-843792"
	I0729 19:50:08.094537 1119948 host.go:66] Checking if "no-preload-843792" exists ...
	I0729 19:50:08.094545 1119948 addons.go:69] Setting metrics-server=true in profile "no-preload-843792"
	I0729 19:50:08.094555 1119948 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-843792"
	I0729 19:50:08.094567 1119948 addons.go:234] Setting addon metrics-server=true in "no-preload-843792"
	W0729 19:50:08.094576 1119948 addons.go:243] addon metrics-server should already be in state true
	I0729 19:50:08.094606 1119948 host.go:66] Checking if "no-preload-843792" exists ...
	I0729 19:50:08.094992 1119948 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:50:08.095014 1119948 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:50:08.094991 1119948 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:50:08.095032 1119948 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:50:08.095032 1119948 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:50:08.095053 1119948 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:50:08.095990 1119948 out.go:177] * Verifying Kubernetes components...
	I0729 19:50:08.097297 1119948 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 19:50:08.111086 1119948 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39031
	I0729 19:50:08.111172 1119948 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35951
	I0729 19:50:08.111530 1119948 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:50:08.111611 1119948 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:50:08.112076 1119948 main.go:141] libmachine: Using API Version  1
	I0729 19:50:08.112096 1119948 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:50:08.112212 1119948 main.go:141] libmachine: Using API Version  1
	I0729 19:50:08.112236 1119948 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:50:08.112601 1119948 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:50:08.112598 1119948 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:50:08.113192 1119948 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:50:08.113222 1119948 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:50:08.113195 1119948 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:50:08.113331 1119948 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:50:08.113688 1119948 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43039
	I0729 19:50:08.114065 1119948 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:50:08.114550 1119948 main.go:141] libmachine: Using API Version  1
	I0729 19:50:08.114573 1119948 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:50:08.115130 1119948 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:50:08.115340 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetState
	I0729 19:50:08.118967 1119948 addons.go:234] Setting addon default-storageclass=true in "no-preload-843792"
	W0729 19:50:08.118988 1119948 addons.go:243] addon default-storageclass should already be in state true
	I0729 19:50:08.119018 1119948 host.go:66] Checking if "no-preload-843792" exists ...
	I0729 19:50:08.119367 1119948 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:50:08.119391 1119948 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:50:08.131330 1119948 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34509
	I0729 19:50:08.131868 1119948 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:50:08.132155 1119948 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44961
	I0729 19:50:08.132404 1119948 main.go:141] libmachine: Using API Version  1
	I0729 19:50:08.132427 1119948 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:50:08.132485 1119948 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:50:08.132795 1119948 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:50:08.133148 1119948 main.go:141] libmachine: Using API Version  1
	I0729 19:50:08.133167 1119948 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:50:08.133169 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetState
	I0729 19:50:08.133541 1119948 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:50:08.133802 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetState
	I0729 19:50:08.135456 1119948 main.go:141] libmachine: (no-preload-843792) Calling .DriverName
	I0729 19:50:08.135939 1119948 main.go:141] libmachine: (no-preload-843792) Calling .DriverName
	I0729 19:50:08.137341 1119948 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 19:50:08.137345 1119948 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0729 19:50:08.139247 1119948 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0729 19:50:08.139281 1119948 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0729 19:50:08.139303 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHHostname
	I0729 19:50:08.139373 1119948 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 19:50:08.139393 1119948 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0729 19:50:08.139411 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHHostname
	I0729 19:50:08.143427 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:50:08.143462 1119948 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40183
	I0729 19:50:08.143636 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:50:08.143916 1119948 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:50:08.143982 1119948 main.go:141] libmachine: (no-preload-843792) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:0e:8c", ip: ""} in network mk-no-preload-843792: {Iface:virbr2 ExpiryTime:2024-07-29 20:44:43 +0000 UTC Type:0 Mac:52:54:00:ae:0e:8c Iaid: IPaddr:192.168.50.248 Prefix:24 Hostname:no-preload-843792 Clientid:01:52:54:00:ae:0e:8c}
	I0729 19:50:08.143994 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined IP address 192.168.50.248 and MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:50:08.144028 1119948 main.go:141] libmachine: (no-preload-843792) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:0e:8c", ip: ""} in network mk-no-preload-843792: {Iface:virbr2 ExpiryTime:2024-07-29 20:44:43 +0000 UTC Type:0 Mac:52:54:00:ae:0e:8c Iaid: IPaddr:192.168.50.248 Prefix:24 Hostname:no-preload-843792 Clientid:01:52:54:00:ae:0e:8c}
	I0729 19:50:08.144061 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined IP address 192.168.50.248 and MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:50:08.144375 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHPort
	I0729 19:50:08.144420 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHPort
	I0729 19:50:08.144425 1119948 main.go:141] libmachine: Using API Version  1
	I0729 19:50:08.144437 1119948 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:50:08.144564 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHKeyPath
	I0729 19:50:08.144608 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHKeyPath
	I0729 19:50:08.144771 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHUsername
	I0729 19:50:08.144802 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHUsername
	I0729 19:50:08.144836 1119948 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:50:08.144947 1119948 sshutil.go:53] new ssh client: &{IP:192.168.50.248 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/no-preload-843792/id_rsa Username:docker}
	I0729 19:50:08.144951 1119948 sshutil.go:53] new ssh client: &{IP:192.168.50.248 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/no-preload-843792/id_rsa Username:docker}
	I0729 19:50:08.145438 1119948 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:50:08.145468 1119948 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:50:08.162100 1119948 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46055
	I0729 19:50:08.162705 1119948 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:50:08.163290 1119948 main.go:141] libmachine: Using API Version  1
	I0729 19:50:08.163312 1119948 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:50:08.163700 1119948 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:50:08.163887 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetState
	I0729 19:50:08.165757 1119948 main.go:141] libmachine: (no-preload-843792) Calling .DriverName
	I0729 19:50:08.165967 1119948 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0729 19:50:08.165983 1119948 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0729 19:50:08.166000 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHHostname
	I0729 19:50:08.169065 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:50:08.169515 1119948 main.go:141] libmachine: (no-preload-843792) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:0e:8c", ip: ""} in network mk-no-preload-843792: {Iface:virbr2 ExpiryTime:2024-07-29 20:44:43 +0000 UTC Type:0 Mac:52:54:00:ae:0e:8c Iaid: IPaddr:192.168.50.248 Prefix:24 Hostname:no-preload-843792 Clientid:01:52:54:00:ae:0e:8c}
	I0729 19:50:08.169535 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined IP address 192.168.50.248 and MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:50:08.169694 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHPort
	I0729 19:50:08.169850 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHKeyPath
	I0729 19:50:08.170030 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHUsername
	I0729 19:50:08.170144 1119948 sshutil.go:53] new ssh client: &{IP:192.168.50.248 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/no-preload-843792/id_rsa Username:docker}
	I0729 19:50:08.279563 1119948 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 19:50:08.297004 1119948 node_ready.go:35] waiting up to 6m0s for node "no-preload-843792" to be "Ready" ...
	I0729 19:50:08.308403 1119948 node_ready.go:49] node "no-preload-843792" has status "Ready":"True"
	I0729 19:50:08.308428 1119948 node_ready.go:38] duration metric: took 11.381814ms for node "no-preload-843792" to be "Ready" ...
	I0729 19:50:08.308437 1119948 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 19:50:08.326920 1119948 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5cfdc65f69-ck5zf" in "kube-system" namespace to be "Ready" ...
	I0729 19:50:08.394482 1119948 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0729 19:50:08.394511 1119948 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0729 19:50:08.431819 1119948 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0729 19:50:08.431850 1119948 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0729 19:50:08.432280 1119948 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 19:50:08.452951 1119948 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0729 19:50:08.512078 1119948 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 19:50:08.512110 1119948 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0729 19:50:08.636490 1119948 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 19:50:09.357187 1119948 main.go:141] libmachine: Making call to close driver server
	I0729 19:50:09.357212 1119948 main.go:141] libmachine: (no-preload-843792) Calling .Close
	I0729 19:50:09.357248 1119948 main.go:141] libmachine: Making call to close driver server
	I0729 19:50:09.357274 1119948 main.go:141] libmachine: (no-preload-843792) Calling .Close
	I0729 19:50:09.357564 1119948 main.go:141] libmachine: (no-preload-843792) DBG | Closing plugin on server side
	I0729 19:50:09.357633 1119948 main.go:141] libmachine: (no-preload-843792) DBG | Closing plugin on server side
	I0729 19:50:09.357646 1119948 main.go:141] libmachine: Successfully made call to close driver server
	I0729 19:50:09.357646 1119948 main.go:141] libmachine: Successfully made call to close driver server
	I0729 19:50:09.357659 1119948 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 19:50:09.357662 1119948 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 19:50:09.357671 1119948 main.go:141] libmachine: Making call to close driver server
	I0729 19:50:09.357679 1119948 main.go:141] libmachine: (no-preload-843792) Calling .Close
	I0729 19:50:09.357682 1119948 main.go:141] libmachine: Making call to close driver server
	I0729 19:50:09.357690 1119948 main.go:141] libmachine: (no-preload-843792) Calling .Close
	I0729 19:50:09.358945 1119948 main.go:141] libmachine: (no-preload-843792) DBG | Closing plugin on server side
	I0729 19:50:09.358969 1119948 main.go:141] libmachine: Successfully made call to close driver server
	I0729 19:50:09.359019 1119948 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 19:50:09.359042 1119948 main.go:141] libmachine: (no-preload-843792) DBG | Closing plugin on server side
	I0729 19:50:09.358989 1119948 main.go:141] libmachine: Successfully made call to close driver server
	I0729 19:50:09.359074 1119948 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 19:50:09.419421 1119948 main.go:141] libmachine: Making call to close driver server
	I0729 19:50:09.419445 1119948 main.go:141] libmachine: (no-preload-843792) Calling .Close
	I0729 19:50:09.419864 1119948 main.go:141] libmachine: (no-preload-843792) DBG | Closing plugin on server side
	I0729 19:50:09.419868 1119948 main.go:141] libmachine: Successfully made call to close driver server
	I0729 19:50:09.419905 1119948 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 19:50:09.938758 1119948 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.302197805s)
	I0729 19:50:09.938827 1119948 main.go:141] libmachine: Making call to close driver server
	I0729 19:50:09.938854 1119948 main.go:141] libmachine: (no-preload-843792) Calling .Close
	I0729 19:50:09.939241 1119948 main.go:141] libmachine: Successfully made call to close driver server
	I0729 19:50:09.939260 1119948 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 19:50:09.939270 1119948 main.go:141] libmachine: (no-preload-843792) DBG | Closing plugin on server side
	I0729 19:50:09.939273 1119948 main.go:141] libmachine: Making call to close driver server
	I0729 19:50:09.939284 1119948 main.go:141] libmachine: (no-preload-843792) Calling .Close
	I0729 19:50:09.939509 1119948 main.go:141] libmachine: Successfully made call to close driver server
	I0729 19:50:09.939526 1119948 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 19:50:09.939540 1119948 addons.go:475] Verifying addon metrics-server=true in "no-preload-843792"
	I0729 19:50:09.939558 1119948 main.go:141] libmachine: (no-preload-843792) DBG | Closing plugin on server side
	I0729 19:50:09.941050 1119948 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0729 19:50:09.942006 1119948 addons.go:510] duration metric: took 1.847661826s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0729 19:50:10.334878 1119948 pod_ready.go:102] pod "coredns-5cfdc65f69-ck5zf" in "kube-system" namespace has status "Ready":"False"
	I0729 19:50:12.834554 1119948 pod_ready.go:102] pod "coredns-5cfdc65f69-ck5zf" in "kube-system" namespace has status "Ready":"False"
	I0729 19:50:15.334388 1119948 pod_ready.go:102] pod "coredns-5cfdc65f69-ck5zf" in "kube-system" namespace has status "Ready":"False"
	I0729 19:50:16.843448 1119948 pod_ready.go:92] pod "coredns-5cfdc65f69-ck5zf" in "kube-system" namespace has status "Ready":"True"
	I0729 19:50:16.843480 1119948 pod_ready.go:81] duration metric: took 8.516527239s for pod "coredns-5cfdc65f69-ck5zf" in "kube-system" namespace to be "Ready" ...
	I0729 19:50:16.843494 1119948 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-843792" in "kube-system" namespace to be "Ready" ...
	I0729 19:50:16.847567 1119948 pod_ready.go:92] pod "etcd-no-preload-843792" in "kube-system" namespace has status "Ready":"True"
	I0729 19:50:16.847588 1119948 pod_ready.go:81] duration metric: took 4.086961ms for pod "etcd-no-preload-843792" in "kube-system" namespace to be "Ready" ...
	I0729 19:50:16.847597 1119948 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-843792" in "kube-system" namespace to be "Ready" ...
	I0729 19:50:16.857374 1119948 pod_ready.go:92] pod "kube-apiserver-no-preload-843792" in "kube-system" namespace has status "Ready":"True"
	I0729 19:50:16.857395 1119948 pod_ready.go:81] duration metric: took 9.790628ms for pod "kube-apiserver-no-preload-843792" in "kube-system" namespace to be "Ready" ...
	I0729 19:50:16.857403 1119948 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-843792" in "kube-system" namespace to be "Ready" ...
	I0729 19:50:16.861971 1119948 pod_ready.go:92] pod "kube-controller-manager-no-preload-843792" in "kube-system" namespace has status "Ready":"True"
	I0729 19:50:16.861990 1119948 pod_ready.go:81] duration metric: took 4.580287ms for pod "kube-controller-manager-no-preload-843792" in "kube-system" namespace to be "Ready" ...
	I0729 19:50:16.861998 1119948 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-843792" in "kube-system" namespace to be "Ready" ...
	I0729 19:50:16.865992 1119948 pod_ready.go:92] pod "kube-scheduler-no-preload-843792" in "kube-system" namespace has status "Ready":"True"
	I0729 19:50:16.866006 1119948 pod_ready.go:81] duration metric: took 4.002585ms for pod "kube-scheduler-no-preload-843792" in "kube-system" namespace to be "Ready" ...
	I0729 19:50:16.866012 1119948 pod_ready.go:38] duration metric: took 8.557565808s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 19:50:16.866026 1119948 api_server.go:52] waiting for apiserver process to appear ...
	I0729 19:50:16.866069 1119948 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:50:16.881797 1119948 api_server.go:72] duration metric: took 8.787509233s to wait for apiserver process to appear ...
	I0729 19:50:16.881817 1119948 api_server.go:88] waiting for apiserver healthz status ...
	I0729 19:50:16.881835 1119948 api_server.go:253] Checking apiserver healthz at https://192.168.50.248:8443/healthz ...
	I0729 19:50:16.886007 1119948 api_server.go:279] https://192.168.50.248:8443/healthz returned 200:
	ok
	I0729 19:50:16.886862 1119948 api_server.go:141] control plane version: v1.31.0-beta.0
	I0729 19:50:16.886882 1119948 api_server.go:131] duration metric: took 5.057536ms to wait for apiserver health ...
	I0729 19:50:16.886891 1119948 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 19:50:17.034651 1119948 system_pods.go:59] 9 kube-system pods found
	I0729 19:50:17.034684 1119948 system_pods.go:61] "coredns-5cfdc65f69-bk2nx" [662b0879-7c15-4ec3-a6b6-e49fd9597dcf] Running
	I0729 19:50:17.034689 1119948 system_pods.go:61] "coredns-5cfdc65f69-ck5zf" [ad6c9c9b-740c-464d-85c2-a9ae44663f63] Running
	I0729 19:50:17.034693 1119948 system_pods.go:61] "etcd-no-preload-843792" [e4cba264-21e2-499e-9768-417b316f6a04] Running
	I0729 19:50:17.034696 1119948 system_pods.go:61] "kube-apiserver-no-preload-843792" [24c2bd0e-2029-4985-836a-599ad2a2a7ab] Running
	I0729 19:50:17.034700 1119948 system_pods.go:61] "kube-controller-manager-no-preload-843792" [fb7ec8d7-5d48-428a-af99-f031d747fe2b] Running
	I0729 19:50:17.034704 1119948 system_pods.go:61] "kube-proxy-8hbrf" [3b64c7b2-cbed-4c0e-bc1b-2cef107b115c] Running
	I0729 19:50:17.034706 1119948 system_pods.go:61] "kube-scheduler-no-preload-843792" [fc166fdd-59e8-41f0-909c-71044da69f34] Running
	I0729 19:50:17.034712 1119948 system_pods.go:61] "metrics-server-78fcd8795b-fzt2k" [180acfb0-ec43-4f2e-b04a-048253d4b79e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 19:50:17.034716 1119948 system_pods.go:61] "storage-provisioner" [ee09516d-7ef7-4d66-9acf-7fd4cde3c673] Running
	I0729 19:50:17.034723 1119948 system_pods.go:74] duration metric: took 147.826766ms to wait for pod list to return data ...
	I0729 19:50:17.034731 1119948 default_sa.go:34] waiting for default service account to be created ...
	I0729 19:50:17.231811 1119948 default_sa.go:45] found service account: "default"
	I0729 19:50:17.231841 1119948 default_sa.go:55] duration metric: took 197.103306ms for default service account to be created ...
	I0729 19:50:17.231852 1119948 system_pods.go:116] waiting for k8s-apps to be running ...
	I0729 19:50:17.435766 1119948 system_pods.go:86] 9 kube-system pods found
	I0729 19:50:17.435801 1119948 system_pods.go:89] "coredns-5cfdc65f69-bk2nx" [662b0879-7c15-4ec3-a6b6-e49fd9597dcf] Running
	I0729 19:50:17.435809 1119948 system_pods.go:89] "coredns-5cfdc65f69-ck5zf" [ad6c9c9b-740c-464d-85c2-a9ae44663f63] Running
	I0729 19:50:17.435816 1119948 system_pods.go:89] "etcd-no-preload-843792" [e4cba264-21e2-499e-9768-417b316f6a04] Running
	I0729 19:50:17.435822 1119948 system_pods.go:89] "kube-apiserver-no-preload-843792" [24c2bd0e-2029-4985-836a-599ad2a2a7ab] Running
	I0729 19:50:17.435828 1119948 system_pods.go:89] "kube-controller-manager-no-preload-843792" [fb7ec8d7-5d48-428a-af99-f031d747fe2b] Running
	I0729 19:50:17.435835 1119948 system_pods.go:89] "kube-proxy-8hbrf" [3b64c7b2-cbed-4c0e-bc1b-2cef107b115c] Running
	I0729 19:50:17.435841 1119948 system_pods.go:89] "kube-scheduler-no-preload-843792" [fc166fdd-59e8-41f0-909c-71044da69f34] Running
	I0729 19:50:17.435849 1119948 system_pods.go:89] "metrics-server-78fcd8795b-fzt2k" [180acfb0-ec43-4f2e-b04a-048253d4b79e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 19:50:17.435856 1119948 system_pods.go:89] "storage-provisioner" [ee09516d-7ef7-4d66-9acf-7fd4cde3c673] Running
	I0729 19:50:17.435867 1119948 system_pods.go:126] duration metric: took 204.008054ms to wait for k8s-apps to be running ...
	I0729 19:50:17.435875 1119948 system_svc.go:44] waiting for kubelet service to be running ....
	I0729 19:50:17.435926 1119948 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 19:50:17.451816 1119948 system_svc.go:56] duration metric: took 15.929502ms WaitForService to wait for kubelet
	I0729 19:50:17.451848 1119948 kubeadm.go:582] duration metric: took 9.357563402s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 19:50:17.451872 1119948 node_conditions.go:102] verifying NodePressure condition ...
	I0729 19:50:17.632427 1119948 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 19:50:17.632465 1119948 node_conditions.go:123] node cpu capacity is 2
	I0729 19:50:17.632481 1119948 node_conditions.go:105] duration metric: took 180.602976ms to run NodePressure ...
	I0729 19:50:17.632497 1119948 start.go:241] waiting for startup goroutines ...
	I0729 19:50:17.632506 1119948 start.go:246] waiting for cluster config update ...
	I0729 19:50:17.632525 1119948 start.go:255] writing updated cluster config ...
	I0729 19:50:17.632908 1119948 ssh_runner.go:195] Run: rm -f paused
	I0729 19:50:17.687540 1119948 start.go:600] kubectl: 1.30.3, cluster: 1.31.0-beta.0 (minor skew: 1)
	I0729 19:50:17.689409 1119948 out.go:177] * Done! kubectl is now configured to use "no-preload-843792" cluster and "default" namespace by default
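	(Annotation) The entries above show the no-preload-843792 start completing: the node reports Ready, the kube-system pods reach Ready, and the apiserver healthz probe at https://192.168.50.248:8443/healthz returns "200: ok" before the addons are declared enabled. A minimal sketch of the same checks run by hand is below; the kubectl context name is taken from the "Done!" line, while the metrics-server label selector and the anonymous healthz curl are assumptions, not something this log performs.
		# Hedged sketch: manual equivalent of the readiness checks logged above.
		kubectl --context no-preload-843792 get nodes                    # node should be Ready
		kubectl --context no-preload-843792 -n kube-system get pods      # coredns/etcd/apiserver/... Running
		# Same healthz endpoint the log reports as "200: ok" (anonymous access assumed):
		curl -sk https://192.168.50.248:8443/healthz
		# metrics-server was still Pending above; the label selector here is an assumption:
		kubectl --context no-preload-843792 -n kube-system wait --for=condition=ready pod -l k8s-app=metrics-server --timeout=5m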
	I0729 19:50:40.036000 1120970 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0729 19:50:40.036324 1120970 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0729 19:50:40.038447 1120970 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0729 19:50:40.038603 1120970 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 19:50:40.038790 1120970 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 19:50:40.039225 1120970 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 19:50:40.039617 1120970 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0729 19:50:40.039731 1120970 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 19:50:40.041420 1120970 out.go:204]   - Generating certificates and keys ...
	I0729 19:50:40.041522 1120970 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 19:50:40.041589 1120970 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 19:50:40.041712 1120970 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0729 19:50:40.041810 1120970 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0729 19:50:40.041935 1120970 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0729 19:50:40.042019 1120970 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0729 19:50:40.042111 1120970 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0729 19:50:40.042190 1120970 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0729 19:50:40.042285 1120970 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0729 19:50:40.042401 1120970 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0729 19:50:40.042465 1120970 kubeadm.go:310] [certs] Using the existing "sa" key
	I0729 19:50:40.042535 1120970 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 19:50:40.042581 1120970 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 19:50:40.042628 1120970 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 19:50:40.042698 1120970 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 19:50:40.042781 1120970 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 19:50:40.042934 1120970 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 19:50:40.043061 1120970 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 19:50:40.043128 1120970 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 19:50:40.043208 1120970 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 19:50:40.044637 1120970 out.go:204]   - Booting up control plane ...
	I0729 19:50:40.044750 1120970 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 19:50:40.044847 1120970 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 19:50:40.044908 1120970 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 19:50:40.044976 1120970 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 19:50:40.045145 1120970 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0729 19:50:40.045212 1120970 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0729 19:50:40.045276 1120970 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 19:50:40.045442 1120970 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 19:50:40.045511 1120970 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 19:50:40.045697 1120970 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 19:50:40.045797 1120970 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 19:50:40.046043 1120970 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 19:50:40.046153 1120970 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 19:50:40.046441 1120970 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 19:50:40.046567 1120970 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 19:50:40.046878 1120970 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 19:50:40.046894 1120970 kubeadm.go:310] 
	I0729 19:50:40.046945 1120970 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0729 19:50:40.047019 1120970 kubeadm.go:310] 		timed out waiting for the condition
	I0729 19:50:40.047039 1120970 kubeadm.go:310] 
	I0729 19:50:40.047104 1120970 kubeadm.go:310] 	This error is likely caused by:
	I0729 19:50:40.047158 1120970 kubeadm.go:310] 		- The kubelet is not running
	I0729 19:50:40.047301 1120970 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0729 19:50:40.047312 1120970 kubeadm.go:310] 
	I0729 19:50:40.047465 1120970 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0729 19:50:40.047513 1120970 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0729 19:50:40.047558 1120970 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0729 19:50:40.047567 1120970 kubeadm.go:310] 
	I0729 19:50:40.047728 1120970 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0729 19:50:40.047859 1120970 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0729 19:50:40.047870 1120970 kubeadm.go:310] 
	I0729 19:50:40.048028 1120970 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0729 19:50:40.048161 1120970 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0729 19:50:40.048274 1120970 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0729 19:50:40.048387 1120970 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0729 19:50:40.048422 1120970 kubeadm.go:310] 
	W0729 19:50:40.048546 1120970 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
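	(Annotation) The kubeadm failure quoted above (v1.20.0 control plane) points at the kubelet never becoming healthy and lists its own troubleshooting steps. A short sketch of those steps, run inside the guest VM (for example after `minikube ssh -p <profile>`, where the profile name is a placeholder, not taken from this log):
		# Hedged sketch: the checks suggested by the kubeadm message above, run inside the guest.
		sudo systemctl status kubelet
		sudo journalctl -xeu kubelet | tail -n 100
		# List control-plane containers via CRI-O, then inspect a failing one (CONTAINERID is a placeholder):
		sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
		sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID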
	
	I0729 19:50:40.048632 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0729 19:50:40.512123 1120970 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 19:50:40.526973 1120970 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 19:50:40.540285 1120970 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 19:50:40.540322 1120970 kubeadm.go:157] found existing configuration files:
	
	I0729 19:50:40.540390 1120970 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 19:50:40.550130 1120970 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 19:50:40.550188 1120970 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 19:50:40.560312 1120970 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 19:50:40.570460 1120970 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 19:50:40.570513 1120970 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 19:50:40.579979 1120970 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 19:50:40.589806 1120970 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 19:50:40.589848 1120970 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 19:50:40.599351 1120970 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 19:50:40.609134 1120970 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 19:50:40.609190 1120970 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 19:50:40.618767 1120970 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 19:50:40.686644 1120970 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0729 19:50:40.686775 1120970 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 19:50:40.844131 1120970 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 19:50:40.844252 1120970 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 19:50:40.844357 1120970 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0729 19:50:41.018497 1120970 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 19:50:41.020295 1120970 out.go:204]   - Generating certificates and keys ...
	I0729 19:50:41.020404 1120970 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 19:50:41.020471 1120970 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 19:50:41.020559 1120970 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0729 19:50:41.020614 1120970 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0729 19:50:41.020675 1120970 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0729 19:50:41.020720 1120970 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0729 19:50:41.021041 1120970 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0729 19:50:41.021463 1120970 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0729 19:50:41.021868 1120970 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0729 19:50:41.022329 1120970 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0729 19:50:41.022411 1120970 kubeadm.go:310] [certs] Using the existing "sa" key
	I0729 19:50:41.022503 1120970 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 19:50:41.204952 1120970 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 19:50:41.438572 1120970 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 19:50:41.878587 1120970 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 19:50:42.428806 1120970 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 19:50:42.447931 1120970 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 19:50:42.448990 1120970 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 19:50:42.449131 1120970 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 19:50:42.580942 1120970 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 19:50:42.582493 1120970 out.go:204]   - Booting up control plane ...
	I0729 19:50:42.582600 1120970 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 19:50:42.589862 1120970 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 19:50:42.590833 1120970 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 19:50:42.591685 1120970 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 19:50:42.594079 1120970 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0729 19:51:22.596326 1120970 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0729 19:51:22.596639 1120970 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 19:51:22.596846 1120970 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 19:51:27.597439 1120970 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 19:51:27.597671 1120970 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 19:51:37.598638 1120970 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 19:51:37.598811 1120970 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 19:51:57.599401 1120970 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 19:51:57.599704 1120970 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 19:52:37.597710 1120970 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 19:52:37.597992 1120970 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 19:52:37.598034 1120970 kubeadm.go:310] 
	I0729 19:52:37.598090 1120970 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0729 19:52:37.598166 1120970 kubeadm.go:310] 		timed out waiting for the condition
	I0729 19:52:37.598179 1120970 kubeadm.go:310] 
	I0729 19:52:37.598228 1120970 kubeadm.go:310] 	This error is likely caused by:
	I0729 19:52:37.598326 1120970 kubeadm.go:310] 		- The kubelet is not running
	I0729 19:52:37.598515 1120970 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0729 19:52:37.598528 1120970 kubeadm.go:310] 
	I0729 19:52:37.598671 1120970 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0729 19:52:37.598715 1120970 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0729 19:52:37.598777 1120970 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0729 19:52:37.598806 1120970 kubeadm.go:310] 
	I0729 19:52:37.598984 1120970 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0729 19:52:37.599100 1120970 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0729 19:52:37.599114 1120970 kubeadm.go:310] 
	I0729 19:52:37.599266 1120970 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0729 19:52:37.599393 1120970 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0729 19:52:37.599499 1120970 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0729 19:52:37.599617 1120970 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0729 19:52:37.599637 1120970 kubeadm.go:310] 
	I0729 19:52:37.600349 1120970 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 19:52:37.600471 1120970 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0729 19:52:37.600641 1120970 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0729 19:52:37.600707 1120970 kubeadm.go:394] duration metric: took 7m57.951835284s to StartCluster
	I0729 19:52:37.600799 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:52:37.600929 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:52:37.643870 1120970 cri.go:89] found id: ""
	I0729 19:52:37.643913 1120970 logs.go:276] 0 containers: []
	W0729 19:52:37.643921 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:52:37.643928 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:52:37.643993 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:52:37.679484 1120970 cri.go:89] found id: ""
	I0729 19:52:37.679519 1120970 logs.go:276] 0 containers: []
	W0729 19:52:37.679529 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:52:37.679535 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:52:37.679602 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:52:37.716326 1120970 cri.go:89] found id: ""
	I0729 19:52:37.716358 1120970 logs.go:276] 0 containers: []
	W0729 19:52:37.716366 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:52:37.716372 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:52:37.716427 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:52:37.751441 1120970 cri.go:89] found id: ""
	I0729 19:52:37.751468 1120970 logs.go:276] 0 containers: []
	W0729 19:52:37.751477 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:52:37.751483 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:52:37.751555 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:52:37.791309 1120970 cri.go:89] found id: ""
	I0729 19:52:37.791334 1120970 logs.go:276] 0 containers: []
	W0729 19:52:37.791343 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:52:37.791354 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:52:37.791409 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:52:37.824637 1120970 cri.go:89] found id: ""
	I0729 19:52:37.824664 1120970 logs.go:276] 0 containers: []
	W0729 19:52:37.824674 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:52:37.824682 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:52:37.824749 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:52:37.863031 1120970 cri.go:89] found id: ""
	I0729 19:52:37.863060 1120970 logs.go:276] 0 containers: []
	W0729 19:52:37.863068 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:52:37.863074 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:52:37.863134 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:52:37.905864 1120970 cri.go:89] found id: ""
	I0729 19:52:37.905918 1120970 logs.go:276] 0 containers: []
	W0729 19:52:37.905931 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:52:37.905945 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:52:37.905965 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:52:37.958561 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:52:37.958601 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:52:37.983602 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:52:37.983635 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:52:38.080775 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:52:38.080810 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:52:38.080827 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:52:38.185475 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:52:38.185512 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
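	(Annotation) After the second kubeadm attempt times out, the entries above show minikube's diagnostic sweep: crictl is queried per control-plane container name (all return empty), then the kubelet journal, dmesg, `kubectl describe nodes`, the CRI-O journal, and overall container status are gathered. A condensed sketch of the same sweep, using only commands that appear verbatim in the log and run inside the guest:
		# Hedged sketch mirroring the diagnostic sweep logged above (commands copied from the log entries).
		sudo crictl ps -a --quiet --name=kube-apiserver    # repeated for etcd, coredns, kube-scheduler, kube-proxy, ...
		sudo journalctl -u kubelet -n 400
		sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
		sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
		sudo journalctl -u crio -n 400
		sudo `which crictl || echo crictl` ps -a || sudo docker ps -a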
	W0729 19:52:38.227581 1120970 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0729 19:52:38.227653 1120970 out.go:239] * 
	W0729 19:52:38.227722 1120970 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0729 19:52:38.227748 1120970 out.go:239] * 
	W0729 19:52:38.228777 1120970 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 19:52:38.231684 1120970 out.go:177] 
	W0729 19:52:38.232618 1120970 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0729 19:52:38.232683 1120970 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0729 19:52:38.232710 1120970 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0729 19:52:38.234472 1120970 out.go:177] 
	
	
	==> CRI-O <==
	Jul 29 19:52:39 old-k8s-version-021528 crio[648]: time="2024-07-29 19:52:39.989937009Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722282759989916513,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=cd7f2004-85e6-4246-9b92-88ccf477cc21 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 19:52:39 old-k8s-version-021528 crio[648]: time="2024-07-29 19:52:39.990381262Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=692afacd-8178-452f-b922-d9eb8d79c072 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 19:52:39 old-k8s-version-021528 crio[648]: time="2024-07-29 19:52:39.990447722Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=692afacd-8178-452f-b922-d9eb8d79c072 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 19:52:39 old-k8s-version-021528 crio[648]: time="2024-07-29 19:52:39.990488973Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=692afacd-8178-452f-b922-d9eb8d79c072 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 19:52:40 old-k8s-version-021528 crio[648]: time="2024-07-29 19:52:40.021606919Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=deb8a72f-8b06-4bb7-9673-df200c90fd28 name=/runtime.v1.RuntimeService/Version
	Jul 29 19:52:40 old-k8s-version-021528 crio[648]: time="2024-07-29 19:52:40.021705913Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=deb8a72f-8b06-4bb7-9673-df200c90fd28 name=/runtime.v1.RuntimeService/Version
	Jul 29 19:52:40 old-k8s-version-021528 crio[648]: time="2024-07-29 19:52:40.024145644Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=58c2d409-b9f5-424d-8303-0d691a09dcbe name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 19:52:40 old-k8s-version-021528 crio[648]: time="2024-07-29 19:52:40.024537777Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722282760024512749,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=58c2d409-b9f5-424d-8303-0d691a09dcbe name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 19:52:40 old-k8s-version-021528 crio[648]: time="2024-07-29 19:52:40.025232535Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9cd3e690-c915-47c6-8c57-ce7f7634759d name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 19:52:40 old-k8s-version-021528 crio[648]: time="2024-07-29 19:52:40.025305059Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9cd3e690-c915-47c6-8c57-ce7f7634759d name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 19:52:40 old-k8s-version-021528 crio[648]: time="2024-07-29 19:52:40.025337575Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=9cd3e690-c915-47c6-8c57-ce7f7634759d name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 19:52:40 old-k8s-version-021528 crio[648]: time="2024-07-29 19:52:40.058373574Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c12349a7-0952-418c-9dad-41a3654e6b22 name=/runtime.v1.RuntimeService/Version
	Jul 29 19:52:40 old-k8s-version-021528 crio[648]: time="2024-07-29 19:52:40.058473279Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c12349a7-0952-418c-9dad-41a3654e6b22 name=/runtime.v1.RuntimeService/Version
	Jul 29 19:52:40 old-k8s-version-021528 crio[648]: time="2024-07-29 19:52:40.059507537Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c7aa75dc-5308-4465-ac40-5642e61711c9 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 19:52:40 old-k8s-version-021528 crio[648]: time="2024-07-29 19:52:40.060544917Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722282760060351434,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c7aa75dc-5308-4465-ac40-5642e61711c9 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 19:52:40 old-k8s-version-021528 crio[648]: time="2024-07-29 19:52:40.062589119Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fe8f01a5-4748-4daa-b297-d7bb906dd03a name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 19:52:40 old-k8s-version-021528 crio[648]: time="2024-07-29 19:52:40.063004973Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fe8f01a5-4748-4daa-b297-d7bb906dd03a name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 19:52:40 old-k8s-version-021528 crio[648]: time="2024-07-29 19:52:40.063185478Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=fe8f01a5-4748-4daa-b297-d7bb906dd03a name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 19:52:40 old-k8s-version-021528 crio[648]: time="2024-07-29 19:52:40.097952533Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a512f981-343a-4341-a44e-dfa07b1dc48e name=/runtime.v1.RuntimeService/Version
	Jul 29 19:52:40 old-k8s-version-021528 crio[648]: time="2024-07-29 19:52:40.098023111Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a512f981-343a-4341-a44e-dfa07b1dc48e name=/runtime.v1.RuntimeService/Version
	Jul 29 19:52:40 old-k8s-version-021528 crio[648]: time="2024-07-29 19:52:40.099122783Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=460a9cc4-9ab9-479e-85c3-8c87d8724427 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 19:52:40 old-k8s-version-021528 crio[648]: time="2024-07-29 19:52:40.099493204Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722282760099471662,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=460a9cc4-9ab9-479e-85c3-8c87d8724427 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 19:52:40 old-k8s-version-021528 crio[648]: time="2024-07-29 19:52:40.100148448Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=efcfffee-a13e-4001-bff8-7de70a46c3a2 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 19:52:40 old-k8s-version-021528 crio[648]: time="2024-07-29 19:52:40.100200850Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=efcfffee-a13e-4001-bff8-7de70a46c3a2 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 19:52:40 old-k8s-version-021528 crio[648]: time="2024-07-29 19:52:40.100230567Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=efcfffee-a13e-4001-bff8-7de70a46c3a2 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Jul29 19:44] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.055089] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.042985] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.117270] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.505686] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.586022] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.594595] systemd-fstab-generator[569]: Ignoring "noauto" option for root device
	[  +0.059829] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.057895] systemd-fstab-generator[581]: Ignoring "noauto" option for root device
	[  +0.197592] systemd-fstab-generator[595]: Ignoring "noauto" option for root device
	[  +0.124559] systemd-fstab-generator[607]: Ignoring "noauto" option for root device
	[  +0.248534] systemd-fstab-generator[633]: Ignoring "noauto" option for root device
	[  +6.328570] systemd-fstab-generator[896]: Ignoring "noauto" option for root device
	[  +0.064370] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.920147] systemd-fstab-generator[1022]: Ignoring "noauto" option for root device
	[ +12.715960] kauditd_printk_skb: 46 callbacks suppressed
	[Jul29 19:48] systemd-fstab-generator[5089]: Ignoring "noauto" option for root device
	[Jul29 19:50] systemd-fstab-generator[5365]: Ignoring "noauto" option for root device
	[  +0.071408] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 19:52:40 up 8 min,  0 users,  load average: 0.08, 0.10, 0.06
	Linux old-k8s-version-021528 5.10.207 #1 SMP Tue Jul 23 04:25:44 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Jul 29 19:52:37 old-k8s-version-021528 kubelet[5546]: k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation.(*Dialer).DialContext(0xc000cbfa20, 0x4f7fe00, 0xc000120018, 0x48ab5d6, 0x3, 0xc000191c80, 0x24, 0x60, 0x7fe2124c3a50, 0x118, ...)
	Jul 29 19:52:37 old-k8s-version-021528 kubelet[5546]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation/connrotation.go:73 +0x7e
	Jul 29 19:52:37 old-k8s-version-021528 kubelet[5546]: net/http.(*Transport).dial(0xc000682000, 0x4f7fe00, 0xc000120018, 0x48ab5d6, 0x3, 0xc000191c80, 0x24, 0x0, 0x0, 0x0, ...)
	Jul 29 19:52:37 old-k8s-version-021528 kubelet[5546]:         /usr/local/go/src/net/http/transport.go:1141 +0x1fd
	Jul 29 19:52:37 old-k8s-version-021528 kubelet[5546]: net/http.(*Transport).dialConn(0xc000682000, 0x4f7fe00, 0xc000120018, 0x0, 0xc00038c600, 0x5, 0xc000191c80, 0x24, 0x0, 0xc00072c120, ...)
	Jul 29 19:52:37 old-k8s-version-021528 kubelet[5546]:         /usr/local/go/src/net/http/transport.go:1575 +0x1abb
	Jul 29 19:52:37 old-k8s-version-021528 kubelet[5546]: net/http.(*Transport).dialConnFor(0xc000682000, 0xc000a8dad0)
	Jul 29 19:52:37 old-k8s-version-021528 kubelet[5546]:         /usr/local/go/src/net/http/transport.go:1421 +0xc6
	Jul 29 19:52:37 old-k8s-version-021528 kubelet[5546]: created by net/http.(*Transport).queueForDial
	Jul 29 19:52:37 old-k8s-version-021528 kubelet[5546]:         /usr/local/go/src/net/http/transport.go:1390 +0x40f
	Jul 29 19:52:37 old-k8s-version-021528 kubelet[5546]: goroutine 163 [select]:
	Jul 29 19:52:37 old-k8s-version-021528 kubelet[5546]: net.(*netFD).connect.func2(0x4f7fe40, 0xc0002f6420, 0xc0004ecd00, 0xc000101d40, 0xc000101ce0)
	Jul 29 19:52:37 old-k8s-version-021528 kubelet[5546]:         /usr/local/go/src/net/fd_unix.go:118 +0xc5
	Jul 29 19:52:37 old-k8s-version-021528 kubelet[5546]: created by net.(*netFD).connect
	Jul 29 19:52:37 old-k8s-version-021528 kubelet[5546]:         /usr/local/go/src/net/fd_unix.go:117 +0x234
	Jul 29 19:52:37 old-k8s-version-021528 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Jul 29 19:52:37 old-k8s-version-021528 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jul 29 19:52:37 old-k8s-version-021528 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 20.
	Jul 29 19:52:37 old-k8s-version-021528 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jul 29 19:52:37 old-k8s-version-021528 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jul 29 19:52:38 old-k8s-version-021528 kubelet[5597]: I0729 19:52:38.060021    5597 server.go:416] Version: v1.20.0
	Jul 29 19:52:38 old-k8s-version-021528 kubelet[5597]: I0729 19:52:38.060257    5597 server.go:837] Client rotation is on, will bootstrap in background
	Jul 29 19:52:38 old-k8s-version-021528 kubelet[5597]: I0729 19:52:38.062247    5597 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Jul 29 19:52:38 old-k8s-version-021528 kubelet[5597]: W0729 19:52:38.063136    5597 manager.go:159] Cannot detect current cgroup on cgroup v2
	Jul 29 19:52:38 old-k8s-version-021528 kubelet[5597]: I0729 19:52:38.063821    5597 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-021528 -n old-k8s-version-021528
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-021528 -n old-k8s-version-021528: exit status 2 (247.094163ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-021528" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (703.72s)
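For orientation: the kubeadm output captured in the log above already names the follow-up steps when the kubelet never reports healthy, and minikube's own suggestion points at the cgroup driver. A minimal sketch of that triage, using only the commands and the flag quoted in the log (the profile name old-k8s-version-021528 and the start flags are taken from this report; the --extra-config value is minikube's printed suggestion, not a verified fix for this run):

    # On the node (e.g. via: minikube ssh -p old-k8s-version-021528)
    sudo systemctl status kubelet                            # is the unit running at all?
    sudo journalctl -xeu kubelet --no-pager | tail -n 50     # why it keeps exiting with status 255
    sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause   # any crashed control-plane containers

    # From the host, retry with the cgroup driver minikube suggests:
    out/minikube-linux-amd64 start -p old-k8s-version-021528 --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd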

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (544.13s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-358053 -n embed-certs-358053
start_stop_delete_test.go:274: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-07-29 19:58:15.241338992 +0000 UTC m=+6067.476182387
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
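The 9m0s wait above is driven by the test harness; a hand-run equivalent of the same check, with the label, namespace, and timeout copied from the failure message (the context name embed-certs-358053 comes from this report), would be roughly:

    kubectl --context embed-certs-358053 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard -o wide
    kubectl --context embed-certs-358053 -n kubernetes-dashboard wait pod -l k8s-app=kubernetes-dashboard --for=condition=Ready --timeout=540s   # 9m0s, as in the test

If the apiserver is unreachable, both commands fail at the connection stage rather than at the pod lookup (compare the "connection refused" output in the old-k8s-version post-mortem above).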
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-358053 -n embed-certs-358053
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-358053 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-358053 logs -n 25: (2.024108042s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-184620 sudo cat                              | bridge-184620                | jenkins | v1.33.1 | 29 Jul 24 19:36 UTC | 29 Jul 24 19:36 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p bridge-184620 sudo                                  | bridge-184620                | jenkins | v1.33.1 | 29 Jul 24 19:36 UTC | 29 Jul 24 19:36 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p bridge-184620 sudo                                  | bridge-184620                | jenkins | v1.33.1 | 29 Jul 24 19:36 UTC | 29 Jul 24 19:36 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-184620 sudo                                  | bridge-184620                | jenkins | v1.33.1 | 29 Jul 24 19:36 UTC | 29 Jul 24 19:36 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-184620 sudo find                             | bridge-184620                | jenkins | v1.33.1 | 29 Jul 24 19:36 UTC | 29 Jul 24 19:36 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-184620 sudo crio                             | bridge-184620                | jenkins | v1.33.1 | 29 Jul 24 19:36 UTC | 29 Jul 24 19:36 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-184620                                       | bridge-184620                | jenkins | v1.33.1 | 29 Jul 24 19:36 UTC | 29 Jul 24 19:36 UTC |
	| delete  | -p                                                     | disable-driver-mounts-251895 | jenkins | v1.33.1 | 29 Jul 24 19:36 UTC | 29 Jul 24 19:36 UTC |
	|         | disable-driver-mounts-251895                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-024652 | jenkins | v1.33.1 | 29 Jul 24 19:36 UTC | 29 Jul 24 19:37 UTC |
	|         | default-k8s-diff-port-024652                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-843792             | no-preload-843792            | jenkins | v1.33.1 | 29 Jul 24 19:36 UTC | 29 Jul 24 19:36 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-843792                                   | no-preload-843792            | jenkins | v1.33.1 | 29 Jul 24 19:36 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-358053            | embed-certs-358053           | jenkins | v1.33.1 | 29 Jul 24 19:36 UTC | 29 Jul 24 19:36 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-358053                                  | embed-certs-358053           | jenkins | v1.33.1 | 29 Jul 24 19:36 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-024652  | default-k8s-diff-port-024652 | jenkins | v1.33.1 | 29 Jul 24 19:37 UTC | 29 Jul 24 19:37 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-024652 | jenkins | v1.33.1 | 29 Jul 24 19:37 UTC |                     |
	|         | default-k8s-diff-port-024652                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-843792                  | no-preload-843792            | jenkins | v1.33.1 | 29 Jul 24 19:38 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-843792 --memory=2200                     | no-preload-843792            | jenkins | v1.33.1 | 29 Jul 24 19:38 UTC | 29 Jul 24 19:50 UTC |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-021528        | old-k8s-version-021528       | jenkins | v1.33.1 | 29 Jul 24 19:39 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-358053                 | embed-certs-358053           | jenkins | v1.33.1 | 29 Jul 24 19:39 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-358053                                  | embed-certs-358053           | jenkins | v1.33.1 | 29 Jul 24 19:39 UTC | 29 Jul 24 19:49 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-024652       | default-k8s-diff-port-024652 | jenkins | v1.33.1 | 29 Jul 24 19:39 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-024652 | jenkins | v1.33.1 | 29 Jul 24 19:39 UTC | 29 Jul 24 19:49 UTC |
	|         | default-k8s-diff-port-024652                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-021528                              | old-k8s-version-021528       | jenkins | v1.33.1 | 29 Jul 24 19:40 UTC | 29 Jul 24 19:40 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-021528             | old-k8s-version-021528       | jenkins | v1.33.1 | 29 Jul 24 19:40 UTC | 29 Jul 24 19:40 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-021528                              | old-k8s-version-021528       | jenkins | v1.33.1 | 29 Jul 24 19:40 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 19:40:57
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 19:40:57.978681 1120970 out.go:291] Setting OutFile to fd 1 ...
	I0729 19:40:57.978791 1120970 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 19:40:57.978802 1120970 out.go:304] Setting ErrFile to fd 2...
	I0729 19:40:57.978806 1120970 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 19:40:57.979026 1120970 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19312-1055011/.minikube/bin
	I0729 19:40:57.979596 1120970 out.go:298] Setting JSON to false
	I0729 19:40:57.980589 1120970 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":12210,"bootTime":1722269848,"procs":198,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 19:40:57.980644 1120970 start.go:139] virtualization: kvm guest
	I0729 19:40:57.982865 1120970 out.go:177] * [old-k8s-version-021528] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 19:40:57.984265 1120970 out.go:177]   - MINIKUBE_LOCATION=19312
	I0729 19:40:57.984290 1120970 notify.go:220] Checking for updates...
	I0729 19:40:57.986747 1120970 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 19:40:57.987926 1120970 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19312-1055011/kubeconfig
	I0729 19:40:57.989034 1120970 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19312-1055011/.minikube
	I0729 19:40:57.990155 1120970 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 19:40:57.991151 1120970 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 19:40:57.992788 1120970 config.go:182] Loaded profile config "old-k8s-version-021528": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0729 19:40:57.993431 1120970 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:40:57.993513 1120970 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:40:58.008423 1120970 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35781
	I0729 19:40:58.008809 1120970 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:40:58.009278 1120970 main.go:141] libmachine: Using API Version  1
	I0729 19:40:58.009298 1120970 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:40:58.009623 1120970 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:40:58.009801 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .DriverName
	I0729 19:40:58.011523 1120970 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0729 19:40:58.012638 1120970 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 19:40:58.012915 1120970 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:40:58.012949 1120970 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:40:58.027302 1120970 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38245
	I0729 19:40:58.027641 1120970 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:40:58.028112 1120970 main.go:141] libmachine: Using API Version  1
	I0729 19:40:58.028144 1120970 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:40:58.028470 1120970 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:40:58.028677 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .DriverName
	I0729 19:40:58.062833 1120970 out.go:177] * Using the kvm2 driver based on existing profile
	I0729 19:40:58.064034 1120970 start.go:297] selected driver: kvm2
	I0729 19:40:58.064048 1120970 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-021528 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-021528 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.65 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 19:40:58.064180 1120970 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 19:40:58.065210 1120970 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 19:40:58.065308 1120970 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19312-1055011/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 19:40:58.079987 1120970 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0729 19:40:58.080369 1120970 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 19:40:58.080432 1120970 cni.go:84] Creating CNI manager for ""
	I0729 19:40:58.080446 1120970 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 19:40:58.080487 1120970 start.go:340] cluster config:
	{Name:old-k8s-version-021528 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-021528 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.65 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 19:40:58.080598 1120970 iso.go:125] acquiring lock: {Name:mk0af61c0fec1fd47930e548d03010a532c687b7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 19:40:58.082281 1120970 out.go:177] * Starting "old-k8s-version-021528" primary control-plane node in "old-k8s-version-021528" cluster
	I0729 19:40:58.083538 1120970 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0729 19:40:58.083567 1120970 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0729 19:40:58.083574 1120970 cache.go:56] Caching tarball of preloaded images
	I0729 19:40:58.083648 1120970 preload.go:172] Found /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 19:40:58.083657 1120970 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0729 19:40:58.083744 1120970 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/old-k8s-version-021528/config.json ...
	I0729 19:40:58.083909 1120970 start.go:360] acquireMachinesLock for old-k8s-version-021528: {Name:mk0d8d947666df844b5fc2c0e0eebbfed69b4140 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 19:40:58.743070 1119948 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.248:22: connect: no route to host
	I0729 19:41:01.815162 1119948 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.248:22: connect: no route to host
	I0729 19:41:07.895109 1119948 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.248:22: connect: no route to host
	I0729 19:41:10.967163 1119948 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.248:22: connect: no route to host
	I0729 19:41:17.047104 1119948 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.248:22: connect: no route to host
	I0729 19:41:20.119110 1119948 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.248:22: connect: no route to host
	I0729 19:41:26.199071 1119948 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.248:22: connect: no route to host
	I0729 19:41:29.271169 1119948 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.248:22: connect: no route to host
	I0729 19:41:35.351112 1119948 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.248:22: connect: no route to host
	I0729 19:41:38.423168 1119948 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.248:22: connect: no route to host
	I0729 19:41:44.503138 1119948 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.248:22: connect: no route to host
	I0729 19:41:47.575152 1119948 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.248:22: connect: no route to host
	I0729 19:41:53.655149 1119948 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.248:22: connect: no route to host
	I0729 19:41:56.727131 1119948 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.248:22: connect: no route to host
	I0729 19:42:02.807132 1119948 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.248:22: connect: no route to host
	I0729 19:42:05.879122 1119948 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.248:22: connect: no route to host
	I0729 19:42:11.959162 1119948 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.248:22: connect: no route to host
	I0729 19:42:15.031086 1119948 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.248:22: connect: no route to host
	I0729 19:42:21.111136 1119948 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.248:22: connect: no route to host
	I0729 19:42:24.183135 1119948 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.248:22: connect: no route to host
	I0729 19:42:30.263164 1119948 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.248:22: connect: no route to host
	I0729 19:42:33.335133 1119948 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.248:22: connect: no route to host
	I0729 19:42:39.415119 1119948 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.248:22: connect: no route to host
	I0729 19:42:42.487148 1119948 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.248:22: connect: no route to host
	I0729 19:42:48.567136 1119948 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.248:22: connect: no route to host
	I0729 19:42:51.639137 1119948 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.248:22: connect: no route to host
	I0729 19:42:57.719135 1119948 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.248:22: connect: no route to host
	I0729 19:43:00.791072 1119948 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.248:22: connect: no route to host
	I0729 19:43:06.871163 1119948 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.248:22: connect: no route to host
	I0729 19:43:09.943159 1119948 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.248:22: connect: no route to host
	I0729 19:43:16.023117 1119948 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.248:22: connect: no route to host
	I0729 19:43:19.095170 1119948 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.248:22: connect: no route to host
	I0729 19:43:25.175078 1119948 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.248:22: connect: no route to host
	I0729 19:43:28.247100 1119948 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.248:22: connect: no route to host
	I0729 19:43:31.250338 1120280 start.go:364] duration metric: took 4m11.087175718s to acquireMachinesLock for "embed-certs-358053"
	I0729 19:43:31.250404 1120280 start.go:96] Skipping create...Using existing machine configuration
	I0729 19:43:31.250411 1120280 fix.go:54] fixHost starting: 
	I0729 19:43:31.250743 1120280 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:43:31.250772 1120280 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:43:31.266386 1120280 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36427
	I0729 19:43:31.266811 1120280 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:43:31.267264 1120280 main.go:141] libmachine: Using API Version  1
	I0729 19:43:31.267290 1120280 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:43:31.267606 1120280 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:43:31.267776 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .DriverName
	I0729 19:43:31.267930 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetState
	I0729 19:43:31.269434 1120280 fix.go:112] recreateIfNeeded on embed-certs-358053: state=Stopped err=<nil>
	I0729 19:43:31.269469 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .DriverName
	W0729 19:43:31.269649 1120280 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 19:43:31.271498 1120280 out.go:177] * Restarting existing kvm2 VM for "embed-certs-358053" ...
	I0729 19:43:31.248030 1119948 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 19:43:31.248063 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetMachineName
	I0729 19:43:31.248357 1119948 buildroot.go:166] provisioning hostname "no-preload-843792"
	I0729 19:43:31.248385 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetMachineName
	I0729 19:43:31.248542 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHHostname
	I0729 19:43:31.250201 1119948 machine.go:97] duration metric: took 4m37.426219796s to provisionDockerMachine
	I0729 19:43:31.250243 1119948 fix.go:56] duration metric: took 4m37.44720731s for fixHost
	I0729 19:43:31.250251 1119948 start.go:83] releasing machines lock for "no-preload-843792", held for 4m37.4472306s
	W0729 19:43:31.250275 1119948 start.go:714] error starting host: provision: host is not running
	W0729 19:43:31.250399 1119948 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0729 19:43:31.250411 1119948 start.go:729] Will try again in 5 seconds ...
	I0729 19:43:31.272835 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .Start
	I0729 19:43:31.272957 1120280 main.go:141] libmachine: (embed-certs-358053) Ensuring networks are active...
	I0729 19:43:31.273784 1120280 main.go:141] libmachine: (embed-certs-358053) Ensuring network default is active
	I0729 19:43:31.274173 1120280 main.go:141] libmachine: (embed-certs-358053) Ensuring network mk-embed-certs-358053 is active
	I0729 19:43:31.274533 1120280 main.go:141] libmachine: (embed-certs-358053) Getting domain xml...
	I0729 19:43:31.275353 1120280 main.go:141] libmachine: (embed-certs-358053) Creating domain...
	I0729 19:43:32.452915 1120280 main.go:141] libmachine: (embed-certs-358053) Waiting to get IP...
	I0729 19:43:32.453981 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:32.454389 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | unable to find current IP address of domain embed-certs-358053 in network mk-embed-certs-358053
	I0729 19:43:32.454483 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | I0729 19:43:32.454365 1121493 retry.go:31] will retry after 241.453693ms: waiting for machine to come up
	I0729 19:43:32.697915 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:32.698300 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | unable to find current IP address of domain embed-certs-358053 in network mk-embed-certs-358053
	I0729 19:43:32.698331 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | I0729 19:43:32.698251 1121493 retry.go:31] will retry after 239.33532ms: waiting for machine to come up
	I0729 19:43:32.939708 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:32.940293 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | unable to find current IP address of domain embed-certs-358053 in network mk-embed-certs-358053
	I0729 19:43:32.940318 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | I0729 19:43:32.940236 1121493 retry.go:31] will retry after 446.993297ms: waiting for machine to come up
	I0729 19:43:33.388724 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:33.389127 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | unable to find current IP address of domain embed-certs-358053 in network mk-embed-certs-358053
	I0729 19:43:33.389158 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | I0729 19:43:33.389070 1121493 retry.go:31] will retry after 422.446887ms: waiting for machine to come up
	I0729 19:43:33.812596 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:33.813022 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | unable to find current IP address of domain embed-certs-358053 in network mk-embed-certs-358053
	I0729 19:43:33.813051 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | I0729 19:43:33.812969 1121493 retry.go:31] will retry after 539.971993ms: waiting for machine to come up
	I0729 19:43:34.354683 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:34.355036 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | unable to find current IP address of domain embed-certs-358053 in network mk-embed-certs-358053
	I0729 19:43:34.355070 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | I0729 19:43:34.354984 1121493 retry.go:31] will retry after 804.005911ms: waiting for machine to come up
	I0729 19:43:36.252290 1119948 start.go:360] acquireMachinesLock for no-preload-843792: {Name:mk0d8d947666df844b5fc2c0e0eebbfed69b4140 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 19:43:35.161115 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:35.161468 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | unable to find current IP address of domain embed-certs-358053 in network mk-embed-certs-358053
	I0729 19:43:35.161505 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | I0729 19:43:35.161430 1121493 retry.go:31] will retry after 1.057061094s: waiting for machine to come up
	I0729 19:43:36.220062 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:36.220425 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | unable to find current IP address of domain embed-certs-358053 in network mk-embed-certs-358053
	I0729 19:43:36.220450 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | I0729 19:43:36.220375 1121493 retry.go:31] will retry after 1.460606435s: waiting for machine to come up
	I0729 19:43:37.683178 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:37.683636 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | unable to find current IP address of domain embed-certs-358053 in network mk-embed-certs-358053
	I0729 19:43:37.683655 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | I0729 19:43:37.683597 1121493 retry.go:31] will retry after 1.732527981s: waiting for machine to come up
	I0729 19:43:39.418519 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:39.418954 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | unable to find current IP address of domain embed-certs-358053 in network mk-embed-certs-358053
	I0729 19:43:39.418977 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | I0729 19:43:39.418904 1121493 retry.go:31] will retry after 2.125686576s: waiting for machine to come up
	I0729 19:43:41.547132 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:41.547733 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | unable to find current IP address of domain embed-certs-358053 in network mk-embed-certs-358053
	I0729 19:43:41.547761 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | I0729 19:43:41.547675 1121493 retry.go:31] will retry after 2.335461887s: waiting for machine to come up
	I0729 19:43:43.884901 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:43.885306 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | unable to find current IP address of domain embed-certs-358053 in network mk-embed-certs-358053
	I0729 19:43:43.885329 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | I0729 19:43:43.885251 1121493 retry.go:31] will retry after 2.493920061s: waiting for machine to come up
	I0729 19:43:46.380895 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:46.381249 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | unable to find current IP address of domain embed-certs-358053 in network mk-embed-certs-358053
	I0729 19:43:46.381283 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | I0729 19:43:46.381209 1121493 retry.go:31] will retry after 4.001159351s: waiting for machine to come up
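
The "waiting for machine to come up" lines above show the kvm2 driver polling the restarted VM until its DHCP lease reports an IP address, sleeping a little longer after each failed attempt. A minimal shell sketch of an equivalent wait loop against libvirt's CLI follows; the driver itself goes through the libvirt API, so the domain name, timeout, and backoff values here are illustrative assumptions taken from this log, not minikube's implementation.

	#!/usr/bin/env bash
	# Illustrative only: poll a libvirt domain until a DHCP lease exposes an IP,
	# backing off between attempts, roughly mirroring the retry lines above.
	domain="embed-certs-358053"          # domain name taken from this log
	deadline=$((SECONDS + 300))          # assumed ~5 minute overall timeout
	delay=0.25

	while (( SECONDS < deadline )); do
	  ip=$(virsh -c qemu:///system domifaddr "$domain" 2>/dev/null \
	        | awk '/ipv4/ {sub(/\/.*/, "", $4); print $4; exit}')
	  if [[ -n "$ip" ]]; then
	    echo "machine is up at $ip"
	    exit 0
	  fi
	  echo "no IP yet, retrying in ${delay}s..."
	  sleep "$delay"
	  delay=$(awk -v d="$delay" 'BEGIN { d *= 2; if (d > 4) d = 4; print d }')   # cap backoff at 4s
	done

	echo "timed out waiting for $domain to come up" >&2
	exit 1
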
	I0729 19:43:51.915678 1120587 start.go:364] duration metric: took 3m55.652628622s to acquireMachinesLock for "default-k8s-diff-port-024652"
	I0729 19:43:51.915763 1120587 start.go:96] Skipping create...Using existing machine configuration
	I0729 19:43:51.915773 1120587 fix.go:54] fixHost starting: 
	I0729 19:43:51.916253 1120587 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:43:51.916303 1120587 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:43:51.933248 1120587 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36959
	I0729 19:43:51.933631 1120587 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:43:51.934146 1120587 main.go:141] libmachine: Using API Version  1
	I0729 19:43:51.934178 1120587 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:43:51.934512 1120587 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:43:51.934710 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .DriverName
	I0729 19:43:51.934882 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetState
	I0729 19:43:51.936266 1120587 fix.go:112] recreateIfNeeded on default-k8s-diff-port-024652: state=Stopped err=<nil>
	I0729 19:43:51.936294 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .DriverName
	W0729 19:43:51.936471 1120587 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 19:43:51.938542 1120587 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-024652" ...
	I0729 19:43:50.387313 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:50.387631 1120280 main.go:141] libmachine: (embed-certs-358053) Found IP for machine: 192.168.61.201
	I0729 19:43:50.387649 1120280 main.go:141] libmachine: (embed-certs-358053) Reserving static IP address...
	I0729 19:43:50.387673 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has current primary IP address 192.168.61.201 and MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:50.388059 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | found host DHCP lease matching {name: "embed-certs-358053", mac: "52:54:00:b7:9e:78", ip: "192.168.61.201"} in network mk-embed-certs-358053: {Iface:virbr3 ExpiryTime:2024-07-29 20:43:42 +0000 UTC Type:0 Mac:52:54:00:b7:9e:78 Iaid: IPaddr:192.168.61.201 Prefix:24 Hostname:embed-certs-358053 Clientid:01:52:54:00:b7:9e:78}
	I0729 19:43:50.388088 1120280 main.go:141] libmachine: (embed-certs-358053) Reserved static IP address: 192.168.61.201
	I0729 19:43:50.388122 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | skip adding static IP to network mk-embed-certs-358053 - found existing host DHCP lease matching {name: "embed-certs-358053", mac: "52:54:00:b7:9e:78", ip: "192.168.61.201"}
	I0729 19:43:50.388140 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | Getting to WaitForSSH function...
	I0729 19:43:50.388153 1120280 main.go:141] libmachine: (embed-certs-358053) Waiting for SSH to be available...
	I0729 19:43:50.389891 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:50.390221 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:9e:78", ip: ""} in network mk-embed-certs-358053: {Iface:virbr3 ExpiryTime:2024-07-29 20:43:42 +0000 UTC Type:0 Mac:52:54:00:b7:9e:78 Iaid: IPaddr:192.168.61.201 Prefix:24 Hostname:embed-certs-358053 Clientid:01:52:54:00:b7:9e:78}
	I0729 19:43:50.390251 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined IP address 192.168.61.201 and MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:50.390327 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | Using SSH client type: external
	I0729 19:43:50.390358 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | Using SSH private key: /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/embed-certs-358053/id_rsa (-rw-------)
	I0729 19:43:50.390384 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.201 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/embed-certs-358053/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 19:43:50.390394 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | About to run SSH command:
	I0729 19:43:50.390403 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | exit 0
	I0729 19:43:50.519000 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | SSH cmd err, output: <nil>: 
	I0729 19:43:50.519409 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetConfigRaw
	I0729 19:43:50.520046 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetIP
	I0729 19:43:50.522297 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:50.522663 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:9e:78", ip: ""} in network mk-embed-certs-358053: {Iface:virbr3 ExpiryTime:2024-07-29 20:43:42 +0000 UTC Type:0 Mac:52:54:00:b7:9e:78 Iaid: IPaddr:192.168.61.201 Prefix:24 Hostname:embed-certs-358053 Clientid:01:52:54:00:b7:9e:78}
	I0729 19:43:50.522692 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined IP address 192.168.61.201 and MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:50.522946 1120280 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/embed-certs-358053/config.json ...
	I0729 19:43:50.523145 1120280 machine.go:94] provisionDockerMachine start ...
	I0729 19:43:50.523164 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .DriverName
	I0729 19:43:50.523346 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHHostname
	I0729 19:43:50.525235 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:50.525608 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:9e:78", ip: ""} in network mk-embed-certs-358053: {Iface:virbr3 ExpiryTime:2024-07-29 20:43:42 +0000 UTC Type:0 Mac:52:54:00:b7:9e:78 Iaid: IPaddr:192.168.61.201 Prefix:24 Hostname:embed-certs-358053 Clientid:01:52:54:00:b7:9e:78}
	I0729 19:43:50.525625 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined IP address 192.168.61.201 and MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:50.525729 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHPort
	I0729 19:43:50.525897 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHKeyPath
	I0729 19:43:50.526188 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHKeyPath
	I0729 19:43:50.526332 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHUsername
	I0729 19:43:50.526523 1120280 main.go:141] libmachine: Using SSH client type: native
	I0729 19:43:50.526751 1120280 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.201 22 <nil> <nil>}
	I0729 19:43:50.526765 1120280 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 19:43:50.639176 1120280 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0729 19:43:50.639206 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetMachineName
	I0729 19:43:50.639463 1120280 buildroot.go:166] provisioning hostname "embed-certs-358053"
	I0729 19:43:50.639489 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetMachineName
	I0729 19:43:50.639652 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHHostname
	I0729 19:43:50.642218 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:50.642546 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:9e:78", ip: ""} in network mk-embed-certs-358053: {Iface:virbr3 ExpiryTime:2024-07-29 20:43:42 +0000 UTC Type:0 Mac:52:54:00:b7:9e:78 Iaid: IPaddr:192.168.61.201 Prefix:24 Hostname:embed-certs-358053 Clientid:01:52:54:00:b7:9e:78}
	I0729 19:43:50.642573 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined IP address 192.168.61.201 and MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:50.642704 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHPort
	I0729 19:43:50.642896 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHKeyPath
	I0729 19:43:50.643034 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHKeyPath
	I0729 19:43:50.643188 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHUsername
	I0729 19:43:50.643396 1120280 main.go:141] libmachine: Using SSH client type: native
	I0729 19:43:50.643599 1120280 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.201 22 <nil> <nil>}
	I0729 19:43:50.643615 1120280 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-358053 && echo "embed-certs-358053" | sudo tee /etc/hostname
	I0729 19:43:50.775163 1120280 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-358053
	
	I0729 19:43:50.775200 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHHostname
	I0729 19:43:50.777834 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:50.778140 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:9e:78", ip: ""} in network mk-embed-certs-358053: {Iface:virbr3 ExpiryTime:2024-07-29 20:43:42 +0000 UTC Type:0 Mac:52:54:00:b7:9e:78 Iaid: IPaddr:192.168.61.201 Prefix:24 Hostname:embed-certs-358053 Clientid:01:52:54:00:b7:9e:78}
	I0729 19:43:50.778166 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined IP address 192.168.61.201 and MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:50.778337 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHPort
	I0729 19:43:50.778536 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHKeyPath
	I0729 19:43:50.778687 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHKeyPath
	I0729 19:43:50.778818 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHUsername
	I0729 19:43:50.778984 1120280 main.go:141] libmachine: Using SSH client type: native
	I0729 19:43:50.779150 1120280 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.201 22 <nil> <nil>}
	I0729 19:43:50.779164 1120280 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-358053' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-358053/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-358053' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 19:43:50.899709 1120280 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 19:43:50.899756 1120280 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19312-1055011/.minikube CaCertPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19312-1055011/.minikube}
	I0729 19:43:50.899791 1120280 buildroot.go:174] setting up certificates
	I0729 19:43:50.899806 1120280 provision.go:84] configureAuth start
	I0729 19:43:50.899821 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetMachineName
	I0729 19:43:50.900090 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetIP
	I0729 19:43:50.902304 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:50.902663 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:9e:78", ip: ""} in network mk-embed-certs-358053: {Iface:virbr3 ExpiryTime:2024-07-29 20:43:42 +0000 UTC Type:0 Mac:52:54:00:b7:9e:78 Iaid: IPaddr:192.168.61.201 Prefix:24 Hostname:embed-certs-358053 Clientid:01:52:54:00:b7:9e:78}
	I0729 19:43:50.902695 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined IP address 192.168.61.201 and MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:50.902787 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHHostname
	I0729 19:43:50.904815 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:50.905150 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:9e:78", ip: ""} in network mk-embed-certs-358053: {Iface:virbr3 ExpiryTime:2024-07-29 20:43:42 +0000 UTC Type:0 Mac:52:54:00:b7:9e:78 Iaid: IPaddr:192.168.61.201 Prefix:24 Hostname:embed-certs-358053 Clientid:01:52:54:00:b7:9e:78}
	I0729 19:43:50.905170 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined IP address 192.168.61.201 and MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:50.905279 1120280 provision.go:143] copyHostCerts
	I0729 19:43:50.905350 1120280 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.pem, removing ...
	I0729 19:43:50.905366 1120280 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.pem
	I0729 19:43:50.905446 1120280 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.pem (1082 bytes)
	I0729 19:43:50.905561 1120280 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-1055011/.minikube/cert.pem, removing ...
	I0729 19:43:50.905573 1120280 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-1055011/.minikube/cert.pem
	I0729 19:43:50.905626 1120280 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19312-1055011/.minikube/cert.pem (1123 bytes)
	I0729 19:43:50.905704 1120280 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-1055011/.minikube/key.pem, removing ...
	I0729 19:43:50.905713 1120280 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-1055011/.minikube/key.pem
	I0729 19:43:50.905746 1120280 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19312-1055011/.minikube/key.pem (1679 bytes)
	I0729 19:43:50.905815 1120280 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca-key.pem org=jenkins.embed-certs-358053 san=[127.0.0.1 192.168.61.201 embed-certs-358053 localhost minikube]
	I0729 19:43:51.198616 1120280 provision.go:177] copyRemoteCerts
	I0729 19:43:51.198692 1120280 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 19:43:51.198734 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHHostname
	I0729 19:43:51.201272 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:51.201527 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:9e:78", ip: ""} in network mk-embed-certs-358053: {Iface:virbr3 ExpiryTime:2024-07-29 20:43:42 +0000 UTC Type:0 Mac:52:54:00:b7:9e:78 Iaid: IPaddr:192.168.61.201 Prefix:24 Hostname:embed-certs-358053 Clientid:01:52:54:00:b7:9e:78}
	I0729 19:43:51.201556 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined IP address 192.168.61.201 and MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:51.201681 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHPort
	I0729 19:43:51.201876 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHKeyPath
	I0729 19:43:51.202054 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHUsername
	I0729 19:43:51.202170 1120280 sshutil.go:53] new ssh client: &{IP:192.168.61.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/embed-certs-358053/id_rsa Username:docker}
	I0729 19:43:51.290007 1120280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0729 19:43:51.316649 1120280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0729 19:43:51.340617 1120280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0729 19:43:51.363465 1120280 provision.go:87] duration metric: took 463.642377ms to configureAuth
	I0729 19:43:51.363495 1120280 buildroot.go:189] setting minikube options for container-runtime
	I0729 19:43:51.363700 1120280 config.go:182] Loaded profile config "embed-certs-358053": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 19:43:51.363813 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHHostname
	I0729 19:43:51.366478 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:51.366931 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:9e:78", ip: ""} in network mk-embed-certs-358053: {Iface:virbr3 ExpiryTime:2024-07-29 20:43:42 +0000 UTC Type:0 Mac:52:54:00:b7:9e:78 Iaid: IPaddr:192.168.61.201 Prefix:24 Hostname:embed-certs-358053 Clientid:01:52:54:00:b7:9e:78}
	I0729 19:43:51.366973 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined IP address 192.168.61.201 and MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:51.367080 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHPort
	I0729 19:43:51.367280 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHKeyPath
	I0729 19:43:51.367445 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHKeyPath
	I0729 19:43:51.367619 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHUsername
	I0729 19:43:51.367818 1120280 main.go:141] libmachine: Using SSH client type: native
	I0729 19:43:51.368013 1120280 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.201 22 <nil> <nil>}
	I0729 19:43:51.368034 1120280 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 19:43:51.670667 1120280 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 19:43:51.670700 1120280 machine.go:97] duration metric: took 1.147540887s to provisionDockerMachine
	I0729 19:43:51.670716 1120280 start.go:293] postStartSetup for "embed-certs-358053" (driver="kvm2")
	I0729 19:43:51.670728 1120280 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 19:43:51.670746 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .DriverName
	I0729 19:43:51.671114 1120280 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 19:43:51.671146 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHHostname
	I0729 19:43:51.673820 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:51.674154 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:9e:78", ip: ""} in network mk-embed-certs-358053: {Iface:virbr3 ExpiryTime:2024-07-29 20:43:42 +0000 UTC Type:0 Mac:52:54:00:b7:9e:78 Iaid: IPaddr:192.168.61.201 Prefix:24 Hostname:embed-certs-358053 Clientid:01:52:54:00:b7:9e:78}
	I0729 19:43:51.674218 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined IP address 192.168.61.201 and MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:51.674406 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHPort
	I0729 19:43:51.674602 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHKeyPath
	I0729 19:43:51.674761 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHUsername
	I0729 19:43:51.674918 1120280 sshutil.go:53] new ssh client: &{IP:192.168.61.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/embed-certs-358053/id_rsa Username:docker}
	I0729 19:43:51.762013 1120280 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 19:43:51.766211 1120280 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 19:43:51.766238 1120280 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-1055011/.minikube/addons for local assets ...
	I0729 19:43:51.766308 1120280 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-1055011/.minikube/files for local assets ...
	I0729 19:43:51.766408 1120280 filesync.go:149] local asset: /home/jenkins/minikube-integration/19312-1055011/.minikube/files/etc/ssl/certs/10622722.pem -> 10622722.pem in /etc/ssl/certs
	I0729 19:43:51.766506 1120280 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 19:43:51.776086 1120280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/files/etc/ssl/certs/10622722.pem --> /etc/ssl/certs/10622722.pem (1708 bytes)
	I0729 19:43:51.800248 1120280 start.go:296] duration metric: took 129.516946ms for postStartSetup
	I0729 19:43:51.800288 1120280 fix.go:56] duration metric: took 20.54987709s for fixHost
	I0729 19:43:51.800332 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHHostname
	I0729 19:43:51.802828 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:51.803134 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:9e:78", ip: ""} in network mk-embed-certs-358053: {Iface:virbr3 ExpiryTime:2024-07-29 20:43:42 +0000 UTC Type:0 Mac:52:54:00:b7:9e:78 Iaid: IPaddr:192.168.61.201 Prefix:24 Hostname:embed-certs-358053 Clientid:01:52:54:00:b7:9e:78}
	I0729 19:43:51.803155 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined IP address 192.168.61.201 and MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:51.803324 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHPort
	I0729 19:43:51.803552 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHKeyPath
	I0729 19:43:51.803729 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHKeyPath
	I0729 19:43:51.803867 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHUsername
	I0729 19:43:51.804024 1120280 main.go:141] libmachine: Using SSH client type: native
	I0729 19:43:51.804205 1120280 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.201 22 <nil> <nil>}
	I0729 19:43:51.804216 1120280 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0729 19:43:51.915515 1120280 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722282231.873780587
	
	I0729 19:43:51.915538 1120280 fix.go:216] guest clock: 1722282231.873780587
	I0729 19:43:51.915546 1120280 fix.go:229] Guest: 2024-07-29 19:43:51.873780587 +0000 UTC Remote: 2024-07-29 19:43:51.800292219 +0000 UTC m=+271.768915474 (delta=73.488368ms)
	I0729 19:43:51.915567 1120280 fix.go:200] guest clock delta is within tolerance: 73.488368ms
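
The guest-clock check above runs "date +%s.%N" on the VM (the "%!s(MISSING).%!N(MISSING)" text in the log is just the logger mangling the literal % signs in that command) and compares the result against the host clock, proceeding only while the delta stays within tolerance. A minimal sketch of that comparison, with the SSH target and tolerance value as assumptions rather than minikube's actual settings:

	#!/usr/bin/env bash
	# Illustrative sketch of the guest-vs-host clock comparison logged above.
	guest="docker@192.168.61.201"   # SSH target taken from this log
	tolerance=1.0                   # assumed tolerance in seconds

	host_ts=$(date +%s.%N)
	guest_ts=$(ssh "$guest" 'date +%s.%N')

	# absolute difference between the two timestamps
	delta=$(awk -v h="$host_ts" -v g="$guest_ts" 'BEGIN { d = g - h; if (d < 0) d = -d; print d }')

	if awk -v d="$delta" -v t="$tolerance" 'BEGIN { exit !(d <= t) }'; then
	  echo "guest clock delta ${delta}s is within tolerance"
	else
	  echo "guest clock delta ${delta}s exceeds ${tolerance}s" >&2
	fi
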
	I0729 19:43:51.915573 1120280 start.go:83] releasing machines lock for "embed-certs-358053", held for 20.665188917s
	I0729 19:43:51.915605 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .DriverName
	I0729 19:43:51.915924 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetIP
	I0729 19:43:51.918637 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:51.919022 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:9e:78", ip: ""} in network mk-embed-certs-358053: {Iface:virbr3 ExpiryTime:2024-07-29 20:43:42 +0000 UTC Type:0 Mac:52:54:00:b7:9e:78 Iaid: IPaddr:192.168.61.201 Prefix:24 Hostname:embed-certs-358053 Clientid:01:52:54:00:b7:9e:78}
	I0729 19:43:51.919050 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined IP address 192.168.61.201 and MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:51.919227 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .DriverName
	I0729 19:43:51.919791 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .DriverName
	I0729 19:43:51.920007 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .DriverName
	I0729 19:43:51.920098 1120280 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 19:43:51.920165 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHHostname
	I0729 19:43:51.920246 1120280 ssh_runner.go:195] Run: cat /version.json
	I0729 19:43:51.920267 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHHostname
	I0729 19:43:51.922800 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:51.923102 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:51.923134 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:9e:78", ip: ""} in network mk-embed-certs-358053: {Iface:virbr3 ExpiryTime:2024-07-29 20:43:42 +0000 UTC Type:0 Mac:52:54:00:b7:9e:78 Iaid: IPaddr:192.168.61.201 Prefix:24 Hostname:embed-certs-358053 Clientid:01:52:54:00:b7:9e:78}
	I0729 19:43:51.923173 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined IP address 192.168.61.201 and MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:51.923250 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHPort
	I0729 19:43:51.923437 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHKeyPath
	I0729 19:43:51.923595 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:9e:78", ip: ""} in network mk-embed-certs-358053: {Iface:virbr3 ExpiryTime:2024-07-29 20:43:42 +0000 UTC Type:0 Mac:52:54:00:b7:9e:78 Iaid: IPaddr:192.168.61.201 Prefix:24 Hostname:embed-certs-358053 Clientid:01:52:54:00:b7:9e:78}
	I0729 19:43:51.923615 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined IP address 192.168.61.201 and MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:51.923720 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHUsername
	I0729 19:43:51.923798 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHPort
	I0729 19:43:51.923873 1120280 sshutil.go:53] new ssh client: &{IP:192.168.61.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/embed-certs-358053/id_rsa Username:docker}
	I0729 19:43:51.923942 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHKeyPath
	I0729 19:43:51.924064 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHUsername
	I0729 19:43:51.924215 1120280 sshutil.go:53] new ssh client: &{IP:192.168.61.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/embed-certs-358053/id_rsa Username:docker}
	I0729 19:43:52.004661 1120280 ssh_runner.go:195] Run: systemctl --version
	I0729 19:43:52.032553 1120280 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 19:43:52.185919 1120280 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 19:43:52.191975 1120280 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 19:43:52.192059 1120280 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 19:43:52.210254 1120280 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 19:43:52.210276 1120280 start.go:495] detecting cgroup driver to use...
	I0729 19:43:52.210351 1120280 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 19:43:52.225580 1120280 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 19:43:52.238434 1120280 docker.go:217] disabling cri-docker service (if available) ...
	I0729 19:43:52.238501 1120280 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 19:43:52.252395 1120280 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 19:43:52.265503 1120280 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 19:43:52.376377 1120280 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 19:43:52.561796 1120280 docker.go:233] disabling docker service ...
	I0729 19:43:52.561859 1120280 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 19:43:52.579022 1120280 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 19:43:52.594679 1120280 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 19:43:52.734891 1120280 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 19:43:52.870161 1120280 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 19:43:52.884258 1120280 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 19:43:52.903923 1120280 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0729 19:43:52.903986 1120280 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:43:52.914530 1120280 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 19:43:52.914598 1120280 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:43:52.925740 1120280 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:43:52.936722 1120280 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:43:52.947290 1120280 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 19:43:52.959757 1120280 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:43:52.971452 1120280 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:43:52.990080 1120280 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
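
The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf so that CRI-O uses registry.k8s.io/pause:3.9 as the pause image, "cgroupfs" as the cgroup manager, a pod-scoped conmon cgroup, and an unprivileged-port sysctl before the daemon is restarted. A small verification sketch (not part of the test output) for checking the resulting drop-in on the node:

	# Illustrative check of the CRI-O drop-in produced by the sed commands above.
	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	  /etc/crio/crio.conf.d/02-crio.conf
	# Expected values for this run:
	#   pause_image = "registry.k8s.io/pause:3.9"
	#   cgroup_manager = "cgroupfs"
	#   conmon_cgroup = "pod"
	#   "net.ipv4.ip_unprivileged_port_start=0",
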
	I0729 19:43:53.000701 1120280 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 19:43:53.010165 1120280 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 19:43:53.010271 1120280 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 19:43:53.023594 1120280 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 19:43:53.034500 1120280 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 19:43:53.173490 1120280 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 19:43:53.327789 1120280 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 19:43:53.327894 1120280 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 19:43:53.332682 1120280 start.go:563] Will wait 60s for crictl version
	I0729 19:43:53.332738 1120280 ssh_runner.go:195] Run: which crictl
	I0729 19:43:53.337397 1120280 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 19:43:53.387722 1120280 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 19:43:53.387824 1120280 ssh_runner.go:195] Run: crio --version
	I0729 19:43:53.416029 1120280 ssh_runner.go:195] Run: crio --version
	I0729 19:43:53.447686 1120280 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0729 19:43:53.448960 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetIP
	I0729 19:43:53.451993 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:53.452334 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:9e:78", ip: ""} in network mk-embed-certs-358053: {Iface:virbr3 ExpiryTime:2024-07-29 20:43:42 +0000 UTC Type:0 Mac:52:54:00:b7:9e:78 Iaid: IPaddr:192.168.61.201 Prefix:24 Hostname:embed-certs-358053 Clientid:01:52:54:00:b7:9e:78}
	I0729 19:43:53.452360 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined IP address 192.168.61.201 and MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:53.452626 1120280 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0729 19:43:53.456620 1120280 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 19:43:53.469521 1120280 kubeadm.go:883] updating cluster {Name:embed-certs-358053 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-358053 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.201 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 19:43:53.469668 1120280 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 19:43:53.469726 1120280 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 19:43:53.510724 1120280 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0729 19:43:53.510793 1120280 ssh_runner.go:195] Run: which lz4
	I0729 19:43:53.515039 1120280 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0729 19:43:53.519349 1120280 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0729 19:43:53.519386 1120280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0729 19:43:54.962294 1120280 crio.go:462] duration metric: took 1.447300807s to copy over tarball
	I0729 19:43:54.962368 1120280 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0729 19:43:51.939977 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .Start
	I0729 19:43:51.940180 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Ensuring networks are active...
	I0729 19:43:51.940939 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Ensuring network default is active
	I0729 19:43:51.941232 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Ensuring network mk-default-k8s-diff-port-024652 is active
	I0729 19:43:51.941605 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Getting domain xml...
	I0729 19:43:51.942289 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Creating domain...
	I0729 19:43:53.197317 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Waiting to get IP...
	I0729 19:43:53.198285 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:43:53.198646 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | unable to find current IP address of domain default-k8s-diff-port-024652 in network mk-default-k8s-diff-port-024652
	I0729 19:43:53.198704 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | I0729 19:43:53.198613 1121645 retry.go:31] will retry after 305.319923ms: waiting for machine to come up
	I0729 19:43:53.505183 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:43:53.505680 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | unable to find current IP address of domain default-k8s-diff-port-024652 in network mk-default-k8s-diff-port-024652
	I0729 19:43:53.505711 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | I0729 19:43:53.505645 1121645 retry.go:31] will retry after 271.282913ms: waiting for machine to come up
	I0729 19:43:53.778388 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:43:53.778870 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | unable to find current IP address of domain default-k8s-diff-port-024652 in network mk-default-k8s-diff-port-024652
	I0729 19:43:53.778902 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | I0729 19:43:53.778815 1121645 retry.go:31] will retry after 407.395474ms: waiting for machine to come up
	I0729 19:43:54.187668 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:43:54.188110 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | unable to find current IP address of domain default-k8s-diff-port-024652 in network mk-default-k8s-diff-port-024652
	I0729 19:43:54.188135 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | I0729 19:43:54.188063 1121645 retry.go:31] will retry after 515.272845ms: waiting for machine to come up
	I0729 19:43:54.704843 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:43:54.705358 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | unable to find current IP address of domain default-k8s-diff-port-024652 in network mk-default-k8s-diff-port-024652
	I0729 19:43:54.705386 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | I0729 19:43:54.705310 1121645 retry.go:31] will retry after 509.684919ms: waiting for machine to come up
	I0729 19:43:55.217156 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:43:55.217667 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | unable to find current IP address of domain default-k8s-diff-port-024652 in network mk-default-k8s-diff-port-024652
	I0729 19:43:55.217698 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | I0729 19:43:55.217604 1121645 retry.go:31] will retry after 728.323851ms: waiting for machine to come up
	I0729 19:43:55.947597 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:43:55.948121 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | unable to find current IP address of domain default-k8s-diff-port-024652 in network mk-default-k8s-diff-port-024652
	I0729 19:43:55.948155 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | I0729 19:43:55.948059 1121645 retry.go:31] will retry after 957.165998ms: waiting for machine to come up
	I0729 19:43:57.178620 1120280 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.216195072s)
	I0729 19:43:57.178653 1120280 crio.go:469] duration metric: took 2.216329763s to extract the tarball
	I0729 19:43:57.178660 1120280 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0729 19:43:57.216574 1120280 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 19:43:57.258341 1120280 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 19:43:57.258366 1120280 cache_images.go:84] Images are preloaded, skipping loading
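cache_images.go skips image loading here because `sudo crictl images --output json` already lists everything the preload tarball carries. A self-contained sketch of such a check; note that the JSON field names (`images`, `repoTags`) and the `required` list are assumptions made for illustration, not minikube's actual logic:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
	"strings"
)

// imageList mirrors (as an assumption) the relevant shape of
// `crictl images --output json`: an "images" array whose entries carry "repoTags".
type imageList struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func main() {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	var list imageList
	if err := json.Unmarshal(out, &list); err != nil {
		fmt.Println("unexpected output:", err)
		return
	}
	have := map[string]bool{}
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			have[tag] = true
		}
	}
	// required is illustrative; the real list depends on the Kubernetes version.
	required := []string{"registry.k8s.io/kube-apiserver:v1.30.3", "registry.k8s.io/etcd:3.5.12-0"}
	var missing []string
	for _, r := range required {
		if !have[r] {
			missing = append(missing, r)
		}
	}
	if len(missing) == 0 {
		fmt.Println("all images are preloaded for cri-o runtime")
	} else {
		fmt.Println("missing:", strings.Join(missing, ", "))
	}
}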
	I0729 19:43:57.258376 1120280 kubeadm.go:934] updating node { 192.168.61.201 8443 v1.30.3 crio true true} ...
	I0729 19:43:57.258500 1120280 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-358053 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.201
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:embed-certs-358053 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 19:43:57.258563 1120280 ssh_runner.go:195] Run: crio config
	I0729 19:43:57.304755 1120280 cni.go:84] Creating CNI manager for ""
	I0729 19:43:57.304779 1120280 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 19:43:57.304793 1120280 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 19:43:57.304818 1120280 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.201 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-358053 NodeName:embed-certs-358053 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.201"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.201 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 19:43:57.304975 1120280 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.201
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-358053"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.201
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.201"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0729 19:43:57.305058 1120280 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0729 19:43:57.314803 1120280 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 19:43:57.314914 1120280 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 19:43:57.324133 1120280 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0729 19:43:57.339975 1120280 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 19:43:57.355571 1120280 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
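The 10-kubeadm.conf drop-in, kubelet.service unit, and kubeadm.yaml written above are rendered from templates filled with per-profile values (node name, node IP, port, version) and then copied over SSH. A rough sketch of that rendering step with text/template; the template text and profile fields here are invented for illustration and are not minikube's actual templates:

package main

import (
	"os"
	"text/template"
)

// profile carries the per-cluster values that vary in the rendered YAML above
// (an illustrative field set, not minikube's config struct).
type profile struct {
	NodeName string
	NodeIP   string
	BindPort int
}

// initCfg is a made-up fragment in the spirit of the InitConfiguration above.
const initCfg = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.NodeIP}}
  bindPort: {{.BindPort}}
nodeRegistration:
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.NodeIP}}
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(initCfg))
	p := profile{NodeName: "embed-certs-358053", NodeIP: "192.168.61.201", BindPort: 8443}
	// The real flow writes the rendered bytes to /var/tmp/minikube/kubeadm.yaml.new over SSH.
	if err := t.Execute(os.Stdout, p); err != nil {
		panic(err)
	}
}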
	I0729 19:43:57.371806 1120280 ssh_runner.go:195] Run: grep 192.168.61.201	control-plane.minikube.internal$ /etc/hosts
	I0729 19:43:57.375459 1120280 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.201	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 19:43:57.386809 1120280 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 19:43:57.520182 1120280 ssh_runner.go:195] Run: sudo systemctl start kubelet
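The bash one-liner a few entries up rewrites /etc/hosts so control-plane.minikube.internal resolves to the node IP, dropping any stale entry first so the step stays idempotent. The same idea done natively in Go, as a sketch that edits a scratch file rather than the real /etc/hosts:

package main

import (
	"fmt"
	"os"
	"strings"
)

// pinHost drops any existing line for the host name and appends a fresh
// "IP<TAB>name" entry, mirroring the grep -v / echo / cp one-liner in the log.
func pinHost(hostsPath, ip, name string) error {
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		trimmed := strings.TrimSpace(line)
		if strings.HasSuffix(trimmed, "\t"+name) || strings.HasSuffix(trimmed, " "+name) {
			continue // remove stale entries for the name
		}
		kept = append(kept, line)
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, name))
	return os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	// Operate on a scratch file here; the real flow edits /etc/hosts via sudo.
	if err := pinHost("hosts.sample", "192.168.61.201", "control-plane.minikube.internal"); err != nil {
		fmt.Println("error:", err)
	}
}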
	I0729 19:43:57.536218 1120280 certs.go:68] Setting up /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/embed-certs-358053 for IP: 192.168.61.201
	I0729 19:43:57.536243 1120280 certs.go:194] generating shared ca certs ...
	I0729 19:43:57.536266 1120280 certs.go:226] acquiring lock for ca certs: {Name:mkd1f0b3d7e82ac23e713dd6b75409e103935b02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 19:43:57.536463 1120280 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.key
	I0729 19:43:57.536525 1120280 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/proxy-client-ca.key
	I0729 19:43:57.536539 1120280 certs.go:256] generating profile certs ...
	I0729 19:43:57.536702 1120280 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/embed-certs-358053/client.key
	I0729 19:43:57.536777 1120280 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/embed-certs-358053/apiserver.key.05ccddd9
	I0729 19:43:57.536836 1120280 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/embed-certs-358053/proxy-client.key
	I0729 19:43:57.537011 1120280 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/1062272.pem (1338 bytes)
	W0729 19:43:57.537060 1120280 certs.go:480] ignoring /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/1062272_empty.pem, impossibly tiny 0 bytes
	I0729 19:43:57.537074 1120280 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 19:43:57.537109 1120280 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem (1082 bytes)
	I0729 19:43:57.537147 1120280 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/cert.pem (1123 bytes)
	I0729 19:43:57.537184 1120280 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/key.pem (1679 bytes)
	I0729 19:43:57.537257 1120280 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/files/etc/ssl/certs/10622722.pem (1708 bytes)
	I0729 19:43:57.538120 1120280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 19:43:57.579679 1120280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0729 19:43:57.610390 1120280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 19:43:57.646234 1120280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0729 19:43:57.680120 1120280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/embed-certs-358053/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0729 19:43:57.709780 1120280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/embed-certs-358053/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0729 19:43:57.737251 1120280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/embed-certs-358053/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 19:43:57.760519 1120280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/embed-certs-358053/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0729 19:43:57.782760 1120280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/1062272.pem --> /usr/share/ca-certificates/1062272.pem (1338 bytes)
	I0729 19:43:57.806628 1120280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/files/etc/ssl/certs/10622722.pem --> /usr/share/ca-certificates/10622722.pem (1708 bytes)
	I0729 19:43:57.831360 1120280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 19:43:57.855485 1120280 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 19:43:57.873493 1120280 ssh_runner.go:195] Run: openssl version
	I0729 19:43:57.879376 1120280 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 19:43:57.891126 1120280 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 19:43:57.895458 1120280 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 18:17 /usr/share/ca-certificates/minikubeCA.pem
	I0729 19:43:57.895501 1120280 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 19:43:57.901015 1120280 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 19:43:57.911165 1120280 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1062272.pem && ln -fs /usr/share/ca-certificates/1062272.pem /etc/ssl/certs/1062272.pem"
	I0729 19:43:57.921336 1120280 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1062272.pem
	I0729 19:43:57.925539 1120280 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 18:30 /usr/share/ca-certificates/1062272.pem
	I0729 19:43:57.925601 1120280 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1062272.pem
	I0729 19:43:57.930932 1120280 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1062272.pem /etc/ssl/certs/51391683.0"
	I0729 19:43:57.941138 1120280 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10622722.pem && ln -fs /usr/share/ca-certificates/10622722.pem /etc/ssl/certs/10622722.pem"
	I0729 19:43:57.951312 1120280 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10622722.pem
	I0729 19:43:57.955655 1120280 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 18:30 /usr/share/ca-certificates/10622722.pem
	I0729 19:43:57.955699 1120280 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10622722.pem
	I0729 19:43:57.961057 1120280 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10622722.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 19:43:57.972742 1120280 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 19:43:57.977115 1120280 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 19:43:57.982921 1120280 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 19:43:57.988708 1120280 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 19:43:57.994618 1120280 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 19:43:58.000330 1120280 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 19:43:58.006024 1120280 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
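Each `openssl x509 -checkend 86400` run above asks whether a certificate will expire within the next 24 hours. The equivalent check with Go's crypto/x509, as a sketch (the file path in main is illustrative):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in a PEM file will
// expire inside the given window, the same question `-checkend 86400` answers.
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("apiserver.crt", 24*time.Hour)
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}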
	I0729 19:43:58.011547 1120280 kubeadm.go:392] StartCluster: {Name:embed-certs-358053 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-358053 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.201 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 19:43:58.011676 1120280 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 19:43:58.011740 1120280 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 19:43:58.053520 1120280 cri.go:89] found id: ""
	I0729 19:43:58.053606 1120280 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 19:43:58.063799 1120280 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0729 19:43:58.063820 1120280 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0729 19:43:58.063881 1120280 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0729 19:43:58.073374 1120280 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0729 19:43:58.074705 1120280 kubeconfig.go:125] found "embed-certs-358053" server: "https://192.168.61.201:8443"
	I0729 19:43:58.077590 1120280 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0729 19:43:58.086714 1120280 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.201
	I0729 19:43:58.086751 1120280 kubeadm.go:1160] stopping kube-system containers ...
	I0729 19:43:58.086761 1120280 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0729 19:43:58.086809 1120280 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 19:43:58.119740 1120280 cri.go:89] found id: ""
	I0729 19:43:58.119800 1120280 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0729 19:43:58.136919 1120280 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 19:43:58.146634 1120280 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 19:43:58.146655 1120280 kubeadm.go:157] found existing configuration files:
	
	I0729 19:43:58.146732 1120280 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 19:43:58.155526 1120280 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 19:43:58.155590 1120280 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 19:43:58.165016 1120280 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 19:43:58.173988 1120280 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 19:43:58.174042 1120280 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 19:43:58.183138 1120280 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 19:43:58.191680 1120280 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 19:43:58.191733 1120280 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 19:43:58.200557 1120280 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 19:43:58.209338 1120280 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 19:43:58.209390 1120280 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 19:43:58.218439 1120280 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 19:43:58.227653 1120280 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 19:43:58.340033 1120280 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 19:43:59.181947 1120280 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0729 19:43:59.381372 1120280 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 19:43:59.452293 1120280 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0729 19:43:59.570731 1120280 api_server.go:52] waiting for apiserver process to appear ...
	I0729 19:43:59.570823 1120280 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:43:56.907408 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:43:56.907923 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | unable to find current IP address of domain default-k8s-diff-port-024652 in network mk-default-k8s-diff-port-024652
	I0729 19:43:56.907953 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | I0729 19:43:56.907850 1121645 retry.go:31] will retry after 1.254959813s: waiting for machine to come up
	I0729 19:43:58.163969 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:43:58.164402 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | unable to find current IP address of domain default-k8s-diff-port-024652 in network mk-default-k8s-diff-port-024652
	I0729 19:43:58.164435 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | I0729 19:43:58.164335 1121645 retry.go:31] will retry after 1.194411522s: waiting for machine to come up
	I0729 19:43:59.360034 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:43:59.360409 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | unable to find current IP address of domain default-k8s-diff-port-024652 in network mk-default-k8s-diff-port-024652
	I0729 19:43:59.360444 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | I0729 19:43:59.360350 1121645 retry.go:31] will retry after 1.691293374s: waiting for machine to come up
	I0729 19:44:01.054480 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:44:01.054922 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | unable to find current IP address of domain default-k8s-diff-port-024652 in network mk-default-k8s-diff-port-024652
	I0729 19:44:01.054993 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | I0729 19:44:01.054899 1121645 retry.go:31] will retry after 2.655959151s: waiting for machine to come up
	I0729 19:44:00.071291 1120280 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:00.571192 1120280 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:01.071004 1120280 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:01.086646 1120280 api_server.go:72] duration metric: took 1.515912855s to wait for apiserver process to appear ...
	I0729 19:44:01.086683 1120280 api_server.go:88] waiting for apiserver healthz status ...
	I0729 19:44:01.086713 1120280 api_server.go:253] Checking apiserver healthz at https://192.168.61.201:8443/healthz ...
	I0729 19:44:01.087274 1120280 api_server.go:269] stopped: https://192.168.61.201:8443/healthz: Get "https://192.168.61.201:8443/healthz": dial tcp 192.168.61.201:8443: connect: connection refused
	I0729 19:44:01.587598 1120280 api_server.go:253] Checking apiserver healthz at https://192.168.61.201:8443/healthz ...
	I0729 19:44:03.986744 1120280 api_server.go:279] https://192.168.61.201:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 19:44:03.986799 1120280 api_server.go:103] status: https://192.168.61.201:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 19:44:03.986814 1120280 api_server.go:253] Checking apiserver healthz at https://192.168.61.201:8443/healthz ...
	I0729 19:44:04.029552 1120280 api_server.go:279] https://192.168.61.201:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 19:44:04.029601 1120280 api_server.go:103] status: https://192.168.61.201:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 19:44:04.087847 1120280 api_server.go:253] Checking apiserver healthz at https://192.168.61.201:8443/healthz ...
	I0729 19:44:04.093457 1120280 api_server.go:279] https://192.168.61.201:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 19:44:04.093489 1120280 api_server.go:103] status: https://192.168.61.201:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 19:44:04.586941 1120280 api_server.go:253] Checking apiserver healthz at https://192.168.61.201:8443/healthz ...
	I0729 19:44:04.609655 1120280 api_server.go:279] https://192.168.61.201:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 19:44:04.609700 1120280 api_server.go:103] status: https://192.168.61.201:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 19:44:05.087081 1120280 api_server.go:253] Checking apiserver healthz at https://192.168.61.201:8443/healthz ...
	I0729 19:44:05.095282 1120280 api_server.go:279] https://192.168.61.201:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 19:44:05.095311 1120280 api_server.go:103] status: https://192.168.61.201:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 19:44:05.587782 1120280 api_server.go:253] Checking apiserver healthz at https://192.168.61.201:8443/healthz ...
	I0729 19:44:05.593073 1120280 api_server.go:279] https://192.168.61.201:8443/healthz returned 200:
	ok
	I0729 19:44:05.599042 1120280 api_server.go:141] control plane version: v1.30.3
	I0729 19:44:05.599067 1120280 api_server.go:131] duration metric: took 4.512376511s to wait for apiserver health ...
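The healthz loop above simply re-polls https://<node>:8443/healthz until it answers 200; connection-refused, 403 (anonymous user before RBAC bootstrap), and 500 (post-start hooks still running) responses are all treated as "not ready yet". A minimal polling sketch, assuming a fixed endpoint and skipping TLS verification only to stay self-contained (the real client trusts the cluster CA):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	url := "https://192.168.61.201:8443/healthz"
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err != nil {
			// e.g. connection refused while the apiserver restarts
			fmt.Println("stopped:", err)
		} else {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("healthz returned 200:", string(body))
				return
			}
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for apiserver health")
}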
	I0729 19:44:05.599076 1120280 cni.go:84] Creating CNI manager for ""
	I0729 19:44:05.599082 1120280 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 19:44:05.600932 1120280 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 19:44:03.713856 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:44:03.714306 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | unable to find current IP address of domain default-k8s-diff-port-024652 in network mk-default-k8s-diff-port-024652
	I0729 19:44:03.714363 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | I0729 19:44:03.714249 1121645 retry.go:31] will retry after 2.793831058s: waiting for machine to come up
	I0729 19:44:05.602066 1120280 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 19:44:05.612274 1120280 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0729 19:44:05.633293 1120280 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 19:44:05.646103 1120280 system_pods.go:59] 8 kube-system pods found
	I0729 19:44:05.646143 1120280 system_pods.go:61] "coredns-7db6d8ff4d-q6jm9" [a0770baf-766d-4903-a21f-6a4c1b74fb9e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0729 19:44:05.646153 1120280 system_pods.go:61] "etcd-embed-certs-358053" [cc03bfb3-c1d6-480a-b169-599b7599a5d1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0729 19:44:05.646163 1120280 system_pods.go:61] "kube-apiserver-embed-certs-358053" [8c45c66a-c954-4a84-9639-68210ad51a53] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0729 19:44:05.646174 1120280 system_pods.go:61] "kube-controller-manager-embed-certs-358053" [70266c42-fa7c-4936-b256-1eea65c57669] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0729 19:44:05.646181 1120280 system_pods.go:61] "kube-proxy-lb7hb" [e542b623-3db2-4be0-adf1-669932e6ac3d] Running
	I0729 19:44:05.646193 1120280 system_pods.go:61] "kube-scheduler-embed-certs-358053" [be79c03d-1e5a-46f5-a43a-671c37dea7d0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0729 19:44:05.646201 1120280 system_pods.go:61] "metrics-server-569cc877fc-jsvnd" [0494cc85-12fa-4afa-ab39-5c1fafcc45f8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 19:44:05.646209 1120280 system_pods.go:61] "storage-provisioner" [493de5d9-e761-49cb-b5f0-17d116b1a985] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0729 19:44:05.646221 1120280 system_pods.go:74] duration metric: took 12.906683ms to wait for pod list to return data ...
	I0729 19:44:05.646231 1120280 node_conditions.go:102] verifying NodePressure condition ...
	I0729 19:44:05.653103 1120280 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 19:44:05.653131 1120280 node_conditions.go:123] node cpu capacity is 2
	I0729 19:44:05.653161 1120280 node_conditions.go:105] duration metric: took 6.923325ms to run NodePressure ...
	I0729 19:44:05.653187 1120280 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 19:44:05.916138 1120280 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0729 19:44:05.920383 1120280 kubeadm.go:739] kubelet initialised
	I0729 19:44:05.920402 1120280 kubeadm.go:740] duration metric: took 4.239377ms waiting for restarted kubelet to initialise ...
	I0729 19:44:05.920410 1120280 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 19:44:05.925752 1120280 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-q6jm9" in "kube-system" namespace to be "Ready" ...
	I0729 19:44:07.932667 1120280 pod_ready.go:102] pod "coredns-7db6d8ff4d-q6jm9" in "kube-system" namespace has status "Ready":"False"
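pod_ready.go keeps listing kube-system pods and re-checking their Ready condition until every system-critical pod reports Ready or the 4m0s budget runs out. A rough client-go sketch of that wait; the kubeconfig path is illustrative and this is not minikube's actual waiter:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// allReady reports whether every pod in the list carries a PodReady=True condition.
func allReady(pods []corev1.Pod) bool {
	if len(pods) == 0 {
		return false
	}
	for _, p := range pods {
		ready := false
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				ready = true
			}
		}
		if !ready {
			return false
		}
	}
	return true
}

func main() {
	// Kubeconfig path is illustrative.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
		if err == nil && allReady(pods.Items) {
			fmt.Println("all kube-system pods are Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for system pods")
}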
	I0729 19:44:06.511186 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:44:06.511552 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | unable to find current IP address of domain default-k8s-diff-port-024652 in network mk-default-k8s-diff-port-024652
	I0729 19:44:06.511583 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | I0729 19:44:06.511497 1121645 retry.go:31] will retry after 3.610819354s: waiting for machine to come up
	I0729 19:44:10.126488 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:44:10.126889 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Found IP for machine: 192.168.72.100
	I0729 19:44:10.126914 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Reserving static IP address...
	I0729 19:44:10.126927 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has current primary IP address 192.168.72.100 and MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:44:10.127289 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Reserved static IP address: 192.168.72.100
	I0729 19:44:10.127313 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Waiting for SSH to be available...
	I0729 19:44:10.127342 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-024652", mac: "52:54:00:4c:73:cb", ip: "192.168.72.100"} in network mk-default-k8s-diff-port-024652: {Iface:virbr4 ExpiryTime:2024-07-29 20:44:03 +0000 UTC Type:0 Mac:52:54:00:4c:73:cb Iaid: IPaddr:192.168.72.100 Prefix:24 Hostname:default-k8s-diff-port-024652 Clientid:01:52:54:00:4c:73:cb}
	I0729 19:44:10.127390 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | skip adding static IP to network mk-default-k8s-diff-port-024652 - found existing host DHCP lease matching {name: "default-k8s-diff-port-024652", mac: "52:54:00:4c:73:cb", ip: "192.168.72.100"}
	I0729 19:44:10.127406 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | Getting to WaitForSSH function...
	I0729 19:44:10.129180 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:44:10.129499 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:73:cb", ip: ""} in network mk-default-k8s-diff-port-024652: {Iface:virbr4 ExpiryTime:2024-07-29 20:44:03 +0000 UTC Type:0 Mac:52:54:00:4c:73:cb Iaid: IPaddr:192.168.72.100 Prefix:24 Hostname:default-k8s-diff-port-024652 Clientid:01:52:54:00:4c:73:cb}
	I0729 19:44:10.129528 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined IP address 192.168.72.100 and MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:44:10.129613 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | Using SSH client type: external
	I0729 19:44:10.129633 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | Using SSH private key: /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/default-k8s-diff-port-024652/id_rsa (-rw-------)
	I0729 19:44:10.129676 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.100 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/default-k8s-diff-port-024652/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 19:44:10.129688 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | About to run SSH command:
	I0729 19:44:10.129700 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | exit 0
	I0729 19:44:10.254662 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | SSH cmd err, output: <nil>: 
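WaitForSSH above shells out to /usr/bin/ssh with StrictHostKeyChecking disabled and runs `exit 0` until the command succeeds. A simpler sketch of the same readiness gate that only retries the TCP handshake on port 22 (it checks the daemon is listening, not that the key is accepted):

package main

import (
	"fmt"
	"net"
	"time"
)

// waitForSSH dials the guest's SSH port until the TCP handshake succeeds or
// the timeout expires.
func waitForSSH(addr string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
		if err == nil {
			conn.Close()
			return nil
		}
		time.Sleep(time.Second)
	}
	return fmt.Errorf("timed out waiting for SSH on %s", addr)
}

func main() {
	if err := waitForSSH("192.168.72.100:22", 2*time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("SSH is available")
}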
	I0729 19:44:10.255021 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetConfigRaw
	I0729 19:44:10.255656 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetIP
	I0729 19:44:10.257855 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:44:10.258219 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:73:cb", ip: ""} in network mk-default-k8s-diff-port-024652: {Iface:virbr4 ExpiryTime:2024-07-29 20:44:03 +0000 UTC Type:0 Mac:52:54:00:4c:73:cb Iaid: IPaddr:192.168.72.100 Prefix:24 Hostname:default-k8s-diff-port-024652 Clientid:01:52:54:00:4c:73:cb}
	I0729 19:44:10.258251 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined IP address 192.168.72.100 and MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:44:10.258526 1120587 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/default-k8s-diff-port-024652/config.json ...
	I0729 19:44:10.258713 1120587 machine.go:94] provisionDockerMachine start ...
	I0729 19:44:10.258733 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .DriverName
	I0729 19:44:10.258968 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHHostname
	I0729 19:44:10.260864 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:44:10.261120 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:73:cb", ip: ""} in network mk-default-k8s-diff-port-024652: {Iface:virbr4 ExpiryTime:2024-07-29 20:44:03 +0000 UTC Type:0 Mac:52:54:00:4c:73:cb Iaid: IPaddr:192.168.72.100 Prefix:24 Hostname:default-k8s-diff-port-024652 Clientid:01:52:54:00:4c:73:cb}
	I0729 19:44:10.261149 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined IP address 192.168.72.100 and MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:44:10.261275 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHPort
	I0729 19:44:10.261460 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHKeyPath
	I0729 19:44:10.261635 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHKeyPath
	I0729 19:44:10.261778 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHUsername
	I0729 19:44:10.261932 1120587 main.go:141] libmachine: Using SSH client type: native
	I0729 19:44:10.262111 1120587 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.100 22 <nil> <nil>}
	I0729 19:44:10.262121 1120587 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 19:44:10.371225 1120587 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0729 19:44:10.371261 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetMachineName
	I0729 19:44:10.371516 1120587 buildroot.go:166] provisioning hostname "default-k8s-diff-port-024652"
	I0729 19:44:10.371545 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetMachineName
	I0729 19:44:10.371756 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHHostname
	I0729 19:44:10.374071 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:44:10.374356 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:73:cb", ip: ""} in network mk-default-k8s-diff-port-024652: {Iface:virbr4 ExpiryTime:2024-07-29 20:44:03 +0000 UTC Type:0 Mac:52:54:00:4c:73:cb Iaid: IPaddr:192.168.72.100 Prefix:24 Hostname:default-k8s-diff-port-024652 Clientid:01:52:54:00:4c:73:cb}
	I0729 19:44:10.374391 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined IP address 192.168.72.100 and MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:44:10.374479 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHPort
	I0729 19:44:10.374654 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHKeyPath
	I0729 19:44:10.374808 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHKeyPath
	I0729 19:44:10.374933 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHUsername
	I0729 19:44:10.375126 1120587 main.go:141] libmachine: Using SSH client type: native
	I0729 19:44:10.375324 1120587 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.100 22 <nil> <nil>}
	I0729 19:44:10.375338 1120587 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-024652 && echo "default-k8s-diff-port-024652" | sudo tee /etc/hostname
	I0729 19:44:10.499041 1120587 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-024652
	
	I0729 19:44:10.499075 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHHostname
	I0729 19:44:10.501635 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:44:10.501943 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:73:cb", ip: ""} in network mk-default-k8s-diff-port-024652: {Iface:virbr4 ExpiryTime:2024-07-29 20:44:03 +0000 UTC Type:0 Mac:52:54:00:4c:73:cb Iaid: IPaddr:192.168.72.100 Prefix:24 Hostname:default-k8s-diff-port-024652 Clientid:01:52:54:00:4c:73:cb}
	I0729 19:44:10.501973 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined IP address 192.168.72.100 and MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:44:10.502136 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHPort
	I0729 19:44:10.502318 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHKeyPath
	I0729 19:44:10.502494 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHKeyPath
	I0729 19:44:10.502669 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHUsername
	I0729 19:44:10.502826 1120587 main.go:141] libmachine: Using SSH client type: native
	I0729 19:44:10.503019 1120587 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.100 22 <nil> <nil>}
	I0729 19:44:10.503042 1120587 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-024652' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-024652/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-024652' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 19:44:10.619637 1120587 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 19:44:10.619673 1120587 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19312-1055011/.minikube CaCertPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19312-1055011/.minikube}
	I0729 19:44:10.619708 1120587 buildroot.go:174] setting up certificates
	I0729 19:44:10.619719 1120587 provision.go:84] configureAuth start
	I0729 19:44:10.619728 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetMachineName
	I0729 19:44:10.620036 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetIP
	I0729 19:44:10.622502 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:44:10.622810 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:73:cb", ip: ""} in network mk-default-k8s-diff-port-024652: {Iface:virbr4 ExpiryTime:2024-07-29 20:44:03 +0000 UTC Type:0 Mac:52:54:00:4c:73:cb Iaid: IPaddr:192.168.72.100 Prefix:24 Hostname:default-k8s-diff-port-024652 Clientid:01:52:54:00:4c:73:cb}
	I0729 19:44:10.622841 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined IP address 192.168.72.100 and MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:44:10.622932 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHHostname
	I0729 19:44:10.625181 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:44:10.625508 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:73:cb", ip: ""} in network mk-default-k8s-diff-port-024652: {Iface:virbr4 ExpiryTime:2024-07-29 20:44:03 +0000 UTC Type:0 Mac:52:54:00:4c:73:cb Iaid: IPaddr:192.168.72.100 Prefix:24 Hostname:default-k8s-diff-port-024652 Clientid:01:52:54:00:4c:73:cb}
	I0729 19:44:10.625531 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined IP address 192.168.72.100 and MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:44:10.625681 1120587 provision.go:143] copyHostCerts
	I0729 19:44:10.625743 1120587 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.pem, removing ...
	I0729 19:44:10.625755 1120587 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.pem
	I0729 19:44:10.625825 1120587 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.pem (1082 bytes)
	I0729 19:44:10.625929 1120587 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-1055011/.minikube/cert.pem, removing ...
	I0729 19:44:10.625937 1120587 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-1055011/.minikube/cert.pem
	I0729 19:44:10.625960 1120587 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19312-1055011/.minikube/cert.pem (1123 bytes)
	I0729 19:44:10.626015 1120587 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-1055011/.minikube/key.pem, removing ...
	I0729 19:44:10.626021 1120587 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-1055011/.minikube/key.pem
	I0729 19:44:10.626042 1120587 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19312-1055011/.minikube/key.pem (1679 bytes)
	I0729 19:44:10.626089 1120587 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-024652 san=[127.0.0.1 192.168.72.100 default-k8s-diff-port-024652 localhost minikube]
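The provision.go:117 line above issues a per-machine server certificate whose SANs cover 127.0.0.1, the guest IP 192.168.72.100, the machine hostname, localhost and minikube. As a hedged sketch only (not minikube's actual code path; the throwaway in-memory CA below stands in for ca.pem/ca-key.pem), the same SAN-bearing certificate can be produced with crypto/x509 like this:

// sketch: issue a server certificate carrying the SANs listed in the log line above
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// throwaway CA for illustration; the real flow reuses ca.pem / ca-key.pem
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	caCert, _ := x509.ParseCertificate(caDER)

	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-024652"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs taken from the provision.go:117 line above
		DNSNames:    []string{"default-k8s-diff-port-024652", "localhost", "minikube"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.100")},
	}
	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	out := pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
	if err := os.WriteFile("server.pem", out, 0o644); err != nil {
		log.Fatal(err)
	}
}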
	I0729 19:44:10.750576 1120587 provision.go:177] copyRemoteCerts
	I0729 19:44:10.750651 1120587 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 19:44:10.750713 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHHostname
	I0729 19:44:10.753390 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:44:10.753745 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:73:cb", ip: ""} in network mk-default-k8s-diff-port-024652: {Iface:virbr4 ExpiryTime:2024-07-29 20:44:03 +0000 UTC Type:0 Mac:52:54:00:4c:73:cb Iaid: IPaddr:192.168.72.100 Prefix:24 Hostname:default-k8s-diff-port-024652 Clientid:01:52:54:00:4c:73:cb}
	I0729 19:44:10.753791 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined IP address 192.168.72.100 and MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:44:10.753942 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHPort
	I0729 19:44:10.754149 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHKeyPath
	I0729 19:44:10.754330 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHUsername
	I0729 19:44:10.754514 1120587 sshutil.go:53] new ssh client: &{IP:192.168.72.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/default-k8s-diff-port-024652/id_rsa Username:docker}
	I0729 19:44:10.836524 1120587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0729 19:44:10.861913 1120587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0729 19:44:10.885539 1120587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0729 19:44:10.909851 1120587 provision.go:87] duration metric: took 290.118473ms to configureAuth
	I0729 19:44:10.909880 1120587 buildroot.go:189] setting minikube options for container-runtime
	I0729 19:44:10.910051 1120587 config.go:182] Loaded profile config "default-k8s-diff-port-024652": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 19:44:10.910127 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHHostname
	I0729 19:44:10.912662 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:44:10.912962 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:73:cb", ip: ""} in network mk-default-k8s-diff-port-024652: {Iface:virbr4 ExpiryTime:2024-07-29 20:44:03 +0000 UTC Type:0 Mac:52:54:00:4c:73:cb Iaid: IPaddr:192.168.72.100 Prefix:24 Hostname:default-k8s-diff-port-024652 Clientid:01:52:54:00:4c:73:cb}
	I0729 19:44:10.912993 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined IP address 192.168.72.100 and MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:44:10.913224 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHPort
	I0729 19:44:10.913429 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHKeyPath
	I0729 19:44:10.913601 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHKeyPath
	I0729 19:44:10.913744 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHUsername
	I0729 19:44:10.913882 1120587 main.go:141] libmachine: Using SSH client type: native
	I0729 19:44:10.914096 1120587 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.100 22 <nil> <nil>}
	I0729 19:44:10.914112 1120587 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 19:44:11.419483 1120970 start.go:364] duration metric: took 3m13.335541366s to acquireMachinesLock for "old-k8s-version-021528"
	I0729 19:44:11.419549 1120970 start.go:96] Skipping create...Using existing machine configuration
	I0729 19:44:11.419560 1120970 fix.go:54] fixHost starting: 
	I0729 19:44:11.419981 1120970 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:44:11.420020 1120970 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:44:11.437552 1120970 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44419
	I0729 19:44:11.437927 1120970 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:44:11.438424 1120970 main.go:141] libmachine: Using API Version  1
	I0729 19:44:11.438449 1120970 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:44:11.438787 1120970 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:44:11.438995 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .DriverName
	I0729 19:44:11.439201 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetState
	I0729 19:44:11.440476 1120970 fix.go:112] recreateIfNeeded on old-k8s-version-021528: state=Stopped err=<nil>
	I0729 19:44:11.440514 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .DriverName
	W0729 19:44:11.440692 1120970 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 19:44:11.442528 1120970 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-021528" ...
	I0729 19:44:11.181850 1120587 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 19:44:11.181877 1120587 machine.go:97] duration metric: took 923.15162ms to provisionDockerMachine
	I0729 19:44:11.181889 1120587 start.go:293] postStartSetup for "default-k8s-diff-port-024652" (driver="kvm2")
	I0729 19:44:11.181899 1120587 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 19:44:11.181914 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .DriverName
	I0729 19:44:11.182289 1120587 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 19:44:11.182322 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHHostname
	I0729 19:44:11.185275 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:44:11.185761 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:73:cb", ip: ""} in network mk-default-k8s-diff-port-024652: {Iface:virbr4 ExpiryTime:2024-07-29 20:44:03 +0000 UTC Type:0 Mac:52:54:00:4c:73:cb Iaid: IPaddr:192.168.72.100 Prefix:24 Hostname:default-k8s-diff-port-024652 Clientid:01:52:54:00:4c:73:cb}
	I0729 19:44:11.185791 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined IP address 192.168.72.100 and MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:44:11.186002 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHPort
	I0729 19:44:11.186282 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHKeyPath
	I0729 19:44:11.186467 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHUsername
	I0729 19:44:11.186620 1120587 sshutil.go:53] new ssh client: &{IP:192.168.72.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/default-k8s-diff-port-024652/id_rsa Username:docker}
	I0729 19:44:11.268993 1120587 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 19:44:11.273072 1120587 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 19:44:11.273093 1120587 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-1055011/.minikube/addons for local assets ...
	I0729 19:44:11.273161 1120587 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-1055011/.minikube/files for local assets ...
	I0729 19:44:11.273244 1120587 filesync.go:149] local asset: /home/jenkins/minikube-integration/19312-1055011/.minikube/files/etc/ssl/certs/10622722.pem -> 10622722.pem in /etc/ssl/certs
	I0729 19:44:11.273353 1120587 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 19:44:11.282258 1120587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/files/etc/ssl/certs/10622722.pem --> /etc/ssl/certs/10622722.pem (1708 bytes)
	I0729 19:44:11.305957 1120587 start.go:296] duration metric: took 124.053991ms for postStartSetup
	I0729 19:44:11.305998 1120587 fix.go:56] duration metric: took 19.39022657s for fixHost
	I0729 19:44:11.306024 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHHostname
	I0729 19:44:11.308452 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:44:11.308881 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:73:cb", ip: ""} in network mk-default-k8s-diff-port-024652: {Iface:virbr4 ExpiryTime:2024-07-29 20:44:03 +0000 UTC Type:0 Mac:52:54:00:4c:73:cb Iaid: IPaddr:192.168.72.100 Prefix:24 Hostname:default-k8s-diff-port-024652 Clientid:01:52:54:00:4c:73:cb}
	I0729 19:44:11.308902 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined IP address 192.168.72.100 and MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:44:11.309099 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHPort
	I0729 19:44:11.309321 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHKeyPath
	I0729 19:44:11.309507 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHKeyPath
	I0729 19:44:11.309646 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHUsername
	I0729 19:44:11.309836 1120587 main.go:141] libmachine: Using SSH client type: native
	I0729 19:44:11.310009 1120587 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.100 22 <nil> <nil>}
	I0729 19:44:11.310021 1120587 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0729 19:44:11.419338 1120587 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722282251.371238734
	
	I0729 19:44:11.419359 1120587 fix.go:216] guest clock: 1722282251.371238734
	I0729 19:44:11.419366 1120587 fix.go:229] Guest: 2024-07-29 19:44:11.371238734 +0000 UTC Remote: 2024-07-29 19:44:11.306004097 +0000 UTC m=+255.178971379 (delta=65.234637ms)
	I0729 19:44:11.419386 1120587 fix.go:200] guest clock delta is within tolerance: 65.234637ms
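The three fix.go lines above are the guest clock check: the output of date +%s.%N on the guest is compared against the host-side timestamp, and the start only proceeds without a resync because the 65.234637ms delta is within tolerance. A rough Go sketch of that comparison, using the values from the log (the 2s tolerance is an assumed placeholder, not minikube's configured threshold):

// sketch: parse the guest's "date +%s.%N" output and compare it to the host clock
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

func guestClockDelta(guestOut string, local time.Time) (time.Duration, error) {
	parts := strings.Split(strings.TrimSpace(guestOut), ".")
	if len(parts) != 2 {
		return 0, fmt.Errorf("unexpected date output %q", guestOut)
	}
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return 0, err
	}
	// %N is zero-padded to 9 digits, so it can be read directly as nanoseconds
	nsec, err := strconv.ParseInt(parts[1], 10, 64)
	if err != nil {
		return 0, err
	}
	d := time.Unix(sec, nsec).Sub(local)
	if d < 0 {
		d = -d
	}
	return d, nil
}

func main() {
	// timestamps copied from the log lines above
	d, err := guestClockDelta("1722282251.371238734", time.Unix(1722282251, 306004097))
	if err != nil {
		panic(err)
	}
	const tolerance = 2 * time.Second // assumed tolerance, for illustration only
	fmt.Printf("delta=%v withinTolerance=%v\n", d, d < tolerance)
}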
	I0729 19:44:11.419394 1120587 start.go:83] releasing machines lock for "default-k8s-diff-port-024652", held for 19.503660828s
	I0729 19:44:11.419418 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .DriverName
	I0729 19:44:11.419749 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetIP
	I0729 19:44:11.422054 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:44:11.422377 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:73:cb", ip: ""} in network mk-default-k8s-diff-port-024652: {Iface:virbr4 ExpiryTime:2024-07-29 20:44:03 +0000 UTC Type:0 Mac:52:54:00:4c:73:cb Iaid: IPaddr:192.168.72.100 Prefix:24 Hostname:default-k8s-diff-port-024652 Clientid:01:52:54:00:4c:73:cb}
	I0729 19:44:11.422421 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined IP address 192.168.72.100 and MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:44:11.422552 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .DriverName
	I0729 19:44:11.423087 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .DriverName
	I0729 19:44:11.423284 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .DriverName
	I0729 19:44:11.423358 1120587 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 19:44:11.423410 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHHostname
	I0729 19:44:11.423511 1120587 ssh_runner.go:195] Run: cat /version.json
	I0729 19:44:11.423540 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHHostname
	I0729 19:44:11.426070 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:44:11.426323 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:44:11.426440 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:73:cb", ip: ""} in network mk-default-k8s-diff-port-024652: {Iface:virbr4 ExpiryTime:2024-07-29 20:44:03 +0000 UTC Type:0 Mac:52:54:00:4c:73:cb Iaid: IPaddr:192.168.72.100 Prefix:24 Hostname:default-k8s-diff-port-024652 Clientid:01:52:54:00:4c:73:cb}
	I0729 19:44:11.426471 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined IP address 192.168.72.100 and MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:44:11.426579 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHPort
	I0729 19:44:11.426774 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHKeyPath
	I0729 19:44:11.426918 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHUsername
	I0729 19:44:11.426957 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:73:cb", ip: ""} in network mk-default-k8s-diff-port-024652: {Iface:virbr4 ExpiryTime:2024-07-29 20:44:03 +0000 UTC Type:0 Mac:52:54:00:4c:73:cb Iaid: IPaddr:192.168.72.100 Prefix:24 Hostname:default-k8s-diff-port-024652 Clientid:01:52:54:00:4c:73:cb}
	I0729 19:44:11.426981 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined IP address 192.168.72.100 and MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:44:11.427069 1120587 sshutil.go:53] new ssh client: &{IP:192.168.72.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/default-k8s-diff-port-024652/id_rsa Username:docker}
	I0729 19:44:11.427176 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHPort
	I0729 19:44:11.427343 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHKeyPath
	I0729 19:44:11.427534 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHUsername
	I0729 19:44:11.427700 1120587 sshutil.go:53] new ssh client: &{IP:192.168.72.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/default-k8s-diff-port-024652/id_rsa Username:docker}
	I0729 19:44:11.536440 1120587 ssh_runner.go:195] Run: systemctl --version
	I0729 19:44:11.542493 1120587 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 19:44:11.688795 1120587 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 19:44:11.696783 1120587 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 19:44:11.696855 1120587 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 19:44:11.717067 1120587 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 19:44:11.717091 1120587 start.go:495] detecting cgroup driver to use...
	I0729 19:44:11.717157 1120587 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 19:44:11.735056 1120587 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 19:44:11.748999 1120587 docker.go:217] disabling cri-docker service (if available) ...
	I0729 19:44:11.749061 1120587 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 19:44:11.764244 1120587 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 19:44:11.778072 1120587 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 19:44:11.893008 1120587 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 19:44:12.053939 1120587 docker.go:233] disabling docker service ...
	I0729 19:44:12.054035 1120587 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 19:44:12.068666 1120587 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 19:44:12.085766 1120587 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 19:44:12.232278 1120587 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 19:44:12.356403 1120587 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 19:44:12.370085 1120587 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 19:44:12.388817 1120587 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0729 19:44:12.388879 1120587 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:44:12.399945 1120587 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 19:44:12.400017 1120587 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:44:12.410117 1120587 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:44:12.422162 1120587 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:44:12.433170 1120587 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 19:44:12.444386 1120587 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:44:12.455009 1120587 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:44:12.472279 1120587 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
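The sed/grep sequence above rewrites the CRI-O drop-in /etc/crio/crio.conf.d/02-crio.conf in place: pause image pinned to registry.k8s.io/pause:3.9, cgroup manager switched to cgroupfs, conmon moved into the pod cgroup, and net.ipv4.ip_unprivileged_port_start=0 injected into default_sysctls. Assuming a stock drop-in, the result looks roughly like the following (reconstructed for illustration, not captured from the VM):

[crio.image]
pause_image = "registry.k8s.io/pause:3.9"

[crio.runtime]
cgroup_manager = "cgroupfs"
conmon_cgroup = "pod"
default_sysctls = [
  "net.ipv4.ip_unprivileged_port_start=0",
]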
	I0729 19:44:12.482431 1120587 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 19:44:12.492028 1120587 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 19:44:12.492097 1120587 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 19:44:12.505966 1120587 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
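The failed sysctl probe followed by modprobe above is the usual fallback: net.bridge.bridge-nf-call-iptables only exists once the br_netfilter module is loaded, so the missing key is treated as "load the module", after which IPv4 forwarding is switched on. A simplified local sketch of that check-then-load pattern (run as root; commands mirror the log, the ssh_runner indirection is omitted):

// sketch: probe the bridge-netfilter sysctl and load br_netfilter only if it is missing
package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	if err := exec.Command("sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
		// the key is absent until the module is loaded, which is what the log shows
		if err := exec.Command("modprobe", "br_netfilter"); err.Run() != nil {
			log.Fatalf("modprobe br_netfilter: %v", err)
		}
	}
	// equivalent of: echo 1 > /proc/sys/net/ipv4/ip_forward
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0o644); err != nil {
		log.Fatalf("enable ip_forward: %v", err)
	}
}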
	I0729 19:44:12.515505 1120587 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 19:44:12.639691 1120587 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 19:44:12.781358 1120587 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 19:44:12.781427 1120587 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 19:44:12.786218 1120587 start.go:563] Will wait 60s for crictl version
	I0729 19:44:12.786312 1120587 ssh_runner.go:195] Run: which crictl
	I0729 19:44:12.790056 1120587 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 19:44:12.830355 1120587 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 19:44:12.830451 1120587 ssh_runner.go:195] Run: crio --version
	I0729 19:44:12.859119 1120587 ssh_runner.go:195] Run: crio --version
	I0729 19:44:12.892473 1120587 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0729 19:44:11.443772 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .Start
	I0729 19:44:11.443926 1120970 main.go:141] libmachine: (old-k8s-version-021528) Ensuring networks are active...
	I0729 19:44:11.444570 1120970 main.go:141] libmachine: (old-k8s-version-021528) Ensuring network default is active
	I0729 19:44:11.444890 1120970 main.go:141] libmachine: (old-k8s-version-021528) Ensuring network mk-old-k8s-version-021528 is active
	I0729 19:44:11.445234 1120970 main.go:141] libmachine: (old-k8s-version-021528) Getting domain xml...
	I0729 19:44:11.445994 1120970 main.go:141] libmachine: (old-k8s-version-021528) Creating domain...
	I0729 19:44:12.696734 1120970 main.go:141] libmachine: (old-k8s-version-021528) Waiting to get IP...
	I0729 19:44:12.697599 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:12.697967 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | unable to find current IP address of domain old-k8s-version-021528 in network mk-old-k8s-version-021528
	I0729 19:44:12.698075 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | I0729 19:44:12.697953 1121841 retry.go:31] will retry after 228.228482ms: waiting for machine to come up
	I0729 19:44:12.927713 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:12.928250 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | unable to find current IP address of domain old-k8s-version-021528 in network mk-old-k8s-version-021528
	I0729 19:44:12.928280 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | I0729 19:44:12.928204 1121841 retry.go:31] will retry after 241.659418ms: waiting for machine to come up
	I0729 19:44:10.432255 1120280 pod_ready.go:102] pod "coredns-7db6d8ff4d-q6jm9" in "kube-system" namespace has status "Ready":"False"
	I0729 19:44:12.932761 1120280 pod_ready.go:102] pod "coredns-7db6d8ff4d-q6jm9" in "kube-system" namespace has status "Ready":"False"
	I0729 19:44:14.934282 1120280 pod_ready.go:102] pod "coredns-7db6d8ff4d-q6jm9" in "kube-system" namespace has status "Ready":"False"
	I0729 19:44:12.893725 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetIP
	I0729 19:44:12.897014 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:44:12.897401 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:73:cb", ip: ""} in network mk-default-k8s-diff-port-024652: {Iface:virbr4 ExpiryTime:2024-07-29 20:44:03 +0000 UTC Type:0 Mac:52:54:00:4c:73:cb Iaid: IPaddr:192.168.72.100 Prefix:24 Hostname:default-k8s-diff-port-024652 Clientid:01:52:54:00:4c:73:cb}
	I0729 19:44:12.897431 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined IP address 192.168.72.100 and MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:44:12.897621 1120587 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0729 19:44:12.902155 1120587 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 19:44:12.915460 1120587 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-024652 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.30.3 ClusterName:default-k8s-diff-port-024652 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.100 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirat
ion:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 19:44:12.915581 1120587 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 19:44:12.915631 1120587 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 19:44:12.956377 1120587 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0729 19:44:12.956444 1120587 ssh_runner.go:195] Run: which lz4
	I0729 19:44:12.960415 1120587 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0729 19:44:12.964785 1120587 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0729 19:44:12.964819 1120587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0729 19:44:14.422427 1120587 crio.go:462] duration metric: took 1.462052598s to copy over tarball
	I0729 19:44:14.422514 1120587 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
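The stat existence check exiting with status 1 above is what triggers the ~406 MB preload transfer; once the tarball is on the guest it is unpacked straight into /var with lz4. A condensed sketch of that copy-if-missing pattern run locally instead of through ssh_runner (paths from the log; the scp step is elided):

// sketch: only extract the preload tarball once it is actually present on the guest
package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	if _, err := os.Stat("/preloaded.tar.lz4"); err != nil {
		// in the log this is the point where ssh_runner scps the preload tarball over first
		log.Fatalf("preload tarball missing, copy it before extracting: %v", err)
	}
	out, err := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4").CombinedOutput()
	if err != nil {
		log.Fatalf("extract preload: %v\n%s", err, out)
	}
}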
	I0729 19:44:13.171713 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:13.172206 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | unable to find current IP address of domain old-k8s-version-021528 in network mk-old-k8s-version-021528
	I0729 19:44:13.172234 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | I0729 19:44:13.172165 1121841 retry.go:31] will retry after 475.69466ms: waiting for machine to come up
	I0729 19:44:13.649741 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:13.650180 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | unable to find current IP address of domain old-k8s-version-021528 in network mk-old-k8s-version-021528
	I0729 19:44:13.650210 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | I0729 19:44:13.650126 1121841 retry.go:31] will retry after 556.03832ms: waiting for machine to come up
	I0729 19:44:14.207549 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:14.208045 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | unable to find current IP address of domain old-k8s-version-021528 in network mk-old-k8s-version-021528
	I0729 19:44:14.208080 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | I0729 19:44:14.207996 1121841 retry.go:31] will retry after 699.802636ms: waiting for machine to come up
	I0729 19:44:14.909153 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:14.909708 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | unable to find current IP address of domain old-k8s-version-021528 in network mk-old-k8s-version-021528
	I0729 19:44:14.909736 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | I0729 19:44:14.909677 1121841 retry.go:31] will retry after 756.053302ms: waiting for machine to come up
	I0729 19:44:15.667015 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:15.667487 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | unable to find current IP address of domain old-k8s-version-021528 in network mk-old-k8s-version-021528
	I0729 19:44:15.667518 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | I0729 19:44:15.667434 1121841 retry.go:31] will retry after 729.442111ms: waiting for machine to come up
	I0729 19:44:16.398540 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:16.399139 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | unable to find current IP address of domain old-k8s-version-021528 in network mk-old-k8s-version-021528
	I0729 19:44:16.399191 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | I0729 19:44:16.399060 1121841 retry.go:31] will retry after 1.131574034s: waiting for machine to come up
	I0729 19:44:17.531966 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:17.532448 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | unable to find current IP address of domain old-k8s-version-021528 in network mk-old-k8s-version-021528
	I0729 19:44:17.532480 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | I0729 19:44:17.532380 1121841 retry.go:31] will retry after 1.546547994s: waiting for machine to come up
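The retry.go lines interleaved above poll libvirt for the restarted VM's DHCP lease, sleeping a growing, jittered interval between attempts (228ms, 241ms, 475ms, and so on). A bare-bones sketch of that wait-for-IP loop; the lookupIP helper is a hypothetical stand-in for the libvirt lease query, not a real API:

// sketch: retry with a growing, jittered delay until the machine reports an IP
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP is a placeholder for the real libvirt DHCP-lease lookup.
func lookupIP() (string, error) { return "", errors.New("unable to find current IP address") }

func main() {
	delay := 200 * time.Millisecond
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(); err == nil {
			fmt.Println("machine is up at", ip)
			return
		}
		jittered := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", jittered)
		time.Sleep(jittered)
		delay = delay * 3 / 2 // grow the base delay between attempts
	}
	fmt.Println("timed out waiting for machine to come up")
}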
	I0729 19:44:15.433310 1120280 pod_ready.go:92] pod "coredns-7db6d8ff4d-q6jm9" in "kube-system" namespace has status "Ready":"True"
	I0729 19:44:15.433336 1120280 pod_ready.go:81] duration metric: took 9.507558167s for pod "coredns-7db6d8ff4d-q6jm9" in "kube-system" namespace to be "Ready" ...
	I0729 19:44:15.433353 1120280 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-358053" in "kube-system" namespace to be "Ready" ...
	I0729 19:44:15.438725 1120280 pod_ready.go:92] pod "etcd-embed-certs-358053" in "kube-system" namespace has status "Ready":"True"
	I0729 19:44:15.438747 1120280 pod_ready.go:81] duration metric: took 5.385786ms for pod "etcd-embed-certs-358053" in "kube-system" namespace to be "Ready" ...
	I0729 19:44:15.438758 1120280 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-358053" in "kube-system" namespace to be "Ready" ...
	I0729 19:44:15.444196 1120280 pod_ready.go:92] pod "kube-apiserver-embed-certs-358053" in "kube-system" namespace has status "Ready":"True"
	I0729 19:44:15.444214 1120280 pod_ready.go:81] duration metric: took 5.447798ms for pod "kube-apiserver-embed-certs-358053" in "kube-system" namespace to be "Ready" ...
	I0729 19:44:15.444228 1120280 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-358053" in "kube-system" namespace to be "Ready" ...
	I0729 19:44:16.452748 1120280 pod_ready.go:92] pod "kube-controller-manager-embed-certs-358053" in "kube-system" namespace has status "Ready":"True"
	I0729 19:44:16.452772 1120280 pod_ready.go:81] duration metric: took 1.00853566s for pod "kube-controller-manager-embed-certs-358053" in "kube-system" namespace to be "Ready" ...
	I0729 19:44:16.452784 1120280 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-lb7hb" in "kube-system" namespace to be "Ready" ...
	I0729 19:44:16.458635 1120280 pod_ready.go:92] pod "kube-proxy-lb7hb" in "kube-system" namespace has status "Ready":"True"
	I0729 19:44:16.458653 1120280 pod_ready.go:81] duration metric: took 5.862242ms for pod "kube-proxy-lb7hb" in "kube-system" namespace to be "Ready" ...
	I0729 19:44:16.458662 1120280 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-358053" in "kube-system" namespace to be "Ready" ...
	I0729 19:44:16.631200 1120280 pod_ready.go:92] pod "kube-scheduler-embed-certs-358053" in "kube-system" namespace has status "Ready":"True"
	I0729 19:44:16.631229 1120280 pod_ready.go:81] duration metric: took 172.559322ms for pod "kube-scheduler-embed-certs-358053" in "kube-system" namespace to be "Ready" ...
	I0729 19:44:16.631242 1120280 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace to be "Ready" ...
	I0729 19:44:18.638680 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
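The pod_ready.go lines above poll each kube-system pod until its Ready condition reports True, with a 4-minute cap per pod (coredns turns Ready after ~9.5s; metrics-server is still False here). A client-go sketch of the same readiness wait, with an illustrative kubeconfig path and poll interval:

// sketch: poll one pod until its Ready condition is True or a timeout expires
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // illustrative path
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-7db6d8ff4d-q6jm9", metav1.GetOptions{})
		if err == nil {
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					fmt.Println("pod is Ready")
					return
				}
			}
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to become Ready")
}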
	I0729 19:44:16.739626 1120587 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.317075688s)
	I0729 19:44:16.739689 1120587 crio.go:469] duration metric: took 2.317215237s to extract the tarball
	I0729 19:44:16.739702 1120587 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0729 19:44:16.777698 1120587 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 19:44:16.825740 1120587 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 19:44:16.825768 1120587 cache_images.go:84] Images are preloaded, skipping loading
	I0729 19:44:16.825777 1120587 kubeadm.go:934] updating node { 192.168.72.100 8444 v1.30.3 crio true true} ...
	I0729 19:44:16.825933 1120587 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-024652 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.100
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-024652 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 19:44:16.826030 1120587 ssh_runner.go:195] Run: crio config
	I0729 19:44:16.873727 1120587 cni.go:84] Creating CNI manager for ""
	I0729 19:44:16.873752 1120587 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 19:44:16.873764 1120587 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 19:44:16.873791 1120587 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.100 APIServerPort:8444 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-024652 NodeName:default-k8s-diff-port-024652 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.100"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.100 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/cer
ts/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 19:44:16.873929 1120587 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.100
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-024652"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.100
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.100"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0729 19:44:16.873990 1120587 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0729 19:44:16.884036 1120587 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 19:44:16.884126 1120587 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 19:44:16.893332 1120587 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0729 19:44:16.911950 1120587 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 19:44:16.930305 1120587 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0729 19:44:16.948353 1120587 ssh_runner.go:195] Run: grep 192.168.72.100	control-plane.minikube.internal$ /etc/hosts
	I0729 19:44:16.952431 1120587 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.100	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 19:44:16.964743 1120587 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 19:44:17.072244 1120587 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 19:44:17.088224 1120587 certs.go:68] Setting up /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/default-k8s-diff-port-024652 for IP: 192.168.72.100
	I0729 19:44:17.088256 1120587 certs.go:194] generating shared ca certs ...
	I0729 19:44:17.088280 1120587 certs.go:226] acquiring lock for ca certs: {Name:mkd1f0b3d7e82ac23e713dd6b75409e103935b02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 19:44:17.088482 1120587 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.key
	I0729 19:44:17.088563 1120587 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/proxy-client-ca.key
	I0729 19:44:17.088579 1120587 certs.go:256] generating profile certs ...
	I0729 19:44:17.088738 1120587 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/default-k8s-diff-port-024652/client.key
	I0729 19:44:17.088823 1120587 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/default-k8s-diff-port-024652/apiserver.key.4c9c937f
	I0729 19:44:17.088876 1120587 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/default-k8s-diff-port-024652/proxy-client.key
	I0729 19:44:17.089049 1120587 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/1062272.pem (1338 bytes)
	W0729 19:44:17.089093 1120587 certs.go:480] ignoring /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/1062272_empty.pem, impossibly tiny 0 bytes
	I0729 19:44:17.089109 1120587 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 19:44:17.089135 1120587 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem (1082 bytes)
	I0729 19:44:17.089156 1120587 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/cert.pem (1123 bytes)
	I0729 19:44:17.089180 1120587 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/key.pem (1679 bytes)
	I0729 19:44:17.089218 1120587 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/files/etc/ssl/certs/10622722.pem (1708 bytes)
	I0729 19:44:17.089954 1120587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 19:44:17.144094 1120587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0729 19:44:17.191515 1120587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 19:44:17.220210 1120587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0729 19:44:17.252381 1120587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/default-k8s-diff-port-024652/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0729 19:44:17.291881 1120587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/default-k8s-diff-port-024652/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0729 19:44:17.334114 1120587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/default-k8s-diff-port-024652/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 19:44:17.363726 1120587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/default-k8s-diff-port-024652/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0729 19:44:17.389190 1120587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 19:44:17.413683 1120587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/1062272.pem --> /usr/share/ca-certificates/1062272.pem (1338 bytes)
	I0729 19:44:17.441739 1120587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/files/etc/ssl/certs/10622722.pem --> /usr/share/ca-certificates/10622722.pem (1708 bytes)
	I0729 19:44:17.472609 1120587 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 19:44:17.489059 1120587 ssh_runner.go:195] Run: openssl version
	I0729 19:44:17.495020 1120587 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 19:44:17.507133 1120587 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 19:44:17.511759 1120587 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 18:17 /usr/share/ca-certificates/minikubeCA.pem
	I0729 19:44:17.511850 1120587 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 19:44:17.518120 1120587 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 19:44:17.528867 1120587 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1062272.pem && ln -fs /usr/share/ca-certificates/1062272.pem /etc/ssl/certs/1062272.pem"
	I0729 19:44:17.539695 1120587 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1062272.pem
	I0729 19:44:17.544063 1120587 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 18:30 /usr/share/ca-certificates/1062272.pem
	I0729 19:44:17.544113 1120587 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1062272.pem
	I0729 19:44:17.549785 1120587 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1062272.pem /etc/ssl/certs/51391683.0"
	I0729 19:44:17.560562 1120587 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10622722.pem && ln -fs /usr/share/ca-certificates/10622722.pem /etc/ssl/certs/10622722.pem"
	I0729 19:44:17.573597 1120587 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10622722.pem
	I0729 19:44:17.578089 1120587 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 18:30 /usr/share/ca-certificates/10622722.pem
	I0729 19:44:17.578137 1120587 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10622722.pem
	I0729 19:44:17.583614 1120587 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10622722.pem /etc/ssl/certs/3ec20f2e.0"
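Each openssl x509 -hash / ln -fs pair above installs a CA under /etc/ssl/certs/<subject-hash>.0, the directory layout OpenSSL-based clients use for CA lookup (b5213941.0 is the hash link for minikubeCA.pem in the log). A small sketch of the same hash-and-symlink step via os/exec, using local paths and no sudo:

// sketch: compute the OpenSSL subject hash of a CA and install the <hash>.0 symlink
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	cert := "/usr/share/ca-certificates/minikubeCA.pem"
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941", matching the log above
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // ln -fs semantics: replace any stale link
	if err := os.Symlink(cert, link); err != nil {
		panic(err)
	}
	fmt.Println("installed", link)
}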
	I0729 19:44:17.594903 1120587 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 19:44:17.599449 1120587 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 19:44:17.605325 1120587 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 19:44:17.611495 1120587 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 19:44:17.617663 1120587 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 19:44:17.623715 1120587 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 19:44:17.629845 1120587 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
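Each openssl x509 -checkend 86400 call above simply verifies that the certificate stays valid for at least the next 24 hours; a non-zero exit would force a cert regeneration before the cluster restart. The equivalent check in Go's crypto/x509 (file path illustrative):

// sketch: equivalent of `openssl x509 -noout -in <cert> -checkend 86400`
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("certificate will expire within 86400 seconds")
		os.Exit(1)
	}
	fmt.Println("certificate is valid for at least another 24h")
}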
	I0729 19:44:17.637607 1120587 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-024652 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.30.3 ClusterName:default-k8s-diff-port-024652 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.100 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration
:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 19:44:17.637725 1120587 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 19:44:17.637778 1120587 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 19:44:17.685777 1120587 cri.go:89] found id: ""
	I0729 19:44:17.685877 1120587 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 19:44:17.703296 1120587 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0729 19:44:17.703320 1120587 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0729 19:44:17.703387 1120587 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0729 19:44:17.715928 1120587 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0729 19:44:17.717371 1120587 kubeconfig.go:125] found "default-k8s-diff-port-024652" server: "https://192.168.72.100:8444"
	I0729 19:44:17.720536 1120587 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0729 19:44:17.732125 1120587 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.100
	I0729 19:44:17.732165 1120587 kubeadm.go:1160] stopping kube-system containers ...
	I0729 19:44:17.732207 1120587 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0729 19:44:17.732284 1120587 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 19:44:17.786419 1120587 cri.go:89] found id: ""
	I0729 19:44:17.786502 1120587 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0729 19:44:17.804866 1120587 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 19:44:17.815092 1120587 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 19:44:17.815113 1120587 kubeadm.go:157] found existing configuration files:
	
	I0729 19:44:17.815189 1120587 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0729 19:44:17.824963 1120587 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 19:44:17.825020 1120587 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 19:44:17.835349 1120587 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0729 19:44:17.846227 1120587 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 19:44:17.846290 1120587 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 19:44:17.859231 1120587 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0729 19:44:17.870794 1120587 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 19:44:17.870883 1120587 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 19:44:17.882317 1120587 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0729 19:44:17.891702 1120587 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 19:44:17.891757 1120587 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 19:44:17.901153 1120587 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 19:44:17.911253 1120587 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 19:44:18.040695 1120587 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 19:44:19.054689 1120587 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.013955991s)
	I0729 19:44:19.054724 1120587 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0729 19:44:19.255112 1120587 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 19:44:19.346186 1120587 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0729 19:44:19.462795 1120587 api_server.go:52] waiting for apiserver process to appear ...
	I0729 19:44:19.462938 1120587 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:19.963927 1120587 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:20.463691 1120587 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:20.504478 1120587 api_server.go:72] duration metric: took 1.041683096s to wait for apiserver process to appear ...
	I0729 19:44:20.504523 1120587 api_server.go:88] waiting for apiserver healthz status ...
	I0729 19:44:20.504552 1120587 api_server.go:253] Checking apiserver healthz at https://192.168.72.100:8444/healthz ...
	I0729 19:44:20.505202 1120587 api_server.go:269] stopped: https://192.168.72.100:8444/healthz: Get "https://192.168.72.100:8444/healthz": dial tcp 192.168.72.100:8444: connect: connection refused
	I0729 19:44:21.004771 1120587 api_server.go:253] Checking apiserver healthz at https://192.168.72.100:8444/healthz ...
	I0729 19:44:19.081196 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:19.081719 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | unable to find current IP address of domain old-k8s-version-021528 in network mk-old-k8s-version-021528
	I0729 19:44:19.081749 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | I0729 19:44:19.081668 1121841 retry.go:31] will retry after 2.079913941s: waiting for machine to come up
	I0729 19:44:21.163461 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:21.163980 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | unable to find current IP address of domain old-k8s-version-021528 in network mk-old-k8s-version-021528
	I0729 19:44:21.164066 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | I0729 19:44:21.163965 1121841 retry.go:31] will retry after 2.355802923s: waiting for machine to come up
	I0729 19:44:20.638745 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:44:22.638835 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:44:23.789983 1120587 api_server.go:279] https://192.168.72.100:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 19:44:23.790018 1120587 api_server.go:103] status: https://192.168.72.100:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 19:44:23.790033 1120587 api_server.go:253] Checking apiserver healthz at https://192.168.72.100:8444/healthz ...
	I0729 19:44:23.843047 1120587 api_server.go:279] https://192.168.72.100:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 19:44:23.843090 1120587 api_server.go:103] status: https://192.168.72.100:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 19:44:24.005370 1120587 api_server.go:253] Checking apiserver healthz at https://192.168.72.100:8444/healthz ...
	I0729 19:44:24.009941 1120587 api_server.go:279] https://192.168.72.100:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 19:44:24.009973 1120587 api_server.go:103] status: https://192.168.72.100:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 19:44:24.505118 1120587 api_server.go:253] Checking apiserver healthz at https://192.168.72.100:8444/healthz ...
	I0729 19:44:24.512838 1120587 api_server.go:279] https://192.168.72.100:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 19:44:24.512874 1120587 api_server.go:103] status: https://192.168.72.100:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 19:44:25.005014 1120587 api_server.go:253] Checking apiserver healthz at https://192.168.72.100:8444/healthz ...
	I0729 19:44:25.023222 1120587 api_server.go:279] https://192.168.72.100:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 19:44:25.023264 1120587 api_server.go:103] status: https://192.168.72.100:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 19:44:25.504748 1120587 api_server.go:253] Checking apiserver healthz at https://192.168.72.100:8444/healthz ...
	I0729 19:44:25.511449 1120587 api_server.go:279] https://192.168.72.100:8444/healthz returned 200:
	ok
	I0729 19:44:25.521987 1120587 api_server.go:141] control plane version: v1.30.3
	I0729 19:44:25.522018 1120587 api_server.go:131] duration metric: took 5.017487159s to wait for apiserver health ...
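	(Editor's note: the retry loop above keeps requesting https://192.168.72.100:8444/healthz, tolerating "connection refused", 403 and 500 responses until the endpoint finally returns 200 at 19:44:25.511. The sketch below is a hypothetical, simplified poller in Go — not minikube's api_server.go; TLS verification is skipped only because the apiserver certificate here is self-signed.)

	// waitForHealthz polls the endpoint until it answers 200 OK or the timeout expires.
	// Illustrative sketch of the healthz wait shown in the log above.
	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout:   2 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // healthz returned 200
				}
				// 403 or 500 (bootstrap post-start hooks still failing): fall through and retry
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("timed out waiting for %s", url)
	}

	func main() {
		if err := waitForHealthz("https://192.168.72.100:8444/healthz", 2*time.Minute); err != nil {
			fmt.Println(err)
		}
	}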
	I0729 19:44:25.522029 1120587 cni.go:84] Creating CNI manager for ""
	I0729 19:44:25.522038 1120587 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 19:44:25.523778 1120587 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 19:44:25.524925 1120587 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 19:44:25.541108 1120587 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0729 19:44:25.564225 1120587 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 19:44:25.574600 1120587 system_pods.go:59] 8 kube-system pods found
	I0729 19:44:25.574643 1120587 system_pods.go:61] "coredns-7db6d8ff4d-8mccr" [ce2eb102-1016-4a2d-8dee-561920c01b5a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0729 19:44:25.574664 1120587 system_pods.go:61] "etcd-default-k8s-diff-port-024652" [f3c68e2f-7cef-4afc-bd26-3705afd16f01] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0729 19:44:25.574676 1120587 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-024652" [656786e6-4ca6-45dc-9274-89ca8540c707] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0729 19:44:25.574697 1120587 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-024652" [10b805dd-238a-49a8-8c3f-1c31004d56dd] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0729 19:44:25.574710 1120587 system_pods.go:61] "kube-proxy-l4g78" [c24c5bc0-131b-4d02-a0f1-d398723292eb] Running
	I0729 19:44:25.574717 1120587 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-024652" [5bb2daf3-9a22-4f80-95b6-ded3c31e872e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0729 19:44:25.574725 1120587 system_pods.go:61] "metrics-server-569cc877fc-bvkv6" [247c5a96-5bb3-4174-9219-a96591f53cbb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 19:44:25.574734 1120587 system_pods.go:61] "storage-provisioner" [a4f216b0-055a-4305-a93f-910a9a10e725] Running
	I0729 19:44:25.574744 1120587 system_pods.go:74] duration metric: took 10.494475ms to wait for pod list to return data ...
	I0729 19:44:25.574757 1120587 node_conditions.go:102] verifying NodePressure condition ...
	I0729 19:44:25.577735 1120587 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 19:44:25.577757 1120587 node_conditions.go:123] node cpu capacity is 2
	I0729 19:44:25.577778 1120587 node_conditions.go:105] duration metric: took 3.012688ms to run NodePressure ...
	I0729 19:44:25.577795 1120587 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 19:44:25.851094 1120587 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0729 19:44:25.860023 1120587 kubeadm.go:739] kubelet initialised
	I0729 19:44:25.860050 1120587 kubeadm.go:740] duration metric: took 8.921765ms waiting for restarted kubelet to initialise ...
	I0729 19:44:25.860062 1120587 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 19:44:25.867130 1120587 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-8mccr" in "kube-system" namespace to be "Ready" ...
	I0729 19:44:23.523186 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:23.523741 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | unable to find current IP address of domain old-k8s-version-021528 in network mk-old-k8s-version-021528
	I0729 19:44:23.523783 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | I0729 19:44:23.523684 1121841 retry.go:31] will retry after 2.899059572s: waiting for machine to come up
	I0729 19:44:26.426805 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:26.427211 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | unable to find current IP address of domain old-k8s-version-021528 in network mk-old-k8s-version-021528
	I0729 19:44:26.427267 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | I0729 19:44:26.427152 1121841 retry.go:31] will retry after 3.723478189s: waiting for machine to come up
	I0729 19:44:25.138056 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:44:27.139419 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:44:29.638107 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:44:27.872221 1120587 pod_ready.go:102] pod "coredns-7db6d8ff4d-8mccr" in "kube-system" namespace has status "Ready":"False"
	I0729 19:44:29.873611 1120587 pod_ready.go:102] pod "coredns-7db6d8ff4d-8mccr" in "kube-system" namespace has status "Ready":"False"
	I0729 19:44:31.571895 1119948 start.go:364] duration metric: took 55.319517148s to acquireMachinesLock for "no-preload-843792"
	I0729 19:44:31.571969 1119948 start.go:96] Skipping create...Using existing machine configuration
	I0729 19:44:31.571988 1119948 fix.go:54] fixHost starting: 
	I0729 19:44:31.572421 1119948 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:44:31.572460 1119948 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:44:31.589868 1119948 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43017
	I0729 19:44:31.590253 1119948 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:44:31.590725 1119948 main.go:141] libmachine: Using API Version  1
	I0729 19:44:31.590752 1119948 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:44:31.591088 1119948 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:44:31.591274 1119948 main.go:141] libmachine: (no-preload-843792) Calling .DriverName
	I0729 19:44:31.591398 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetState
	I0729 19:44:31.592878 1119948 fix.go:112] recreateIfNeeded on no-preload-843792: state=Stopped err=<nil>
	I0729 19:44:31.592905 1119948 main.go:141] libmachine: (no-preload-843792) Calling .DriverName
	W0729 19:44:31.593054 1119948 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 19:44:31.594713 1119948 out.go:177] * Restarting existing kvm2 VM for "no-preload-843792" ...
	I0729 19:44:30.152545 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:30.153061 1120970 main.go:141] libmachine: (old-k8s-version-021528) Found IP for machine: 192.168.39.65
	I0729 19:44:30.153088 1120970 main.go:141] libmachine: (old-k8s-version-021528) Reserving static IP address...
	I0729 19:44:30.153101 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has current primary IP address 192.168.39.65 and MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:30.153518 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | found host DHCP lease matching {name: "old-k8s-version-021528", mac: "52:54:00:12:c7:d2", ip: "192.168.39.65"} in network mk-old-k8s-version-021528: {Iface:virbr1 ExpiryTime:2024-07-29 20:44:23 +0000 UTC Type:0 Mac:52:54:00:12:c7:d2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:old-k8s-version-021528 Clientid:01:52:54:00:12:c7:d2}
	I0729 19:44:30.153547 1120970 main.go:141] libmachine: (old-k8s-version-021528) Reserved static IP address: 192.168.39.65
	I0729 19:44:30.153567 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | skip adding static IP to network mk-old-k8s-version-021528 - found existing host DHCP lease matching {name: "old-k8s-version-021528", mac: "52:54:00:12:c7:d2", ip: "192.168.39.65"}
	I0729 19:44:30.153606 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | Getting to WaitForSSH function...
	I0729 19:44:30.153646 1120970 main.go:141] libmachine: (old-k8s-version-021528) Waiting for SSH to be available...
	I0729 19:44:30.155687 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:30.155938 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:c7:d2", ip: ""} in network mk-old-k8s-version-021528: {Iface:virbr1 ExpiryTime:2024-07-29 20:44:23 +0000 UTC Type:0 Mac:52:54:00:12:c7:d2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:old-k8s-version-021528 Clientid:01:52:54:00:12:c7:d2}
	I0729 19:44:30.155968 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined IP address 192.168.39.65 and MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:30.156104 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | Using SSH client type: external
	I0729 19:44:30.156126 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | Using SSH private key: /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/old-k8s-version-021528/id_rsa (-rw-------)
	I0729 19:44:30.156157 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.65 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/old-k8s-version-021528/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 19:44:30.156170 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | About to run SSH command:
	I0729 19:44:30.156179 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | exit 0
	I0729 19:44:30.286787 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | SSH cmd err, output: <nil>: 
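	(Editor's note: the WaitForSSH step above shells out to the system ssh binary with host-key checking disabled and runs `exit 0` until the command succeeds. The following is a rough, hypothetical Go equivalent using os/exec, with the flag set abbreviated from the DBG line above — it is not the libmachine implementation.)

	// waitForSSH repeatedly runs `ssh ... exit 0` and returns once sshd accepts the login.
	// Simplified stand-in for the "Waiting for SSH to be available" step in the log.
	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func waitForSSH(user, ip, keyPath string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			cmd := exec.Command("ssh",
				"-o", "StrictHostKeyChecking=no",
				"-o", "UserKnownHostsFile=/dev/null",
				"-o", "ConnectTimeout=10",
				"-i", keyPath,
				fmt.Sprintf("%s@%s", user, ip),
				"exit 0")
			if err := cmd.Run(); err == nil {
				return nil // `exit 0` succeeded, so the guest's sshd is up
			}
			time.Sleep(2 * time.Second)
		}
		return fmt.Errorf("ssh to %s@%s not available after %s", user, ip, timeout)
	}

	func main() {
		err := waitForSSH("docker", "192.168.39.65",
			"/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/old-k8s-version-021528/id_rsa",
			2*time.Minute)
		fmt.Println(err)
	}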
	I0729 19:44:30.287161 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetConfigRaw
	I0729 19:44:30.287816 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetIP
	I0729 19:44:30.290268 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:30.290614 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:c7:d2", ip: ""} in network mk-old-k8s-version-021528: {Iface:virbr1 ExpiryTime:2024-07-29 20:44:23 +0000 UTC Type:0 Mac:52:54:00:12:c7:d2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:old-k8s-version-021528 Clientid:01:52:54:00:12:c7:d2}
	I0729 19:44:30.290645 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined IP address 192.168.39.65 and MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:30.290866 1120970 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/old-k8s-version-021528/config.json ...
	I0729 19:44:30.291054 1120970 machine.go:94] provisionDockerMachine start ...
	I0729 19:44:30.291074 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .DriverName
	I0729 19:44:30.291307 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHHostname
	I0729 19:44:30.293399 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:30.293729 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:c7:d2", ip: ""} in network mk-old-k8s-version-021528: {Iface:virbr1 ExpiryTime:2024-07-29 20:44:23 +0000 UTC Type:0 Mac:52:54:00:12:c7:d2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:old-k8s-version-021528 Clientid:01:52:54:00:12:c7:d2}
	I0729 19:44:30.293759 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined IP address 192.168.39.65 and MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:30.293872 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHPort
	I0729 19:44:30.294064 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHKeyPath
	I0729 19:44:30.294228 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHKeyPath
	I0729 19:44:30.294362 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHUsername
	I0729 19:44:30.294510 1120970 main.go:141] libmachine: Using SSH client type: native
	I0729 19:44:30.294729 1120970 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.65 22 <nil> <nil>}
	I0729 19:44:30.294741 1120970 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 19:44:30.406918 1120970 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0729 19:44:30.406947 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetMachineName
	I0729 19:44:30.407214 1120970 buildroot.go:166] provisioning hostname "old-k8s-version-021528"
	I0729 19:44:30.407256 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetMachineName
	I0729 19:44:30.407478 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHHostname
	I0729 19:44:30.410077 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:30.410396 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:c7:d2", ip: ""} in network mk-old-k8s-version-021528: {Iface:virbr1 ExpiryTime:2024-07-29 20:44:23 +0000 UTC Type:0 Mac:52:54:00:12:c7:d2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:old-k8s-version-021528 Clientid:01:52:54:00:12:c7:d2}
	I0729 19:44:30.410421 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined IP address 192.168.39.65 and MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:30.410586 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHPort
	I0729 19:44:30.410766 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHKeyPath
	I0729 19:44:30.410932 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHKeyPath
	I0729 19:44:30.411068 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHUsername
	I0729 19:44:30.411245 1120970 main.go:141] libmachine: Using SSH client type: native
	I0729 19:44:30.411488 1120970 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.65 22 <nil> <nil>}
	I0729 19:44:30.411503 1120970 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-021528 && echo "old-k8s-version-021528" | sudo tee /etc/hostname
	I0729 19:44:30.541004 1120970 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-021528
	
	I0729 19:44:30.541037 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHHostname
	I0729 19:44:30.543946 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:30.544343 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:c7:d2", ip: ""} in network mk-old-k8s-version-021528: {Iface:virbr1 ExpiryTime:2024-07-29 20:44:23 +0000 UTC Type:0 Mac:52:54:00:12:c7:d2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:old-k8s-version-021528 Clientid:01:52:54:00:12:c7:d2}
	I0729 19:44:30.544372 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined IP address 192.168.39.65 and MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:30.544503 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHPort
	I0729 19:44:30.544694 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHKeyPath
	I0729 19:44:30.544856 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHKeyPath
	I0729 19:44:30.545032 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHUsername
	I0729 19:44:30.545233 1120970 main.go:141] libmachine: Using SSH client type: native
	I0729 19:44:30.545409 1120970 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.65 22 <nil> <nil>}
	I0729 19:44:30.545424 1120970 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-021528' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-021528/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-021528' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 19:44:30.665246 1120970 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 19:44:30.665281 1120970 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19312-1055011/.minikube CaCertPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19312-1055011/.minikube}
	I0729 19:44:30.665317 1120970 buildroot.go:174] setting up certificates
	I0729 19:44:30.665328 1120970 provision.go:84] configureAuth start
	I0729 19:44:30.665339 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetMachineName
	I0729 19:44:30.665621 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetIP
	I0729 19:44:30.668162 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:30.668540 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:c7:d2", ip: ""} in network mk-old-k8s-version-021528: {Iface:virbr1 ExpiryTime:2024-07-29 20:44:23 +0000 UTC Type:0 Mac:52:54:00:12:c7:d2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:old-k8s-version-021528 Clientid:01:52:54:00:12:c7:d2}
	I0729 19:44:30.668568 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined IP address 192.168.39.65 and MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:30.668743 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHHostname
	I0729 19:44:30.670898 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:30.671447 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:c7:d2", ip: ""} in network mk-old-k8s-version-021528: {Iface:virbr1 ExpiryTime:2024-07-29 20:44:23 +0000 UTC Type:0 Mac:52:54:00:12:c7:d2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:old-k8s-version-021528 Clientid:01:52:54:00:12:c7:d2}
	I0729 19:44:30.671471 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined IP address 192.168.39.65 and MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:30.671618 1120970 provision.go:143] copyHostCerts
	I0729 19:44:30.671691 1120970 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-1055011/.minikube/cert.pem, removing ...
	I0729 19:44:30.671710 1120970 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-1055011/.minikube/cert.pem
	I0729 19:44:30.671790 1120970 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19312-1055011/.minikube/cert.pem (1123 bytes)
	I0729 19:44:30.671907 1120970 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-1055011/.minikube/key.pem, removing ...
	I0729 19:44:30.671917 1120970 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-1055011/.minikube/key.pem
	I0729 19:44:30.671953 1120970 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19312-1055011/.minikube/key.pem (1679 bytes)
	I0729 19:44:30.672043 1120970 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.pem, removing ...
	I0729 19:44:30.672052 1120970 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.pem
	I0729 19:44:30.672085 1120970 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.pem (1082 bytes)
	I0729 19:44:30.672166 1120970 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-021528 san=[127.0.0.1 192.168.39.65 localhost minikube old-k8s-version-021528]
	I0729 19:44:30.888016 1120970 provision.go:177] copyRemoteCerts
	I0729 19:44:30.888072 1120970 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 19:44:30.888115 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHHostname
	I0729 19:44:30.890739 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:30.891115 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:c7:d2", ip: ""} in network mk-old-k8s-version-021528: {Iface:virbr1 ExpiryTime:2024-07-29 20:44:23 +0000 UTC Type:0 Mac:52:54:00:12:c7:d2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:old-k8s-version-021528 Clientid:01:52:54:00:12:c7:d2}
	I0729 19:44:30.891148 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined IP address 192.168.39.65 and MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:30.891288 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHPort
	I0729 19:44:30.891499 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHKeyPath
	I0729 19:44:30.891689 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHUsername
	I0729 19:44:30.891862 1120970 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/old-k8s-version-021528/id_rsa Username:docker}
	I0729 19:44:30.976898 1120970 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0729 19:44:31.000793 1120970 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0729 19:44:31.024837 1120970 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0729 19:44:31.048325 1120970 provision.go:87] duration metric: took 382.981006ms to configureAuth
	I0729 19:44:31.048358 1120970 buildroot.go:189] setting minikube options for container-runtime
	I0729 19:44:31.048560 1120970 config.go:182] Loaded profile config "old-k8s-version-021528": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0729 19:44:31.048640 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHHostname
	I0729 19:44:31.051230 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:31.051576 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:c7:d2", ip: ""} in network mk-old-k8s-version-021528: {Iface:virbr1 ExpiryTime:2024-07-29 20:44:23 +0000 UTC Type:0 Mac:52:54:00:12:c7:d2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:old-k8s-version-021528 Clientid:01:52:54:00:12:c7:d2}
	I0729 19:44:31.051605 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined IP address 192.168.39.65 and MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:31.051754 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHPort
	I0729 19:44:31.051994 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHKeyPath
	I0729 19:44:31.052191 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHKeyPath
	I0729 19:44:31.052368 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHUsername
	I0729 19:44:31.052568 1120970 main.go:141] libmachine: Using SSH client type: native
	I0729 19:44:31.052828 1120970 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.65 22 <nil> <nil>}
	I0729 19:44:31.052853 1120970 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 19:44:31.320227 1120970 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 19:44:31.320259 1120970 machine.go:97] duration metric: took 1.0291903s to provisionDockerMachine
	I0729 19:44:31.320276 1120970 start.go:293] postStartSetup for "old-k8s-version-021528" (driver="kvm2")
	I0729 19:44:31.320297 1120970 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 19:44:31.320335 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .DriverName
	I0729 19:44:31.320669 1120970 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 19:44:31.320702 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHHostname
	I0729 19:44:31.323379 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:31.323774 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:c7:d2", ip: ""} in network mk-old-k8s-version-021528: {Iface:virbr1 ExpiryTime:2024-07-29 20:44:23 +0000 UTC Type:0 Mac:52:54:00:12:c7:d2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:old-k8s-version-021528 Clientid:01:52:54:00:12:c7:d2}
	I0729 19:44:31.323807 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined IP address 192.168.39.65 and MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:31.323903 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHPort
	I0729 19:44:31.324112 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHKeyPath
	I0729 19:44:31.324291 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHUsername
	I0729 19:44:31.324431 1120970 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/old-k8s-version-021528/id_rsa Username:docker}
	I0729 19:44:31.415208 1120970 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 19:44:31.419884 1120970 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 19:44:31.419911 1120970 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-1055011/.minikube/addons for local assets ...
	I0729 19:44:31.419981 1120970 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-1055011/.minikube/files for local assets ...
	I0729 19:44:31.420093 1120970 filesync.go:149] local asset: /home/jenkins/minikube-integration/19312-1055011/.minikube/files/etc/ssl/certs/10622722.pem -> 10622722.pem in /etc/ssl/certs
	I0729 19:44:31.420214 1120970 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 19:44:31.431055 1120970 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/files/etc/ssl/certs/10622722.pem --> /etc/ssl/certs/10622722.pem (1708 bytes)
	I0729 19:44:31.454082 1120970 start.go:296] duration metric: took 133.793908ms for postStartSetup
	I0729 19:44:31.454120 1120970 fix.go:56] duration metric: took 20.034560069s for fixHost
	I0729 19:44:31.454147 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHHostname
	I0729 19:44:31.456757 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:31.457097 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:c7:d2", ip: ""} in network mk-old-k8s-version-021528: {Iface:virbr1 ExpiryTime:2024-07-29 20:44:23 +0000 UTC Type:0 Mac:52:54:00:12:c7:d2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:old-k8s-version-021528 Clientid:01:52:54:00:12:c7:d2}
	I0729 19:44:31.457130 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined IP address 192.168.39.65 and MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:31.457284 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHPort
	I0729 19:44:31.457528 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHKeyPath
	I0729 19:44:31.457737 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHKeyPath
	I0729 19:44:31.457853 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHUsername
	I0729 19:44:31.458027 1120970 main.go:141] libmachine: Using SSH client type: native
	I0729 19:44:31.458189 1120970 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.65 22 <nil> <nil>}
	I0729 19:44:31.458199 1120970 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0729 19:44:31.571713 1120970 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722282271.544930204
	
	I0729 19:44:31.571744 1120970 fix.go:216] guest clock: 1722282271.544930204
	I0729 19:44:31.571758 1120970 fix.go:229] Guest: 2024-07-29 19:44:31.544930204 +0000 UTC Remote: 2024-07-29 19:44:31.454125155 +0000 UTC m=+213.509073295 (delta=90.805049ms)
	I0729 19:44:31.571785 1120970 fix.go:200] guest clock delta is within tolerance: 90.805049ms
	I0729 19:44:31.571791 1120970 start.go:83] releasing machines lock for "old-k8s-version-021528", held for 20.152267504s
	I0729 19:44:31.571817 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .DriverName
	I0729 19:44:31.572102 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetIP
	I0729 19:44:31.575385 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:31.575790 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:c7:d2", ip: ""} in network mk-old-k8s-version-021528: {Iface:virbr1 ExpiryTime:2024-07-29 20:44:23 +0000 UTC Type:0 Mac:52:54:00:12:c7:d2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:old-k8s-version-021528 Clientid:01:52:54:00:12:c7:d2}
	I0729 19:44:31.575815 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined IP address 192.168.39.65 and MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:31.576012 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .DriverName
	I0729 19:44:31.576508 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .DriverName
	I0729 19:44:31.576692 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .DriverName
	I0729 19:44:31.576786 1120970 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 19:44:31.576828 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHHostname
	I0729 19:44:31.576918 1120970 ssh_runner.go:195] Run: cat /version.json
	I0729 19:44:31.576940 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHHostname
	I0729 19:44:31.579737 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:31.579994 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:31.580091 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:c7:d2", ip: ""} in network mk-old-k8s-version-021528: {Iface:virbr1 ExpiryTime:2024-07-29 20:44:23 +0000 UTC Type:0 Mac:52:54:00:12:c7:d2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:old-k8s-version-021528 Clientid:01:52:54:00:12:c7:d2}
	I0729 19:44:31.580130 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined IP address 192.168.39.65 and MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:31.580379 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:c7:d2", ip: ""} in network mk-old-k8s-version-021528: {Iface:virbr1 ExpiryTime:2024-07-29 20:44:23 +0000 UTC Type:0 Mac:52:54:00:12:c7:d2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:old-k8s-version-021528 Clientid:01:52:54:00:12:c7:d2}
	I0729 19:44:31.580409 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined IP address 192.168.39.65 and MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:31.580418 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHPort
	I0729 19:44:31.580577 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHPort
	I0729 19:44:31.580661 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHKeyPath
	I0729 19:44:31.580838 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHKeyPath
	I0729 19:44:31.580879 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHUsername
	I0729 19:44:31.581025 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHUsername
	I0729 19:44:31.581021 1120970 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/old-k8s-version-021528/id_rsa Username:docker}
	I0729 19:44:31.581164 1120970 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/old-k8s-version-021528/id_rsa Username:docker}
	I0729 19:44:31.682902 1120970 ssh_runner.go:195] Run: systemctl --version
	I0729 19:44:31.688675 1120970 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 19:44:31.836374 1120970 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 19:44:31.844215 1120970 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 19:44:31.844275 1120970 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 19:44:31.864647 1120970 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 19:44:31.864671 1120970 start.go:495] detecting cgroup driver to use...
	I0729 19:44:31.864744 1120970 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 19:44:31.881197 1120970 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 19:44:31.895022 1120970 docker.go:217] disabling cri-docker service (if available) ...
	I0729 19:44:31.895085 1120970 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 19:44:31.908584 1120970 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 19:44:31.922321 1120970 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 19:44:32.039427 1120970 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 19:44:32.203236 1120970 docker.go:233] disabling docker service ...
	I0729 19:44:32.203335 1120970 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 19:44:32.217523 1120970 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 19:44:32.236065 1120970 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 19:44:32.355769 1120970 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 19:44:32.473160 1120970 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 19:44:32.486314 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 19:44:32.504270 1120970 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0729 19:44:32.504359 1120970 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:44:32.514928 1120970 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 19:44:32.514995 1120970 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:44:32.528822 1120970 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:44:32.543599 1120970 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
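	The sed invocations above rewrite /etc/crio/crio.conf.d/02-crio.conf so CRI-O uses the v1.20-era pause image and the cgroupfs cgroup manager, with conmon placed in the pod cgroup. A sketch of the same edits, assuming that drop-in file already exists:
	# Mirror the CRI-O configuration edits from the log above.
	CONF=/etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' "$CONF"
	sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$CONF"
	sudo sed -i '/conmon_cgroup = .*/d' "$CONF"                        # drop any stale setting
	sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$CONF" # re-add it after cgroup_manager
	sudo systemctl daemon-reload && sudo systemctl restart crio        # pick up the new config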
	I0729 19:44:32.555853 1120970 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 19:44:32.568184 1120970 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 19:44:32.577443 1120970 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 19:44:32.577580 1120970 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 19:44:32.590636 1120970 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 19:44:32.600995 1120970 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 19:44:32.739544 1120970 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 19:44:32.886433 1120970 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 19:44:32.886507 1120970 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 19:44:32.892072 1120970 start.go:563] Will wait 60s for crictl version
	I0729 19:44:32.892137 1120970 ssh_runner.go:195] Run: which crictl
	I0729 19:44:32.896003 1120970 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 19:44:32.939843 1120970 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 19:44:32.939934 1120970 ssh_runner.go:195] Run: crio --version
	I0729 19:44:32.968301 1120970 ssh_runner.go:195] Run: crio --version
	I0729 19:44:32.995612 1120970 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0729 19:44:31.595855 1119948 main.go:141] libmachine: (no-preload-843792) Calling .Start
	I0729 19:44:31.596024 1119948 main.go:141] libmachine: (no-preload-843792) Ensuring networks are active...
	I0729 19:44:31.596802 1119948 main.go:141] libmachine: (no-preload-843792) Ensuring network default is active
	I0729 19:44:31.597159 1119948 main.go:141] libmachine: (no-preload-843792) Ensuring network mk-no-preload-843792 is active
	I0729 19:44:31.597570 1119948 main.go:141] libmachine: (no-preload-843792) Getting domain xml...
	I0729 19:44:31.598244 1119948 main.go:141] libmachine: (no-preload-843792) Creating domain...
	I0729 19:44:32.903649 1119948 main.go:141] libmachine: (no-preload-843792) Waiting to get IP...
	I0729 19:44:32.904535 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:32.905024 1119948 main.go:141] libmachine: (no-preload-843792) DBG | unable to find current IP address of domain no-preload-843792 in network mk-no-preload-843792
	I0729 19:44:32.905113 1119948 main.go:141] libmachine: (no-preload-843792) DBG | I0729 19:44:32.904992 1122027 retry.go:31] will retry after 213.578895ms: waiting for machine to come up
	I0729 19:44:33.120474 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:33.120922 1119948 main.go:141] libmachine: (no-preload-843792) DBG | unable to find current IP address of domain no-preload-843792 in network mk-no-preload-843792
	I0729 19:44:33.121007 1119948 main.go:141] libmachine: (no-preload-843792) DBG | I0729 19:44:33.120907 1122027 retry.go:31] will retry after 265.999253ms: waiting for machine to come up
	I0729 19:44:33.388577 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:33.389007 1119948 main.go:141] libmachine: (no-preload-843792) DBG | unable to find current IP address of domain no-preload-843792 in network mk-no-preload-843792
	I0729 19:44:33.389026 1119948 main.go:141] libmachine: (no-preload-843792) DBG | I0729 19:44:33.388967 1122027 retry.go:31] will retry after 393.491378ms: waiting for machine to come up
	I0729 19:44:31.639857 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:44:34.139327 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:44:31.874661 1120587 pod_ready.go:102] pod "coredns-7db6d8ff4d-8mccr" in "kube-system" namespace has status "Ready":"False"
	I0729 19:44:33.875758 1120587 pod_ready.go:102] pod "coredns-7db6d8ff4d-8mccr" in "kube-system" namespace has status "Ready":"False"
	I0729 19:44:35.875952 1120587 pod_ready.go:102] pod "coredns-7db6d8ff4d-8mccr" in "kube-system" namespace has status "Ready":"False"
	I0729 19:44:32.996971 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetIP
	I0729 19:44:33.000232 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:33.000668 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:c7:d2", ip: ""} in network mk-old-k8s-version-021528: {Iface:virbr1 ExpiryTime:2024-07-29 20:44:23 +0000 UTC Type:0 Mac:52:54:00:12:c7:d2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:old-k8s-version-021528 Clientid:01:52:54:00:12:c7:d2}
	I0729 19:44:33.000694 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined IP address 192.168.39.65 and MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:33.000856 1120970 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0729 19:44:33.005258 1120970 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 19:44:33.018698 1120970 kubeadm.go:883] updating cluster {Name:old-k8s-version-021528 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.20.0 ClusterName:old-k8s-version-021528 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.65 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fal
se MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 19:44:33.018840 1120970 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0729 19:44:33.018934 1120970 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 19:44:33.089122 1120970 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0729 19:44:33.089197 1120970 ssh_runner.go:195] Run: which lz4
	I0729 19:44:33.093346 1120970 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0729 19:44:33.097766 1120970 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0729 19:44:33.097802 1120970 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0729 19:44:34.739542 1120970 crio.go:462] duration metric: took 1.646235601s to copy over tarball
	I0729 19:44:34.739647 1120970 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0729 19:44:37.734665 1120970 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.994963407s)
	I0729 19:44:37.734702 1120970 crio.go:469] duration metric: took 2.995126134s to extract the tarball
	I0729 19:44:37.734712 1120970 ssh_runner.go:146] rm: /preloaded.tar.lz4
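	Because the guest's image store does not contain the v1.20.0 control-plane images, the log falls back to copying the cached preload tarball into the VM and unpacking it under /var. A sketch of that extract-and-clean-up step, run inside the guest, with the paths and tar flags taken from the log:
	# Unpack a minikube preload tarball into the guest's /var, as the log does.
	sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	sudo rm -f /preloaded.tar.lz4
	# List what the container runtime can see afterwards.
	sudo crictl images --output json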
	I0729 19:44:37.781443 1120970 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 19:44:37.820392 1120970 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0729 19:44:37.820426 1120970 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0729 19:44:37.820531 1120970 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 19:44:37.820610 1120970 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0729 19:44:37.820708 1120970 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0729 19:44:37.820536 1120970 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 19:44:37.820560 1120970 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0729 19:44:37.820541 1120970 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0729 19:44:37.820573 1120970 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0729 19:44:37.820587 1120970 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0729 19:44:37.822301 1120970 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 19:44:37.822309 1120970 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0729 19:44:37.822313 1120970 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0729 19:44:37.822326 1120970 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0729 19:44:37.822397 1120970 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0729 19:44:37.822432 1120970 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0729 19:44:37.822438 1120970 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0729 19:44:37.822301 1120970 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 19:44:33.785078 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:33.785626 1119948 main.go:141] libmachine: (no-preload-843792) DBG | unable to find current IP address of domain no-preload-843792 in network mk-no-preload-843792
	I0729 19:44:33.785654 1119948 main.go:141] libmachine: (no-preload-843792) DBG | I0729 19:44:33.785530 1122027 retry.go:31] will retry after 411.274676ms: waiting for machine to come up
	I0729 19:44:34.198884 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:34.199471 1119948 main.go:141] libmachine: (no-preload-843792) DBG | unable to find current IP address of domain no-preload-843792 in network mk-no-preload-843792
	I0729 19:44:34.199512 1119948 main.go:141] libmachine: (no-preload-843792) DBG | I0729 19:44:34.199421 1122027 retry.go:31] will retry after 600.076128ms: waiting for machine to come up
	I0729 19:44:34.801378 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:34.801839 1119948 main.go:141] libmachine: (no-preload-843792) DBG | unable to find current IP address of domain no-preload-843792 in network mk-no-preload-843792
	I0729 19:44:34.801869 1119948 main.go:141] libmachine: (no-preload-843792) DBG | I0729 19:44:34.801792 1122027 retry.go:31] will retry after 948.350912ms: waiting for machine to come up
	I0729 19:44:35.751533 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:35.752085 1119948 main.go:141] libmachine: (no-preload-843792) DBG | unable to find current IP address of domain no-preload-843792 in network mk-no-preload-843792
	I0729 19:44:35.752110 1119948 main.go:141] libmachine: (no-preload-843792) DBG | I0729 19:44:35.752021 1122027 retry.go:31] will retry after 1.166250352s: waiting for machine to come up
	I0729 19:44:36.919771 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:36.920240 1119948 main.go:141] libmachine: (no-preload-843792) DBG | unable to find current IP address of domain no-preload-843792 in network mk-no-preload-843792
	I0729 19:44:36.920271 1119948 main.go:141] libmachine: (no-preload-843792) DBG | I0729 19:44:36.920184 1122027 retry.go:31] will retry after 1.061620812s: waiting for machine to come up
	I0729 19:44:37.983051 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:37.983501 1119948 main.go:141] libmachine: (no-preload-843792) DBG | unable to find current IP address of domain no-preload-843792 in network mk-no-preload-843792
	I0729 19:44:37.983528 1119948 main.go:141] libmachine: (no-preload-843792) DBG | I0729 19:44:37.983453 1122027 retry.go:31] will retry after 1.814167152s: waiting for machine to come up
	I0729 19:44:36.140059 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:44:38.642436 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:44:37.873768 1120587 pod_ready.go:92] pod "coredns-7db6d8ff4d-8mccr" in "kube-system" namespace has status "Ready":"True"
	I0729 19:44:37.873792 1120587 pod_ready.go:81] duration metric: took 12.006637701s for pod "coredns-7db6d8ff4d-8mccr" in "kube-system" namespace to be "Ready" ...
	I0729 19:44:37.873804 1120587 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-024652" in "kube-system" namespace to be "Ready" ...
	I0729 19:44:37.879758 1120587 pod_ready.go:92] pod "etcd-default-k8s-diff-port-024652" in "kube-system" namespace has status "Ready":"True"
	I0729 19:44:37.879787 1120587 pod_ready.go:81] duration metric: took 5.974837ms for pod "etcd-default-k8s-diff-port-024652" in "kube-system" namespace to be "Ready" ...
	I0729 19:44:37.879799 1120587 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-024652" in "kube-system" namespace to be "Ready" ...
	I0729 19:44:37.885027 1120587 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-024652" in "kube-system" namespace has status "Ready":"True"
	I0729 19:44:37.885051 1120587 pod_ready.go:81] duration metric: took 5.244169ms for pod "kube-apiserver-default-k8s-diff-port-024652" in "kube-system" namespace to be "Ready" ...
	I0729 19:44:37.885064 1120587 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-024652" in "kube-system" namespace to be "Ready" ...
	I0729 19:44:37.890208 1120587 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-024652" in "kube-system" namespace has status "Ready":"True"
	I0729 19:44:37.890224 1120587 pod_ready.go:81] duration metric: took 5.152571ms for pod "kube-controller-manager-default-k8s-diff-port-024652" in "kube-system" namespace to be "Ready" ...
	I0729 19:44:37.890232 1120587 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-l4g78" in "kube-system" namespace to be "Ready" ...
	I0729 19:44:37.894663 1120587 pod_ready.go:92] pod "kube-proxy-l4g78" in "kube-system" namespace has status "Ready":"True"
	I0729 19:44:37.894682 1120587 pod_ready.go:81] duration metric: took 4.444758ms for pod "kube-proxy-l4g78" in "kube-system" namespace to be "Ready" ...
	I0729 19:44:37.894691 1120587 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-024652" in "kube-system" namespace to be "Ready" ...
	I0729 19:44:38.272098 1120587 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-024652" in "kube-system" namespace has status "Ready":"True"
	I0729 19:44:38.272127 1120587 pod_ready.go:81] duration metric: took 377.428879ms for pod "kube-scheduler-default-k8s-diff-port-024652" in "kube-system" namespace to be "Ready" ...
	I0729 19:44:38.272141 1120587 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace to be "Ready" ...
	I0729 19:44:40.279623 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:44:37.982782 1120970 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 19:44:37.994565 1120970 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0729 19:44:37.997227 1120970 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0729 19:44:37.997536 1120970 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0729 19:44:38.011221 1120970 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0729 19:44:38.028869 1120970 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0729 19:44:38.031221 1120970 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0729 19:44:38.054537 1120970 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0729 19:44:38.054599 1120970 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 19:44:38.054660 1120970 ssh_runner.go:195] Run: which crictl
	I0729 19:44:38.104843 1120970 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 19:44:38.182008 1120970 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0729 19:44:38.182064 1120970 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0729 19:44:38.182063 1120970 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0729 19:44:38.182113 1120970 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0729 19:44:38.182118 1120970 ssh_runner.go:195] Run: which crictl
	I0729 19:44:38.182161 1120970 ssh_runner.go:195] Run: which crictl
	I0729 19:44:38.190604 1120970 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0729 19:44:38.190629 1120970 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0729 19:44:38.190652 1120970 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0729 19:44:38.190663 1120970 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0729 19:44:38.190703 1120970 ssh_runner.go:195] Run: which crictl
	I0729 19:44:38.190710 1120970 ssh_runner.go:195] Run: which crictl
	I0729 19:44:38.197293 1120970 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0729 19:44:38.197328 1120970 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0729 19:44:38.197364 1120970 ssh_runner.go:195] Run: which crictl
	I0729 19:44:38.226035 1120970 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 19:44:38.228343 1120970 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0729 19:44:38.228420 1120970 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0729 19:44:38.228467 1120970 ssh_runner.go:195] Run: which crictl
	I0729 19:44:38.335524 1120970 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0729 19:44:38.335607 1120970 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0729 19:44:38.335627 1120970 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0729 19:44:38.335696 1120970 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0729 19:44:38.335705 1120970 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0729 19:44:38.335790 1120970 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 19:44:38.335866 1120970 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0729 19:44:38.483885 1120970 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0729 19:44:38.483976 1120970 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0729 19:44:38.483926 1120970 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0729 19:44:38.484028 1120970 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0729 19:44:38.487155 1120970 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0729 19:44:38.487223 1120970 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 19:44:38.487241 1120970 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0729 19:44:38.635433 1120970 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0729 19:44:38.649661 1120970 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0729 19:44:38.649751 1120970 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0729 19:44:38.649769 1120970 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0729 19:44:38.649831 1120970 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0729 19:44:38.649921 1120970 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0729 19:44:38.649958 1120970 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0729 19:44:38.783607 1120970 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0729 19:44:38.783694 1120970 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0729 19:44:38.783605 1120970 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0729 19:44:38.791756 1120970 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0729 19:44:38.791863 1120970 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0729 19:44:38.791892 1120970 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0729 19:44:38.791939 1120970 cache_images.go:92] duration metric: took 971.499203ms to LoadCachedImages
	W0729 19:44:38.792037 1120970 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
	I0729 19:44:38.792054 1120970 kubeadm.go:934] updating node { 192.168.39.65 8443 v1.20.0 crio true true} ...
	I0729 19:44:38.792200 1120970 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-021528 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.65
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-021528 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
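	The kubelet unit fragment above is what minikube later writes as the systemd drop-in (the 429-byte 10-kubeadm.conf scp'd further down). A sketch of installing an equivalent drop-in by hand, using the ExecStart line from the log; the exact section layout is an assumption:
	# Install a kubelet systemd drop-in matching the unit fragment above, then start the service.
	sudo mkdir -p /etc/systemd/system/kubelet.service.d
	sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf >/dev/null <<-'EOF'
	[Unit]
	Wants=crio.service

	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-021528 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.65

	[Install]
	EOF
	sudo systemctl daemon-reload
	sudo systemctl start kubelet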
	I0729 19:44:38.792313 1120970 ssh_runner.go:195] Run: crio config
	I0729 19:44:38.841459 1120970 cni.go:84] Creating CNI manager for ""
	I0729 19:44:38.841484 1120970 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 19:44:38.841496 1120970 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 19:44:38.841515 1120970 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.65 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-021528 NodeName:old-k8s-version-021528 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.65"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.65 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt Stati
cPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0729 19:44:38.841678 1120970 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.65
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-021528"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.65
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.65"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0729 19:44:38.841743 1120970 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0729 19:44:38.852338 1120970 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 19:44:38.852412 1120970 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 19:44:38.862150 1120970 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0729 19:44:38.881108 1120970 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 19:44:38.899034 1120970 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0729 19:44:38.917965 1120970 ssh_runner.go:195] Run: grep 192.168.39.65	control-plane.minikube.internal$ /etc/hosts
	I0729 19:44:38.922064 1120970 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.65	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
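	The brace-grouped command above is an idempotent /etc/hosts update: it filters out any existing control-plane.minikube.internal line, appends the current mapping, and copies the result back in one step. The same pattern, spelled out with the IP from the log:
	# Idempotently map control-plane.minikube.internal to the node IP (mirrors the log command).
	{ grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts; \
	  echo $'192.168.39.65\tcontrol-plane.minikube.internal'; } > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts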
	I0729 19:44:38.935009 1120970 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 19:44:39.058886 1120970 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 19:44:39.078830 1120970 certs.go:68] Setting up /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/old-k8s-version-021528 for IP: 192.168.39.65
	I0729 19:44:39.078902 1120970 certs.go:194] generating shared ca certs ...
	I0729 19:44:39.078943 1120970 certs.go:226] acquiring lock for ca certs: {Name:mkd1f0b3d7e82ac23e713dd6b75409e103935b02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 19:44:39.079139 1120970 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.key
	I0729 19:44:39.079228 1120970 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/proxy-client-ca.key
	I0729 19:44:39.079243 1120970 certs.go:256] generating profile certs ...
	I0729 19:44:39.079418 1120970 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/old-k8s-version-021528/client.key
	I0729 19:44:39.079517 1120970 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/old-k8s-version-021528/apiserver.key.1bfec4c5
	I0729 19:44:39.079603 1120970 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/old-k8s-version-021528/proxy-client.key
	I0729 19:44:39.079814 1120970 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/1062272.pem (1338 bytes)
	W0729 19:44:39.079899 1120970 certs.go:480] ignoring /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/1062272_empty.pem, impossibly tiny 0 bytes
	I0729 19:44:39.079924 1120970 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 19:44:39.079974 1120970 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem (1082 bytes)
	I0729 19:44:39.080079 1120970 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/cert.pem (1123 bytes)
	I0729 19:44:39.080137 1120970 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/key.pem (1679 bytes)
	I0729 19:44:39.080230 1120970 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/files/etc/ssl/certs/10622722.pem (1708 bytes)
	I0729 19:44:39.081417 1120970 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 19:44:39.117623 1120970 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0729 19:44:39.163823 1120970 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 19:44:39.198978 1120970 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0729 19:44:39.229583 1120970 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/old-k8s-version-021528/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0729 19:44:39.270285 1120970 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/old-k8s-version-021528/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0729 19:44:39.320906 1120970 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/old-k8s-version-021528/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 19:44:39.358597 1120970 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/old-k8s-version-021528/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0729 19:44:39.384152 1120970 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/1062272.pem --> /usr/share/ca-certificates/1062272.pem (1338 bytes)
	I0729 19:44:39.409176 1120970 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/files/etc/ssl/certs/10622722.pem --> /usr/share/ca-certificates/10622722.pem (1708 bytes)
	I0729 19:44:39.434095 1120970 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 19:44:39.473901 1120970 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 19:44:39.493117 1120970 ssh_runner.go:195] Run: openssl version
	I0729 19:44:39.499390 1120970 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1062272.pem && ln -fs /usr/share/ca-certificates/1062272.pem /etc/ssl/certs/1062272.pem"
	I0729 19:44:39.513884 1120970 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1062272.pem
	I0729 19:44:39.519775 1120970 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 18:30 /usr/share/ca-certificates/1062272.pem
	I0729 19:44:39.519841 1120970 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1062272.pem
	I0729 19:44:39.526146 1120970 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1062272.pem /etc/ssl/certs/51391683.0"
	I0729 19:44:39.538303 1120970 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10622722.pem && ln -fs /usr/share/ca-certificates/10622722.pem /etc/ssl/certs/10622722.pem"
	I0729 19:44:39.549569 1120970 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10622722.pem
	I0729 19:44:39.554063 1120970 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 18:30 /usr/share/ca-certificates/10622722.pem
	I0729 19:44:39.554125 1120970 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10622722.pem
	I0729 19:44:39.560167 1120970 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10622722.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 19:44:39.572332 1120970 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 19:44:39.583635 1120970 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 19:44:39.588045 1120970 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 18:17 /usr/share/ca-certificates/minikubeCA.pem
	I0729 19:44:39.588126 1120970 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 19:44:39.594105 1120970 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
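	The ls/openssl/ln sequence above is how the copied PEM files become system-trusted: OpenSSL looks certificates up in /etc/ssl/certs by subject hash, so each file gets a <hash>.0 symlink. A sketch of that step for the minikube CA, with the hash value (b5213941) taken from the log:
	# Create the OpenSSL hash symlink for one CA file, as the commands above do.
	CERT=/usr/share/ca-certificates/minikubeCA.pem
	HASH=$(openssl x509 -hash -noout -in "$CERT")   # b5213941 in this run
	sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"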
	I0729 19:44:39.605557 1120970 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 19:44:39.610321 1120970 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 19:44:39.616786 1120970 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 19:44:39.622941 1120970 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 19:44:39.629109 1120970 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 19:44:39.636558 1120970 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 19:44:39.643073 1120970 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
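	Each "openssl x509 -checkend 86400" call above exits non-zero if the certificate expires within the next 24 hours, which is what decides whether the existing control-plane certs can be reused. A sketch that loops the same check over the certificates named in the log:
	# Flag any control-plane certificate that expires within 24h (86400s), mirroring the checks above.
	for c in apiserver-etcd-client apiserver-kubelet-client etcd/server etcd/healthcheck-client etcd/peer front-proxy-client; do
	  openssl x509 -noout -in "/var/lib/minikube/certs/${c}.crt" -checkend 86400 >/dev/null \
	    || echo "certificate ${c}.crt expires within 24h"
	done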
	I0729 19:44:39.648878 1120970 kubeadm.go:392] StartCluster: {Name:old-k8s-version-021528 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.20.0 ClusterName:old-k8s-version-021528 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.65 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false
MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 19:44:39.648982 1120970 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 19:44:39.649027 1120970 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 19:44:39.690983 1120970 cri.go:89] found id: ""
	I0729 19:44:39.691075 1120970 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 19:44:39.701985 1120970 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0729 19:44:39.702004 1120970 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0729 19:44:39.702052 1120970 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0729 19:44:39.712284 1120970 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0729 19:44:39.713416 1120970 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-021528" does not appear in /home/jenkins/minikube-integration/19312-1055011/kubeconfig
	I0729 19:44:39.714247 1120970 kubeconfig.go:62] /home/jenkins/minikube-integration/19312-1055011/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-021528" cluster setting kubeconfig missing "old-k8s-version-021528" context setting]
	I0729 19:44:39.715298 1120970 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-1055011/kubeconfig: {Name:mkf834b33d9b214f3561db5b8f8958d26700afbb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 19:44:39.762122 1120970 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0729 19:44:39.773851 1120970 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.65
	I0729 19:44:39.773894 1120970 kubeadm.go:1160] stopping kube-system containers ...
	I0729 19:44:39.773910 1120970 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0729 19:44:39.773968 1120970 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 19:44:39.820190 1120970 cri.go:89] found id: ""
	I0729 19:44:39.820273 1120970 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0729 19:44:39.838497 1120970 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 19:44:39.849060 1120970 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 19:44:39.849087 1120970 kubeadm.go:157] found existing configuration files:
	
	I0729 19:44:39.849142 1120970 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 19:44:39.858834 1120970 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 19:44:39.858920 1120970 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 19:44:39.869962 1120970 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 19:44:39.879690 1120970 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 19:44:39.879754 1120970 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 19:44:39.889334 1120970 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 19:44:39.900671 1120970 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 19:44:39.900789 1120970 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 19:44:39.910365 1120970 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 19:44:39.920056 1120970 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 19:44:39.920119 1120970 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 19:44:39.929792 1120970 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 19:44:39.939719 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 19:44:40.078003 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 19:44:40.827477 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0729 19:44:41.064614 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 19:44:41.168296 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
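	Instead of a full "kubeadm init", the restart path replays the individual init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the generated /var/tmp/minikube/kubeadm.yaml. A sketch of the same sequence, assuming the versioned kubeadm binary directory shown in the log:
	# Replay the kubeadm init phases from the log against the generated config.
	BIN=/var/lib/minikube/binaries/v1.20.0
	CFG=/var/tmp/minikube/kubeadm.yaml
	for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
	  sudo env PATH="$BIN:$PATH" kubeadm init phase $phase --config "$CFG"   # $phase intentionally unquoted
	done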
	I0729 19:44:41.280875 1120970 api_server.go:52] waiting for apiserver process to appear ...
	I0729 19:44:41.280964 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:41.781878 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:42.281683 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:42.781105 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:39.799833 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:39.800226 1119948 main.go:141] libmachine: (no-preload-843792) DBG | unable to find current IP address of domain no-preload-843792 in network mk-no-preload-843792
	I0729 19:44:39.800256 1119948 main.go:141] libmachine: (no-preload-843792) DBG | I0729 19:44:39.800187 1122027 retry.go:31] will retry after 1.661406441s: waiting for machine to come up
	I0729 19:44:41.464164 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:41.464664 1119948 main.go:141] libmachine: (no-preload-843792) DBG | unable to find current IP address of domain no-preload-843792 in network mk-no-preload-843792
	I0729 19:44:41.464704 1119948 main.go:141] libmachine: (no-preload-843792) DBG | I0729 19:44:41.464586 1122027 retry.go:31] will retry after 2.292148862s: waiting for machine to come up
	I0729 19:44:41.139627 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:44:43.640525 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:44:42.780035 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:44:45.278957 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:44:43.281753 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:43.781580 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:44.281856 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:44.781202 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:45.281035 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:45.781637 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:46.281414 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:46.781327 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:47.281665 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:47.782033 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:43.759566 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:43.760021 1119948 main.go:141] libmachine: (no-preload-843792) DBG | unable to find current IP address of domain no-preload-843792 in network mk-no-preload-843792
	I0729 19:44:43.760080 1119948 main.go:141] libmachine: (no-preload-843792) DBG | I0729 19:44:43.759994 1122027 retry.go:31] will retry after 3.005985721s: waiting for machine to come up
	I0729 19:44:46.767337 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:46.767822 1119948 main.go:141] libmachine: (no-preload-843792) DBG | unable to find current IP address of domain no-preload-843792 in network mk-no-preload-843792
	I0729 19:44:46.767852 1119948 main.go:141] libmachine: (no-preload-843792) DBG | I0729 19:44:46.767767 1122027 retry.go:31] will retry after 3.516453969s: waiting for machine to come up
	I0729 19:44:46.138988 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:44:48.637828 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:44:47.778809 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:44:50.278817 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:44:48.281371 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:48.781991 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:49.281260 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:49.782025 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:50.281498 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:50.781863 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:51.281653 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:51.781015 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:52.281638 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:52.782023 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:50.287884 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:50.288381 1119948 main.go:141] libmachine: (no-preload-843792) Found IP for machine: 192.168.50.248
	I0729 19:44:50.288402 1119948 main.go:141] libmachine: (no-preload-843792) Reserving static IP address...
	I0729 19:44:50.288417 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has current primary IP address 192.168.50.248 and MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:50.288858 1119948 main.go:141] libmachine: (no-preload-843792) DBG | found host DHCP lease matching {name: "no-preload-843792", mac: "52:54:00:ae:0e:8c", ip: "192.168.50.248"} in network mk-no-preload-843792: {Iface:virbr2 ExpiryTime:2024-07-29 20:44:43 +0000 UTC Type:0 Mac:52:54:00:ae:0e:8c Iaid: IPaddr:192.168.50.248 Prefix:24 Hostname:no-preload-843792 Clientid:01:52:54:00:ae:0e:8c}
	I0729 19:44:50.288891 1119948 main.go:141] libmachine: (no-preload-843792) DBG | skip adding static IP to network mk-no-preload-843792 - found existing host DHCP lease matching {name: "no-preload-843792", mac: "52:54:00:ae:0e:8c", ip: "192.168.50.248"}
	I0729 19:44:50.288905 1119948 main.go:141] libmachine: (no-preload-843792) Reserved static IP address: 192.168.50.248
	I0729 19:44:50.288921 1119948 main.go:141] libmachine: (no-preload-843792) Waiting for SSH to be available...
	I0729 19:44:50.288937 1119948 main.go:141] libmachine: (no-preload-843792) DBG | Getting to WaitForSSH function...
	I0729 19:44:50.291447 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:50.291802 1119948 main.go:141] libmachine: (no-preload-843792) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:0e:8c", ip: ""} in network mk-no-preload-843792: {Iface:virbr2 ExpiryTime:2024-07-29 20:44:43 +0000 UTC Type:0 Mac:52:54:00:ae:0e:8c Iaid: IPaddr:192.168.50.248 Prefix:24 Hostname:no-preload-843792 Clientid:01:52:54:00:ae:0e:8c}
	I0729 19:44:50.291831 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined IP address 192.168.50.248 and MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:50.291992 1119948 main.go:141] libmachine: (no-preload-843792) DBG | Using SSH client type: external
	I0729 19:44:50.292026 1119948 main.go:141] libmachine: (no-preload-843792) DBG | Using SSH private key: /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/no-preload-843792/id_rsa (-rw-------)
	I0729 19:44:50.292056 1119948 main.go:141] libmachine: (no-preload-843792) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.248 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/no-preload-843792/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 19:44:50.292075 1119948 main.go:141] libmachine: (no-preload-843792) DBG | About to run SSH command:
	I0729 19:44:50.292089 1119948 main.go:141] libmachine: (no-preload-843792) DBG | exit 0
	I0729 19:44:50.419030 1119948 main.go:141] libmachine: (no-preload-843792) DBG | SSH cmd err, output: <nil>: 
	I0729 19:44:50.419420 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetConfigRaw
	I0729 19:44:50.420149 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetIP
	I0729 19:44:50.422461 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:50.422860 1119948 main.go:141] libmachine: (no-preload-843792) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:0e:8c", ip: ""} in network mk-no-preload-843792: {Iface:virbr2 ExpiryTime:2024-07-29 20:44:43 +0000 UTC Type:0 Mac:52:54:00:ae:0e:8c Iaid: IPaddr:192.168.50.248 Prefix:24 Hostname:no-preload-843792 Clientid:01:52:54:00:ae:0e:8c}
	I0729 19:44:50.422897 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined IP address 192.168.50.248 and MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:50.423068 1119948 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/no-preload-843792/config.json ...
	I0729 19:44:50.423254 1119948 machine.go:94] provisionDockerMachine start ...
	I0729 19:44:50.423273 1119948 main.go:141] libmachine: (no-preload-843792) Calling .DriverName
	I0729 19:44:50.423513 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHHostname
	I0729 19:44:50.425759 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:50.425996 1119948 main.go:141] libmachine: (no-preload-843792) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:0e:8c", ip: ""} in network mk-no-preload-843792: {Iface:virbr2 ExpiryTime:2024-07-29 20:44:43 +0000 UTC Type:0 Mac:52:54:00:ae:0e:8c Iaid: IPaddr:192.168.50.248 Prefix:24 Hostname:no-preload-843792 Clientid:01:52:54:00:ae:0e:8c}
	I0729 19:44:50.426033 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined IP address 192.168.50.248 and MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:50.426136 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHPort
	I0729 19:44:50.426323 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHKeyPath
	I0729 19:44:50.426493 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHKeyPath
	I0729 19:44:50.426682 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHUsername
	I0729 19:44:50.426889 1119948 main.go:141] libmachine: Using SSH client type: native
	I0729 19:44:50.427107 1119948 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.248 22 <nil> <nil>}
	I0729 19:44:50.427119 1119948 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 19:44:50.539215 1119948 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0729 19:44:50.539250 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetMachineName
	I0729 19:44:50.539523 1119948 buildroot.go:166] provisioning hostname "no-preload-843792"
	I0729 19:44:50.539553 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetMachineName
	I0729 19:44:50.539755 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHHostname
	I0729 19:44:50.542621 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:50.543007 1119948 main.go:141] libmachine: (no-preload-843792) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:0e:8c", ip: ""} in network mk-no-preload-843792: {Iface:virbr2 ExpiryTime:2024-07-29 20:44:43 +0000 UTC Type:0 Mac:52:54:00:ae:0e:8c Iaid: IPaddr:192.168.50.248 Prefix:24 Hostname:no-preload-843792 Clientid:01:52:54:00:ae:0e:8c}
	I0729 19:44:50.543036 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined IP address 192.168.50.248 and MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:50.543188 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHPort
	I0729 19:44:50.543365 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHKeyPath
	I0729 19:44:50.543574 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHKeyPath
	I0729 19:44:50.543751 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHUsername
	I0729 19:44:50.543900 1119948 main.go:141] libmachine: Using SSH client type: native
	I0729 19:44:50.544060 1119948 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.248 22 <nil> <nil>}
	I0729 19:44:50.544072 1119948 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-843792 && echo "no-preload-843792" | sudo tee /etc/hostname
	I0729 19:44:50.669012 1119948 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-843792
	
	I0729 19:44:50.669054 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHHostname
	I0729 19:44:50.671768 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:50.672075 1119948 main.go:141] libmachine: (no-preload-843792) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:0e:8c", ip: ""} in network mk-no-preload-843792: {Iface:virbr2 ExpiryTime:2024-07-29 20:44:43 +0000 UTC Type:0 Mac:52:54:00:ae:0e:8c Iaid: IPaddr:192.168.50.248 Prefix:24 Hostname:no-preload-843792 Clientid:01:52:54:00:ae:0e:8c}
	I0729 19:44:50.672105 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined IP address 192.168.50.248 and MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:50.672278 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHPort
	I0729 19:44:50.672481 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHKeyPath
	I0729 19:44:50.672647 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHKeyPath
	I0729 19:44:50.672734 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHUsername
	I0729 19:44:50.672904 1119948 main.go:141] libmachine: Using SSH client type: native
	I0729 19:44:50.673077 1119948 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.248 22 <nil> <nil>}
	I0729 19:44:50.673091 1119948 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-843792' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-843792/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-843792' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 19:44:50.796568 1119948 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 19:44:50.796605 1119948 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19312-1055011/.minikube CaCertPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19312-1055011/.minikube}
	I0729 19:44:50.796625 1119948 buildroot.go:174] setting up certificates
	I0729 19:44:50.796639 1119948 provision.go:84] configureAuth start
	I0729 19:44:50.796648 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetMachineName
	I0729 19:44:50.796934 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetIP
	I0729 19:44:50.799731 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:50.800044 1119948 main.go:141] libmachine: (no-preload-843792) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:0e:8c", ip: ""} in network mk-no-preload-843792: {Iface:virbr2 ExpiryTime:2024-07-29 20:44:43 +0000 UTC Type:0 Mac:52:54:00:ae:0e:8c Iaid: IPaddr:192.168.50.248 Prefix:24 Hostname:no-preload-843792 Clientid:01:52:54:00:ae:0e:8c}
	I0729 19:44:50.800071 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined IP address 192.168.50.248 and MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:50.800263 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHHostname
	I0729 19:44:50.802572 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:50.802922 1119948 main.go:141] libmachine: (no-preload-843792) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:0e:8c", ip: ""} in network mk-no-preload-843792: {Iface:virbr2 ExpiryTime:2024-07-29 20:44:43 +0000 UTC Type:0 Mac:52:54:00:ae:0e:8c Iaid: IPaddr:192.168.50.248 Prefix:24 Hostname:no-preload-843792 Clientid:01:52:54:00:ae:0e:8c}
	I0729 19:44:50.802955 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined IP address 192.168.50.248 and MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:50.803085 1119948 provision.go:143] copyHostCerts
	I0729 19:44:50.803156 1119948 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.pem, removing ...
	I0729 19:44:50.803170 1119948 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.pem
	I0729 19:44:50.803225 1119948 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.pem (1082 bytes)
	I0729 19:44:50.803347 1119948 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-1055011/.minikube/cert.pem, removing ...
	I0729 19:44:50.803355 1119948 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-1055011/.minikube/cert.pem
	I0729 19:44:50.803379 1119948 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19312-1055011/.minikube/cert.pem (1123 bytes)
	I0729 19:44:50.803438 1119948 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-1055011/.minikube/key.pem, removing ...
	I0729 19:44:50.803445 1119948 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-1055011/.minikube/key.pem
	I0729 19:44:50.803461 1119948 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19312-1055011/.minikube/key.pem (1679 bytes)
	I0729 19:44:50.803524 1119948 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca-key.pem org=jenkins.no-preload-843792 san=[127.0.0.1 192.168.50.248 localhost minikube no-preload-843792]
	I0729 19:44:51.214202 1119948 provision.go:177] copyRemoteCerts
	I0729 19:44:51.214287 1119948 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 19:44:51.214320 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHHostname
	I0729 19:44:51.216944 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:51.217214 1119948 main.go:141] libmachine: (no-preload-843792) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:0e:8c", ip: ""} in network mk-no-preload-843792: {Iface:virbr2 ExpiryTime:2024-07-29 20:44:43 +0000 UTC Type:0 Mac:52:54:00:ae:0e:8c Iaid: IPaddr:192.168.50.248 Prefix:24 Hostname:no-preload-843792 Clientid:01:52:54:00:ae:0e:8c}
	I0729 19:44:51.217237 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined IP address 192.168.50.248 and MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:51.217360 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHPort
	I0729 19:44:51.217563 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHKeyPath
	I0729 19:44:51.217732 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHUsername
	I0729 19:44:51.217891 1119948 sshutil.go:53] new ssh client: &{IP:192.168.50.248 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/no-preload-843792/id_rsa Username:docker}
	I0729 19:44:51.301968 1119948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0729 19:44:51.328160 1119948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0729 19:44:51.353256 1119948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0729 19:44:51.378426 1119948 provision.go:87] duration metric: took 581.77356ms to configureAuth
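	Note: configureAuth regenerated a server certificate whose SANs are listed in the provision.go line above (127.0.0.1, 192.168.50.248, localhost, minikube, no-preload-843792) and copied it to /etc/docker on the guest. A quick spot-check of the deployed cert, assuming openssl is available on the node (not part of the test itself):

	    # inspect the SANs of the server cert that was just copied to the guest
	    sudo openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'
	    # should list something like the SANs from the log:
	    #   localhost, minikube, no-preload-843792, 127.0.0.1, 192.168.50.248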
	I0729 19:44:51.378457 1119948 buildroot.go:189] setting minikube options for container-runtime
	I0729 19:44:51.378660 1119948 config.go:182] Loaded profile config "no-preload-843792": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0729 19:44:51.378746 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHHostname
	I0729 19:44:51.381760 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:51.382286 1119948 main.go:141] libmachine: (no-preload-843792) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:0e:8c", ip: ""} in network mk-no-preload-843792: {Iface:virbr2 ExpiryTime:2024-07-29 20:44:43 +0000 UTC Type:0 Mac:52:54:00:ae:0e:8c Iaid: IPaddr:192.168.50.248 Prefix:24 Hostname:no-preload-843792 Clientid:01:52:54:00:ae:0e:8c}
	I0729 19:44:51.382308 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined IP address 192.168.50.248 and MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:51.382555 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHPort
	I0729 19:44:51.382787 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHKeyPath
	I0729 19:44:51.383071 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHKeyPath
	I0729 19:44:51.383230 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHUsername
	I0729 19:44:51.383438 1119948 main.go:141] libmachine: Using SSH client type: native
	I0729 19:44:51.383649 1119948 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.248 22 <nil> <nil>}
	I0729 19:44:51.383673 1119948 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 19:44:51.650635 1119948 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 19:44:51.650669 1119948 machine.go:97] duration metric: took 1.227400866s to provisionDockerMachine
	I0729 19:44:51.650686 1119948 start.go:293] postStartSetup for "no-preload-843792" (driver="kvm2")
	I0729 19:44:51.650704 1119948 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 19:44:51.650733 1119948 main.go:141] libmachine: (no-preload-843792) Calling .DriverName
	I0729 19:44:51.651068 1119948 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 19:44:51.651098 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHHostname
	I0729 19:44:51.653656 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:51.654044 1119948 main.go:141] libmachine: (no-preload-843792) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:0e:8c", ip: ""} in network mk-no-preload-843792: {Iface:virbr2 ExpiryTime:2024-07-29 20:44:43 +0000 UTC Type:0 Mac:52:54:00:ae:0e:8c Iaid: IPaddr:192.168.50.248 Prefix:24 Hostname:no-preload-843792 Clientid:01:52:54:00:ae:0e:8c}
	I0729 19:44:51.654075 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined IP address 192.168.50.248 and MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:51.654215 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHPort
	I0729 19:44:51.654414 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHKeyPath
	I0729 19:44:51.654603 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHUsername
	I0729 19:44:51.654783 1119948 sshutil.go:53] new ssh client: &{IP:192.168.50.248 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/no-preload-843792/id_rsa Username:docker}
	I0729 19:44:51.738250 1119948 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 19:44:51.742463 1119948 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 19:44:51.742494 1119948 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-1055011/.minikube/addons for local assets ...
	I0729 19:44:51.742575 1119948 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-1055011/.minikube/files for local assets ...
	I0729 19:44:51.742670 1119948 filesync.go:149] local asset: /home/jenkins/minikube-integration/19312-1055011/.minikube/files/etc/ssl/certs/10622722.pem -> 10622722.pem in /etc/ssl/certs
	I0729 19:44:51.742762 1119948 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 19:44:51.752428 1119948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/files/etc/ssl/certs/10622722.pem --> /etc/ssl/certs/10622722.pem (1708 bytes)
	I0729 19:44:51.778026 1119948 start.go:296] duration metric: took 127.323599ms for postStartSetup
	I0729 19:44:51.778070 1119948 fix.go:56] duration metric: took 20.206081869s for fixHost
	I0729 19:44:51.778101 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHHostname
	I0729 19:44:51.780831 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:51.781222 1119948 main.go:141] libmachine: (no-preload-843792) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:0e:8c", ip: ""} in network mk-no-preload-843792: {Iface:virbr2 ExpiryTime:2024-07-29 20:44:43 +0000 UTC Type:0 Mac:52:54:00:ae:0e:8c Iaid: IPaddr:192.168.50.248 Prefix:24 Hostname:no-preload-843792 Clientid:01:52:54:00:ae:0e:8c}
	I0729 19:44:51.781264 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined IP address 192.168.50.248 and MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:51.781433 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHPort
	I0729 19:44:51.781634 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHKeyPath
	I0729 19:44:51.781807 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHKeyPath
	I0729 19:44:51.781978 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHUsername
	I0729 19:44:51.782165 1119948 main.go:141] libmachine: Using SSH client type: native
	I0729 19:44:51.782343 1119948 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.248 22 <nil> <nil>}
	I0729 19:44:51.782354 1119948 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0729 19:44:51.891547 1119948 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722282291.842464810
	
	I0729 19:44:51.891577 1119948 fix.go:216] guest clock: 1722282291.842464810
	I0729 19:44:51.891585 1119948 fix.go:229] Guest: 2024-07-29 19:44:51.84246481 +0000 UTC Remote: 2024-07-29 19:44:51.778076789 +0000 UTC m=+358.114888914 (delta=64.388021ms)
	I0729 19:44:51.891637 1119948 fix.go:200] guest clock delta is within tolerance: 64.388021ms
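	Note: the fix.go lines above are a simple clock-skew check: the guest's wall clock, read over SSH with what appears to be `date +%s.%N` (the printf-style logging renders it as `%!s(MISSING).%!N(MISSING)`), is compared against the host's, and the resulting delta (~64ms here) must stay within tolerance before provisioning continues. An illustrative shell version of the same comparison, assuming SSH access to the node:

	    # compare guest and host wall clocks; the log above reports the delta
	    guest=$(ssh docker@192.168.50.248 'date +%s.%N')
	    host=$(date +%s.%N)
	    awk -v g="$guest" -v h="$host" 'BEGIN{printf "delta: %.3f s\n", g-h}'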
	I0729 19:44:51.891648 1119948 start.go:83] releasing machines lock for "no-preload-843792", held for 20.319710656s
	I0729 19:44:51.891677 1119948 main.go:141] libmachine: (no-preload-843792) Calling .DriverName
	I0729 19:44:51.891952 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetIP
	I0729 19:44:51.894800 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:51.895181 1119948 main.go:141] libmachine: (no-preload-843792) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:0e:8c", ip: ""} in network mk-no-preload-843792: {Iface:virbr2 ExpiryTime:2024-07-29 20:44:43 +0000 UTC Type:0 Mac:52:54:00:ae:0e:8c Iaid: IPaddr:192.168.50.248 Prefix:24 Hostname:no-preload-843792 Clientid:01:52:54:00:ae:0e:8c}
	I0729 19:44:51.895216 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined IP address 192.168.50.248 and MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:51.895390 1119948 main.go:141] libmachine: (no-preload-843792) Calling .DriverName
	I0729 19:44:51.895840 1119948 main.go:141] libmachine: (no-preload-843792) Calling .DriverName
	I0729 19:44:51.896042 1119948 main.go:141] libmachine: (no-preload-843792) Calling .DriverName
	I0729 19:44:51.896139 1119948 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 19:44:51.896192 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHHostname
	I0729 19:44:51.896258 1119948 ssh_runner.go:195] Run: cat /version.json
	I0729 19:44:51.896287 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHHostname
	I0729 19:44:51.898856 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:51.899180 1119948 main.go:141] libmachine: (no-preload-843792) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:0e:8c", ip: ""} in network mk-no-preload-843792: {Iface:virbr2 ExpiryTime:2024-07-29 20:44:43 +0000 UTC Type:0 Mac:52:54:00:ae:0e:8c Iaid: IPaddr:192.168.50.248 Prefix:24 Hostname:no-preload-843792 Clientid:01:52:54:00:ae:0e:8c}
	I0729 19:44:51.899208 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined IP address 192.168.50.248 and MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:51.899261 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:51.899313 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHPort
	I0729 19:44:51.899474 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHKeyPath
	I0729 19:44:51.899638 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHUsername
	I0729 19:44:51.899716 1119948 main.go:141] libmachine: (no-preload-843792) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:0e:8c", ip: ""} in network mk-no-preload-843792: {Iface:virbr2 ExpiryTime:2024-07-29 20:44:43 +0000 UTC Type:0 Mac:52:54:00:ae:0e:8c Iaid: IPaddr:192.168.50.248 Prefix:24 Hostname:no-preload-843792 Clientid:01:52:54:00:ae:0e:8c}
	I0729 19:44:51.899742 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined IP address 192.168.50.248 and MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:51.899815 1119948 sshutil.go:53] new ssh client: &{IP:192.168.50.248 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/no-preload-843792/id_rsa Username:docker}
	I0729 19:44:51.899865 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHPort
	I0729 19:44:51.900009 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHKeyPath
	I0729 19:44:51.900149 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHUsername
	I0729 19:44:51.900317 1119948 sshutil.go:53] new ssh client: &{IP:192.168.50.248 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/no-preload-843792/id_rsa Username:docker}
	I0729 19:44:51.979915 1119948 ssh_runner.go:195] Run: systemctl --version
	I0729 19:44:52.002705 1119948 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 19:44:52.146695 1119948 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 19:44:52.152507 1119948 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 19:44:52.152566 1119948 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 19:44:52.169058 1119948 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 19:44:52.169085 1119948 start.go:495] detecting cgroup driver to use...
	I0729 19:44:52.169148 1119948 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 19:44:52.185675 1119948 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 19:44:52.204654 1119948 docker.go:217] disabling cri-docker service (if available) ...
	I0729 19:44:52.204719 1119948 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 19:44:52.221485 1119948 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 19:44:52.235452 1119948 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 19:44:52.353806 1119948 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 19:44:52.504237 1119948 docker.go:233] disabling docker service ...
	I0729 19:44:52.504314 1119948 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 19:44:52.520145 1119948 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 19:44:52.533007 1119948 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 19:44:52.662886 1119948 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 19:44:52.795773 1119948 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 19:44:52.810135 1119948 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 19:44:52.829290 1119948 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0729 19:44:52.829356 1119948 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:44:52.840657 1119948 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 19:44:52.840718 1119948 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:44:52.851174 1119948 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:44:52.861565 1119948 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:44:52.871901 1119948 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 19:44:52.882929 1119948 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:44:52.893517 1119948 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:44:52.910321 1119948 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:44:52.920773 1119948 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 19:44:52.930425 1119948 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 19:44:52.930467 1119948 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 19:44:52.943382 1119948 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 19:44:52.953528 1119948 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 19:44:53.086573 1119948 ssh_runner.go:195] Run: sudo systemctl restart crio
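	Note: taken together, the sed edits, sysctl/modprobe calls, and restart above leave CRI-O configured for this run. A condensed way to verify the settings they applied, reconstructed from the sed expressions in the log (a sketch, not a dump of the actual file):

	    # spot-check the settings written by the sed commands above
	    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	      /etc/crio/crio.conf.d/02-crio.conf
	    # expected, per the log:
	    #   pause_image = "registry.k8s.io/pause:3.10"
	    #   cgroup_manager = "cgroupfs"
	    #   conmon_cgroup = "pod"
	    #   "net.ipv4.ip_unprivileged_port_start=0",   (inside default_sysctls)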
	I0729 19:44:53.222264 1119948 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 19:44:53.222358 1119948 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 19:44:53.227019 1119948 start.go:563] Will wait 60s for crictl version
	I0729 19:44:53.227079 1119948 ssh_runner.go:195] Run: which crictl
	I0729 19:44:53.230920 1119948 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 19:44:53.271242 1119948 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 19:44:53.271338 1119948 ssh_runner.go:195] Run: crio --version
	I0729 19:44:53.301110 1119948 ssh_runner.go:195] Run: crio --version
	I0729 19:44:53.333725 1119948 out.go:177] * Preparing Kubernetes v1.31.0-beta.0 on CRI-O 1.29.1 ...
	I0729 19:44:53.334659 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetIP
	I0729 19:44:53.337115 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:53.337559 1119948 main.go:141] libmachine: (no-preload-843792) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:0e:8c", ip: ""} in network mk-no-preload-843792: {Iface:virbr2 ExpiryTime:2024-07-29 20:44:43 +0000 UTC Type:0 Mac:52:54:00:ae:0e:8c Iaid: IPaddr:192.168.50.248 Prefix:24 Hostname:no-preload-843792 Clientid:01:52:54:00:ae:0e:8c}
	I0729 19:44:53.337593 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined IP address 192.168.50.248 and MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:53.337844 1119948 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0729 19:44:53.341989 1119948 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 19:44:53.355060 1119948 kubeadm.go:883] updating cluster {Name:no-preload-843792 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.0-beta.0 ClusterName:no-preload-843792 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.248 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2628
0h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 19:44:53.355229 1119948 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0729 19:44:53.355288 1119948 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 19:44:53.388980 1119948 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0-beta.0". assuming images are not preloaded.
	I0729 19:44:53.389006 1119948 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0-beta.0 registry.k8s.io/kube-controller-manager:v1.31.0-beta.0 registry.k8s.io/kube-scheduler:v1.31.0-beta.0 registry.k8s.io/kube-proxy:v1.31.0-beta.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.14-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0729 19:44:53.389048 1119948 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 19:44:53.389101 1119948 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0729 19:44:53.389112 1119948 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0729 19:44:53.389137 1119948 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.14-0
	I0729 19:44:53.389119 1119948 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0729 19:44:53.389271 1119948 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0729 19:44:53.389350 1119948 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0729 19:44:53.389605 1119948 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0729 19:44:53.390514 1119948 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0729 19:44:53.390570 1119948 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0729 19:44:53.390602 1119948 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0729 19:44:53.390527 1119948 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 19:44:53.390706 1119948 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0729 19:44:53.390732 1119948 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0729 19:44:53.390767 1119948 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.14-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.14-0
	I0729 19:44:53.391084 1119948 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0729 19:44:53.549235 1119948 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0729 19:44:53.572353 1119948 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0729 19:44:53.579226 1119948 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0729 19:44:53.596966 1119948 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0729 19:44:53.609083 1119948 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0729 19:44:53.616167 1119948 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.14-0
	I0729 19:44:53.618946 1119948 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" does not exist at hash "63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5" in container runtime
	I0729 19:44:53.618985 1119948 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0729 19:44:53.619029 1119948 ssh_runner.go:195] Run: which crictl
	I0729 19:44:53.635187 1119948 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0729 19:44:53.670750 1119948 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0729 19:44:53.670796 1119948 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0729 19:44:53.670859 1119948 ssh_runner.go:195] Run: which crictl
	I0729 19:44:53.672585 1119948 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" does not exist at hash "d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b" in container runtime
	I0729 19:44:53.672626 1119948 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0729 19:44:53.672669 1119948 ssh_runner.go:195] Run: which crictl
	I0729 19:44:53.695596 1119948 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" does not exist at hash "f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938" in container runtime
	I0729 19:44:53.695640 1119948 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0729 19:44:53.695685 1119948 ssh_runner.go:195] Run: which crictl
	I0729 19:44:51.138015 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:44:53.638298 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:44:52.279881 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:44:54.778657 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:44:53.281345 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:53.781221 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:54.281939 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:54.781091 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:55.281282 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:55.781375 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:56.282072 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:56.781207 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:57.281436 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:57.781372 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:53.720675 1119948 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 19:44:53.840593 1119948 cache_images.go:116] "registry.k8s.io/etcd:3.5.14-0" needs transfer: "registry.k8s.io/etcd:3.5.14-0" does not exist at hash "cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa" in container runtime
	I0729 19:44:53.840643 1119948 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.14-0
	I0729 19:44:53.840672 1119948 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0729 19:44:53.840687 1119948 ssh_runner.go:195] Run: which crictl
	I0729 19:44:53.840775 1119948 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0-beta.0" does not exist at hash "c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899" in container runtime
	I0729 19:44:53.840812 1119948 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0729 19:44:53.840821 1119948 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0729 19:44:53.840857 1119948 ssh_runner.go:195] Run: which crictl
	I0729 19:44:53.840879 1119948 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0729 19:44:53.840923 1119948 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0729 19:44:53.840940 1119948 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 19:44:53.840957 1119948 ssh_runner.go:195] Run: which crictl
	I0729 19:44:53.840924 1119948 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0729 19:44:53.918733 1119948 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0729 19:44:53.918808 1119948 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.14-0
	I0729 19:44:53.918822 1119948 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0729 19:44:53.918738 1119948 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0729 19:44:53.918756 1119948 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 19:44:53.934123 1119948 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0729 19:44:53.934149 1119948 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0729 19:44:54.071240 1119948 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.14-0
	I0729 19:44:54.071240 1119948 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0729 19:44:54.071338 1119948 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0729 19:44:54.071326 1119948 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0729 19:44:54.071427 1119948 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 19:44:54.093839 1119948 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0729 19:44:54.093863 1119948 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0729 19:44:54.210655 1119948 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0
	I0729 19:44:54.210775 1119948 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0729 19:44:54.212134 1119948 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.14-0
	I0729 19:44:54.217809 1119948 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0
	I0729 19:44:54.217912 1119948 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 19:44:54.217935 1119948 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0729 19:44:54.218206 1119948 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0
	I0729 19:44:54.218301 1119948 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0729 19:44:54.260623 1119948 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0 (exists)
	I0729 19:44:54.260652 1119948 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0729 19:44:54.260652 1119948 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0729 19:44:54.260686 1119948 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0729 19:44:54.260778 1119948 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0729 19:44:54.260865 1119948 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0729 19:44:54.306379 1119948 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0729 19:44:54.306385 1119948 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0 (exists)
	I0729 19:44:54.306392 1119948 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0 (exists)
	I0729 19:44:54.306493 1119948 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0729 19:44:54.306689 1119948 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0
	I0729 19:44:54.306778 1119948 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.14-0
	I0729 19:44:56.574611 1119948 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0: (2.313899996s)
	I0729 19:44:56.574645 1119948 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 from cache
	I0729 19:44:56.574650 1119948 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1: (2.313771552s)
	I0729 19:44:56.574670 1119948 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0729 19:44:56.574611 1119948 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0-beta.0: (2.313935705s)
	I0729 19:44:56.574683 1119948 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0729 19:44:56.574705 1119948 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (2.268197753s)
	I0729 19:44:56.574716 1119948 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0
	I0729 19:44:56.574719 1119948 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0729 19:44:56.574722 1119948 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0729 19:44:56.574739 1119948 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.14-0: (2.267948475s)
	I0729 19:44:56.574750 1119948 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.14-0 (exists)
	I0729 19:44:56.574796 1119948 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0729 19:44:58.641782 1119948 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0: (2.067036887s)
	I0729 19:44:58.641818 1119948 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 from cache
	I0729 19:44:58.641845 1119948 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0729 19:44:58.641846 1119948 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0: (2.0670173s)
	I0729 19:44:58.641878 1119948 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0 (exists)
	I0729 19:44:58.641896 1119948 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0729 19:44:56.140488 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:44:58.637284 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:44:57.279852 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:44:59.777891 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:44:58.281852 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:58.781637 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:59.281892 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:59.781645 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:00.281405 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:00.782060 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:01.281396 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:01.781327 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:02.281709 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:02.781786 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:00.096431 1119948 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0: (1.454505335s)
	I0729 19:45:00.096482 1119948 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 from cache
	I0729 19:45:00.096522 1119948 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0729 19:45:00.096568 1119948 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0729 19:45:01.962972 1119948 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.866379068s)
	I0729 19:45:01.963000 1119948 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0729 19:45:01.963026 1119948 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0729 19:45:01.963078 1119948 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0729 19:45:02.916627 1119948 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0729 19:45:02.916678 1119948 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.14-0
	I0729 19:45:02.916735 1119948 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0
	I0729 19:45:00.638676 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:03.137885 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:01.779615 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:04.279431 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:03.281567 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:03.781335 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:04.281681 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:04.781803 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:05.281115 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:05.781161 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:06.281699 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:06.781869 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:07.281182 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:07.781016 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:06.397189 1119948 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0: (3.480421154s)
	I0729 19:45:06.397236 1119948 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0 from cache
	I0729 19:45:06.397280 1119948 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0729 19:45:06.397357 1119948 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0729 19:45:08.272053 1119948 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0: (1.874662469s)
	I0729 19:45:08.272086 1119948 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 from cache
	I0729 19:45:08.272116 1119948 cache_images.go:123] Successfully loaded all cached images
	I0729 19:45:08.272123 1119948 cache_images.go:92] duration metric: took 14.883104578s to LoadCachedImages
	I0729 19:45:08.272135 1119948 kubeadm.go:934] updating node { 192.168.50.248 8443 v1.31.0-beta.0 crio true true} ...
	I0729 19:45:08.272293 1119948 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-843792 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.248
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-843792 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 19:45:08.272378 1119948 ssh_runner.go:195] Run: crio config
	I0729 19:45:08.340838 1119948 cni.go:84] Creating CNI manager for ""
	I0729 19:45:08.340864 1119948 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 19:45:08.340876 1119948 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 19:45:08.340905 1119948 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.248 APIServerPort:8443 KubernetesVersion:v1.31.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-843792 NodeName:no-preload-843792 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.248"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.248 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 19:45:08.341094 1119948 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.248
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-843792"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.248
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.248"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0729 19:45:08.341175 1119948 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0-beta.0
	I0729 19:45:08.353738 1119948 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 19:45:08.353819 1119948 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 19:45:08.365340 1119948 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (324 bytes)
	I0729 19:45:08.383516 1119948 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I0729 19:45:08.401060 1119948 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2168 bytes)
	I0729 19:45:08.419420 1119948 ssh_runner.go:195] Run: grep 192.168.50.248	control-plane.minikube.internal$ /etc/hosts
	I0729 19:45:08.423355 1119948 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.248	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 19:45:08.437286 1119948 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 19:45:08.569176 1119948 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 19:45:08.586925 1119948 certs.go:68] Setting up /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/no-preload-843792 for IP: 192.168.50.248
	I0729 19:45:08.586949 1119948 certs.go:194] generating shared ca certs ...
	I0729 19:45:08.586969 1119948 certs.go:226] acquiring lock for ca certs: {Name:mkd1f0b3d7e82ac23e713dd6b75409e103935b02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 19:45:08.587196 1119948 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.key
	I0729 19:45:08.587277 1119948 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/proxy-client-ca.key
	I0729 19:45:08.587294 1119948 certs.go:256] generating profile certs ...
	I0729 19:45:08.587388 1119948 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/no-preload-843792/client.key
	I0729 19:45:08.587476 1119948 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/no-preload-843792/apiserver.key.f52ec7e5
	I0729 19:45:08.587520 1119948 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/no-preload-843792/proxy-client.key
	I0729 19:45:08.587686 1119948 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/1062272.pem (1338 bytes)
	W0729 19:45:08.587731 1119948 certs.go:480] ignoring /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/1062272_empty.pem, impossibly tiny 0 bytes
	I0729 19:45:08.587741 1119948 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 19:45:08.587764 1119948 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem (1082 bytes)
	I0729 19:45:08.587788 1119948 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/cert.pem (1123 bytes)
	I0729 19:45:08.587807 1119948 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/key.pem (1679 bytes)
	I0729 19:45:08.587842 1119948 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/files/etc/ssl/certs/10622722.pem (1708 bytes)
	I0729 19:45:08.588560 1119948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 19:45:08.618457 1119948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0729 19:45:08.664632 1119948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 19:45:08.696094 1119948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0729 19:45:05.639914 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:08.138498 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:06.779766 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:08.781373 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:10.782303 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:08.281476 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:08.781100 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:09.281248 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:09.781661 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:10.281141 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:10.781357 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:11.281922 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:11.781751 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:12.281024 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:12.781942 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:08.732476 1119948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/no-preload-843792/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0729 19:45:08.761190 1119948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/no-preload-843792/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0729 19:45:08.792866 1119948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/no-preload-843792/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 19:45:08.819753 1119948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/no-preload-843792/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0729 19:45:08.844891 1119948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/1062272.pem --> /usr/share/ca-certificates/1062272.pem (1338 bytes)
	I0729 19:45:08.868688 1119948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/files/etc/ssl/certs/10622722.pem --> /usr/share/ca-certificates/10622722.pem (1708 bytes)
	I0729 19:45:08.893523 1119948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 19:45:08.917663 1119948 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 19:45:08.935488 1119948 ssh_runner.go:195] Run: openssl version
	I0729 19:45:08.941415 1119948 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1062272.pem && ln -fs /usr/share/ca-certificates/1062272.pem /etc/ssl/certs/1062272.pem"
	I0729 19:45:08.952713 1119948 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1062272.pem
	I0729 19:45:08.957226 1119948 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 18:30 /usr/share/ca-certificates/1062272.pem
	I0729 19:45:08.957288 1119948 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1062272.pem
	I0729 19:45:08.963014 1119948 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1062272.pem /etc/ssl/certs/51391683.0"
	I0729 19:45:08.974542 1119948 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10622722.pem && ln -fs /usr/share/ca-certificates/10622722.pem /etc/ssl/certs/10622722.pem"
	I0729 19:45:08.985605 1119948 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10622722.pem
	I0729 19:45:08.990121 1119948 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 18:30 /usr/share/ca-certificates/10622722.pem
	I0729 19:45:08.990170 1119948 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10622722.pem
	I0729 19:45:08.995715 1119948 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10622722.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 19:45:09.006949 1119948 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 19:45:09.018222 1119948 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 19:45:09.023160 1119948 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 18:17 /usr/share/ca-certificates/minikubeCA.pem
	I0729 19:45:09.023225 1119948 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 19:45:09.028770 1119948 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 19:45:09.039653 1119948 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 19:45:09.044577 1119948 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 19:45:09.050692 1119948 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 19:45:09.057177 1119948 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 19:45:09.063464 1119948 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 19:45:09.069732 1119948 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 19:45:09.075998 1119948 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0729 19:45:09.081759 1119948 kubeadm.go:392] StartCluster: {Name:no-preload-843792 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-843792 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.248 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 19:45:09.081855 1119948 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 19:45:09.081922 1119948 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 19:45:09.121153 1119948 cri.go:89] found id: ""
	I0729 19:45:09.121242 1119948 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 19:45:09.131866 1119948 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0729 19:45:09.131892 1119948 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0729 19:45:09.131951 1119948 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0729 19:45:09.142306 1119948 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0729 19:45:09.143769 1119948 kubeconfig.go:125] found "no-preload-843792" server: "https://192.168.50.248:8443"
	I0729 19:45:09.146733 1119948 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0729 19:45:09.156058 1119948 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.248
	I0729 19:45:09.156096 1119948 kubeadm.go:1160] stopping kube-system containers ...
	I0729 19:45:09.156113 1119948 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0729 19:45:09.156171 1119948 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 19:45:09.204791 1119948 cri.go:89] found id: ""
	I0729 19:45:09.204881 1119948 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0729 19:45:09.222988 1119948 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 19:45:09.234800 1119948 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 19:45:09.234825 1119948 kubeadm.go:157] found existing configuration files:
	
	I0729 19:45:09.234898 1119948 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 19:45:09.244868 1119948 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 19:45:09.244931 1119948 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 19:45:09.255368 1119948 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 19:45:09.265442 1119948 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 19:45:09.265515 1119948 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 19:45:09.276827 1119948 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 19:45:09.287989 1119948 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 19:45:09.288057 1119948 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 19:45:09.297736 1119948 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 19:45:09.307856 1119948 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 19:45:09.307923 1119948 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 19:45:09.318101 1119948 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 19:45:09.328189 1119948 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 19:45:09.441974 1119948 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 19:45:10.593961 1119948 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.151939649s)
	I0729 19:45:10.594045 1119948 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0729 19:45:10.807397 1119948 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 19:45:10.880145 1119948 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0729 19:45:10.962104 1119948 api_server.go:52] waiting for apiserver process to appear ...
	I0729 19:45:10.962209 1119948 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:11.462937 1119948 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:11.962909 1119948 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:12.006882 1119948 api_server.go:72] duration metric: took 1.044780287s to wait for apiserver process to appear ...
	I0729 19:45:12.006918 1119948 api_server.go:88] waiting for apiserver healthz status ...
	I0729 19:45:12.006945 1119948 api_server.go:253] Checking apiserver healthz at https://192.168.50.248:8443/healthz ...
	I0729 19:45:12.007577 1119948 api_server.go:269] stopped: https://192.168.50.248:8443/healthz: Get "https://192.168.50.248:8443/healthz": dial tcp 192.168.50.248:8443: connect: connection refused
	I0729 19:45:12.507374 1119948 api_server.go:253] Checking apiserver healthz at https://192.168.50.248:8443/healthz ...
	I0729 19:45:10.637684 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:12.638011 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:14.638569 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:13.278494 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:15.778675 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:15.042675 1119948 api_server.go:279] https://192.168.50.248:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 19:45:15.042710 1119948 api_server.go:103] status: https://192.168.50.248:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 19:45:15.042731 1119948 api_server.go:253] Checking apiserver healthz at https://192.168.50.248:8443/healthz ...
	I0729 19:45:15.090118 1119948 api_server.go:279] https://192.168.50.248:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 19:45:15.090151 1119948 api_server.go:103] status: https://192.168.50.248:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 19:45:15.507702 1119948 api_server.go:253] Checking apiserver healthz at https://192.168.50.248:8443/healthz ...
	I0729 19:45:15.512794 1119948 api_server.go:279] https://192.168.50.248:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 19:45:15.512822 1119948 api_server.go:103] status: https://192.168.50.248:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 19:45:16.008064 1119948 api_server.go:253] Checking apiserver healthz at https://192.168.50.248:8443/healthz ...
	I0729 19:45:16.018543 1119948 api_server.go:279] https://192.168.50.248:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 19:45:16.018578 1119948 api_server.go:103] status: https://192.168.50.248:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 19:45:16.508055 1119948 api_server.go:253] Checking apiserver healthz at https://192.168.50.248:8443/healthz ...
	I0729 19:45:16.519925 1119948 api_server.go:279] https://192.168.50.248:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 19:45:16.519954 1119948 api_server.go:103] status: https://192.168.50.248:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 19:45:17.007959 1119948 api_server.go:253] Checking apiserver healthz at https://192.168.50.248:8443/healthz ...
	I0729 19:45:17.013159 1119948 api_server.go:279] https://192.168.50.248:8443/healthz returned 200:
	ok
	I0729 19:45:17.022691 1119948 api_server.go:141] control plane version: v1.31.0-beta.0
	I0729 19:45:17.022726 1119948 api_server.go:131] duration metric: took 5.015799715s to wait for apiserver health ...
	I0729 19:45:17.022737 1119948 cni.go:84] Creating CNI manager for ""
	I0729 19:45:17.022746 1119948 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 19:45:17.024618 1119948 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 19:45:13.281834 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:13.781128 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:14.281372 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:14.781037 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:15.281715 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:15.781353 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:16.281845 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:16.781224 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:17.281710 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:17.781353 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:17.025951 1119948 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 19:45:17.037020 1119948 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0729 19:45:17.075438 1119948 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 19:45:17.098501 1119948 system_pods.go:59] 8 kube-system pods found
	I0729 19:45:17.098541 1119948 system_pods.go:61] "coredns-5cfdc65f69-j6m2k" [1fb28c80-116d-46b7-a939-6ff4ffa80883] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0729 19:45:17.098549 1119948 system_pods.go:61] "etcd-no-preload-843792" [68470ab3-9513-4504-9d1e-dbb896b8ae6b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0729 19:45:17.098557 1119948 system_pods.go:61] "kube-apiserver-no-preload-843792" [6cc37d70-bc14-4a06-987d-320a2a11b533] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0729 19:45:17.098563 1119948 system_pods.go:61] "kube-controller-manager-no-preload-843792" [5c115624-c9e9-4019-9783-35cc825fb1df] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0729 19:45:17.098570 1119948 system_pods.go:61] "kube-proxy-6kzvz" [4f0006c3-1172-48b6-8631-643090032c58] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0729 19:45:17.098579 1119948 system_pods.go:61] "kube-scheduler-no-preload-843792" [5c2a4c59-a525-4246-9d11-50fddef53815] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0729 19:45:17.098584 1119948 system_pods.go:61] "metrics-server-78fcd8795b-pcx9w" [7d138038-71ad-4279-9562-f3864d5a0024] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 19:45:17.098591 1119948 system_pods.go:61] "storage-provisioner" [289822fa-8ed4-4abe-970e-8b6d9a9fa51e] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0729 19:45:17.098598 1119948 system_pods.go:74] duration metric: took 23.126612ms to wait for pod list to return data ...
	I0729 19:45:17.098610 1119948 node_conditions.go:102] verifying NodePressure condition ...
	I0729 19:45:17.125364 1119948 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 19:45:17.125395 1119948 node_conditions.go:123] node cpu capacity is 2
	I0729 19:45:17.125405 1119948 node_conditions.go:105] duration metric: took 26.790642ms to run NodePressure ...
	I0729 19:45:17.125425 1119948 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 19:45:17.467261 1119948 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0729 19:45:17.478831 1119948 kubeadm.go:739] kubelet initialised
	I0729 19:45:17.478871 1119948 kubeadm.go:740] duration metric: took 11.576985ms waiting for restarted kubelet to initialise ...
	I0729 19:45:17.478883 1119948 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 19:45:17.483948 1119948 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5cfdc65f69-j6m2k" in "kube-system" namespace to be "Ready" ...
	I0729 19:45:16.639536 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:18.641996 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:18.279857 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:20.779054 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:18.281504 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:18.781826 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:19.281901 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:19.782011 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:20.281384 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:20.781352 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:21.281834 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:21.781603 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:22.281152 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:22.781351 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:19.493011 1119948 pod_ready.go:102] pod "coredns-5cfdc65f69-j6m2k" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:21.992979 1119948 pod_ready.go:102] pod "coredns-5cfdc65f69-j6m2k" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:21.139438 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:23.636771 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:22.779640 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:24.780814 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:23.281111 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:23.781931 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:24.281455 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:24.781346 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:25.281633 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:25.781092 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:26.281145 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:26.781235 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:27.281327 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:27.781099 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:24.491231 1119948 pod_ready.go:102] pod "coredns-5cfdc65f69-j6m2k" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:26.991237 1119948 pod_ready.go:102] pod "coredns-5cfdc65f69-j6m2k" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:28.490384 1119948 pod_ready.go:92] pod "coredns-5cfdc65f69-j6m2k" in "kube-system" namespace has status "Ready":"True"
	I0729 19:45:28.490413 1119948 pod_ready.go:81] duration metric: took 11.006435855s for pod "coredns-5cfdc65f69-j6m2k" in "kube-system" namespace to be "Ready" ...
	I0729 19:45:28.490425 1119948 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-843792" in "kube-system" namespace to be "Ready" ...
	I0729 19:45:28.495144 1119948 pod_ready.go:92] pod "etcd-no-preload-843792" in "kube-system" namespace has status "Ready":"True"
	I0729 19:45:28.495168 1119948 pod_ready.go:81] duration metric: took 4.736893ms for pod "etcd-no-preload-843792" in "kube-system" namespace to be "Ready" ...
	I0729 19:45:28.495177 1119948 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-843792" in "kube-system" namespace to be "Ready" ...
	I0729 19:45:28.499249 1119948 pod_ready.go:92] pod "kube-apiserver-no-preload-843792" in "kube-system" namespace has status "Ready":"True"
	I0729 19:45:28.499272 1119948 pod_ready.go:81] duration metric: took 4.089379ms for pod "kube-apiserver-no-preload-843792" in "kube-system" namespace to be "Ready" ...
	I0729 19:45:28.499280 1119948 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-843792" in "kube-system" namespace to be "Ready" ...
	I0729 19:45:25.637886 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:28.138043 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:27.279850 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:29.778397 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:28.281600 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:28.781033 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:29.281086 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:29.781358 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:30.281478 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:30.781094 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:31.281816 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:31.781092 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:32.281012 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:32.781266 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:29.505726 1119948 pod_ready.go:92] pod "kube-controller-manager-no-preload-843792" in "kube-system" namespace has status "Ready":"True"
	I0729 19:45:29.505752 1119948 pod_ready.go:81] duration metric: took 1.0064644s for pod "kube-controller-manager-no-preload-843792" in "kube-system" namespace to be "Ready" ...
	I0729 19:45:29.505764 1119948 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-6kzvz" in "kube-system" namespace to be "Ready" ...
	I0729 19:45:29.510705 1119948 pod_ready.go:92] pod "kube-proxy-6kzvz" in "kube-system" namespace has status "Ready":"True"
	I0729 19:45:29.510725 1119948 pod_ready.go:81] duration metric: took 4.953497ms for pod "kube-proxy-6kzvz" in "kube-system" namespace to be "Ready" ...
	I0729 19:45:29.510735 1119948 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-843792" in "kube-system" namespace to be "Ready" ...
	I0729 19:45:29.688555 1119948 pod_ready.go:92] pod "kube-scheduler-no-preload-843792" in "kube-system" namespace has status "Ready":"True"
	I0729 19:45:29.688579 1119948 pod_ready.go:81] duration metric: took 177.837031ms for pod "kube-scheduler-no-preload-843792" in "kube-system" namespace to be "Ready" ...
	I0729 19:45:29.688593 1119948 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace to be "Ready" ...
	I0729 19:45:31.695505 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:30.637213 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:32.638747 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:31.778641 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:34.277964 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:33.281410 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:33.781923 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:34.281471 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:34.781303 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:35.281404 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:35.781727 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:36.281960 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:36.781632 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:37.281624 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:37.781232 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:34.196033 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:36.697003 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:35.137135 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:37.137857 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:39.138563 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:36.278607 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:38.278960 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:40.280428 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:38.281103 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:38.781134 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:39.281907 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:39.781863 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:40.281104 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:40.781928 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:41.281757 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:45:41.281864 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:45:41.322903 1120970 cri.go:89] found id: ""
	I0729 19:45:41.322929 1120970 logs.go:276] 0 containers: []
	W0729 19:45:41.322938 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:45:41.322945 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:45:41.323016 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:45:41.359651 1120970 cri.go:89] found id: ""
	I0729 19:45:41.359679 1120970 logs.go:276] 0 containers: []
	W0729 19:45:41.359687 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:45:41.359692 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:45:41.359744 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:45:41.402317 1120970 cri.go:89] found id: ""
	I0729 19:45:41.402358 1120970 logs.go:276] 0 containers: []
	W0729 19:45:41.402370 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:45:41.402380 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:45:41.402454 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:45:41.438796 1120970 cri.go:89] found id: ""
	I0729 19:45:41.438823 1120970 logs.go:276] 0 containers: []
	W0729 19:45:41.438833 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:45:41.438839 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:45:41.438931 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:45:41.477648 1120970 cri.go:89] found id: ""
	I0729 19:45:41.477677 1120970 logs.go:276] 0 containers: []
	W0729 19:45:41.477685 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:45:41.477692 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:45:41.477761 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:45:41.517603 1120970 cri.go:89] found id: ""
	I0729 19:45:41.517635 1120970 logs.go:276] 0 containers: []
	W0729 19:45:41.517646 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:45:41.517654 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:45:41.517727 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:45:41.553106 1120970 cri.go:89] found id: ""
	I0729 19:45:41.553140 1120970 logs.go:276] 0 containers: []
	W0729 19:45:41.553151 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:45:41.553158 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:45:41.553226 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:45:41.595007 1120970 cri.go:89] found id: ""
	I0729 19:45:41.595035 1120970 logs.go:276] 0 containers: []
	W0729 19:45:41.595044 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:45:41.595054 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:45:41.595069 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:45:41.634927 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:45:41.634966 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:45:41.685871 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:45:41.685906 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:45:41.700701 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:45:41.700735 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:45:41.816575 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:45:41.816598 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:45:41.816611 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:45:39.199863 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:41.200303 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:43.695592 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:41.637651 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:44.138141 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:42.778550 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:44.779186 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:44.396592 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:44.410567 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:45:44.410644 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:45:44.447450 1120970 cri.go:89] found id: ""
	I0729 19:45:44.447487 1120970 logs.go:276] 0 containers: []
	W0729 19:45:44.447499 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:45:44.447507 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:45:44.447579 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:45:44.487679 1120970 cri.go:89] found id: ""
	I0729 19:45:44.487714 1120970 logs.go:276] 0 containers: []
	W0729 19:45:44.487725 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:45:44.487732 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:45:44.487806 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:45:44.527170 1120970 cri.go:89] found id: ""
	I0729 19:45:44.527211 1120970 logs.go:276] 0 containers: []
	W0729 19:45:44.527219 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:45:44.527226 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:45:44.527282 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:45:44.567585 1120970 cri.go:89] found id: ""
	I0729 19:45:44.567613 1120970 logs.go:276] 0 containers: []
	W0729 19:45:44.567622 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:45:44.567629 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:45:44.567680 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:45:44.605003 1120970 cri.go:89] found id: ""
	I0729 19:45:44.605031 1120970 logs.go:276] 0 containers: []
	W0729 19:45:44.605041 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:45:44.605049 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:45:44.605121 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:45:44.643862 1120970 cri.go:89] found id: ""
	I0729 19:45:44.643887 1120970 logs.go:276] 0 containers: []
	W0729 19:45:44.643894 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:45:44.643901 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:45:44.643950 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:45:44.679814 1120970 cri.go:89] found id: ""
	I0729 19:45:44.679845 1120970 logs.go:276] 0 containers: []
	W0729 19:45:44.679855 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:45:44.679862 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:45:44.679926 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:45:44.714679 1120970 cri.go:89] found id: ""
	I0729 19:45:44.714709 1120970 logs.go:276] 0 containers: []
	W0729 19:45:44.714719 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:45:44.714729 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:45:44.714747 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:45:44.766381 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:45:44.766424 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:45:44.782337 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:45:44.782369 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:45:44.854487 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:45:44.854509 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:45:44.854522 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:45:44.935043 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:45:44.935082 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:45:47.481158 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:47.496559 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:45:47.496649 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:45:47.531949 1120970 cri.go:89] found id: ""
	I0729 19:45:47.531981 1120970 logs.go:276] 0 containers: []
	W0729 19:45:47.531990 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:45:47.531996 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:45:47.532050 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:45:47.571424 1120970 cri.go:89] found id: ""
	I0729 19:45:47.571451 1120970 logs.go:276] 0 containers: []
	W0729 19:45:47.571459 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:45:47.571465 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:45:47.571517 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:45:47.610439 1120970 cri.go:89] found id: ""
	I0729 19:45:47.610474 1120970 logs.go:276] 0 containers: []
	W0729 19:45:47.610485 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:45:47.610494 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:45:47.610561 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:45:47.648351 1120970 cri.go:89] found id: ""
	I0729 19:45:47.648380 1120970 logs.go:276] 0 containers: []
	W0729 19:45:47.648388 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:45:47.648395 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:45:47.648458 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:45:47.686610 1120970 cri.go:89] found id: ""
	I0729 19:45:47.686646 1120970 logs.go:276] 0 containers: []
	W0729 19:45:47.686658 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:45:47.686667 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:45:47.686739 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:45:47.722870 1120970 cri.go:89] found id: ""
	I0729 19:45:47.722901 1120970 logs.go:276] 0 containers: []
	W0729 19:45:47.722909 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:45:47.722916 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:45:47.722978 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:45:47.757651 1120970 cri.go:89] found id: ""
	I0729 19:45:47.757690 1120970 logs.go:276] 0 containers: []
	W0729 19:45:47.757700 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:45:47.757709 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:45:47.757787 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:45:47.792737 1120970 cri.go:89] found id: ""
	I0729 19:45:47.792767 1120970 logs.go:276] 0 containers: []
	W0729 19:45:47.792776 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:45:47.792786 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:45:47.792799 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:45:47.867707 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:45:47.867734 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:45:47.867751 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:45:47.949876 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:45:47.949918 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:45:45.696302 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:48.194324 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:46.637438 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:48.637749 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:47.279986 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:49.778293 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:47.991014 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:45:47.991053 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:45:48.041713 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:45:48.041752 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:45:50.557028 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:50.571918 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:45:50.572012 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:45:50.608752 1120970 cri.go:89] found id: ""
	I0729 19:45:50.608783 1120970 logs.go:276] 0 containers: []
	W0729 19:45:50.608791 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:45:50.608798 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:45:50.608851 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:45:50.644225 1120970 cri.go:89] found id: ""
	I0729 19:45:50.644251 1120970 logs.go:276] 0 containers: []
	W0729 19:45:50.644261 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:45:50.644269 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:45:50.644357 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:45:50.680364 1120970 cri.go:89] found id: ""
	I0729 19:45:50.680400 1120970 logs.go:276] 0 containers: []
	W0729 19:45:50.680412 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:45:50.680420 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:45:50.680487 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:45:50.724418 1120970 cri.go:89] found id: ""
	I0729 19:45:50.724443 1120970 logs.go:276] 0 containers: []
	W0729 19:45:50.724451 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:45:50.724457 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:45:50.724513 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:45:50.768891 1120970 cri.go:89] found id: ""
	I0729 19:45:50.768924 1120970 logs.go:276] 0 containers: []
	W0729 19:45:50.768935 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:45:50.768943 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:45:50.769011 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:45:50.815814 1120970 cri.go:89] found id: ""
	I0729 19:45:50.815847 1120970 logs.go:276] 0 containers: []
	W0729 19:45:50.815858 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:45:50.815866 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:45:50.815935 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:45:50.856823 1120970 cri.go:89] found id: ""
	I0729 19:45:50.856856 1120970 logs.go:276] 0 containers: []
	W0729 19:45:50.856865 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:45:50.856871 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:45:50.856935 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:45:50.890567 1120970 cri.go:89] found id: ""
	I0729 19:45:50.890618 1120970 logs.go:276] 0 containers: []
	W0729 19:45:50.890631 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:45:50.890646 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:45:50.890662 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:45:50.944060 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:45:50.944095 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:45:50.957881 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:45:50.957912 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:45:51.036005 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:45:51.036033 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:45:51.036051 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:45:51.117269 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:45:51.117311 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:45:50.195926 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:52.197099 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:50.639185 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:53.138398 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:52.278704 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:54.279094 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:53.657518 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:53.671405 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:45:53.671499 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:45:53.713703 1120970 cri.go:89] found id: ""
	I0729 19:45:53.713734 1120970 logs.go:276] 0 containers: []
	W0729 19:45:53.713747 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:45:53.713755 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:45:53.713820 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:45:53.752821 1120970 cri.go:89] found id: ""
	I0729 19:45:53.752856 1120970 logs.go:276] 0 containers: []
	W0729 19:45:53.752867 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:45:53.752875 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:45:53.752930 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:45:53.792144 1120970 cri.go:89] found id: ""
	I0729 19:45:53.792172 1120970 logs.go:276] 0 containers: []
	W0729 19:45:53.792198 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:45:53.792204 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:45:53.792264 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:45:53.831123 1120970 cri.go:89] found id: ""
	I0729 19:45:53.831151 1120970 logs.go:276] 0 containers: []
	W0729 19:45:53.831161 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:45:53.831168 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:45:53.831223 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:45:53.870716 1120970 cri.go:89] found id: ""
	I0729 19:45:53.870747 1120970 logs.go:276] 0 containers: []
	W0729 19:45:53.870758 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:45:53.870766 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:45:53.870831 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:45:53.909567 1120970 cri.go:89] found id: ""
	I0729 19:45:53.909602 1120970 logs.go:276] 0 containers: []
	W0729 19:45:53.909611 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:45:53.909619 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:45:53.909679 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:45:53.944134 1120970 cri.go:89] found id: ""
	I0729 19:45:53.944167 1120970 logs.go:276] 0 containers: []
	W0729 19:45:53.944179 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:45:53.944188 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:45:53.944249 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:45:53.979274 1120970 cri.go:89] found id: ""
	I0729 19:45:53.979307 1120970 logs.go:276] 0 containers: []
	W0729 19:45:53.979319 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:45:53.979330 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:45:53.979347 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:45:54.027783 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:45:54.027822 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:45:54.079319 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:45:54.079368 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:45:54.094387 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:45:54.094420 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:45:54.170700 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:45:54.170723 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:45:54.170737 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:45:56.756947 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:56.775456 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:45:56.775539 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:45:56.830999 1120970 cri.go:89] found id: ""
	I0729 19:45:56.831035 1120970 logs.go:276] 0 containers: []
	W0729 19:45:56.831046 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:45:56.831054 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:45:56.831144 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:45:56.868006 1120970 cri.go:89] found id: ""
	I0729 19:45:56.868039 1120970 logs.go:276] 0 containers: []
	W0729 19:45:56.868057 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:45:56.868065 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:45:56.868145 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:45:56.905275 1120970 cri.go:89] found id: ""
	I0729 19:45:56.905311 1120970 logs.go:276] 0 containers: []
	W0729 19:45:56.905322 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:45:56.905330 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:45:56.905401 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:45:56.938507 1120970 cri.go:89] found id: ""
	I0729 19:45:56.938537 1120970 logs.go:276] 0 containers: []
	W0729 19:45:56.938546 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:45:56.938553 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:45:56.938607 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:45:56.974424 1120970 cri.go:89] found id: ""
	I0729 19:45:56.974456 1120970 logs.go:276] 0 containers: []
	W0729 19:45:56.974468 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:45:56.974476 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:45:56.974543 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:45:57.008152 1120970 cri.go:89] found id: ""
	I0729 19:45:57.008191 1120970 logs.go:276] 0 containers: []
	W0729 19:45:57.008203 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:45:57.008211 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:45:57.008281 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:45:57.043904 1120970 cri.go:89] found id: ""
	I0729 19:45:57.043942 1120970 logs.go:276] 0 containers: []
	W0729 19:45:57.043953 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:45:57.043961 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:45:57.044038 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:45:57.078239 1120970 cri.go:89] found id: ""
	I0729 19:45:57.078268 1120970 logs.go:276] 0 containers: []
	W0729 19:45:57.078277 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:45:57.078286 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:45:57.078299 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:45:57.125135 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:45:57.125170 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:45:57.177926 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:45:57.177968 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:45:57.192316 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:45:57.192354 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:45:57.267034 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:45:57.267059 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:45:57.267078 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:45:54.213977 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:56.695532 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:55.637424 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:58.137534 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:56.780087 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:59.278164 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:59.849254 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:59.863328 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:45:59.863437 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:45:59.900024 1120970 cri.go:89] found id: ""
	I0729 19:45:59.900051 1120970 logs.go:276] 0 containers: []
	W0729 19:45:59.900060 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:45:59.900067 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:45:59.900128 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:45:59.935272 1120970 cri.go:89] found id: ""
	I0729 19:45:59.935308 1120970 logs.go:276] 0 containers: []
	W0729 19:45:59.935319 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:45:59.935328 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:45:59.935404 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:45:59.967684 1120970 cri.go:89] found id: ""
	I0729 19:45:59.967712 1120970 logs.go:276] 0 containers: []
	W0729 19:45:59.967725 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:45:59.967733 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:45:59.967791 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:46:00.003354 1120970 cri.go:89] found id: ""
	I0729 19:46:00.003386 1120970 logs.go:276] 0 containers: []
	W0729 19:46:00.003397 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:46:00.003404 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:46:00.003479 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:46:00.042266 1120970 cri.go:89] found id: ""
	I0729 19:46:00.042311 1120970 logs.go:276] 0 containers: []
	W0729 19:46:00.042330 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:46:00.042344 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:46:00.042419 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:46:00.081056 1120970 cri.go:89] found id: ""
	I0729 19:46:00.081085 1120970 logs.go:276] 0 containers: []
	W0729 19:46:00.081095 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:46:00.081102 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:46:00.081179 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:46:00.114102 1120970 cri.go:89] found id: ""
	I0729 19:46:00.114138 1120970 logs.go:276] 0 containers: []
	W0729 19:46:00.114153 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:46:00.114161 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:46:00.114229 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:46:00.152891 1120970 cri.go:89] found id: ""
	I0729 19:46:00.152919 1120970 logs.go:276] 0 containers: []
	W0729 19:46:00.152930 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:46:00.152942 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:46:00.152961 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:46:00.225895 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:46:00.225926 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:46:00.225944 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:46:00.306359 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:46:00.306397 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:46:00.348266 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:46:00.348305 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:46:00.401402 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:46:00.401452 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:46:02.917392 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:46:02.931221 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:46:02.931308 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:46:02.965808 1120970 cri.go:89] found id: ""
	I0729 19:46:02.965839 1120970 logs.go:276] 0 containers: []
	W0729 19:46:02.965850 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:46:02.965857 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:46:02.965924 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:45:59.195460 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:01.195742 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:03.196310 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:00.138417 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:02.637927 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:01.278771 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:03.279480 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:05.778549 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:03.003125 1120970 cri.go:89] found id: ""
	I0729 19:46:03.003152 1120970 logs.go:276] 0 containers: []
	W0729 19:46:03.003161 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:46:03.003168 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:46:03.003222 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:46:03.042782 1120970 cri.go:89] found id: ""
	I0729 19:46:03.042816 1120970 logs.go:276] 0 containers: []
	W0729 19:46:03.042827 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:46:03.042835 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:46:03.042922 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:46:03.082857 1120970 cri.go:89] found id: ""
	I0729 19:46:03.082891 1120970 logs.go:276] 0 containers: []
	W0729 19:46:03.082910 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:46:03.082918 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:46:03.082975 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:46:03.118096 1120970 cri.go:89] found id: ""
	I0729 19:46:03.118127 1120970 logs.go:276] 0 containers: []
	W0729 19:46:03.118147 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:46:03.118156 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:46:03.118228 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:46:03.155950 1120970 cri.go:89] found id: ""
	I0729 19:46:03.155983 1120970 logs.go:276] 0 containers: []
	W0729 19:46:03.155995 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:46:03.156003 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:46:03.156076 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:46:03.192698 1120970 cri.go:89] found id: ""
	I0729 19:46:03.192729 1120970 logs.go:276] 0 containers: []
	W0729 19:46:03.192741 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:46:03.192749 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:46:03.192822 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:46:03.230228 1120970 cri.go:89] found id: ""
	I0729 19:46:03.230261 1120970 logs.go:276] 0 containers: []
	W0729 19:46:03.230275 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:46:03.230292 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:46:03.230310 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:46:03.269169 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:46:03.269204 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:46:03.325724 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:46:03.325765 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:46:03.339955 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:46:03.339986 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:46:03.415795 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:46:03.415823 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:46:03.415839 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:46:06.002947 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:46:06.017334 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:46:06.017422 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:46:06.051132 1120970 cri.go:89] found id: ""
	I0729 19:46:06.051161 1120970 logs.go:276] 0 containers: []
	W0729 19:46:06.051169 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:46:06.051182 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:46:06.051248 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:46:06.085156 1120970 cri.go:89] found id: ""
	I0729 19:46:06.085185 1120970 logs.go:276] 0 containers: []
	W0729 19:46:06.085194 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:46:06.085200 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:46:06.085252 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:46:06.122263 1120970 cri.go:89] found id: ""
	I0729 19:46:06.122296 1120970 logs.go:276] 0 containers: []
	W0729 19:46:06.122303 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:46:06.122309 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:46:06.122377 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:46:06.158066 1120970 cri.go:89] found id: ""
	I0729 19:46:06.158093 1120970 logs.go:276] 0 containers: []
	W0729 19:46:06.158102 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:46:06.158109 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:46:06.158161 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:46:06.193082 1120970 cri.go:89] found id: ""
	I0729 19:46:06.193109 1120970 logs.go:276] 0 containers: []
	W0729 19:46:06.193117 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:46:06.193125 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:46:06.193188 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:46:06.226239 1120970 cri.go:89] found id: ""
	I0729 19:46:06.226276 1120970 logs.go:276] 0 containers: []
	W0729 19:46:06.226285 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:46:06.226292 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:46:06.226346 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:46:06.262648 1120970 cri.go:89] found id: ""
	I0729 19:46:06.262686 1120970 logs.go:276] 0 containers: []
	W0729 19:46:06.262697 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:46:06.262703 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:46:06.262769 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:46:06.304018 1120970 cri.go:89] found id: ""
	I0729 19:46:06.304047 1120970 logs.go:276] 0 containers: []
	W0729 19:46:06.304056 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:46:06.304066 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:46:06.304078 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:46:06.345240 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:46:06.345269 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:46:06.399728 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:46:06.399768 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:46:06.415271 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:46:06.415312 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:46:06.492320 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:46:06.492342 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:46:06.492361 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:46:05.695149 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:08.196040 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:05.136979 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:07.137588 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:09.140728 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:08.278537 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:10.278751 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:09.070966 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:46:09.084876 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:46:09.084957 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:46:09.123177 1120970 cri.go:89] found id: ""
	I0729 19:46:09.123209 1120970 logs.go:276] 0 containers: []
	W0729 19:46:09.123220 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:46:09.123227 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:46:09.123300 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:46:09.162546 1120970 cri.go:89] found id: ""
	I0729 19:46:09.162593 1120970 logs.go:276] 0 containers: []
	W0729 19:46:09.162605 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:46:09.162614 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:46:09.162682 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:46:09.198047 1120970 cri.go:89] found id: ""
	I0729 19:46:09.198075 1120970 logs.go:276] 0 containers: []
	W0729 19:46:09.198084 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:46:09.198091 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:46:09.198165 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:46:09.231929 1120970 cri.go:89] found id: ""
	I0729 19:46:09.231962 1120970 logs.go:276] 0 containers: []
	W0729 19:46:09.231973 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:46:09.231982 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:46:09.232051 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:46:09.269543 1120970 cri.go:89] found id: ""
	I0729 19:46:09.269574 1120970 logs.go:276] 0 containers: []
	W0729 19:46:09.269585 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:46:09.269593 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:46:09.269665 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:46:09.304012 1120970 cri.go:89] found id: ""
	I0729 19:46:09.304042 1120970 logs.go:276] 0 containers: []
	W0729 19:46:09.304051 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:46:09.304057 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:46:09.304110 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:46:09.340266 1120970 cri.go:89] found id: ""
	I0729 19:46:09.340302 1120970 logs.go:276] 0 containers: []
	W0729 19:46:09.340315 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:46:09.340323 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:46:09.340402 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:46:09.373855 1120970 cri.go:89] found id: ""
	I0729 19:46:09.373884 1120970 logs.go:276] 0 containers: []
	W0729 19:46:09.373892 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:46:09.373902 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:46:09.373916 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:46:09.434007 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:46:09.434047 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:46:09.448138 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:46:09.448168 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:46:09.523836 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:46:09.523866 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:46:09.523884 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:46:09.605562 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:46:09.605602 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:46:12.147573 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:46:12.162219 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:46:12.162307 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:46:12.197420 1120970 cri.go:89] found id: ""
	I0729 19:46:12.197446 1120970 logs.go:276] 0 containers: []
	W0729 19:46:12.197454 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:46:12.197460 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:46:12.197511 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:46:12.236008 1120970 cri.go:89] found id: ""
	I0729 19:46:12.236042 1120970 logs.go:276] 0 containers: []
	W0729 19:46:12.236052 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:46:12.236058 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:46:12.236125 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:46:12.279184 1120970 cri.go:89] found id: ""
	I0729 19:46:12.279208 1120970 logs.go:276] 0 containers: []
	W0729 19:46:12.279216 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:46:12.279222 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:46:12.279273 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:46:12.319020 1120970 cri.go:89] found id: ""
	I0729 19:46:12.319061 1120970 logs.go:276] 0 containers: []
	W0729 19:46:12.319072 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:46:12.319083 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:46:12.319140 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:46:12.354552 1120970 cri.go:89] found id: ""
	I0729 19:46:12.354591 1120970 logs.go:276] 0 containers: []
	W0729 19:46:12.354600 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:46:12.354606 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:46:12.354664 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:46:12.389196 1120970 cri.go:89] found id: ""
	I0729 19:46:12.389232 1120970 logs.go:276] 0 containers: []
	W0729 19:46:12.389242 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:46:12.389251 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:46:12.389351 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:46:12.425713 1120970 cri.go:89] found id: ""
	I0729 19:46:12.425751 1120970 logs.go:276] 0 containers: []
	W0729 19:46:12.425767 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:46:12.425776 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:46:12.425851 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:46:12.461092 1120970 cri.go:89] found id: ""
	I0729 19:46:12.461123 1120970 logs.go:276] 0 containers: []
	W0729 19:46:12.461132 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:46:12.461142 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:46:12.461162 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:46:12.537550 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:46:12.537594 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:46:12.578558 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:46:12.578597 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:46:12.629269 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:46:12.629310 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:46:12.644176 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:46:12.644202 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:46:12.717070 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:46:10.695776 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:12.696260 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:11.637812 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:14.137356 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:12.778309 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:15.278853 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:15.218239 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:46:15.232163 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:46:15.232236 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:46:15.268490 1120970 cri.go:89] found id: ""
	I0729 19:46:15.268520 1120970 logs.go:276] 0 containers: []
	W0729 19:46:15.268532 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:46:15.268539 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:46:15.268621 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:46:15.303437 1120970 cri.go:89] found id: ""
	I0729 19:46:15.303473 1120970 logs.go:276] 0 containers: []
	W0729 19:46:15.303485 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:46:15.303493 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:46:15.303557 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:46:15.340676 1120970 cri.go:89] found id: ""
	I0729 19:46:15.340706 1120970 logs.go:276] 0 containers: []
	W0729 19:46:15.340717 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:46:15.340725 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:46:15.340798 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:46:15.376731 1120970 cri.go:89] found id: ""
	I0729 19:46:15.376764 1120970 logs.go:276] 0 containers: []
	W0729 19:46:15.376775 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:46:15.376783 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:46:15.376854 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:46:15.412493 1120970 cri.go:89] found id: ""
	I0729 19:46:15.412524 1120970 logs.go:276] 0 containers: []
	W0729 19:46:15.412533 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:46:15.412541 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:46:15.412614 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:46:15.448795 1120970 cri.go:89] found id: ""
	I0729 19:46:15.448830 1120970 logs.go:276] 0 containers: []
	W0729 19:46:15.448842 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:46:15.448850 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:46:15.448923 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:46:15.484048 1120970 cri.go:89] found id: ""
	I0729 19:46:15.484082 1120970 logs.go:276] 0 containers: []
	W0729 19:46:15.484100 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:46:15.484108 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:46:15.484172 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:46:15.520340 1120970 cri.go:89] found id: ""
	I0729 19:46:15.520370 1120970 logs.go:276] 0 containers: []
	W0729 19:46:15.520380 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:46:15.520389 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:46:15.520408 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:46:15.568837 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:46:15.568877 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:46:15.582958 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:46:15.582993 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:46:15.653880 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:46:15.653901 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:46:15.653920 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:46:15.732652 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:46:15.732691 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:46:15.194855 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:17.196069 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:16.137961 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:18.139896 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:17.779000 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:19.779635 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:18.273795 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:46:18.288991 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:46:18.289066 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:46:18.327583 1120970 cri.go:89] found id: ""
	I0729 19:46:18.327619 1120970 logs.go:276] 0 containers: []
	W0729 19:46:18.327631 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:46:18.327639 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:46:18.327716 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:46:18.361476 1120970 cri.go:89] found id: ""
	I0729 19:46:18.361504 1120970 logs.go:276] 0 containers: []
	W0729 19:46:18.361515 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:46:18.361523 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:46:18.361590 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:46:18.401842 1120970 cri.go:89] found id: ""
	I0729 19:46:18.401873 1120970 logs.go:276] 0 containers: []
	W0729 19:46:18.401884 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:46:18.401892 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:46:18.401965 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:46:18.439870 1120970 cri.go:89] found id: ""
	I0729 19:46:18.439905 1120970 logs.go:276] 0 containers: []
	W0729 19:46:18.439920 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:46:18.439929 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:46:18.440015 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:46:18.474916 1120970 cri.go:89] found id: ""
	I0729 19:46:18.474944 1120970 logs.go:276] 0 containers: []
	W0729 19:46:18.474953 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:46:18.474960 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:46:18.475033 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:46:18.509957 1120970 cri.go:89] found id: ""
	I0729 19:46:18.509984 1120970 logs.go:276] 0 containers: []
	W0729 19:46:18.509993 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:46:18.509999 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:46:18.510064 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:46:18.545521 1120970 cri.go:89] found id: ""
	I0729 19:46:18.545551 1120970 logs.go:276] 0 containers: []
	W0729 19:46:18.545564 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:46:18.545573 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:46:18.545646 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:46:18.579041 1120970 cri.go:89] found id: ""
	I0729 19:46:18.579072 1120970 logs.go:276] 0 containers: []
	W0729 19:46:18.579080 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:46:18.579091 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:46:18.579104 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:46:18.653041 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:46:18.653063 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:46:18.653077 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:46:18.732969 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:46:18.733035 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:46:18.773700 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:46:18.773735 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:46:18.826511 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:46:18.826553 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:46:21.340974 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:46:21.354608 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:46:21.354671 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:46:21.388765 1120970 cri.go:89] found id: ""
	I0729 19:46:21.388795 1120970 logs.go:276] 0 containers: []
	W0729 19:46:21.388806 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:46:21.388814 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:46:21.388909 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:46:21.426734 1120970 cri.go:89] found id: ""
	I0729 19:46:21.426764 1120970 logs.go:276] 0 containers: []
	W0729 19:46:21.426776 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:46:21.426784 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:46:21.426861 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:46:21.462965 1120970 cri.go:89] found id: ""
	I0729 19:46:21.462999 1120970 logs.go:276] 0 containers: []
	W0729 19:46:21.463010 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:46:21.463018 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:46:21.463087 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:46:21.496933 1120970 cri.go:89] found id: ""
	I0729 19:46:21.496961 1120970 logs.go:276] 0 containers: []
	W0729 19:46:21.496972 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:46:21.496980 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:46:21.497043 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:46:21.532648 1120970 cri.go:89] found id: ""
	I0729 19:46:21.532682 1120970 logs.go:276] 0 containers: []
	W0729 19:46:21.532695 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:46:21.532703 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:46:21.532777 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:46:21.566507 1120970 cri.go:89] found id: ""
	I0729 19:46:21.566545 1120970 logs.go:276] 0 containers: []
	W0729 19:46:21.566556 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:46:21.566567 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:46:21.566652 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:46:21.605591 1120970 cri.go:89] found id: ""
	I0729 19:46:21.605624 1120970 logs.go:276] 0 containers: []
	W0729 19:46:21.605635 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:46:21.605644 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:46:21.605711 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:46:21.639979 1120970 cri.go:89] found id: ""
	I0729 19:46:21.640004 1120970 logs.go:276] 0 containers: []
	W0729 19:46:21.640012 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:46:21.640020 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:46:21.640035 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:46:21.694405 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:46:21.694450 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:46:21.708616 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:46:21.708647 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:46:21.778528 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:46:21.778567 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:46:21.778583 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:46:21.859626 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:46:21.859661 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:46:19.696385 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:22.195265 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:20.638331 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:23.138907 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:21.779848 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:24.278815 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:24.397520 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:46:24.412579 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:46:24.412673 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:46:24.452586 1120970 cri.go:89] found id: ""
	I0729 19:46:24.452621 1120970 logs.go:276] 0 containers: []
	W0729 19:46:24.452633 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:46:24.452640 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:46:24.452856 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:46:24.487706 1120970 cri.go:89] found id: ""
	I0729 19:46:24.487739 1120970 logs.go:276] 0 containers: []
	W0729 19:46:24.487750 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:46:24.487758 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:46:24.487828 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:46:24.528798 1120970 cri.go:89] found id: ""
	I0729 19:46:24.528832 1120970 logs.go:276] 0 containers: []
	W0729 19:46:24.528844 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:46:24.528852 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:46:24.528926 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:46:24.566429 1120970 cri.go:89] found id: ""
	I0729 19:46:24.566464 1120970 logs.go:276] 0 containers: []
	W0729 19:46:24.566484 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:46:24.566497 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:46:24.566561 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:46:24.601216 1120970 cri.go:89] found id: ""
	I0729 19:46:24.601242 1120970 logs.go:276] 0 containers: []
	W0729 19:46:24.601249 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:46:24.601255 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:46:24.601318 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:46:24.635591 1120970 cri.go:89] found id: ""
	I0729 19:46:24.635636 1120970 logs.go:276] 0 containers: []
	W0729 19:46:24.635648 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:46:24.635655 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:46:24.635723 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:46:24.670674 1120970 cri.go:89] found id: ""
	I0729 19:46:24.670705 1120970 logs.go:276] 0 containers: []
	W0729 19:46:24.670717 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:46:24.670724 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:46:24.670795 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:46:24.704820 1120970 cri.go:89] found id: ""
	I0729 19:46:24.704850 1120970 logs.go:276] 0 containers: []
	W0729 19:46:24.704861 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:46:24.704873 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:46:24.704889 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:46:24.787954 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:46:24.787989 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:46:24.849396 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:46:24.849433 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:46:24.900920 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:46:24.900956 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:46:24.915540 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:46:24.915572 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:46:24.986146 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:46:27.487069 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:46:27.500718 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:46:27.500802 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:46:27.535156 1120970 cri.go:89] found id: ""
	I0729 19:46:27.535188 1120970 logs.go:276] 0 containers: []
	W0729 19:46:27.535199 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:46:27.535206 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:46:27.535272 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:46:27.570613 1120970 cri.go:89] found id: ""
	I0729 19:46:27.570647 1120970 logs.go:276] 0 containers: []
	W0729 19:46:27.570658 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:46:27.570666 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:46:27.570726 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:46:27.605503 1120970 cri.go:89] found id: ""
	I0729 19:46:27.605540 1120970 logs.go:276] 0 containers: []
	W0729 19:46:27.605552 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:46:27.605560 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:46:27.605628 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:46:27.638179 1120970 cri.go:89] found id: ""
	I0729 19:46:27.638202 1120970 logs.go:276] 0 containers: []
	W0729 19:46:27.638209 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:46:27.638215 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:46:27.638265 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:46:27.671019 1120970 cri.go:89] found id: ""
	I0729 19:46:27.671049 1120970 logs.go:276] 0 containers: []
	W0729 19:46:27.671059 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:46:27.671067 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:46:27.671133 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:46:27.704126 1120970 cri.go:89] found id: ""
	I0729 19:46:27.704148 1120970 logs.go:276] 0 containers: []
	W0729 19:46:27.704155 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:46:27.704161 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:46:27.704217 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:46:27.736106 1120970 cri.go:89] found id: ""
	I0729 19:46:27.736137 1120970 logs.go:276] 0 containers: []
	W0729 19:46:27.736148 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:46:27.736162 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:46:27.736234 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:46:27.775615 1120970 cri.go:89] found id: ""
	I0729 19:46:27.775644 1120970 logs.go:276] 0 containers: []
	W0729 19:46:27.775655 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:46:27.775666 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:46:27.775681 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:46:27.817852 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:46:27.817882 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:46:27.867280 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:46:27.867319 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:46:27.880533 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:46:27.880558 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:46:27.952098 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:46:27.952120 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:46:27.952138 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:46:24.195374 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:26.696327 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:25.637615 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:28.138222 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:26.779021 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:29.279227 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:30.534052 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:46:30.560617 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:46:30.560704 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:46:30.594317 1120970 cri.go:89] found id: ""
	I0729 19:46:30.594354 1120970 logs.go:276] 0 containers: []
	W0729 19:46:30.594365 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:46:30.594372 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:46:30.594438 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:46:30.629175 1120970 cri.go:89] found id: ""
	I0729 19:46:30.629202 1120970 logs.go:276] 0 containers: []
	W0729 19:46:30.629213 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:46:30.629278 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:46:30.629358 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:46:30.663173 1120970 cri.go:89] found id: ""
	I0729 19:46:30.663199 1120970 logs.go:276] 0 containers: []
	W0729 19:46:30.663207 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:46:30.663212 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:46:30.663271 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:46:30.695709 1120970 cri.go:89] found id: ""
	I0729 19:46:30.695729 1120970 logs.go:276] 0 containers: []
	W0729 19:46:30.695738 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:46:30.695745 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:46:30.695808 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:46:30.726555 1120970 cri.go:89] found id: ""
	I0729 19:46:30.726582 1120970 logs.go:276] 0 containers: []
	W0729 19:46:30.726589 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:46:30.726597 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:46:30.726658 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:46:30.759818 1120970 cri.go:89] found id: ""
	I0729 19:46:30.759847 1120970 logs.go:276] 0 containers: []
	W0729 19:46:30.759859 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:46:30.759865 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:46:30.759928 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:46:30.794006 1120970 cri.go:89] found id: ""
	I0729 19:46:30.794038 1120970 logs.go:276] 0 containers: []
	W0729 19:46:30.794051 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:46:30.794058 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:46:30.794127 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:46:30.825707 1120970 cri.go:89] found id: ""
	I0729 19:46:30.825735 1120970 logs.go:276] 0 containers: []
	W0729 19:46:30.825744 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:46:30.825753 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:46:30.825767 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:46:30.877517 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:46:30.877553 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:46:30.890777 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:46:30.890811 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:46:30.956702 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:46:30.956732 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:46:30.956747 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:46:31.039080 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:46:31.039118 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:46:29.195305 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:31.694814 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:33.696603 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:30.638472 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:33.138085 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:31.279889 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:33.779333 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:33.580120 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:46:33.595087 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:46:33.595152 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:46:33.636347 1120970 cri.go:89] found id: ""
	I0729 19:46:33.636374 1120970 logs.go:276] 0 containers: []
	W0729 19:46:33.636385 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:46:33.636392 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:46:33.636451 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:46:33.674180 1120970 cri.go:89] found id: ""
	I0729 19:46:33.674207 1120970 logs.go:276] 0 containers: []
	W0729 19:46:33.674215 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:46:33.674222 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:46:33.674281 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:46:33.709549 1120970 cri.go:89] found id: ""
	I0729 19:46:33.709572 1120970 logs.go:276] 0 containers: []
	W0729 19:46:33.709581 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:46:33.709593 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:46:33.709651 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:46:33.742803 1120970 cri.go:89] found id: ""
	I0729 19:46:33.742833 1120970 logs.go:276] 0 containers: []
	W0729 19:46:33.742854 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:46:33.742863 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:46:33.742931 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:46:33.776301 1120970 cri.go:89] found id: ""
	I0729 19:46:33.776329 1120970 logs.go:276] 0 containers: []
	W0729 19:46:33.776336 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:46:33.776342 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:46:33.776412 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:46:33.818972 1120970 cri.go:89] found id: ""
	I0729 19:46:33.819001 1120970 logs.go:276] 0 containers: []
	W0729 19:46:33.819009 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:46:33.819016 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:46:33.819084 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:46:33.857970 1120970 cri.go:89] found id: ""
	I0729 19:46:33.858002 1120970 logs.go:276] 0 containers: []
	W0729 19:46:33.858022 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:46:33.858028 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:46:33.858113 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:46:33.896207 1120970 cri.go:89] found id: ""
	I0729 19:46:33.896237 1120970 logs.go:276] 0 containers: []
	W0729 19:46:33.896248 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:46:33.896261 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:46:33.896276 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:46:33.976843 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:46:33.976879 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:46:34.015642 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:46:34.015671 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:46:34.066095 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:46:34.066133 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:46:34.079616 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:46:34.079649 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:46:34.150666 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:46:36.651722 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:46:36.665599 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:46:36.665673 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:46:36.702807 1120970 cri.go:89] found id: ""
	I0729 19:46:36.702872 1120970 logs.go:276] 0 containers: []
	W0729 19:46:36.702897 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:46:36.702907 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:46:36.702978 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:46:36.739552 1120970 cri.go:89] found id: ""
	I0729 19:46:36.739576 1120970 logs.go:276] 0 containers: []
	W0729 19:46:36.739585 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:46:36.739591 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:46:36.739643 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:46:36.774989 1120970 cri.go:89] found id: ""
	I0729 19:46:36.775017 1120970 logs.go:276] 0 containers: []
	W0729 19:46:36.775028 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:46:36.775036 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:46:36.775108 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:46:36.814984 1120970 cri.go:89] found id: ""
	I0729 19:46:36.815017 1120970 logs.go:276] 0 containers: []
	W0729 19:46:36.815034 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:46:36.815044 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:46:36.815113 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:46:36.848075 1120970 cri.go:89] found id: ""
	I0729 19:46:36.848116 1120970 logs.go:276] 0 containers: []
	W0729 19:46:36.848127 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:46:36.848136 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:46:36.848206 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:46:36.880504 1120970 cri.go:89] found id: ""
	I0729 19:46:36.880535 1120970 logs.go:276] 0 containers: []
	W0729 19:46:36.880544 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:46:36.880557 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:46:36.880615 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:46:36.914716 1120970 cri.go:89] found id: ""
	I0729 19:46:36.914744 1120970 logs.go:276] 0 containers: []
	W0729 19:46:36.914755 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:46:36.914763 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:46:36.914831 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:46:36.958975 1120970 cri.go:89] found id: ""
	I0729 19:46:36.959005 1120970 logs.go:276] 0 containers: []
	W0729 19:46:36.959016 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:46:36.959029 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:46:36.959046 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:46:37.018208 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:46:37.018244 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:46:37.042496 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:46:37.042537 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:46:37.112833 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:46:37.112861 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:46:37.112877 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:46:37.191572 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:46:37.191616 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:46:36.195356 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:38.694730 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:35.637513 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:38.137458 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:36.278153 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:38.778586 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:39.736044 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:46:39.749645 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:46:39.749719 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:46:39.786131 1120970 cri.go:89] found id: ""
	I0729 19:46:39.786155 1120970 logs.go:276] 0 containers: []
	W0729 19:46:39.786166 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:46:39.786174 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:46:39.786237 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:46:39.820470 1120970 cri.go:89] found id: ""
	I0729 19:46:39.820499 1120970 logs.go:276] 0 containers: []
	W0729 19:46:39.820509 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:46:39.820516 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:46:39.820583 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:46:39.854119 1120970 cri.go:89] found id: ""
	I0729 19:46:39.854148 1120970 logs.go:276] 0 containers: []
	W0729 19:46:39.854157 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:46:39.854163 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:46:39.854218 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:46:39.894676 1120970 cri.go:89] found id: ""
	I0729 19:46:39.894707 1120970 logs.go:276] 0 containers: []
	W0729 19:46:39.894714 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:46:39.894721 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:46:39.894789 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:46:39.932651 1120970 cri.go:89] found id: ""
	I0729 19:46:39.932685 1120970 logs.go:276] 0 containers: []
	W0729 19:46:39.932697 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:46:39.932705 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:46:39.932776 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:46:39.968119 1120970 cri.go:89] found id: ""
	I0729 19:46:39.968153 1120970 logs.go:276] 0 containers: []
	W0729 19:46:39.968165 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:46:39.968174 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:46:39.968242 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:46:40.004137 1120970 cri.go:89] found id: ""
	I0729 19:46:40.004167 1120970 logs.go:276] 0 containers: []
	W0729 19:46:40.004175 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:46:40.004181 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:46:40.004252 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:46:40.042519 1120970 cri.go:89] found id: ""
	I0729 19:46:40.042552 1120970 logs.go:276] 0 containers: []
	W0729 19:46:40.042563 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:46:40.042577 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:46:40.042601 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:46:40.118691 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:46:40.118720 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:46:40.118733 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:46:40.198249 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:46:40.198279 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:46:40.236828 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:46:40.236861 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:46:40.290890 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:46:40.290920 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:46:42.804834 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:46:42.818516 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:46:42.818608 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:46:42.855519 1120970 cri.go:89] found id: ""
	I0729 19:46:42.855553 1120970 logs.go:276] 0 containers: []
	W0729 19:46:42.855565 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:46:42.855573 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:46:42.855634 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:46:42.891795 1120970 cri.go:89] found id: ""
	I0729 19:46:42.891827 1120970 logs.go:276] 0 containers: []
	W0729 19:46:42.891837 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:46:42.891845 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:46:42.891912 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:46:42.925308 1120970 cri.go:89] found id: ""
	I0729 19:46:42.925341 1120970 logs.go:276] 0 containers: []
	W0729 19:46:42.925352 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:46:42.925359 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:46:42.925428 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:46:42.961943 1120970 cri.go:89] found id: ""
	I0729 19:46:42.961968 1120970 logs.go:276] 0 containers: []
	W0729 19:46:42.961976 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:46:42.961984 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:46:42.962034 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:46:41.194992 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:43.195814 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:40.138881 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:42.637095 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:44.637746 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:41.278451 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:43.279669 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:45.778954 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:42.994246 1120970 cri.go:89] found id: ""
	I0729 19:46:42.994276 1120970 logs.go:276] 0 containers: []
	W0729 19:46:42.994284 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:46:42.994290 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:46:42.994406 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:46:43.027914 1120970 cri.go:89] found id: ""
	I0729 19:46:43.027943 1120970 logs.go:276] 0 containers: []
	W0729 19:46:43.027953 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:46:43.027962 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:46:43.028029 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:46:43.064274 1120970 cri.go:89] found id: ""
	I0729 19:46:43.064308 1120970 logs.go:276] 0 containers: []
	W0729 19:46:43.064319 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:46:43.064328 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:46:43.064402 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:46:43.104273 1120970 cri.go:89] found id: ""
	I0729 19:46:43.104303 1120970 logs.go:276] 0 containers: []
	W0729 19:46:43.104313 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:46:43.104324 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:46:43.104342 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:46:43.175951 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:46:43.175978 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:46:43.175995 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:46:43.253386 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:46:43.253421 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:46:43.293276 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:46:43.293304 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:46:43.345865 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:46:43.345896 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:46:45.861099 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:46:45.875854 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:46:45.875925 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:46:45.914780 1120970 cri.go:89] found id: ""
	I0729 19:46:45.914815 1120970 logs.go:276] 0 containers: []
	W0729 19:46:45.914827 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:46:45.914837 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:46:45.914925 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:46:45.952575 1120970 cri.go:89] found id: ""
	I0729 19:46:45.952607 1120970 logs.go:276] 0 containers: []
	W0729 19:46:45.952616 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:46:45.952622 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:46:45.952676 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:46:45.993298 1120970 cri.go:89] found id: ""
	I0729 19:46:45.993331 1120970 logs.go:276] 0 containers: []
	W0729 19:46:45.993338 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:46:45.993344 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:46:45.993400 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:46:46.033190 1120970 cri.go:89] found id: ""
	I0729 19:46:46.033216 1120970 logs.go:276] 0 containers: []
	W0729 19:46:46.033225 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:46:46.033230 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:46:46.033283 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:46:46.068694 1120970 cri.go:89] found id: ""
	I0729 19:46:46.068728 1120970 logs.go:276] 0 containers: []
	W0729 19:46:46.068737 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:46:46.068743 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:46:46.068796 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:46:46.101678 1120970 cri.go:89] found id: ""
	I0729 19:46:46.101716 1120970 logs.go:276] 0 containers: []
	W0729 19:46:46.101726 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:46:46.101733 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:46:46.101788 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:46:46.141669 1120970 cri.go:89] found id: ""
	I0729 19:46:46.141702 1120970 logs.go:276] 0 containers: []
	W0729 19:46:46.141713 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:46:46.141721 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:46:46.141780 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:46:46.173182 1120970 cri.go:89] found id: ""
	I0729 19:46:46.173213 1120970 logs.go:276] 0 containers: []
	W0729 19:46:46.173224 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:46:46.173235 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:46:46.173252 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:46:46.224615 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:46:46.224660 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:46:46.237889 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:46:46.237915 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:46:46.312446 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:46:46.312473 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:46:46.312489 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:46:46.389168 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:46:46.389206 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:46:45.196687 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:47.694428 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:46.638398 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:48.639437 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:48.277740 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:50.278638 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:48.930620 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:46:48.944038 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:46:48.944101 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:46:48.979672 1120970 cri.go:89] found id: ""
	I0729 19:46:48.979710 1120970 logs.go:276] 0 containers: []
	W0729 19:46:48.979722 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:46:48.979730 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:46:48.979804 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:46:49.014931 1120970 cri.go:89] found id: ""
	I0729 19:46:49.014967 1120970 logs.go:276] 0 containers: []
	W0729 19:46:49.014980 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:46:49.015006 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:46:49.015078 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:46:49.050867 1120970 cri.go:89] found id: ""
	I0729 19:46:49.050903 1120970 logs.go:276] 0 containers: []
	W0729 19:46:49.050916 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:46:49.050924 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:46:49.050992 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:46:49.085479 1120970 cri.go:89] found id: ""
	I0729 19:46:49.085514 1120970 logs.go:276] 0 containers: []
	W0729 19:46:49.085521 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:46:49.085529 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:46:49.085604 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:46:49.118570 1120970 cri.go:89] found id: ""
	I0729 19:46:49.118597 1120970 logs.go:276] 0 containers: []
	W0729 19:46:49.118605 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:46:49.118611 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:46:49.118664 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:46:49.153581 1120970 cri.go:89] found id: ""
	I0729 19:46:49.153612 1120970 logs.go:276] 0 containers: []
	W0729 19:46:49.153624 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:46:49.153632 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:46:49.153702 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:46:49.187178 1120970 cri.go:89] found id: ""
	I0729 19:46:49.187207 1120970 logs.go:276] 0 containers: []
	W0729 19:46:49.187215 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:46:49.187221 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:46:49.187280 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:46:49.223132 1120970 cri.go:89] found id: ""
	I0729 19:46:49.223163 1120970 logs.go:276] 0 containers: []
	W0729 19:46:49.223173 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:46:49.223185 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:46:49.223200 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:46:49.274160 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:46:49.274189 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:46:49.288399 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:46:49.288431 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:46:49.358452 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:46:49.358478 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:46:49.358496 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:46:49.436711 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:46:49.436745 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:46:51.977377 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:46:51.991042 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:46:51.991102 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:46:52.031425 1120970 cri.go:89] found id: ""
	I0729 19:46:52.031467 1120970 logs.go:276] 0 containers: []
	W0729 19:46:52.031477 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:46:52.031482 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:46:52.031557 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:46:52.069014 1120970 cri.go:89] found id: ""
	I0729 19:46:52.069045 1120970 logs.go:276] 0 containers: []
	W0729 19:46:52.069056 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:46:52.069064 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:46:52.069137 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:46:52.101974 1120970 cri.go:89] found id: ""
	I0729 19:46:52.102000 1120970 logs.go:276] 0 containers: []
	W0729 19:46:52.102008 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:46:52.102014 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:46:52.102079 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:46:52.136232 1120970 cri.go:89] found id: ""
	I0729 19:46:52.136261 1120970 logs.go:276] 0 containers: []
	W0729 19:46:52.136271 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:46:52.136280 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:46:52.136344 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:46:52.173555 1120970 cri.go:89] found id: ""
	I0729 19:46:52.173585 1120970 logs.go:276] 0 containers: []
	W0729 19:46:52.173602 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:46:52.173611 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:46:52.173675 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:46:52.208764 1120970 cri.go:89] found id: ""
	I0729 19:46:52.208791 1120970 logs.go:276] 0 containers: []
	W0729 19:46:52.208799 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:46:52.208805 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:46:52.208863 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:46:52.241514 1120970 cri.go:89] found id: ""
	I0729 19:46:52.241541 1120970 logs.go:276] 0 containers: []
	W0729 19:46:52.241557 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:46:52.241564 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:46:52.241639 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:46:52.277726 1120970 cri.go:89] found id: ""
	I0729 19:46:52.277753 1120970 logs.go:276] 0 containers: []
	W0729 19:46:52.277764 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:46:52.277775 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:46:52.277789 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:46:52.344894 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:46:52.344916 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:46:52.344931 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:46:52.421492 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:46:52.421527 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:46:52.460896 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:46:52.460934 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:46:52.509921 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:46:52.509960 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:46:49.695616 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:51.696510 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:51.138012 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:53.138676 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:52.280019 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:54.778157 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:55.024046 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:46:55.037609 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:46:55.037681 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:46:55.071059 1120970 cri.go:89] found id: ""
	I0729 19:46:55.071086 1120970 logs.go:276] 0 containers: []
	W0729 19:46:55.071094 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:46:55.071102 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:46:55.071162 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:46:55.106634 1120970 cri.go:89] found id: ""
	I0729 19:46:55.106660 1120970 logs.go:276] 0 containers: []
	W0729 19:46:55.106669 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:46:55.106675 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:46:55.106737 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:46:55.138821 1120970 cri.go:89] found id: ""
	I0729 19:46:55.138858 1120970 logs.go:276] 0 containers: []
	W0729 19:46:55.138870 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:46:55.138878 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:46:55.138941 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:46:55.173846 1120970 cri.go:89] found id: ""
	I0729 19:46:55.173893 1120970 logs.go:276] 0 containers: []
	W0729 19:46:55.173904 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:46:55.173913 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:46:55.173978 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:46:55.211853 1120970 cri.go:89] found id: ""
	I0729 19:46:55.211878 1120970 logs.go:276] 0 containers: []
	W0729 19:46:55.211885 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:46:55.211891 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:46:55.211941 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:46:55.245432 1120970 cri.go:89] found id: ""
	I0729 19:46:55.245470 1120970 logs.go:276] 0 containers: []
	W0729 19:46:55.245481 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:46:55.245489 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:46:55.245557 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:46:55.286752 1120970 cri.go:89] found id: ""
	I0729 19:46:55.286777 1120970 logs.go:276] 0 containers: []
	W0729 19:46:55.286785 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:46:55.286791 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:46:55.286841 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:46:55.328070 1120970 cri.go:89] found id: ""
	I0729 19:46:55.328100 1120970 logs.go:276] 0 containers: []
	W0729 19:46:55.328119 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:46:55.328133 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:46:55.328151 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:46:55.341257 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:46:55.341285 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:46:55.410966 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:46:55.410989 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:46:55.411008 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:46:55.486615 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:46:55.486653 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:46:55.523615 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:46:55.523653 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:46:54.195887 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:56.703055 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:55.138951 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:57.638887 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:56.778215 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:59.278483 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:58.074596 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:46:58.088302 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:46:58.088396 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:46:58.124557 1120970 cri.go:89] found id: ""
	I0729 19:46:58.124589 1120970 logs.go:276] 0 containers: []
	W0729 19:46:58.124600 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:46:58.124608 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:46:58.124680 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:46:58.160107 1120970 cri.go:89] found id: ""
	I0729 19:46:58.160142 1120970 logs.go:276] 0 containers: []
	W0729 19:46:58.160151 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:46:58.160157 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:46:58.160214 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:46:58.195522 1120970 cri.go:89] found id: ""
	I0729 19:46:58.195553 1120970 logs.go:276] 0 containers: []
	W0729 19:46:58.195564 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:46:58.195572 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:46:58.195637 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:46:58.232307 1120970 cri.go:89] found id: ""
	I0729 19:46:58.232338 1120970 logs.go:276] 0 containers: []
	W0729 19:46:58.232348 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:46:58.232355 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:46:58.232419 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:46:58.271551 1120970 cri.go:89] found id: ""
	I0729 19:46:58.271602 1120970 logs.go:276] 0 containers: []
	W0729 19:46:58.271614 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:46:58.271622 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:46:58.271701 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:46:58.307833 1120970 cri.go:89] found id: ""
	I0729 19:46:58.307864 1120970 logs.go:276] 0 containers: []
	W0729 19:46:58.307875 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:46:58.307884 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:46:58.307951 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:46:58.341961 1120970 cri.go:89] found id: ""
	I0729 19:46:58.341989 1120970 logs.go:276] 0 containers: []
	W0729 19:46:58.341998 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:46:58.342004 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:46:58.342058 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:46:58.379923 1120970 cri.go:89] found id: ""
	I0729 19:46:58.379962 1120970 logs.go:276] 0 containers: []
	W0729 19:46:58.379972 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:46:58.379982 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:46:58.379997 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:46:58.423276 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:46:58.423310 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:46:58.479021 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:46:58.479063 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:46:58.493544 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:46:58.493578 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:46:58.562634 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:46:58.562663 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:46:58.562684 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:47:01.145327 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:47:01.158997 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:47:01.159060 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:47:01.196272 1120970 cri.go:89] found id: ""
	I0729 19:47:01.196298 1120970 logs.go:276] 0 containers: []
	W0729 19:47:01.196306 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:47:01.196312 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:47:01.196364 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:47:01.238138 1120970 cri.go:89] found id: ""
	I0729 19:47:01.238167 1120970 logs.go:276] 0 containers: []
	W0729 19:47:01.238177 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:47:01.238185 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:47:01.238249 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:47:01.276497 1120970 cri.go:89] found id: ""
	I0729 19:47:01.276525 1120970 logs.go:276] 0 containers: []
	W0729 19:47:01.276535 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:47:01.276543 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:47:01.276607 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:47:01.309092 1120970 cri.go:89] found id: ""
	I0729 19:47:01.309121 1120970 logs.go:276] 0 containers: []
	W0729 19:47:01.309130 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:47:01.309135 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:47:01.309189 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:47:01.340172 1120970 cri.go:89] found id: ""
	I0729 19:47:01.340202 1120970 logs.go:276] 0 containers: []
	W0729 19:47:01.340211 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:47:01.340217 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:47:01.340277 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:47:01.377905 1120970 cri.go:89] found id: ""
	I0729 19:47:01.377941 1120970 logs.go:276] 0 containers: []
	W0729 19:47:01.377953 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:47:01.377961 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:47:01.378034 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:47:01.414735 1120970 cri.go:89] found id: ""
	I0729 19:47:01.414767 1120970 logs.go:276] 0 containers: []
	W0729 19:47:01.414780 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:47:01.414789 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:47:01.414880 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:47:01.455743 1120970 cri.go:89] found id: ""
	I0729 19:47:01.455768 1120970 logs.go:276] 0 containers: []
	W0729 19:47:01.455776 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:47:01.455786 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:47:01.455799 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:47:01.507105 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:47:01.507141 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:47:01.520437 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:47:01.520465 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:47:01.590724 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:47:01.590746 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:47:01.590763 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:47:01.675343 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:47:01.675378 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:46:59.195744 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:01.695905 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:00.138760 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:02.139418 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:04.637243 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:01.278715 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:03.279321 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:05.778276 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:04.219800 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:47:04.234604 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:47:04.234684 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:47:04.267782 1120970 cri.go:89] found id: ""
	I0729 19:47:04.267810 1120970 logs.go:276] 0 containers: []
	W0729 19:47:04.267822 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:47:04.267830 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:47:04.267897 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:47:04.302373 1120970 cri.go:89] found id: ""
	I0729 19:47:04.302402 1120970 logs.go:276] 0 containers: []
	W0729 19:47:04.302413 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:47:04.302420 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:47:04.302484 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:47:04.334998 1120970 cri.go:89] found id: ""
	I0729 19:47:04.335030 1120970 logs.go:276] 0 containers: []
	W0729 19:47:04.335041 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:47:04.335049 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:47:04.335105 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:47:04.370596 1120970 cri.go:89] found id: ""
	I0729 19:47:04.370624 1120970 logs.go:276] 0 containers: []
	W0729 19:47:04.370631 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:47:04.370638 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:47:04.370695 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:47:04.405912 1120970 cri.go:89] found id: ""
	I0729 19:47:04.405945 1120970 logs.go:276] 0 containers: []
	W0729 19:47:04.405957 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:47:04.405966 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:47:04.406044 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:47:04.439856 1120970 cri.go:89] found id: ""
	I0729 19:47:04.439881 1120970 logs.go:276] 0 containers: []
	W0729 19:47:04.439898 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:47:04.439905 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:47:04.439976 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:47:04.473561 1120970 cri.go:89] found id: ""
	I0729 19:47:04.473587 1120970 logs.go:276] 0 containers: []
	W0729 19:47:04.473595 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:47:04.473601 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:47:04.473662 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:47:04.510181 1120970 cri.go:89] found id: ""
	I0729 19:47:04.510207 1120970 logs.go:276] 0 containers: []
	W0729 19:47:04.510217 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:47:04.510226 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:47:04.510239 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:47:04.559448 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:47:04.559485 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:47:04.573752 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:47:04.573782 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:47:04.641008 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:47:04.641030 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:47:04.641046 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:47:04.725252 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:47:04.725293 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:47:07.266379 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:47:07.280725 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:47:07.280801 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:47:07.321856 1120970 cri.go:89] found id: ""
	I0729 19:47:07.321886 1120970 logs.go:276] 0 containers: []
	W0729 19:47:07.321894 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:47:07.321900 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:47:07.321986 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:47:07.355102 1120970 cri.go:89] found id: ""
	I0729 19:47:07.355130 1120970 logs.go:276] 0 containers: []
	W0729 19:47:07.355138 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:47:07.355144 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:47:07.355203 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:47:07.394720 1120970 cri.go:89] found id: ""
	I0729 19:47:07.394749 1120970 logs.go:276] 0 containers: []
	W0729 19:47:07.394762 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:47:07.394771 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:47:07.394829 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:47:07.431002 1120970 cri.go:89] found id: ""
	I0729 19:47:07.431042 1120970 logs.go:276] 0 containers: []
	W0729 19:47:07.431055 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:47:07.431063 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:47:07.431132 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:47:07.467818 1120970 cri.go:89] found id: ""
	I0729 19:47:07.467855 1120970 logs.go:276] 0 containers: []
	W0729 19:47:07.467864 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:47:07.467873 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:47:07.467942 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:47:07.504285 1120970 cri.go:89] found id: ""
	I0729 19:47:07.504316 1120970 logs.go:276] 0 containers: []
	W0729 19:47:07.504327 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:47:07.504335 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:47:07.504411 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:47:07.538246 1120970 cri.go:89] found id: ""
	I0729 19:47:07.538276 1120970 logs.go:276] 0 containers: []
	W0729 19:47:07.538284 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:47:07.538291 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:47:07.538351 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:47:07.573911 1120970 cri.go:89] found id: ""
	I0729 19:47:07.573939 1120970 logs.go:276] 0 containers: []
	W0729 19:47:07.573948 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:47:07.573957 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:47:07.573970 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:47:07.588083 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:47:07.588129 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:47:07.656169 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:47:07.656198 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:47:07.656216 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:47:07.740230 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:47:07.740264 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:47:07.780822 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:47:07.780856 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
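	
	Each retry above follows the same pattern: minikube asks crictl for every expected control-plane container (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet, kubernetes-dashboard), finds none, and then falls back to gathering kubelet, dmesg, CRI-O and container-status logs. The following is a minimal, hedged sketch (not minikube's actual code) of that per-component check, using the same crictl invocation shown in the log; the component list and use of os/exec here are illustrative assumptions only.
	
	// Hedged sketch: replicate the "listing CRI containers" check from the log.
	package main
	
	import (
		"fmt"
		"os/exec"
		"strings"
	)
	
	func main() {
		components := []string{
			"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
		}
		for _, name := range components {
			// Same invocation as in the log: all containers (any state), IDs only, filtered by name.
			out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
			if err != nil || strings.TrimSpace(string(out)) == "" {
				fmt.Printf("no container found matching %q\n", name)
				continue
			}
			fmt.Printf("%s: %s\n", name, strings.TrimSpace(string(out)))
		}
	}
	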
	I0729 19:47:04.195230 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:06.695090 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:06.637479 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:08.638410 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:08.278522 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:10.782193 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:10.336208 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:47:10.350233 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:47:10.350307 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:47:10.389155 1120970 cri.go:89] found id: ""
	I0729 19:47:10.389190 1120970 logs.go:276] 0 containers: []
	W0729 19:47:10.389202 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:47:10.389210 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:47:10.389277 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:47:10.421432 1120970 cri.go:89] found id: ""
	I0729 19:47:10.421466 1120970 logs.go:276] 0 containers: []
	W0729 19:47:10.421476 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:47:10.421482 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:47:10.421552 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:47:10.462530 1120970 cri.go:89] found id: ""
	I0729 19:47:10.462563 1120970 logs.go:276] 0 containers: []
	W0729 19:47:10.462572 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:47:10.462577 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:47:10.462640 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:47:10.499899 1120970 cri.go:89] found id: ""
	I0729 19:47:10.499927 1120970 logs.go:276] 0 containers: []
	W0729 19:47:10.499935 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:47:10.499945 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:47:10.500007 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:47:10.534022 1120970 cri.go:89] found id: ""
	I0729 19:47:10.534051 1120970 logs.go:276] 0 containers: []
	W0729 19:47:10.534060 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:47:10.534066 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:47:10.534119 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:47:10.568136 1120970 cri.go:89] found id: ""
	I0729 19:47:10.568166 1120970 logs.go:276] 0 containers: []
	W0729 19:47:10.568174 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:47:10.568181 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:47:10.568246 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:47:10.603887 1120970 cri.go:89] found id: ""
	I0729 19:47:10.603919 1120970 logs.go:276] 0 containers: []
	W0729 19:47:10.603930 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:47:10.603938 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:47:10.604005 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:47:10.639947 1120970 cri.go:89] found id: ""
	I0729 19:47:10.639974 1120970 logs.go:276] 0 containers: []
	W0729 19:47:10.639981 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:47:10.639989 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:47:10.640001 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:47:10.693113 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:47:10.693146 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:47:10.708099 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:47:10.708138 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:47:10.777587 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:47:10.777618 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:47:10.777634 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:47:10.872453 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:47:10.872499 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:47:09.195301 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:11.695021 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:13.697025 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:11.137420 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:13.137553 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:13.278601 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:15.779974 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:13.412398 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:47:13.426246 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:47:13.426308 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:47:13.463170 1120970 cri.go:89] found id: ""
	I0729 19:47:13.463202 1120970 logs.go:276] 0 containers: []
	W0729 19:47:13.463213 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:47:13.463220 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:47:13.463287 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:47:13.499102 1120970 cri.go:89] found id: ""
	I0729 19:47:13.499137 1120970 logs.go:276] 0 containers: []
	W0729 19:47:13.499146 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:47:13.499151 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:47:13.499235 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:47:13.531462 1120970 cri.go:89] found id: ""
	I0729 19:47:13.531514 1120970 logs.go:276] 0 containers: []
	W0729 19:47:13.531526 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:47:13.531534 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:47:13.531606 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:47:13.564632 1120970 cri.go:89] found id: ""
	I0729 19:47:13.564670 1120970 logs.go:276] 0 containers: []
	W0729 19:47:13.564681 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:47:13.564689 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:47:13.564745 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:47:13.596564 1120970 cri.go:89] found id: ""
	I0729 19:47:13.596591 1120970 logs.go:276] 0 containers: []
	W0729 19:47:13.596602 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:47:13.596610 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:47:13.596686 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:47:13.629682 1120970 cri.go:89] found id: ""
	I0729 19:47:13.629711 1120970 logs.go:276] 0 containers: []
	W0729 19:47:13.629721 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:47:13.629729 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:47:13.629791 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:47:13.664666 1120970 cri.go:89] found id: ""
	I0729 19:47:13.664693 1120970 logs.go:276] 0 containers: []
	W0729 19:47:13.664701 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:47:13.664708 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:47:13.664777 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:47:13.699238 1120970 cri.go:89] found id: ""
	I0729 19:47:13.699267 1120970 logs.go:276] 0 containers: []
	W0729 19:47:13.699277 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:47:13.699289 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:47:13.699304 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:47:13.751555 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:47:13.751588 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:47:13.766769 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:47:13.766801 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:47:13.834898 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:47:13.834918 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:47:13.834932 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:47:13.913907 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:47:13.913944 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:47:16.457229 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:47:16.470138 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:47:16.470222 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:47:16.504643 1120970 cri.go:89] found id: ""
	I0729 19:47:16.504679 1120970 logs.go:276] 0 containers: []
	W0729 19:47:16.504688 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:47:16.504693 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:47:16.504763 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:47:16.539328 1120970 cri.go:89] found id: ""
	I0729 19:47:16.539368 1120970 logs.go:276] 0 containers: []
	W0729 19:47:16.539379 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:47:16.539385 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:47:16.539446 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:47:16.597867 1120970 cri.go:89] found id: ""
	I0729 19:47:16.597893 1120970 logs.go:276] 0 containers: []
	W0729 19:47:16.597904 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:47:16.597911 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:47:16.597976 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:47:16.631728 1120970 cri.go:89] found id: ""
	I0729 19:47:16.631755 1120970 logs.go:276] 0 containers: []
	W0729 19:47:16.631768 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:47:16.631780 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:47:16.631842 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:47:16.668337 1120970 cri.go:89] found id: ""
	I0729 19:47:16.668377 1120970 logs.go:276] 0 containers: []
	W0729 19:47:16.668387 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:47:16.668395 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:47:16.668461 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:47:16.704808 1120970 cri.go:89] found id: ""
	I0729 19:47:16.704834 1120970 logs.go:276] 0 containers: []
	W0729 19:47:16.704844 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:47:16.704851 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:47:16.704911 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:47:16.743919 1120970 cri.go:89] found id: ""
	I0729 19:47:16.743948 1120970 logs.go:276] 0 containers: []
	W0729 19:47:16.743955 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:47:16.743961 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:47:16.744022 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:47:16.785240 1120970 cri.go:89] found id: ""
	I0729 19:47:16.785268 1120970 logs.go:276] 0 containers: []
	W0729 19:47:16.785279 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:47:16.785290 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:47:16.785306 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:47:16.838247 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:47:16.838288 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:47:16.851766 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:47:16.851797 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:47:16.928960 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:47:16.928986 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:47:16.929002 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:47:17.008260 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:47:17.008296 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
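	
	The recurring "describe nodes" failure above is a direct consequence of the empty crictl listings: with no kube-apiserver container running, nothing listens on localhost:8443, so kubectl gets "connection refused". A minimal reachability probe for that port is sketched below; the address and timeout are assumptions taken from the log output, not from the test code.
	
	// Hedged sketch: probe the apiserver port that the failing kubectl call targets.
	package main
	
	import (
		"fmt"
		"net"
		"time"
	)
	
	func main() {
		conn, err := net.DialTimeout("tcp", "127.0.0.1:8443", 2*time.Second)
		if err != nil {
			// Matches the symptom in the log: connection refused means no
			// kube-apiserver is bound to the port, so kubectl cannot connect.
			fmt.Println("apiserver unreachable:", err)
			return
		}
		defer conn.Close()
		fmt.Println("apiserver port is open")
	}
	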
	I0729 19:47:16.194957 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:18.196333 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:15.138916 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:17.637392 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:19.638484 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:17.781105 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:20.279439 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:19.555108 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:47:19.569838 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:47:19.569917 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:47:19.608358 1120970 cri.go:89] found id: ""
	I0729 19:47:19.608393 1120970 logs.go:276] 0 containers: []
	W0729 19:47:19.608405 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:47:19.608414 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:47:19.608475 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:47:19.644144 1120970 cri.go:89] found id: ""
	I0729 19:47:19.644173 1120970 logs.go:276] 0 containers: []
	W0729 19:47:19.644183 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:47:19.644191 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:47:19.644259 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:47:19.686316 1120970 cri.go:89] found id: ""
	I0729 19:47:19.686342 1120970 logs.go:276] 0 containers: []
	W0729 19:47:19.686353 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:47:19.686359 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:47:19.686419 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:47:19.722006 1120970 cri.go:89] found id: ""
	I0729 19:47:19.722034 1120970 logs.go:276] 0 containers: []
	W0729 19:47:19.722044 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:47:19.722052 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:47:19.722127 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:47:19.762767 1120970 cri.go:89] found id: ""
	I0729 19:47:19.762799 1120970 logs.go:276] 0 containers: []
	W0729 19:47:19.762811 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:47:19.762818 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:47:19.762904 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:47:19.802185 1120970 cri.go:89] found id: ""
	I0729 19:47:19.802217 1120970 logs.go:276] 0 containers: []
	W0729 19:47:19.802228 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:47:19.802238 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:47:19.802311 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:47:19.840001 1120970 cri.go:89] found id: ""
	I0729 19:47:19.840036 1120970 logs.go:276] 0 containers: []
	W0729 19:47:19.840048 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:47:19.840056 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:47:19.840117 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:47:19.877627 1120970 cri.go:89] found id: ""
	I0729 19:47:19.877657 1120970 logs.go:276] 0 containers: []
	W0729 19:47:19.877668 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:47:19.877681 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:47:19.877698 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:47:19.920673 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:47:19.920708 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:47:19.980004 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:47:19.980045 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:47:19.994679 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:47:19.994714 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:47:20.064864 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:47:20.064892 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:47:20.064910 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:47:22.650763 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:47:22.664998 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:47:22.665079 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:47:22.701576 1120970 cri.go:89] found id: ""
	I0729 19:47:22.701611 1120970 logs.go:276] 0 containers: []
	W0729 19:47:22.701620 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:47:22.701630 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:47:22.701689 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:47:22.744238 1120970 cri.go:89] found id: ""
	I0729 19:47:22.744268 1120970 logs.go:276] 0 containers: []
	W0729 19:47:22.744275 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:47:22.744287 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:47:22.744358 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:47:22.785947 1120970 cri.go:89] found id: ""
	I0729 19:47:22.785974 1120970 logs.go:276] 0 containers: []
	W0729 19:47:22.785982 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:47:22.785988 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:47:22.786047 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:47:22.823352 1120970 cri.go:89] found id: ""
	I0729 19:47:22.823379 1120970 logs.go:276] 0 containers: []
	W0729 19:47:22.823387 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:47:22.823394 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:47:22.823462 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:47:22.855676 1120970 cri.go:89] found id: ""
	I0729 19:47:22.855704 1120970 logs.go:276] 0 containers: []
	W0729 19:47:22.855710 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:47:22.855716 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:47:22.855773 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:47:22.891910 1120970 cri.go:89] found id: ""
	I0729 19:47:22.891943 1120970 logs.go:276] 0 containers: []
	W0729 19:47:22.891956 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:47:22.891964 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:47:22.892025 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:47:22.928605 1120970 cri.go:89] found id: ""
	I0729 19:47:22.928638 1120970 logs.go:276] 0 containers: []
	W0729 19:47:22.928648 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:47:22.928658 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:47:22.928728 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:47:20.196567 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:22.694908 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:22.137177 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:24.137629 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:22.778638 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:25.279261 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:22.985022 1120970 cri.go:89] found id: ""
	I0729 19:47:22.985059 1120970 logs.go:276] 0 containers: []
	W0729 19:47:22.985068 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:47:22.985088 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:47:22.985101 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:47:23.073062 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:47:23.073098 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:47:23.114995 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:47:23.115024 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:47:23.171536 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:47:23.171570 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:47:23.185192 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:47:23.185219 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:47:23.259355 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:47:25.760046 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:47:25.774159 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:47:25.774245 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:47:25.808374 1120970 cri.go:89] found id: ""
	I0729 19:47:25.808406 1120970 logs.go:276] 0 containers: []
	W0729 19:47:25.808417 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:47:25.808424 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:47:25.808486 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:47:25.843623 1120970 cri.go:89] found id: ""
	I0729 19:47:25.843655 1120970 logs.go:276] 0 containers: []
	W0729 19:47:25.843666 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:47:25.843673 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:47:25.843774 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:47:25.880200 1120970 cri.go:89] found id: ""
	I0729 19:47:25.880233 1120970 logs.go:276] 0 containers: []
	W0729 19:47:25.880243 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:47:25.880250 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:47:25.880323 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:47:25.915349 1120970 cri.go:89] found id: ""
	I0729 19:47:25.915374 1120970 logs.go:276] 0 containers: []
	W0729 19:47:25.915381 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:47:25.915391 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:47:25.915444 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:47:25.948092 1120970 cri.go:89] found id: ""
	I0729 19:47:25.948134 1120970 logs.go:276] 0 containers: []
	W0729 19:47:25.948145 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:47:25.948153 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:47:25.948220 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:47:25.981836 1120970 cri.go:89] found id: ""
	I0729 19:47:25.981864 1120970 logs.go:276] 0 containers: []
	W0729 19:47:25.981874 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:47:25.981882 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:47:25.981967 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:47:26.014464 1120970 cri.go:89] found id: ""
	I0729 19:47:26.014494 1120970 logs.go:276] 0 containers: []
	W0729 19:47:26.014502 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:47:26.014515 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:47:26.014580 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:47:26.048607 1120970 cri.go:89] found id: ""
	I0729 19:47:26.048635 1120970 logs.go:276] 0 containers: []
	W0729 19:47:26.048646 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:47:26.048667 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:47:26.048683 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:47:26.100962 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:47:26.101002 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:47:26.116404 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:47:26.116434 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:47:26.183714 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:47:26.183734 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:47:26.183747 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:47:26.260308 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:47:26.260347 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:47:24.695393 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:27.195561 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:26.137714 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:28.637781 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:27.778603 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:30.278476 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:28.802593 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:47:28.815317 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:47:28.815380 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:47:28.849448 1120970 cri.go:89] found id: ""
	I0729 19:47:28.849473 1120970 logs.go:276] 0 containers: []
	W0729 19:47:28.849480 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:47:28.849486 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:47:28.849544 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:47:28.888305 1120970 cri.go:89] found id: ""
	I0729 19:47:28.888342 1120970 logs.go:276] 0 containers: []
	W0729 19:47:28.888353 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:47:28.888360 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:47:28.888421 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:47:28.921000 1120970 cri.go:89] found id: ""
	I0729 19:47:28.921034 1120970 logs.go:276] 0 containers: []
	W0729 19:47:28.921045 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:47:28.921054 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:47:28.921116 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:47:28.953546 1120970 cri.go:89] found id: ""
	I0729 19:47:28.953574 1120970 logs.go:276] 0 containers: []
	W0729 19:47:28.953583 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:47:28.953589 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:47:28.953652 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:47:28.991203 1120970 cri.go:89] found id: ""
	I0729 19:47:28.991236 1120970 logs.go:276] 0 containers: []
	W0729 19:47:28.991248 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:47:28.991256 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:47:28.991329 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:47:29.026151 1120970 cri.go:89] found id: ""
	I0729 19:47:29.026183 1120970 logs.go:276] 0 containers: []
	W0729 19:47:29.026195 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:47:29.026203 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:47:29.026271 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:47:29.059654 1120970 cri.go:89] found id: ""
	I0729 19:47:29.059687 1120970 logs.go:276] 0 containers: []
	W0729 19:47:29.059695 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:47:29.059702 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:47:29.059756 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:47:29.091952 1120970 cri.go:89] found id: ""
	I0729 19:47:29.092001 1120970 logs.go:276] 0 containers: []
	W0729 19:47:29.092012 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:47:29.092024 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:47:29.092043 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:47:29.143511 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:47:29.143543 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:47:29.157752 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:47:29.157781 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:47:29.225599 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:47:29.225621 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:47:29.225634 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:47:29.311329 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:47:29.311370 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:47:31.850921 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:47:31.864594 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:47:31.864675 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:47:31.898580 1120970 cri.go:89] found id: ""
	I0729 19:47:31.898622 1120970 logs.go:276] 0 containers: []
	W0729 19:47:31.898631 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:47:31.898638 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:47:31.898693 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:47:31.932481 1120970 cri.go:89] found id: ""
	I0729 19:47:31.932514 1120970 logs.go:276] 0 containers: []
	W0729 19:47:31.932525 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:47:31.932533 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:47:31.932595 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:47:31.964820 1120970 cri.go:89] found id: ""
	I0729 19:47:31.964857 1120970 logs.go:276] 0 containers: []
	W0729 19:47:31.964868 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:47:31.964876 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:47:31.964957 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:47:31.996854 1120970 cri.go:89] found id: ""
	I0729 19:47:31.996889 1120970 logs.go:276] 0 containers: []
	W0729 19:47:31.996900 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:47:31.996908 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:47:31.996975 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:47:32.031808 1120970 cri.go:89] found id: ""
	I0729 19:47:32.031843 1120970 logs.go:276] 0 containers: []
	W0729 19:47:32.031854 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:47:32.031864 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:47:32.031934 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:47:32.064563 1120970 cri.go:89] found id: ""
	I0729 19:47:32.064593 1120970 logs.go:276] 0 containers: []
	W0729 19:47:32.064608 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:47:32.064615 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:47:32.064677 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:47:32.102811 1120970 cri.go:89] found id: ""
	I0729 19:47:32.102859 1120970 logs.go:276] 0 containers: []
	W0729 19:47:32.102871 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:47:32.102879 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:47:32.102952 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:47:32.136770 1120970 cri.go:89] found id: ""
	I0729 19:47:32.136798 1120970 logs.go:276] 0 containers: []
	W0729 19:47:32.136808 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:47:32.136819 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:47:32.136838 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:47:32.189334 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:47:32.189371 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:47:32.204039 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:47:32.204076 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:47:32.274139 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:47:32.274172 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:47:32.274187 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:47:32.350191 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:47:32.350228 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:47:29.196922 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:31.200725 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:33.695374 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:30.637898 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:32.638342 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:34.639225 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:32.279116 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:34.780505 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
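	
	The interleaved pod_ready.go:102 lines come from parallel test processes polling the metrics-server pod's Ready condition, which never becomes True during this window. Below is a hedged, standalone sketch of such a readiness poll using client-go; the kubeconfig path, namespace, label selector and poll interval are illustrative assumptions and are not taken from the test code.
	
	// Hedged sketch: poll a pod's Ready condition the way the log lines suggest.
	package main
	
	import (
		"context"
		"fmt"
		"time"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // assumed path
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		for {
			pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(),
				metav1.ListOptions{LabelSelector: "k8s-app=metrics-server"}) // assumed label
			if err == nil {
				ready := false
				for _, p := range pods.Items {
					for _, c := range p.Status.Conditions {
						if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
							ready = true
						}
					}
				}
				fmt.Println("metrics-server ready:", ready)
				if ready {
					return
				}
			}
			time.Sleep(2 * time.Second) // poll roughly as often as the test log shows
		}
	}
	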
	I0729 19:47:34.889718 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:47:34.903796 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:47:34.903877 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:47:34.938860 1120970 cri.go:89] found id: ""
	I0729 19:47:34.938893 1120970 logs.go:276] 0 containers: []
	W0729 19:47:34.938904 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:47:34.938912 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:47:34.938980 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:47:34.970501 1120970 cri.go:89] found id: ""
	I0729 19:47:34.970544 1120970 logs.go:276] 0 containers: []
	W0729 19:47:34.970553 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:47:34.970559 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:47:34.970626 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:47:35.006915 1120970 cri.go:89] found id: ""
	I0729 19:47:35.006943 1120970 logs.go:276] 0 containers: []
	W0729 19:47:35.006950 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:47:35.006957 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:47:35.007020 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:47:35.040827 1120970 cri.go:89] found id: ""
	I0729 19:47:35.040855 1120970 logs.go:276] 0 containers: []
	W0729 19:47:35.040862 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:47:35.040869 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:47:35.040918 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:47:35.075497 1120970 cri.go:89] found id: ""
	I0729 19:47:35.075527 1120970 logs.go:276] 0 containers: []
	W0729 19:47:35.075537 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:47:35.075544 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:47:35.075598 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:47:35.111265 1120970 cri.go:89] found id: ""
	I0729 19:47:35.111293 1120970 logs.go:276] 0 containers: []
	W0729 19:47:35.111302 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:47:35.111308 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:47:35.111363 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:47:35.145728 1120970 cri.go:89] found id: ""
	I0729 19:47:35.145756 1120970 logs.go:276] 0 containers: []
	W0729 19:47:35.145763 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:47:35.145769 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:47:35.145821 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:47:35.185050 1120970 cri.go:89] found id: ""
	I0729 19:47:35.185078 1120970 logs.go:276] 0 containers: []
	W0729 19:47:35.185088 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:47:35.185100 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:47:35.185117 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:47:35.236835 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:47:35.236867 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:47:35.251263 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:47:35.251290 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:47:35.325888 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:47:35.325912 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:47:35.325925 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:47:35.404779 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:47:35.404819 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:47:37.944941 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:47:37.960885 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:47:37.960954 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:47:35.695786 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:37.696015 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:37.136815 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:39.137763 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:37.278790 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:39.779285 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:38.007612 1120970 cri.go:89] found id: ""
	I0729 19:47:38.007639 1120970 logs.go:276] 0 containers: []
	W0729 19:47:38.007648 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:47:38.007655 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:47:38.007721 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:47:38.044568 1120970 cri.go:89] found id: ""
	I0729 19:47:38.044610 1120970 logs.go:276] 0 containers: []
	W0729 19:47:38.044621 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:47:38.044628 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:47:38.044698 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:47:38.085186 1120970 cri.go:89] found id: ""
	I0729 19:47:38.085217 1120970 logs.go:276] 0 containers: []
	W0729 19:47:38.085227 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:47:38.085235 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:47:38.085303 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:47:38.123039 1120970 cri.go:89] found id: ""
	I0729 19:47:38.123070 1120970 logs.go:276] 0 containers: []
	W0729 19:47:38.123082 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:47:38.123090 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:47:38.123158 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:47:38.166191 1120970 cri.go:89] found id: ""
	I0729 19:47:38.166220 1120970 logs.go:276] 0 containers: []
	W0729 19:47:38.166229 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:47:38.166237 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:47:38.166301 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:47:38.204138 1120970 cri.go:89] found id: ""
	I0729 19:47:38.204170 1120970 logs.go:276] 0 containers: []
	W0729 19:47:38.204179 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:47:38.204186 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:47:38.204286 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:47:38.241599 1120970 cri.go:89] found id: ""
	I0729 19:47:38.241629 1120970 logs.go:276] 0 containers: []
	W0729 19:47:38.241638 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:47:38.241643 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:47:38.241695 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:47:38.276986 1120970 cri.go:89] found id: ""
	I0729 19:47:38.277013 1120970 logs.go:276] 0 containers: []
	W0729 19:47:38.277021 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:47:38.277030 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:47:38.277042 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:47:38.330925 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:47:38.330971 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:47:38.345416 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:47:38.345455 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:47:38.420010 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:47:38.420041 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:47:38.420059 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:47:38.506198 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:47:38.506243 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:47:41.048957 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:47:41.062950 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:47:41.063027 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:47:41.108956 1120970 cri.go:89] found id: ""
	I0729 19:47:41.108987 1120970 logs.go:276] 0 containers: []
	W0729 19:47:41.108995 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:47:41.109002 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:47:41.109068 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:47:41.146952 1120970 cri.go:89] found id: ""
	I0729 19:47:41.146984 1120970 logs.go:276] 0 containers: []
	W0729 19:47:41.146994 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:47:41.147002 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:47:41.147068 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:47:41.190277 1120970 cri.go:89] found id: ""
	I0729 19:47:41.190310 1120970 logs.go:276] 0 containers: []
	W0729 19:47:41.190321 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:47:41.190329 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:47:41.190410 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:47:41.226733 1120970 cri.go:89] found id: ""
	I0729 19:47:41.226762 1120970 logs.go:276] 0 containers: []
	W0729 19:47:41.226770 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:47:41.226777 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:47:41.226862 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:47:41.260761 1120970 cri.go:89] found id: ""
	I0729 19:47:41.260790 1120970 logs.go:276] 0 containers: []
	W0729 19:47:41.260798 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:47:41.260804 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:47:41.260871 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:47:41.296325 1120970 cri.go:89] found id: ""
	I0729 19:47:41.296356 1120970 logs.go:276] 0 containers: []
	W0729 19:47:41.296367 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:47:41.296376 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:47:41.296435 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:47:41.329613 1120970 cri.go:89] found id: ""
	I0729 19:47:41.329642 1120970 logs.go:276] 0 containers: []
	W0729 19:47:41.329651 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:47:41.329657 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:47:41.329717 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:47:41.365182 1120970 cri.go:89] found id: ""
	I0729 19:47:41.365212 1120970 logs.go:276] 0 containers: []
	W0729 19:47:41.365220 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:47:41.365229 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:47:41.365243 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:47:41.416107 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:47:41.416143 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:47:41.429529 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:47:41.429562 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:47:41.499546 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:47:41.499568 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:47:41.499582 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:47:41.582010 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:47:41.582049 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:47:40.195271 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:42.698072 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:41.142911 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:43.637826 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:42.278481 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:44.278595 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:44.122162 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:47:44.136767 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:47:44.136850 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:47:44.171574 1120970 cri.go:89] found id: ""
	I0729 19:47:44.171610 1120970 logs.go:276] 0 containers: []
	W0729 19:47:44.171621 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:47:44.171629 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:47:44.171699 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:47:44.206974 1120970 cri.go:89] found id: ""
	I0729 19:47:44.207004 1120970 logs.go:276] 0 containers: []
	W0729 19:47:44.207013 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:47:44.207019 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:47:44.207068 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:47:44.240412 1120970 cri.go:89] found id: ""
	I0729 19:47:44.240438 1120970 logs.go:276] 0 containers: []
	W0729 19:47:44.240449 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:47:44.240457 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:47:44.240521 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:47:44.274434 1120970 cri.go:89] found id: ""
	I0729 19:47:44.274464 1120970 logs.go:276] 0 containers: []
	W0729 19:47:44.274475 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:47:44.274482 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:47:44.274553 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:47:44.313302 1120970 cri.go:89] found id: ""
	I0729 19:47:44.313330 1120970 logs.go:276] 0 containers: []
	W0729 19:47:44.313339 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:47:44.313354 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:47:44.313426 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:47:44.344853 1120970 cri.go:89] found id: ""
	I0729 19:47:44.344885 1120970 logs.go:276] 0 containers: []
	W0729 19:47:44.344895 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:47:44.344903 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:47:44.344970 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:47:44.378055 1120970 cri.go:89] found id: ""
	I0729 19:47:44.378089 1120970 logs.go:276] 0 containers: []
	W0729 19:47:44.378101 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:47:44.378109 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:47:44.378176 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:47:44.412734 1120970 cri.go:89] found id: ""
	I0729 19:47:44.412762 1120970 logs.go:276] 0 containers: []
	W0729 19:47:44.412772 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:47:44.412782 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:47:44.412795 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:47:44.468125 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:47:44.468157 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:47:44.482896 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:47:44.482923 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:47:44.551222 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:47:44.551249 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:47:44.551270 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:47:44.630413 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:47:44.630455 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:47:47.172322 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:47:47.186383 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:47:47.186463 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:47:47.221577 1120970 cri.go:89] found id: ""
	I0729 19:47:47.221610 1120970 logs.go:276] 0 containers: []
	W0729 19:47:47.221617 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:47:47.221623 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:47:47.221686 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:47:47.260164 1120970 cri.go:89] found id: ""
	I0729 19:47:47.260207 1120970 logs.go:276] 0 containers: []
	W0729 19:47:47.260227 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:47:47.260235 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:47:47.260303 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:47:47.297101 1120970 cri.go:89] found id: ""
	I0729 19:47:47.297130 1120970 logs.go:276] 0 containers: []
	W0729 19:47:47.297139 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:47:47.297148 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:47:47.297211 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:47:47.332429 1120970 cri.go:89] found id: ""
	I0729 19:47:47.332464 1120970 logs.go:276] 0 containers: []
	W0729 19:47:47.332474 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:47:47.332484 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:47:47.332538 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:47:47.366021 1120970 cri.go:89] found id: ""
	I0729 19:47:47.366055 1120970 logs.go:276] 0 containers: []
	W0729 19:47:47.366065 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:47:47.366074 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:47:47.366144 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:47:47.401278 1120970 cri.go:89] found id: ""
	I0729 19:47:47.401307 1120970 logs.go:276] 0 containers: []
	W0729 19:47:47.401315 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:47:47.401321 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:47:47.401395 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:47:47.435717 1120970 cri.go:89] found id: ""
	I0729 19:47:47.435748 1120970 logs.go:276] 0 containers: []
	W0729 19:47:47.435756 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:47:47.435770 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:47:47.435835 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:47:47.472120 1120970 cri.go:89] found id: ""
	I0729 19:47:47.472149 1120970 logs.go:276] 0 containers: []
	W0729 19:47:47.472157 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:47:47.472167 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:47:47.472181 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:47:47.529466 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:47:47.529503 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:47:47.544072 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:47:47.544102 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:47:47.614456 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:47:47.614478 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:47:47.614499 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:47:47.693271 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:47:47.693305 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:47:45.195129 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:47.196302 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:45.638102 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:47.639278 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:46.778610 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:48.778746 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:50.232417 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:47:50.246080 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:47:50.246154 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:47:50.285256 1120970 cri.go:89] found id: ""
	I0729 19:47:50.285284 1120970 logs.go:276] 0 containers: []
	W0729 19:47:50.285294 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:47:50.285302 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:47:50.285364 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:47:50.319443 1120970 cri.go:89] found id: ""
	I0729 19:47:50.319469 1120970 logs.go:276] 0 containers: []
	W0729 19:47:50.319476 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:47:50.319482 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:47:50.319555 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:47:50.356465 1120970 cri.go:89] found id: ""
	I0729 19:47:50.356495 1120970 logs.go:276] 0 containers: []
	W0729 19:47:50.356505 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:47:50.356512 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:47:50.356578 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:47:50.393920 1120970 cri.go:89] found id: ""
	I0729 19:47:50.393954 1120970 logs.go:276] 0 containers: []
	W0729 19:47:50.393965 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:47:50.393973 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:47:50.394052 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:47:50.430287 1120970 cri.go:89] found id: ""
	I0729 19:47:50.430320 1120970 logs.go:276] 0 containers: []
	W0729 19:47:50.430333 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:47:50.430341 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:47:50.430411 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:47:50.465501 1120970 cri.go:89] found id: ""
	I0729 19:47:50.465528 1120970 logs.go:276] 0 containers: []
	W0729 19:47:50.465536 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:47:50.465542 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:47:50.465595 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:47:50.504012 1120970 cri.go:89] found id: ""
	I0729 19:47:50.504042 1120970 logs.go:276] 0 containers: []
	W0729 19:47:50.504051 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:47:50.504063 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:47:50.504122 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:47:50.545117 1120970 cri.go:89] found id: ""
	I0729 19:47:50.545151 1120970 logs.go:276] 0 containers: []
	W0729 19:47:50.545163 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:47:50.545175 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:47:50.545198 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:47:50.618183 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:47:50.618213 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:47:50.618232 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:47:50.697577 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:47:50.697611 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:47:50.745910 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:47:50.745949 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:47:50.797458 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:47:50.797501 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:47:49.694395 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:51.697714 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:50.138539 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:52.143316 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:54.637975 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:51.279127 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:53.779610 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:53.311907 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:47:53.326666 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:47:53.326734 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:47:53.361564 1120970 cri.go:89] found id: ""
	I0729 19:47:53.361596 1120970 logs.go:276] 0 containers: []
	W0729 19:47:53.361614 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:47:53.361621 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:47:53.361685 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:47:53.397867 1120970 cri.go:89] found id: ""
	I0729 19:47:53.397899 1120970 logs.go:276] 0 containers: []
	W0729 19:47:53.397910 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:47:53.397918 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:47:53.398023 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:47:53.438721 1120970 cri.go:89] found id: ""
	I0729 19:47:53.438752 1120970 logs.go:276] 0 containers: []
	W0729 19:47:53.438764 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:47:53.438771 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:47:53.438840 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:47:53.477746 1120970 cri.go:89] found id: ""
	I0729 19:47:53.477776 1120970 logs.go:276] 0 containers: []
	W0729 19:47:53.477787 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:47:53.477794 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:47:53.477863 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:47:53.510899 1120970 cri.go:89] found id: ""
	I0729 19:47:53.510928 1120970 logs.go:276] 0 containers: []
	W0729 19:47:53.510936 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:47:53.510941 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:47:53.510994 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:47:53.545749 1120970 cri.go:89] found id: ""
	I0729 19:47:53.545786 1120970 logs.go:276] 0 containers: []
	W0729 19:47:53.545799 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:47:53.545807 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:47:53.545883 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:47:53.585542 1120970 cri.go:89] found id: ""
	I0729 19:47:53.585575 1120970 logs.go:276] 0 containers: []
	W0729 19:47:53.585586 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:47:53.585593 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:47:53.585666 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:47:53.617974 1120970 cri.go:89] found id: ""
	I0729 19:47:53.618006 1120970 logs.go:276] 0 containers: []
	W0729 19:47:53.618014 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:47:53.618024 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:47:53.618036 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:47:53.670860 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:47:53.670897 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:47:53.685089 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:47:53.685120 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:47:53.760570 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:47:53.760598 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:47:53.760611 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:47:53.848973 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:47:53.849018 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:47:56.394206 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:47:56.409087 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:47:56.409167 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:47:56.447553 1120970 cri.go:89] found id: ""
	I0729 19:47:56.447589 1120970 logs.go:276] 0 containers: []
	W0729 19:47:56.447607 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:47:56.447615 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:47:56.447694 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:47:56.485948 1120970 cri.go:89] found id: ""
	I0729 19:47:56.485978 1120970 logs.go:276] 0 containers: []
	W0729 19:47:56.485986 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:47:56.485992 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:47:56.486061 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:47:56.521722 1120970 cri.go:89] found id: ""
	I0729 19:47:56.521762 1120970 logs.go:276] 0 containers: []
	W0729 19:47:56.521784 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:47:56.521792 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:47:56.521855 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:47:56.557379 1120970 cri.go:89] found id: ""
	I0729 19:47:56.557414 1120970 logs.go:276] 0 containers: []
	W0729 19:47:56.557425 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:47:56.557433 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:47:56.557488 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:47:56.595198 1120970 cri.go:89] found id: ""
	I0729 19:47:56.595225 1120970 logs.go:276] 0 containers: []
	W0729 19:47:56.595233 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:47:56.595240 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:47:56.595306 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:47:56.629298 1120970 cri.go:89] found id: ""
	I0729 19:47:56.629330 1120970 logs.go:276] 0 containers: []
	W0729 19:47:56.629337 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:47:56.629344 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:47:56.629410 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:47:56.663401 1120970 cri.go:89] found id: ""
	I0729 19:47:56.663434 1120970 logs.go:276] 0 containers: []
	W0729 19:47:56.663445 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:47:56.663453 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:47:56.663519 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:47:56.699622 1120970 cri.go:89] found id: ""
	I0729 19:47:56.699651 1120970 logs.go:276] 0 containers: []
	W0729 19:47:56.699661 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:47:56.699672 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:47:56.699688 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:47:56.739680 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:47:56.739713 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:47:56.794605 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:47:56.794647 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:47:56.824479 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:47:56.824510 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:47:56.889186 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:47:56.889209 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:47:56.889224 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:47:54.196350 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:56.696572 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:57.137366 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:59.638403 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:56.278603 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:58.280193 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:00.778204 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:59.472943 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:47:59.488574 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:47:59.488657 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:47:59.528870 1120970 cri.go:89] found id: ""
	I0729 19:47:59.528910 1120970 logs.go:276] 0 containers: []
	W0729 19:47:59.528921 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:47:59.528930 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:47:59.529001 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:47:59.565299 1120970 cri.go:89] found id: ""
	I0729 19:47:59.565331 1120970 logs.go:276] 0 containers: []
	W0729 19:47:59.565343 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:47:59.565351 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:47:59.565419 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:47:59.604951 1120970 cri.go:89] found id: ""
	I0729 19:47:59.604985 1120970 logs.go:276] 0 containers: []
	W0729 19:47:59.604996 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:47:59.605005 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:47:59.605076 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:47:59.639094 1120970 cri.go:89] found id: ""
	I0729 19:47:59.639121 1120970 logs.go:276] 0 containers: []
	W0729 19:47:59.639130 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:47:59.639138 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:47:59.639205 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:47:59.674360 1120970 cri.go:89] found id: ""
	I0729 19:47:59.674392 1120970 logs.go:276] 0 containers: []
	W0729 19:47:59.674401 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:47:59.674407 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:47:59.674462 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:47:59.712926 1120970 cri.go:89] found id: ""
	I0729 19:47:59.712950 1120970 logs.go:276] 0 containers: []
	W0729 19:47:59.712959 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:47:59.712965 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:47:59.713026 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:47:59.750493 1120970 cri.go:89] found id: ""
	I0729 19:47:59.750524 1120970 logs.go:276] 0 containers: []
	W0729 19:47:59.750532 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:47:59.750539 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:47:59.750603 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:47:59.790635 1120970 cri.go:89] found id: ""
	I0729 19:47:59.790663 1120970 logs.go:276] 0 containers: []
	W0729 19:47:59.790672 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:47:59.790687 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:47:59.790703 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:47:59.844160 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:47:59.844194 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:47:59.858123 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:47:59.858152 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:47:59.931561 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:47:59.931592 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:47:59.931609 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:48:00.014902 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:48:00.014947 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:48:02.555856 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:48:02.572781 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:48:02.572852 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:48:02.611005 1120970 cri.go:89] found id: ""
	I0729 19:48:02.611033 1120970 logs.go:276] 0 containers: []
	W0729 19:48:02.611043 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:48:02.611049 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:48:02.611101 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:48:02.652844 1120970 cri.go:89] found id: ""
	I0729 19:48:02.652870 1120970 logs.go:276] 0 containers: []
	W0729 19:48:02.652876 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:48:02.652883 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:48:02.652937 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:48:02.694690 1120970 cri.go:89] found id: ""
	I0729 19:48:02.694719 1120970 logs.go:276] 0 containers: []
	W0729 19:48:02.694729 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:48:02.694738 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:48:02.694799 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:48:02.729527 1120970 cri.go:89] found id: ""
	I0729 19:48:02.729558 1120970 logs.go:276] 0 containers: []
	W0729 19:48:02.729569 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:48:02.729576 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:48:02.729649 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:48:02.763460 1120970 cri.go:89] found id: ""
	I0729 19:48:02.763488 1120970 logs.go:276] 0 containers: []
	W0729 19:48:02.763497 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:48:02.763503 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:48:02.763556 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:48:02.798268 1120970 cri.go:89] found id: ""
	I0729 19:48:02.798294 1120970 logs.go:276] 0 containers: []
	W0729 19:48:02.798302 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:48:02.798309 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:48:02.798360 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:48:02.837540 1120970 cri.go:89] found id: ""
	I0729 19:48:02.837579 1120970 logs.go:276] 0 containers: []
	W0729 19:48:02.837591 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:48:02.837605 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:48:02.837672 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:48:02.873574 1120970 cri.go:89] found id: ""
	I0729 19:48:02.873612 1120970 logs.go:276] 0 containers: []
	W0729 19:48:02.873624 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:48:02.873646 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:48:02.873663 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:48:02.926260 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:48:02.926296 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:48:02.940593 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:48:02.940618 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 19:47:59.195148 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:01.195230 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:03.196163 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:02.139034 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:04.637691 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:02.778540 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:04.781529 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	W0729 19:48:03.015778 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:48:03.015800 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:48:03.015818 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:48:03.099824 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:48:03.099859 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:48:05.639291 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:48:05.652370 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:48:05.652431 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:48:05.686594 1120970 cri.go:89] found id: ""
	I0729 19:48:05.686624 1120970 logs.go:276] 0 containers: []
	W0729 19:48:05.686633 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:48:05.686640 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:48:05.686701 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:48:05.722162 1120970 cri.go:89] found id: ""
	I0729 19:48:05.722192 1120970 logs.go:276] 0 containers: []
	W0729 19:48:05.722209 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:48:05.722216 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:48:05.722284 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:48:05.754309 1120970 cri.go:89] found id: ""
	I0729 19:48:05.754338 1120970 logs.go:276] 0 containers: []
	W0729 19:48:05.754349 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:48:05.754357 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:48:05.754449 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:48:05.786934 1120970 cri.go:89] found id: ""
	I0729 19:48:05.786962 1120970 logs.go:276] 0 containers: []
	W0729 19:48:05.786968 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:48:05.786974 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:48:05.787032 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:48:05.821454 1120970 cri.go:89] found id: ""
	I0729 19:48:05.821487 1120970 logs.go:276] 0 containers: []
	W0729 19:48:05.821498 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:48:05.821506 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:48:05.821575 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:48:05.855436 1120970 cri.go:89] found id: ""
	I0729 19:48:05.855467 1120970 logs.go:276] 0 containers: []
	W0729 19:48:05.855478 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:48:05.855486 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:48:05.855551 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:48:05.887414 1120970 cri.go:89] found id: ""
	I0729 19:48:05.887447 1120970 logs.go:276] 0 containers: []
	W0729 19:48:05.887466 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:48:05.887477 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:48:05.887549 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:48:05.924173 1120970 cri.go:89] found id: ""
	I0729 19:48:05.924200 1120970 logs.go:276] 0 containers: []
	W0729 19:48:05.924208 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:48:05.924218 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:48:05.924231 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:48:05.977839 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:48:05.977872 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:48:05.991324 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:48:05.991359 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:48:06.065904 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:48:06.065931 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:48:06.065949 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:48:06.149225 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:48:06.149258 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:48:05.196530 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:07.695302 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:06.640464 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:09.137577 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:07.277286 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:09.278994 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:08.689901 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:48:08.705008 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:48:08.705073 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:48:08.746191 1120970 cri.go:89] found id: ""
	I0729 19:48:08.746222 1120970 logs.go:276] 0 containers: []
	W0729 19:48:08.746232 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:48:08.746240 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:48:08.746306 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:48:08.792092 1120970 cri.go:89] found id: ""
	I0729 19:48:08.792120 1120970 logs.go:276] 0 containers: []
	W0729 19:48:08.792130 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:48:08.792137 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:48:08.792196 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:48:08.831535 1120970 cri.go:89] found id: ""
	I0729 19:48:08.831567 1120970 logs.go:276] 0 containers: []
	W0729 19:48:08.831577 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:48:08.831585 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:48:08.831650 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:48:08.871544 1120970 cri.go:89] found id: ""
	I0729 19:48:08.871576 1120970 logs.go:276] 0 containers: []
	W0729 19:48:08.871587 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:48:08.871594 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:48:08.871661 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:48:08.909562 1120970 cri.go:89] found id: ""
	I0729 19:48:08.909594 1120970 logs.go:276] 0 containers: []
	W0729 19:48:08.909611 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:48:08.909621 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:48:08.909698 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:48:08.953074 1120970 cri.go:89] found id: ""
	I0729 19:48:08.953109 1120970 logs.go:276] 0 containers: []
	W0729 19:48:08.953121 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:48:08.953130 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:48:08.953202 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:48:08.992361 1120970 cri.go:89] found id: ""
	I0729 19:48:08.992400 1120970 logs.go:276] 0 containers: []
	W0729 19:48:08.992412 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:48:08.992421 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:48:08.992488 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:48:09.046065 1120970 cri.go:89] found id: ""
	I0729 19:48:09.046093 1120970 logs.go:276] 0 containers: []
	W0729 19:48:09.046101 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:48:09.046113 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:48:09.046134 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:48:09.103453 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:48:09.103494 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:48:09.117220 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:48:09.117245 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:48:09.188222 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:48:09.188252 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:48:09.188270 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:48:09.271640 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:48:09.271677 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
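	The cycle above is minikube's log-gathering loop: it probes each expected control-plane component with crictl, finds no containers, and falls back to host-level journald and dmesg output. A rough manual equivalent, run from a shell on the node, is sketched below (a sketch only; it assumes the node is reachable with minikube ssh -p <profile>, where <profile> is a placeholder, and the component names are taken from the log above):

	    # Probe each expected control-plane component the way the log does.
	    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	             kube-controller-manager kindnet kubernetes-dashboard; do
	      ids=$(sudo crictl ps -a --quiet --name="$c")
	      if [ -z "$ids" ]; then
	        echo "no containers found matching \"$c\""
	      else
	        echo "$c: $ids"
	      fi
	    done
	    # With every probe empty, the only useful logs are the host services:
	    sudo journalctl -u kubelet -n 400
	    sudo journalctl -u crio -n 400
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
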
	I0729 19:48:11.812430 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:48:11.827291 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:48:11.827387 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:48:11.865062 1120970 cri.go:89] found id: ""
	I0729 19:48:11.865099 1120970 logs.go:276] 0 containers: []
	W0729 19:48:11.865111 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:48:11.865120 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:48:11.865212 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:48:11.899431 1120970 cri.go:89] found id: ""
	I0729 19:48:11.899465 1120970 logs.go:276] 0 containers: []
	W0729 19:48:11.899475 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:48:11.899483 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:48:11.899547 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:48:11.933796 1120970 cri.go:89] found id: ""
	I0729 19:48:11.933831 1120970 logs.go:276] 0 containers: []
	W0729 19:48:11.933843 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:48:11.933851 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:48:11.933920 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:48:11.976911 1120970 cri.go:89] found id: ""
	I0729 19:48:11.976941 1120970 logs.go:276] 0 containers: []
	W0729 19:48:11.976951 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:48:11.976958 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:48:11.977020 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:48:12.012692 1120970 cri.go:89] found id: ""
	I0729 19:48:12.012723 1120970 logs.go:276] 0 containers: []
	W0729 19:48:12.012732 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:48:12.012738 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:48:12.012801 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:48:12.049648 1120970 cri.go:89] found id: ""
	I0729 19:48:12.049684 1120970 logs.go:276] 0 containers: []
	W0729 19:48:12.049695 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:48:12.049704 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:48:12.049771 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:48:12.093629 1120970 cri.go:89] found id: ""
	I0729 19:48:12.093662 1120970 logs.go:276] 0 containers: []
	W0729 19:48:12.093673 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:48:12.093682 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:48:12.093752 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:48:12.130835 1120970 cri.go:89] found id: ""
	I0729 19:48:12.130887 1120970 logs.go:276] 0 containers: []
	W0729 19:48:12.130899 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:48:12.130912 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:48:12.130930 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:48:12.168464 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:48:12.168494 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:48:12.224722 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:48:12.224767 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:48:12.238454 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:48:12.238491 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:48:12.309122 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:48:12.309156 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:48:12.309171 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:48:10.195555 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:12.196093 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:11.638217 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:14.137267 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:11.778922 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:13.779268 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:14.892160 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:48:14.906036 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:48:14.906105 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:48:14.939106 1120970 cri.go:89] found id: ""
	I0729 19:48:14.939136 1120970 logs.go:276] 0 containers: []
	W0729 19:48:14.939144 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:48:14.939151 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:48:14.939218 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:48:14.973776 1120970 cri.go:89] found id: ""
	I0729 19:48:14.973806 1120970 logs.go:276] 0 containers: []
	W0729 19:48:14.973817 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:48:14.973825 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:48:14.973887 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:48:15.004448 1120970 cri.go:89] found id: ""
	I0729 19:48:15.004475 1120970 logs.go:276] 0 containers: []
	W0729 19:48:15.004483 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:48:15.004489 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:48:15.004556 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:48:15.038066 1120970 cri.go:89] found id: ""
	I0729 19:48:15.038093 1120970 logs.go:276] 0 containers: []
	W0729 19:48:15.038101 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:48:15.038110 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:48:15.038174 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:48:15.070539 1120970 cri.go:89] found id: ""
	I0729 19:48:15.070568 1120970 logs.go:276] 0 containers: []
	W0729 19:48:15.070577 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:48:15.070585 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:48:15.070646 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:48:15.103880 1120970 cri.go:89] found id: ""
	I0729 19:48:15.103922 1120970 logs.go:276] 0 containers: []
	W0729 19:48:15.103934 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:48:15.103943 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:48:15.104013 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:48:15.140762 1120970 cri.go:89] found id: ""
	I0729 19:48:15.140785 1120970 logs.go:276] 0 containers: []
	W0729 19:48:15.140792 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:48:15.140798 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:48:15.140850 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:48:15.174376 1120970 cri.go:89] found id: ""
	I0729 19:48:15.174411 1120970 logs.go:276] 0 containers: []
	W0729 19:48:15.174422 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:48:15.174434 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:48:15.174457 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:48:15.231283 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:48:15.231319 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:48:15.245103 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:48:15.245131 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:48:15.317664 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:48:15.317685 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:48:15.317701 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:48:15.404545 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:48:15.404600 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:48:17.949406 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:48:17.963001 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:48:17.963084 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:48:14.697767 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:17.194300 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:16.137773 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:16.632390 1120280 pod_ready.go:81] duration metric: took 4m0.001130574s for pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace to be "Ready" ...
	E0729 19:48:16.632416 1120280 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace to be "Ready" (will not retry!)
	I0729 19:48:16.632439 1120280 pod_ready.go:38] duration metric: took 4m10.712020611s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 19:48:16.632469 1120280 kubeadm.go:597] duration metric: took 4m18.568642855s to restartPrimaryControlPlane
	W0729 19:48:16.632566 1120280 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0729 19:48:16.632597 1120280 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
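	At this point the 1120280 run has exhausted its 4m0s WaitExtra window: metrics-server-569cc877fc-jsvnd never reported Ready, so minikube gives up on restarting the existing control plane and resets the cluster with kubeadm before bootstrapping it again. A manual check equivalent to that readiness wait looks like the following (a sketch only; the k8s-app=metrics-server label is assumed from the stock metrics-server manifests, and <profile> is a placeholder for the kubeconfig context):

	    # Mirror minikube's 4-minute readiness wait for the metrics-server pod.
	    kubectl --context <profile> -n kube-system wait pod \
	      -l k8s-app=metrics-server --for=condition=Ready --timeout=4m0s
	    # If it times out, inspect the pod events and readiness-probe failures.
	    kubectl --context <profile> -n kube-system describe pod -l k8s-app=metrics-server
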
	I0729 19:48:16.279567 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:18.280676 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:20.779399 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:18.003227 1120970 cri.go:89] found id: ""
	I0729 19:48:18.003263 1120970 logs.go:276] 0 containers: []
	W0729 19:48:18.003274 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:48:18.003284 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:48:18.003363 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:48:18.037680 1120970 cri.go:89] found id: ""
	I0729 19:48:18.037716 1120970 logs.go:276] 0 containers: []
	W0729 19:48:18.037727 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:48:18.037736 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:48:18.037804 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:48:18.081360 1120970 cri.go:89] found id: ""
	I0729 19:48:18.081393 1120970 logs.go:276] 0 containers: []
	W0729 19:48:18.081403 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:48:18.081412 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:48:18.081479 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:48:18.115582 1120970 cri.go:89] found id: ""
	I0729 19:48:18.115619 1120970 logs.go:276] 0 containers: []
	W0729 19:48:18.115630 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:48:18.115639 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:48:18.115708 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:48:18.159771 1120970 cri.go:89] found id: ""
	I0729 19:48:18.159807 1120970 logs.go:276] 0 containers: []
	W0729 19:48:18.159818 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:48:18.159826 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:48:18.159899 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:48:18.206073 1120970 cri.go:89] found id: ""
	I0729 19:48:18.206100 1120970 logs.go:276] 0 containers: []
	W0729 19:48:18.206107 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:48:18.206113 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:48:18.206173 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:48:18.241841 1120970 cri.go:89] found id: ""
	I0729 19:48:18.241880 1120970 logs.go:276] 0 containers: []
	W0729 19:48:18.241892 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:48:18.241900 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:48:18.241969 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:48:18.280068 1120970 cri.go:89] found id: ""
	I0729 19:48:18.280099 1120970 logs.go:276] 0 containers: []
	W0729 19:48:18.280110 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:48:18.280123 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:48:18.280143 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:48:18.360236 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:48:18.360268 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:48:18.360285 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:48:18.447648 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:48:18.447693 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:48:18.489625 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:48:18.489663 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:48:18.543428 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:48:18.543476 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:48:21.058220 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:48:21.073079 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:48:21.073168 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:48:21.111334 1120970 cri.go:89] found id: ""
	I0729 19:48:21.111377 1120970 logs.go:276] 0 containers: []
	W0729 19:48:21.111389 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:48:21.111398 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:48:21.111462 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:48:21.144757 1120970 cri.go:89] found id: ""
	I0729 19:48:21.144788 1120970 logs.go:276] 0 containers: []
	W0729 19:48:21.144798 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:48:21.144806 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:48:21.144872 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:48:21.178887 1120970 cri.go:89] found id: ""
	I0729 19:48:21.178919 1120970 logs.go:276] 0 containers: []
	W0729 19:48:21.178927 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:48:21.178934 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:48:21.179000 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:48:21.216561 1120970 cri.go:89] found id: ""
	I0729 19:48:21.216589 1120970 logs.go:276] 0 containers: []
	W0729 19:48:21.216605 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:48:21.216612 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:48:21.216679 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:48:21.252564 1120970 cri.go:89] found id: ""
	I0729 19:48:21.252601 1120970 logs.go:276] 0 containers: []
	W0729 19:48:21.252612 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:48:21.252621 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:48:21.252692 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:48:21.287372 1120970 cri.go:89] found id: ""
	I0729 19:48:21.287399 1120970 logs.go:276] 0 containers: []
	W0729 19:48:21.287410 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:48:21.287418 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:48:21.287482 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:48:21.325121 1120970 cri.go:89] found id: ""
	I0729 19:48:21.325159 1120970 logs.go:276] 0 containers: []
	W0729 19:48:21.325169 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:48:21.325177 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:48:21.325248 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:48:21.359113 1120970 cri.go:89] found id: ""
	I0729 19:48:21.359145 1120970 logs.go:276] 0 containers: []
	W0729 19:48:21.359156 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:48:21.359169 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:48:21.359185 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:48:21.416196 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:48:21.416233 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:48:21.430635 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:48:21.430668 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:48:21.498436 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:48:21.498461 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:48:21.498478 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:48:21.578602 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:48:21.578643 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:48:19.195857 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:21.202391 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:23.696778 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:23.278313 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:25.279270 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:24.117802 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:48:24.132716 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:48:24.132796 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:48:24.168658 1120970 cri.go:89] found id: ""
	I0729 19:48:24.168689 1120970 logs.go:276] 0 containers: []
	W0729 19:48:24.168698 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:48:24.168703 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:48:24.168763 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:48:24.211499 1120970 cri.go:89] found id: ""
	I0729 19:48:24.211533 1120970 logs.go:276] 0 containers: []
	W0729 19:48:24.211543 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:48:24.211551 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:48:24.211622 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:48:24.244579 1120970 cri.go:89] found id: ""
	I0729 19:48:24.244607 1120970 logs.go:276] 0 containers: []
	W0729 19:48:24.244616 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:48:24.244622 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:48:24.244680 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:48:24.278356 1120970 cri.go:89] found id: ""
	I0729 19:48:24.278386 1120970 logs.go:276] 0 containers: []
	W0729 19:48:24.278396 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:48:24.278404 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:48:24.278469 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:48:24.314725 1120970 cri.go:89] found id: ""
	I0729 19:48:24.314760 1120970 logs.go:276] 0 containers: []
	W0729 19:48:24.314771 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:48:24.314779 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:48:24.314870 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:48:24.349743 1120970 cri.go:89] found id: ""
	I0729 19:48:24.349772 1120970 logs.go:276] 0 containers: []
	W0729 19:48:24.349781 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:48:24.349788 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:48:24.349863 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:48:24.382484 1120970 cri.go:89] found id: ""
	I0729 19:48:24.382511 1120970 logs.go:276] 0 containers: []
	W0729 19:48:24.382521 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:48:24.382529 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:48:24.382606 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:48:24.418986 1120970 cri.go:89] found id: ""
	I0729 19:48:24.419013 1120970 logs.go:276] 0 containers: []
	W0729 19:48:24.419020 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:48:24.419030 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:48:24.419052 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:48:24.456725 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:48:24.456762 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:48:24.508592 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:48:24.508628 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:48:24.521610 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:48:24.521642 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:48:24.591015 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:48:24.591041 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:48:24.591058 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:48:27.170099 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:48:27.183543 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:48:27.183619 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:48:27.218044 1120970 cri.go:89] found id: ""
	I0729 19:48:27.218075 1120970 logs.go:276] 0 containers: []
	W0729 19:48:27.218083 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:48:27.218090 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:48:27.218154 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:48:27.251613 1120970 cri.go:89] found id: ""
	I0729 19:48:27.251638 1120970 logs.go:276] 0 containers: []
	W0729 19:48:27.251646 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:48:27.251651 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:48:27.251707 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:48:27.291540 1120970 cri.go:89] found id: ""
	I0729 19:48:27.291569 1120970 logs.go:276] 0 containers: []
	W0729 19:48:27.291578 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:48:27.291586 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:48:27.291650 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:48:27.322921 1120970 cri.go:89] found id: ""
	I0729 19:48:27.322956 1120970 logs.go:276] 0 containers: []
	W0729 19:48:27.322965 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:48:27.322973 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:48:27.323042 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:48:27.360337 1120970 cri.go:89] found id: ""
	I0729 19:48:27.360370 1120970 logs.go:276] 0 containers: []
	W0729 19:48:27.360381 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:48:27.360389 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:48:27.360448 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:48:27.398445 1120970 cri.go:89] found id: ""
	I0729 19:48:27.398490 1120970 logs.go:276] 0 containers: []
	W0729 19:48:27.398502 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:48:27.398510 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:48:27.398577 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:48:27.432147 1120970 cri.go:89] found id: ""
	I0729 19:48:27.432176 1120970 logs.go:276] 0 containers: []
	W0729 19:48:27.432184 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:48:27.432191 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:48:27.432260 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:48:27.471347 1120970 cri.go:89] found id: ""
	I0729 19:48:27.471380 1120970 logs.go:276] 0 containers: []
	W0729 19:48:27.471392 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:48:27.471404 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:48:27.471421 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:48:27.526997 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:48:27.527032 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:48:27.541189 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:48:27.541219 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:48:27.612270 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:48:27.612293 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:48:27.612310 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:48:27.688940 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:48:27.688979 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
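	Every describe-nodes attempt in this loop fails identically: kubectl cannot reach localhost:8443 because, as the crictl probes above show, no kube-apiserver container exists yet. A quick confirmation from inside the node is sketched below (a sketch only; it assumes the default minikube apiserver port 8443 and the standard kubeadm static-pod directory):

	    # Nothing should be listening on the apiserver port, and no apiserver container should exist.
	    sudo ss -ltnp | grep 8443 || echo "nothing listening on 8443"
	    sudo crictl ps -a --name kube-apiserver
	    # kubeadm-managed clusters start the apiserver from a static pod manifest here:
	    ls -l /etc/kubernetes/manifests/
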
	I0729 19:48:26.195903 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:28.696936 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:27.778151 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:30.278900 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:30.228578 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:48:30.241827 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:48:30.241896 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:48:30.275201 1120970 cri.go:89] found id: ""
	I0729 19:48:30.275230 1120970 logs.go:276] 0 containers: []
	W0729 19:48:30.275241 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:48:30.275249 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:48:30.275305 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:48:30.313499 1120970 cri.go:89] found id: ""
	I0729 19:48:30.313526 1120970 logs.go:276] 0 containers: []
	W0729 19:48:30.313534 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:48:30.313540 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:48:30.313593 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:48:30.348036 1120970 cri.go:89] found id: ""
	I0729 19:48:30.348063 1120970 logs.go:276] 0 containers: []
	W0729 19:48:30.348072 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:48:30.348078 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:48:30.348148 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:48:30.383104 1120970 cri.go:89] found id: ""
	I0729 19:48:30.383135 1120970 logs.go:276] 0 containers: []
	W0729 19:48:30.383147 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:48:30.383155 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:48:30.383244 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:48:30.421367 1120970 cri.go:89] found id: ""
	I0729 19:48:30.421395 1120970 logs.go:276] 0 containers: []
	W0729 19:48:30.421404 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:48:30.421418 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:48:30.421484 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:48:30.460712 1120970 cri.go:89] found id: ""
	I0729 19:48:30.460746 1120970 logs.go:276] 0 containers: []
	W0729 19:48:30.460758 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:48:30.460767 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:48:30.460832 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:48:30.503728 1120970 cri.go:89] found id: ""
	I0729 19:48:30.503757 1120970 logs.go:276] 0 containers: []
	W0729 19:48:30.503769 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:48:30.503777 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:48:30.503842 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:48:30.544605 1120970 cri.go:89] found id: ""
	I0729 19:48:30.544639 1120970 logs.go:276] 0 containers: []
	W0729 19:48:30.544651 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:48:30.544663 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:48:30.544680 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:48:30.559616 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:48:30.559652 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:48:30.634554 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:48:30.634578 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:48:30.634599 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:48:30.717930 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:48:30.717968 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:48:30.759109 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:48:30.759140 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:48:31.194967 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:33.195033 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:32.777218 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:34.777917 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:33.313550 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:48:33.327425 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:48:33.327483 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:48:33.369009 1120970 cri.go:89] found id: ""
	I0729 19:48:33.369037 1120970 logs.go:276] 0 containers: []
	W0729 19:48:33.369047 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:48:33.369054 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:48:33.369121 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:48:33.406459 1120970 cri.go:89] found id: ""
	I0729 19:48:33.406491 1120970 logs.go:276] 0 containers: []
	W0729 19:48:33.406501 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:48:33.406509 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:48:33.406579 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:48:33.444176 1120970 cri.go:89] found id: ""
	I0729 19:48:33.444210 1120970 logs.go:276] 0 containers: []
	W0729 19:48:33.444222 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:48:33.444230 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:48:33.444297 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:48:33.482882 1120970 cri.go:89] found id: ""
	I0729 19:48:33.482977 1120970 logs.go:276] 0 containers: []
	W0729 19:48:33.482994 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:48:33.483002 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:48:33.483070 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:48:33.516972 1120970 cri.go:89] found id: ""
	I0729 19:48:33.516999 1120970 logs.go:276] 0 containers: []
	W0729 19:48:33.517009 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:48:33.517015 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:48:33.517077 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:48:33.557559 1120970 cri.go:89] found id: ""
	I0729 19:48:33.557598 1120970 logs.go:276] 0 containers: []
	W0729 19:48:33.557620 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:48:33.557629 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:48:33.557699 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:48:33.592756 1120970 cri.go:89] found id: ""
	I0729 19:48:33.592786 1120970 logs.go:276] 0 containers: []
	W0729 19:48:33.592793 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:48:33.592799 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:48:33.592858 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:48:33.626104 1120970 cri.go:89] found id: ""
	I0729 19:48:33.626136 1120970 logs.go:276] 0 containers: []
	W0729 19:48:33.626147 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:48:33.626158 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:48:33.626175 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:48:33.680456 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:48:33.680498 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:48:33.694700 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:48:33.694732 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:48:33.770833 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:48:33.770863 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:48:33.770881 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:48:33.847537 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:48:33.847571 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:48:36.390251 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:48:36.403265 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:48:36.403377 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:48:36.437189 1120970 cri.go:89] found id: ""
	I0729 19:48:36.437216 1120970 logs.go:276] 0 containers: []
	W0729 19:48:36.437227 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:48:36.437235 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:48:36.437296 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:48:36.471025 1120970 cri.go:89] found id: ""
	I0729 19:48:36.471056 1120970 logs.go:276] 0 containers: []
	W0729 19:48:36.471067 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:48:36.471083 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:48:36.471143 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:48:36.504736 1120970 cri.go:89] found id: ""
	I0729 19:48:36.504767 1120970 logs.go:276] 0 containers: []
	W0729 19:48:36.504779 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:48:36.504787 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:48:36.504852 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:48:36.537866 1120970 cri.go:89] found id: ""
	I0729 19:48:36.537893 1120970 logs.go:276] 0 containers: []
	W0729 19:48:36.537903 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:48:36.537911 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:48:36.537974 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:48:36.574083 1120970 cri.go:89] found id: ""
	I0729 19:48:36.574116 1120970 logs.go:276] 0 containers: []
	W0729 19:48:36.574127 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:48:36.574136 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:48:36.574199 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:48:36.613130 1120970 cri.go:89] found id: ""
	I0729 19:48:36.613160 1120970 logs.go:276] 0 containers: []
	W0729 19:48:36.613172 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:48:36.613179 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:48:36.613244 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:48:36.649617 1120970 cri.go:89] found id: ""
	I0729 19:48:36.649644 1120970 logs.go:276] 0 containers: []
	W0729 19:48:36.649655 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:48:36.649663 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:48:36.649731 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:48:36.688729 1120970 cri.go:89] found id: ""
	I0729 19:48:36.688765 1120970 logs.go:276] 0 containers: []
	W0729 19:48:36.688777 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:48:36.688790 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:48:36.688807 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:48:36.741483 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:48:36.741524 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:48:36.759730 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:48:36.759777 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:48:36.847102 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:48:36.847129 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:48:36.847148 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:48:36.928364 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:48:36.928403 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:48:35.695788 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:38.195691 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:36.780250 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:38.272543 1120587 pod_ready.go:81] duration metric: took 4m0.000382733s for pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace to be "Ready" ...
	E0729 19:48:38.272574 1120587 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace to be "Ready" (will not retry!)
	I0729 19:48:38.272595 1120587 pod_ready.go:38] duration metric: took 4m12.412522427s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 19:48:38.272622 1120587 kubeadm.go:597] duration metric: took 4m20.569295588s to restartPrimaryControlPlane
	W0729 19:48:38.272693 1120587 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0729 19:48:38.272722 1120587 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0729 19:48:39.468501 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:48:39.482102 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:48:39.482180 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:48:39.522722 1120970 cri.go:89] found id: ""
	I0729 19:48:39.522754 1120970 logs.go:276] 0 containers: []
	W0729 19:48:39.522763 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:48:39.522769 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:48:39.522824 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:48:39.561057 1120970 cri.go:89] found id: ""
	I0729 19:48:39.561088 1120970 logs.go:276] 0 containers: []
	W0729 19:48:39.561098 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:48:39.561106 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:48:39.561185 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:48:39.599802 1120970 cri.go:89] found id: ""
	I0729 19:48:39.599831 1120970 logs.go:276] 0 containers: []
	W0729 19:48:39.599840 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:48:39.599848 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:48:39.599920 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:48:39.634935 1120970 cri.go:89] found id: ""
	I0729 19:48:39.634966 1120970 logs.go:276] 0 containers: []
	W0729 19:48:39.634978 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:48:39.634986 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:48:39.635054 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:48:39.670682 1120970 cri.go:89] found id: ""
	I0729 19:48:39.670713 1120970 logs.go:276] 0 containers: []
	W0729 19:48:39.670721 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:48:39.670728 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:48:39.670798 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:48:39.705988 1120970 cri.go:89] found id: ""
	I0729 19:48:39.706024 1120970 logs.go:276] 0 containers: []
	W0729 19:48:39.706034 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:48:39.706042 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:48:39.706112 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:48:39.743886 1120970 cri.go:89] found id: ""
	I0729 19:48:39.743919 1120970 logs.go:276] 0 containers: []
	W0729 19:48:39.743931 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:48:39.743938 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:48:39.744007 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:48:39.781966 1120970 cri.go:89] found id: ""
	I0729 19:48:39.782000 1120970 logs.go:276] 0 containers: []
	W0729 19:48:39.782011 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:48:39.782023 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:48:39.782040 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:48:39.836034 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:48:39.836074 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:48:39.849330 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:48:39.849365 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:48:39.922803 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:48:39.922832 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:48:39.922860 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:48:40.006015 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:48:40.006061 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:48:42.556277 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:48:42.569657 1120970 kubeadm.go:597] duration metric: took 4m2.867642237s to restartPrimaryControlPlane
	W0729 19:48:42.569742 1120970 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0729 19:48:42.569773 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0729 19:48:40.695917 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:43.195442 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:43.033878 1120970 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 19:48:43.048499 1120970 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 19:48:43.058936 1120970 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 19:48:43.070746 1120970 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 19:48:43.070766 1120970 kubeadm.go:157] found existing configuration files:
	
	I0729 19:48:43.070814 1120970 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 19:48:43.079568 1120970 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 19:48:43.079631 1120970 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 19:48:43.088576 1120970 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 19:48:43.097654 1120970 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 19:48:43.097723 1120970 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 19:48:43.107155 1120970 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 19:48:43.117105 1120970 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 19:48:43.117152 1120970 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 19:48:43.126933 1120970 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 19:48:43.136114 1120970 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 19:48:43.136162 1120970 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
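
The sequence above is minikube's stale-config check before re-running kubeadm init: for each kubeconfig under /etc/kubernetes it greps for the expected control-plane endpoint and, when grep exits non-zero (no match or missing file), removes the file. A minimal, self-contained Go sketch of that check-and-remove pattern follows, run locally rather than over SSH as in the log; the endpoint and file list are taken from the commands above, while the function itself is illustrative and not minikube's implementation.

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    // cleanStaleKubeconfigs mirrors the pattern in the log above: grep each
    // kubeconfig for the expected control-plane endpoint and delete the file
    // when the endpoint is not found (grep exits non-zero for no match or a
    // missing file). Paths and endpoint follow the log; the helper is a sketch.
    func cleanStaleKubeconfigs(endpoint string) {
    	files := []string{
    		"/etc/kubernetes/admin.conf",
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	}
    	for _, f := range files {
    		// Equivalent of: sudo grep <endpoint> <file>
    		if err := exec.Command("grep", endpoint, f).Run(); err != nil {
    			fmt.Printf("%q may not be in %s - will remove\n", endpoint, f)
    			// Equivalent of: sudo rm -f <file>; error ignored like rm -f
    			os.Remove(f)
    		}
    	}
    }

    func main() {
    	cleanStaleKubeconfigs("https://control-plane.minikube.internal:8443")
    }

The same cleanup appears again later in the log for the other profiles (port 8444 for default-k8s-diff-port); only the endpoint string differs.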
	I0729 19:48:43.145196 1120970 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 19:48:43.365894 1120970 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 19:48:45.695643 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:47.696055 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:48.051556 1120280 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.418935975s)
	I0729 19:48:48.051634 1120280 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 19:48:48.066832 1120280 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 19:48:48.076768 1120280 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 19:48:48.086203 1120280 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 19:48:48.086224 1120280 kubeadm.go:157] found existing configuration files:
	
	I0729 19:48:48.086269 1120280 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 19:48:48.095286 1120280 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 19:48:48.095344 1120280 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 19:48:48.104238 1120280 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 19:48:48.113232 1120280 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 19:48:48.113287 1120280 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 19:48:48.122679 1120280 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 19:48:48.131511 1120280 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 19:48:48.131565 1120280 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 19:48:48.140110 1120280 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 19:48:48.148601 1120280 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 19:48:48.148650 1120280 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 19:48:48.157410 1120280 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 19:48:48.352715 1120280 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 19:48:50.195418 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:52.696285 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:56.332520 1120280 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0729 19:48:56.332571 1120280 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 19:48:56.332675 1120280 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 19:48:56.332770 1120280 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 19:48:56.332853 1120280 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0729 19:48:56.332967 1120280 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 19:48:56.334322 1120280 out.go:204]   - Generating certificates and keys ...
	I0729 19:48:56.334409 1120280 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 19:48:56.334490 1120280 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 19:48:56.334605 1120280 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0729 19:48:56.334688 1120280 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0729 19:48:56.334798 1120280 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0729 19:48:56.334897 1120280 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0729 19:48:56.334984 1120280 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0729 19:48:56.335060 1120280 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0729 19:48:56.335161 1120280 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0729 19:48:56.335270 1120280 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0729 19:48:56.335324 1120280 kubeadm.go:310] [certs] Using the existing "sa" key
	I0729 19:48:56.335374 1120280 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 19:48:56.335423 1120280 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 19:48:56.335473 1120280 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0729 19:48:56.335532 1120280 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 19:48:56.335614 1120280 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 19:48:56.335675 1120280 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 19:48:56.335785 1120280 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 19:48:56.335884 1120280 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 19:48:56.336979 1120280 out.go:204]   - Booting up control plane ...
	I0729 19:48:56.337065 1120280 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 19:48:56.337133 1120280 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 19:48:56.337201 1120280 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 19:48:56.337326 1120280 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 19:48:56.337427 1120280 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 19:48:56.337498 1120280 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 19:48:56.337647 1120280 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0729 19:48:56.337714 1120280 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0729 19:48:56.337762 1120280 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.952649ms
	I0729 19:48:56.337821 1120280 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0729 19:48:56.337868 1120280 kubeadm.go:310] [api-check] The API server is healthy after 5.002178003s
	I0729 19:48:56.337955 1120280 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0729 19:48:56.338084 1120280 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0729 19:48:56.338139 1120280 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0729 19:48:56.338289 1120280 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-358053 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0729 19:48:56.338342 1120280 kubeadm.go:310] [bootstrap-token] Using token: 4fomec.1511vtef88eg64ao
	I0729 19:48:56.339522 1120280 out.go:204]   - Configuring RBAC rules ...
	I0729 19:48:56.339612 1120280 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0729 19:48:56.339681 1120280 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0729 19:48:56.339857 1120280 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0729 19:48:56.339995 1120280 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0729 19:48:56.340156 1120280 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0729 19:48:56.340283 1120280 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0729 19:48:56.340438 1120280 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0729 19:48:56.340511 1120280 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0729 19:48:56.340575 1120280 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0729 19:48:56.340585 1120280 kubeadm.go:310] 
	I0729 19:48:56.340671 1120280 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0729 19:48:56.340681 1120280 kubeadm.go:310] 
	I0729 19:48:56.340762 1120280 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0729 19:48:56.340781 1120280 kubeadm.go:310] 
	I0729 19:48:56.340812 1120280 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0729 19:48:56.340861 1120280 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0729 19:48:56.340904 1120280 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0729 19:48:56.340907 1120280 kubeadm.go:310] 
	I0729 19:48:56.340972 1120280 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0729 19:48:56.340978 1120280 kubeadm.go:310] 
	I0729 19:48:56.341034 1120280 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0729 19:48:56.341038 1120280 kubeadm.go:310] 
	I0729 19:48:56.341083 1120280 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0729 19:48:56.341151 1120280 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0729 19:48:56.341209 1120280 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0729 19:48:56.341219 1120280 kubeadm.go:310] 
	I0729 19:48:56.341285 1120280 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0729 19:48:56.341369 1120280 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0729 19:48:56.341376 1120280 kubeadm.go:310] 
	I0729 19:48:56.341454 1120280 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 4fomec.1511vtef88eg64ao \
	I0729 19:48:56.341602 1120280 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:53e118302d1159bf0acd8cfe005edb508199276288ff17d6630ff7f7307f10b1 \
	I0729 19:48:56.341636 1120280 kubeadm.go:310] 	--control-plane 
	I0729 19:48:56.341642 1120280 kubeadm.go:310] 
	I0729 19:48:56.341752 1120280 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0729 19:48:56.341769 1120280 kubeadm.go:310] 
	I0729 19:48:56.341886 1120280 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 4fomec.1511vtef88eg64ao \
	I0729 19:48:56.342018 1120280 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:53e118302d1159bf0acd8cfe005edb508199276288ff17d6630ff7f7307f10b1 
	I0729 19:48:56.342034 1120280 cni.go:84] Creating CNI manager for ""
	I0729 19:48:56.342044 1120280 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 19:48:56.343241 1120280 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 19:48:55.195151 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:57.195200 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:56.344247 1120280 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 19:48:56.355941 1120280 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
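
At this point minikube configures the bridge CNI by writing a conflist to /etc/cni/net.d/1-k8s.conflist. The log records only that a 496-byte file was copied, not its contents, so the JSON below is a representative bridge-plus-portmap conflist of the kind such a file typically contains, not the exact bytes minikube wrote; the Go wrapper simply writes it to the path named in the log.

    package main

    import (
    	"log"
    	"os"
    )

    // Representative bridge CNI conflist; the exact contents of the 496-byte
    // file are not shown in the log, so this body is an assumption.
    const bridgeConflist = `{
      "cniVersion": "1.0.0",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }`

    func main() {
    	// Equivalent of: sudo mkdir -p /etc/cni/net.d (requires root)
    	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
    		log.Fatal(err)
    	}
    	// Equivalent of the scp step that writes 1-k8s.conflist
    	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
    		log.Fatal(err)
    	}
    }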
	I0729 19:48:56.377835 1120280 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0729 19:48:56.377932 1120280 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:48:56.377958 1120280 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-358053 minikube.k8s.io/updated_at=2024_07_29T19_48_56_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=ff26f50543a99741e8a988a34136ffea2b41dbf0 minikube.k8s.io/name=embed-certs-358053 minikube.k8s.io/primary=true
	I0729 19:48:56.394308 1120280 ops.go:34] apiserver oom_adj: -16
	I0729 19:48:56.575183 1120280 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:48:57.076094 1120280 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:48:57.575985 1120280 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:48:58.075805 1120280 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:48:58.576183 1120280 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:48:59.075390 1120280 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:48:59.576159 1120280 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:48:59.195343 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:49:01.696180 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:49:00.075628 1120280 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:00.575675 1120280 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:01.075529 1120280 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:01.576070 1120280 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:02.076065 1120280 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:02.575283 1120280 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:03.076139 1120280 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:03.575717 1120280 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:04.076142 1120280 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:04.575998 1120280 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:04.194697 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:49:06.195094 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:49:08.695788 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:49:05.075222 1120280 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:05.575723 1120280 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:06.075652 1120280 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:06.575680 1120280 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:07.075645 1120280 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:07.575900 1120280 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:08.075951 1120280 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:08.576178 1120280 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:09.076094 1120280 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:09.575480 1120280 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:10.075954 1120280 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:10.185328 1120280 kubeadm.go:1113] duration metric: took 13.807462033s to wait for elevateKubeSystemPrivileges
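
The run of repeated "kubectl get sa default" lines above is minikube polling, roughly every 500ms, for the "default" service account to exist before considering the RBAC bootstrap (elevateKubeSystemPrivileges) complete. A minimal Go sketch of that poll follows; the binary path, arguments, and kubeconfig come from the log, while the 2-minute deadline is an illustrative assumption.

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // waitForDefaultServiceAccount retries `kubectl get sa default` every
    // 500ms, as the log above shows, until it succeeds or the deadline passes.
    func waitForDefaultServiceAccount() error {
    	deadline := time.Now().Add(2 * time.Minute) // assumed timeout
    	for time.Now().Before(deadline) {
    		cmd := exec.Command("sudo", "/var/lib/minikube/binaries/v1.30.3/kubectl",
    			"get", "sa", "default", "--kubeconfig=/var/lib/minikube/kubeconfig")
    		if err := cmd.Run(); err == nil {
    			return nil // service account exists; RBAC bootstrap can finish
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("timed out waiting for default service account")
    }

    func main() {
    	if err := waitForDefaultServiceAccount(); err != nil {
    		fmt.Println(err)
    	}
    }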
	I0729 19:49:10.185372 1120280 kubeadm.go:394] duration metric: took 5m12.173830361s to StartCluster
	I0729 19:49:10.185408 1120280 settings.go:142] acquiring lock: {Name:mk8657322241b3b1f65443d6cee1b2ccb99f315e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 19:49:10.185614 1120280 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19312-1055011/kubeconfig
	I0729 19:49:10.188419 1120280 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-1055011/kubeconfig: {Name:mkf834b33d9b214f3561db5b8f8958d26700afbb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 19:49:10.188761 1120280 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.201 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 19:49:10.188839 1120280 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0729 19:49:10.188929 1120280 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-358053"
	I0729 19:49:10.188939 1120280 config.go:182] Loaded profile config "embed-certs-358053": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 19:49:10.188968 1120280 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-358053"
	I0729 19:49:10.188957 1120280 addons.go:69] Setting default-storageclass=true in profile "embed-certs-358053"
	W0729 19:49:10.188978 1120280 addons.go:243] addon storage-provisioner should already be in state true
	I0729 19:49:10.188967 1120280 addons.go:69] Setting metrics-server=true in profile "embed-certs-358053"
	I0729 19:49:10.189017 1120280 addons.go:234] Setting addon metrics-server=true in "embed-certs-358053"
	I0729 19:49:10.189016 1120280 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-358053"
	I0729 19:49:10.189023 1120280 host.go:66] Checking if "embed-certs-358053" exists ...
	W0729 19:49:10.189026 1120280 addons.go:243] addon metrics-server should already be in state true
	I0729 19:49:10.189059 1120280 host.go:66] Checking if "embed-certs-358053" exists ...
	I0729 19:49:10.189460 1120280 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:49:10.189461 1120280 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:49:10.189493 1120280 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:49:10.189464 1120280 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:49:10.189513 1120280 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:49:10.189539 1120280 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:49:10.192359 1120280 out.go:177] * Verifying Kubernetes components...
	I0729 19:49:10.193480 1120280 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 19:49:10.210772 1120280 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43059
	I0729 19:49:10.210789 1120280 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37187
	I0729 19:49:10.210777 1120280 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43007
	I0729 19:49:10.211410 1120280 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:49:10.211444 1120280 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:49:10.211415 1120280 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:49:10.211943 1120280 main.go:141] libmachine: Using API Version  1
	I0729 19:49:10.211961 1120280 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:49:10.212067 1120280 main.go:141] libmachine: Using API Version  1
	I0729 19:49:10.212082 1120280 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:49:10.212104 1120280 main.go:141] libmachine: Using API Version  1
	I0729 19:49:10.212129 1120280 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:49:10.212485 1120280 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:49:10.212490 1120280 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:49:10.212517 1120280 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:49:10.213028 1120280 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:49:10.213061 1120280 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:49:10.213275 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetState
	I0729 19:49:10.213666 1120280 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:49:10.213693 1120280 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:49:10.217668 1120280 addons.go:234] Setting addon default-storageclass=true in "embed-certs-358053"
	W0729 19:49:10.217694 1120280 addons.go:243] addon default-storageclass should already be in state true
	I0729 19:49:10.217729 1120280 host.go:66] Checking if "embed-certs-358053" exists ...
	I0729 19:49:10.218106 1120280 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:49:10.218134 1120280 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:49:10.233308 1120280 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34717
	I0729 19:49:10.233515 1120280 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45983
	I0729 19:49:10.233923 1120280 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:49:10.234065 1120280 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:49:10.234486 1120280 main.go:141] libmachine: Using API Version  1
	I0729 19:49:10.234511 1120280 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:49:10.234622 1120280 main.go:141] libmachine: Using API Version  1
	I0729 19:49:10.234646 1120280 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:49:10.234881 1120280 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:49:10.235095 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetState
	I0729 19:49:10.235124 1120280 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:49:10.236407 1120280 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37239
	I0729 19:49:10.236417 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetState
	I0729 19:49:10.236976 1120280 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:49:10.237510 1120280 main.go:141] libmachine: Using API Version  1
	I0729 19:49:10.237529 1120280 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:49:10.237603 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .DriverName
	I0729 19:49:10.238068 1120280 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:49:10.238462 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .DriverName
	I0729 19:49:10.238685 1120280 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:49:10.238717 1120280 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:49:10.239583 1120280 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0729 19:49:10.240247 1120280 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 19:49:09.758990 1120587 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.486239671s)
	I0729 19:49:09.759083 1120587 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 19:49:09.774752 1120587 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 19:49:09.785968 1120587 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 19:49:09.796242 1120587 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 19:49:09.796267 1120587 kubeadm.go:157] found existing configuration files:
	
	I0729 19:49:09.796320 1120587 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0729 19:49:09.805373 1120587 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 19:49:09.805446 1120587 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 19:49:09.814418 1120587 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0729 19:49:09.822923 1120587 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 19:49:09.822977 1120587 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 19:49:09.831784 1120587 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0729 19:49:09.840631 1120587 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 19:49:09.840670 1120587 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 19:49:09.850149 1120587 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0729 19:49:09.858648 1120587 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 19:49:09.858685 1120587 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 19:49:09.868191 1120587 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 19:49:09.918324 1120587 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0729 19:49:09.918439 1120587 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 19:49:10.082807 1120587 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 19:49:10.082977 1120587 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 19:49:10.083133 1120587 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0729 19:49:10.346327 1120587 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 19:49:10.347784 1120587 out.go:204]   - Generating certificates and keys ...
	I0729 19:49:10.347895 1120587 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 19:49:10.347974 1120587 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 19:49:10.348065 1120587 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0729 19:49:10.348152 1120587 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0729 19:49:10.348236 1120587 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0729 19:49:10.348312 1120587 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0729 19:49:10.348395 1120587 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0729 19:49:10.348479 1120587 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0729 19:49:10.348573 1120587 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0729 19:49:10.348672 1120587 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0729 19:49:10.348726 1120587 kubeadm.go:310] [certs] Using the existing "sa" key
	I0729 19:49:10.348806 1120587 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 19:49:10.558934 1120587 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 19:49:10.733434 1120587 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0729 19:49:11.026079 1120587 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 19:49:11.159826 1120587 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 19:49:11.277696 1120587 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 19:49:11.278383 1120587 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 19:49:11.281036 1120587 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 19:49:10.240921 1120280 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0729 19:49:10.240936 1120280 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0729 19:49:10.240952 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHHostname
	I0729 19:49:10.241651 1120280 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 19:49:10.241674 1120280 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0729 19:49:10.241693 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHHostname
	I0729 19:49:10.245407 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:49:10.245440 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:49:10.245923 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:9e:78", ip: ""} in network mk-embed-certs-358053: {Iface:virbr3 ExpiryTime:2024-07-29 20:43:42 +0000 UTC Type:0 Mac:52:54:00:b7:9e:78 Iaid: IPaddr:192.168.61.201 Prefix:24 Hostname:embed-certs-358053 Clientid:01:52:54:00:b7:9e:78}
	I0729 19:49:10.245922 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:9e:78", ip: ""} in network mk-embed-certs-358053: {Iface:virbr3 ExpiryTime:2024-07-29 20:43:42 +0000 UTC Type:0 Mac:52:54:00:b7:9e:78 Iaid: IPaddr:192.168.61.201 Prefix:24 Hostname:embed-certs-358053 Clientid:01:52:54:00:b7:9e:78}
	I0729 19:49:10.245947 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined IP address 192.168.61.201 and MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:49:10.245967 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined IP address 192.168.61.201 and MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:49:10.246145 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHPort
	I0729 19:49:10.246329 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHKeyPath
	I0729 19:49:10.246372 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHPort
	I0729 19:49:10.246511 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHUsername
	I0729 19:49:10.246672 1120280 sshutil.go:53] new ssh client: &{IP:192.168.61.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/embed-certs-358053/id_rsa Username:docker}
	I0729 19:49:10.246688 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHKeyPath
	I0729 19:49:10.246866 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHUsername
	I0729 19:49:10.246988 1120280 sshutil.go:53] new ssh client: &{IP:192.168.61.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/embed-certs-358053/id_rsa Username:docker}
	I0729 19:49:10.256682 1120280 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43353
	I0729 19:49:10.257146 1120280 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:49:10.257747 1120280 main.go:141] libmachine: Using API Version  1
	I0729 19:49:10.257760 1120280 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:49:10.258021 1120280 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:49:10.258264 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetState
	I0729 19:49:10.260096 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .DriverName
	I0729 19:49:10.260305 1120280 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0729 19:49:10.260322 1120280 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0729 19:49:10.260341 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHHostname
	I0729 19:49:10.263479 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:49:10.263914 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:9e:78", ip: ""} in network mk-embed-certs-358053: {Iface:virbr3 ExpiryTime:2024-07-29 20:43:42 +0000 UTC Type:0 Mac:52:54:00:b7:9e:78 Iaid: IPaddr:192.168.61.201 Prefix:24 Hostname:embed-certs-358053 Clientid:01:52:54:00:b7:9e:78}
	I0729 19:49:10.263942 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined IP address 192.168.61.201 and MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:49:10.264099 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHPort
	I0729 19:49:10.264270 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHKeyPath
	I0729 19:49:10.264457 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHUsername
	I0729 19:49:10.264566 1120280 sshutil.go:53] new ssh client: &{IP:192.168.61.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/embed-certs-358053/id_rsa Username:docker}
	I0729 19:49:10.461598 1120280 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 19:49:10.483007 1120280 node_ready.go:35] waiting up to 6m0s for node "embed-certs-358053" to be "Ready" ...
	I0729 19:49:10.492573 1120280 node_ready.go:49] node "embed-certs-358053" has status "Ready":"True"
	I0729 19:49:10.492601 1120280 node_ready.go:38] duration metric: took 9.562848ms for node "embed-certs-358053" to be "Ready" ...
	I0729 19:49:10.492611 1120280 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 19:49:10.498908 1120280 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-62wzl" in "kube-system" namespace to be "Ready" ...
	I0729 19:49:10.574473 1120280 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0729 19:49:10.574500 1120280 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0729 19:49:10.596936 1120280 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 19:49:10.598355 1120280 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0729 19:49:10.598373 1120280 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0729 19:49:10.618403 1120280 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 19:49:10.618430 1120280 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0729 19:49:10.642761 1120280 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 19:49:10.717699 1120280 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0729 19:49:11.218300 1120280 main.go:141] libmachine: Making call to close driver server
	I0729 19:49:11.218321 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .Close
	I0729 19:49:11.218615 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | Closing plugin on server side
	I0729 19:49:11.218664 1120280 main.go:141] libmachine: Successfully made call to close driver server
	I0729 19:49:11.218676 1120280 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 19:49:11.218687 1120280 main.go:141] libmachine: Making call to close driver server
	I0729 19:49:11.218695 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .Close
	I0729 19:49:11.219043 1120280 main.go:141] libmachine: Successfully made call to close driver server
	I0729 19:49:11.219060 1120280 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 19:49:11.758222 1120280 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.115410935s)
	I0729 19:49:11.758294 1120280 main.go:141] libmachine: Making call to close driver server
	I0729 19:49:11.758311 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .Close
	I0729 19:49:11.758416 1120280 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.040630579s)
	I0729 19:49:11.758489 1120280 main.go:141] libmachine: Making call to close driver server
	I0729 19:49:11.758534 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .Close
	I0729 19:49:11.758645 1120280 main.go:141] libmachine: Successfully made call to close driver server
	I0729 19:49:11.758666 1120280 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 19:49:11.758677 1120280 main.go:141] libmachine: Making call to close driver server
	I0729 19:49:11.758684 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .Close
	I0729 19:49:11.759085 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | Closing plugin on server side
	I0729 19:49:11.759123 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | Closing plugin on server side
	I0729 19:49:11.759133 1120280 main.go:141] libmachine: Successfully made call to close driver server
	I0729 19:49:11.759140 1120280 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 19:49:11.759151 1120280 addons.go:475] Verifying addon metrics-server=true in "embed-certs-358053"
	I0729 19:49:11.759242 1120280 main.go:141] libmachine: Successfully made call to close driver server
	I0729 19:49:11.759251 1120280 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 19:49:11.759265 1120280 main.go:141] libmachine: Making call to close driver server
	I0729 19:49:11.759273 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .Close
	I0729 19:49:11.759556 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | Closing plugin on server side
	I0729 19:49:11.759551 1120280 main.go:141] libmachine: Successfully made call to close driver server
	I0729 19:49:11.759576 1120280 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 19:49:11.821869 1120280 main.go:141] libmachine: Making call to close driver server
	I0729 19:49:11.821904 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .Close
	I0729 19:49:11.822218 1120280 main.go:141] libmachine: Successfully made call to close driver server
	I0729 19:49:11.822239 1120280 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 19:49:11.822278 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | Closing plugin on server side
	I0729 19:49:11.825097 1120280 out.go:177] * Enabled addons: storage-provisioner, metrics-server, default-storageclass
	I0729 19:49:10.696468 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:49:12.696754 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:49:11.826501 1120280 addons.go:510] duration metric: took 1.63766283s for enable addons: enabled=[storage-provisioner metrics-server default-storageclass]
	I0729 19:49:12.505464 1120280 pod_ready.go:102] pod "coredns-7db6d8ff4d-62wzl" in "kube-system" namespace has status "Ready":"False"
	I0729 19:49:13.005934 1120280 pod_ready.go:92] pod "coredns-7db6d8ff4d-62wzl" in "kube-system" namespace has status "Ready":"True"
	I0729 19:49:13.005962 1120280 pod_ready.go:81] duration metric: took 2.507029118s for pod "coredns-7db6d8ff4d-62wzl" in "kube-system" namespace to be "Ready" ...
	I0729 19:49:13.005972 1120280 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-rnpqh" in "kube-system" namespace to be "Ready" ...
	I0729 19:49:13.010162 1120280 pod_ready.go:92] pod "coredns-7db6d8ff4d-rnpqh" in "kube-system" namespace has status "Ready":"True"
	I0729 19:49:13.010183 1120280 pod_ready.go:81] duration metric: took 4.204506ms for pod "coredns-7db6d8ff4d-rnpqh" in "kube-system" namespace to be "Ready" ...
	I0729 19:49:13.010191 1120280 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-358053" in "kube-system" namespace to be "Ready" ...
	I0729 19:49:13.013871 1120280 pod_ready.go:92] pod "etcd-embed-certs-358053" in "kube-system" namespace has status "Ready":"True"
	I0729 19:49:13.013888 1120280 pod_ready.go:81] duration metric: took 3.691352ms for pod "etcd-embed-certs-358053" in "kube-system" namespace to be "Ready" ...
	I0729 19:49:13.013895 1120280 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-358053" in "kube-system" namespace to be "Ready" ...
	I0729 19:49:13.017787 1120280 pod_ready.go:92] pod "kube-apiserver-embed-certs-358053" in "kube-system" namespace has status "Ready":"True"
	I0729 19:49:13.017804 1120280 pod_ready.go:81] duration metric: took 3.903153ms for pod "kube-apiserver-embed-certs-358053" in "kube-system" namespace to be "Ready" ...
	I0729 19:49:13.017812 1120280 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-358053" in "kube-system" namespace to be "Ready" ...
	I0729 19:49:13.021807 1120280 pod_ready.go:92] pod "kube-controller-manager-embed-certs-358053" in "kube-system" namespace has status "Ready":"True"
	I0729 19:49:13.021826 1120280 pod_ready.go:81] duration metric: took 4.00839ms for pod "kube-controller-manager-embed-certs-358053" in "kube-system" namespace to be "Ready" ...
	I0729 19:49:13.021834 1120280 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-phmxr" in "kube-system" namespace to be "Ready" ...
	I0729 19:49:13.404663 1120280 pod_ready.go:92] pod "kube-proxy-phmxr" in "kube-system" namespace has status "Ready":"True"
	I0729 19:49:13.404691 1120280 pod_ready.go:81] duration metric: took 382.850052ms for pod "kube-proxy-phmxr" in "kube-system" namespace to be "Ready" ...
	I0729 19:49:13.404703 1120280 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-358053" in "kube-system" namespace to be "Ready" ...
	I0729 19:49:13.803883 1120280 pod_ready.go:92] pod "kube-scheduler-embed-certs-358053" in "kube-system" namespace has status "Ready":"True"
	I0729 19:49:13.803913 1120280 pod_ready.go:81] duration metric: took 399.201369ms for pod "kube-scheduler-embed-certs-358053" in "kube-system" namespace to be "Ready" ...
	I0729 19:49:13.803924 1120280 pod_ready.go:38] duration metric: took 3.31130157s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 19:49:13.803944 1120280 api_server.go:52] waiting for apiserver process to appear ...
	I0729 19:49:13.804012 1120280 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:49:13.819097 1120280 api_server.go:72] duration metric: took 3.63029481s to wait for apiserver process to appear ...
	I0729 19:49:13.819127 1120280 api_server.go:88] waiting for apiserver healthz status ...
	I0729 19:49:13.819158 1120280 api_server.go:253] Checking apiserver healthz at https://192.168.61.201:8443/healthz ...
	I0729 19:49:13.825125 1120280 api_server.go:279] https://192.168.61.201:8443/healthz returned 200:
	ok
	I0729 19:49:13.826172 1120280 api_server.go:141] control plane version: v1.30.3
	I0729 19:49:13.826197 1120280 api_server.go:131] duration metric: took 7.062144ms to wait for apiserver health ...
	I0729 19:49:13.826206 1120280 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 19:49:14.006726 1120280 system_pods.go:59] 9 kube-system pods found
	I0729 19:49:14.006762 1120280 system_pods.go:61] "coredns-7db6d8ff4d-62wzl" [c0cf63a3-98a8-4107-8b51-3b9a39695a6c] Running
	I0729 19:49:14.006769 1120280 system_pods.go:61] "coredns-7db6d8ff4d-rnpqh" [fd0f6d7f-a55a-4556-b5e3-8ed4e555aaea] Running
	I0729 19:49:14.006774 1120280 system_pods.go:61] "etcd-embed-certs-358053" [b4e6558f-195a-449e-83fb-3ad49f1f80b0] Running
	I0729 19:49:14.006780 1120280 system_pods.go:61] "kube-apiserver-embed-certs-358053" [8ce54a21-879a-44f6-9209-699b22fe60a3] Running
	I0729 19:49:14.006786 1120280 system_pods.go:61] "kube-controller-manager-embed-certs-358053" [658a8652-2864-4825-8239-cfbe96e604ab] Running
	I0729 19:49:14.006790 1120280 system_pods.go:61] "kube-proxy-phmxr" [73020161-bb80-445c-ae4f-d1486e18a32e] Running
	I0729 19:49:14.006795 1120280 system_pods.go:61] "kube-scheduler-embed-certs-358053" [f7734e37-b41d-495a-8098-c721b9d56d7c] Running
	I0729 19:49:14.006805 1120280 system_pods.go:61] "metrics-server-569cc877fc-gpz72" [cb992ca6-11f3-4826-b701-6789d3e3e9c0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 19:49:14.006810 1120280 system_pods.go:61] "storage-provisioner" [7c484501-fa8b-4d2d-b7c7-faea3b6b0891] Running
	I0729 19:49:14.006823 1120280 system_pods.go:74] duration metric: took 180.607932ms to wait for pod list to return data ...
	I0729 19:49:14.006836 1120280 default_sa.go:34] waiting for default service account to be created ...
	I0729 19:49:14.203009 1120280 default_sa.go:45] found service account: "default"
	I0729 19:49:14.203034 1120280 default_sa.go:55] duration metric: took 196.19138ms for default service account to be created ...
	I0729 19:49:14.203043 1120280 system_pods.go:116] waiting for k8s-apps to be running ...
	I0729 19:49:14.407217 1120280 system_pods.go:86] 9 kube-system pods found
	I0729 19:49:14.407253 1120280 system_pods.go:89] "coredns-7db6d8ff4d-62wzl" [c0cf63a3-98a8-4107-8b51-3b9a39695a6c] Running
	I0729 19:49:14.407261 1120280 system_pods.go:89] "coredns-7db6d8ff4d-rnpqh" [fd0f6d7f-a55a-4556-b5e3-8ed4e555aaea] Running
	I0729 19:49:14.407267 1120280 system_pods.go:89] "etcd-embed-certs-358053" [b4e6558f-195a-449e-83fb-3ad49f1f80b0] Running
	I0729 19:49:14.407273 1120280 system_pods.go:89] "kube-apiserver-embed-certs-358053" [8ce54a21-879a-44f6-9209-699b22fe60a3] Running
	I0729 19:49:14.407279 1120280 system_pods.go:89] "kube-controller-manager-embed-certs-358053" [658a8652-2864-4825-8239-cfbe96e604ab] Running
	I0729 19:49:14.407285 1120280 system_pods.go:89] "kube-proxy-phmxr" [73020161-bb80-445c-ae4f-d1486e18a32e] Running
	I0729 19:49:14.407291 1120280 system_pods.go:89] "kube-scheduler-embed-certs-358053" [f7734e37-b41d-495a-8098-c721b9d56d7c] Running
	I0729 19:49:14.407305 1120280 system_pods.go:89] "metrics-server-569cc877fc-gpz72" [cb992ca6-11f3-4826-b701-6789d3e3e9c0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 19:49:14.407316 1120280 system_pods.go:89] "storage-provisioner" [7c484501-fa8b-4d2d-b7c7-faea3b6b0891] Running
	I0729 19:49:14.407327 1120280 system_pods.go:126] duration metric: took 204.276761ms to wait for k8s-apps to be running ...
	I0729 19:49:14.407338 1120280 system_svc.go:44] waiting for kubelet service to be running ....
	I0729 19:49:14.407396 1120280 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 19:49:14.422219 1120280 system_svc.go:56] duration metric: took 14.869175ms WaitForService to wait for kubelet
	I0729 19:49:14.422258 1120280 kubeadm.go:582] duration metric: took 4.233462765s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 19:49:14.422285 1120280 node_conditions.go:102] verifying NodePressure condition ...
	I0729 19:49:14.603042 1120280 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 19:49:14.603067 1120280 node_conditions.go:123] node cpu capacity is 2
	I0729 19:49:14.603079 1120280 node_conditions.go:105] duration metric: took 180.789494ms to run NodePressure ...
	I0729 19:49:14.603091 1120280 start.go:241] waiting for startup goroutines ...
	I0729 19:49:14.603098 1120280 start.go:246] waiting for cluster config update ...
	I0729 19:49:14.603108 1120280 start.go:255] writing updated cluster config ...
	I0729 19:49:14.603448 1120280 ssh_runner.go:195] Run: rm -f paused
	I0729 19:49:14.669359 1120280 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0729 19:49:14.671285 1120280 out.go:177] * Done! kubectl is now configured to use "embed-certs-358053" cluster and "default" namespace by default
	I0729 19:49:11.282743 1120587 out.go:204]   - Booting up control plane ...
	I0729 19:49:11.282887 1120587 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 19:49:11.283393 1120587 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 19:49:11.285899 1120587 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 19:49:11.306343 1120587 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 19:49:11.308692 1120587 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 19:49:11.308776 1120587 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 19:49:11.454703 1120587 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0729 19:49:11.454809 1120587 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0729 19:49:11.957070 1120587 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.339287ms
	I0729 19:49:11.957173 1120587 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0729 19:49:16.958829 1120587 kubeadm.go:310] [api-check] The API server is healthy after 5.001114911s
	I0729 19:49:16.975545 1120587 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0729 19:49:16.992433 1120587 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0729 19:49:17.029655 1120587 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0729 19:49:17.029911 1120587 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-024652 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0729 19:49:17.039761 1120587 kubeadm.go:310] [bootstrap-token] Using token: wivqw5.o681p65fyob7uctp
	I0729 19:49:17.040967 1120587 out.go:204]   - Configuring RBAC rules ...
	I0729 19:49:17.041098 1120587 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0729 19:49:17.047095 1120587 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0729 19:49:17.054741 1120587 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0729 19:49:17.057791 1120587 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0729 19:49:17.064906 1120587 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0729 19:49:17.068354 1120587 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0729 19:49:17.365660 1120587 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0729 19:49:17.803646 1120587 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0729 19:49:18.365942 1120587 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0729 19:49:18.367149 1120587 kubeadm.go:310] 
	I0729 19:49:18.367230 1120587 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0729 19:49:18.367239 1120587 kubeadm.go:310] 
	I0729 19:49:18.367301 1120587 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0729 19:49:18.367308 1120587 kubeadm.go:310] 
	I0729 19:49:18.367356 1120587 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0729 19:49:18.367435 1120587 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0729 19:49:18.367484 1120587 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0729 19:49:18.367490 1120587 kubeadm.go:310] 
	I0729 19:49:18.367564 1120587 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0729 19:49:18.367580 1120587 kubeadm.go:310] 
	I0729 19:49:18.367670 1120587 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0729 19:49:18.367689 1120587 kubeadm.go:310] 
	I0729 19:49:18.367767 1120587 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0729 19:49:18.367886 1120587 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0729 19:49:18.367990 1120587 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0729 19:49:18.368004 1120587 kubeadm.go:310] 
	I0729 19:49:18.368134 1120587 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0729 19:49:18.368245 1120587 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0729 19:49:18.368255 1120587 kubeadm.go:310] 
	I0729 19:49:18.368374 1120587 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token wivqw5.o681p65fyob7uctp \
	I0729 19:49:18.368509 1120587 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:53e118302d1159bf0acd8cfe005edb508199276288ff17d6630ff7f7307f10b1 \
	I0729 19:49:18.368547 1120587 kubeadm.go:310] 	--control-plane 
	I0729 19:49:18.368555 1120587 kubeadm.go:310] 
	I0729 19:49:18.368665 1120587 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0729 19:49:18.368675 1120587 kubeadm.go:310] 
	I0729 19:49:18.368786 1120587 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token wivqw5.o681p65fyob7uctp \
	I0729 19:49:18.368926 1120587 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:53e118302d1159bf0acd8cfe005edb508199276288ff17d6630ff7f7307f10b1 
	I0729 19:49:18.369333 1120587 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 19:49:18.369382 1120587 cni.go:84] Creating CNI manager for ""
	I0729 19:49:18.369398 1120587 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 19:49:18.371718 1120587 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 19:49:15.194685 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:49:17.195094 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:49:18.372851 1120587 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 19:49:18.385204 1120587 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0729 19:49:18.404504 1120587 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0729 19:49:18.404610 1120587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:18.404616 1120587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-024652 minikube.k8s.io/updated_at=2024_07_29T19_49_18_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=ff26f50543a99741e8a988a34136ffea2b41dbf0 minikube.k8s.io/name=default-k8s-diff-port-024652 minikube.k8s.io/primary=true
	I0729 19:49:18.442539 1120587 ops.go:34] apiserver oom_adj: -16
	I0729 19:49:18.580986 1120587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:19.081106 1120587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:19.581681 1120587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:20.081254 1120587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:20.581320 1120587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:21.081977 1120587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:19.195234 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:49:21.694987 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:49:23.695591 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:49:21.581543 1120587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:22.081511 1120587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:22.581732 1120587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:23.081975 1120587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:23.581374 1120587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:24.081970 1120587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:24.581928 1120587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:25.081446 1120587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:25.581218 1120587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:26.081680 1120587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:25.695771 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:49:27.698874 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:49:26.581008 1120587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:27.081974 1120587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:27.581500 1120587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:28.082002 1120587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:28.581979 1120587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:29.081223 1120587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:29.581078 1120587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:30.081834 1120587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:30.581191 1120587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:31.081737 1120587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:31.581832 1120587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:31.661893 1120587 kubeadm.go:1113] duration metric: took 13.257342088s to wait for elevateKubeSystemPrivileges
	I0729 19:49:31.661933 1120587 kubeadm.go:394] duration metric: took 5m14.024337116s to StartCluster
	I0729 19:49:31.661952 1120587 settings.go:142] acquiring lock: {Name:mk8657322241b3b1f65443d6cee1b2ccb99f315e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 19:49:31.662031 1120587 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19312-1055011/kubeconfig
	I0729 19:49:31.663828 1120587 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-1055011/kubeconfig: {Name:mkf834b33d9b214f3561db5b8f8958d26700afbb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 19:49:31.664068 1120587 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.100 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 19:49:31.664116 1120587 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0729 19:49:31.664229 1120587 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-024652"
	I0729 19:49:31.664249 1120587 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-024652"
	I0729 19:49:31.664265 1120587 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-024652"
	W0729 19:49:31.664274 1120587 addons.go:243] addon storage-provisioner should already be in state true
	I0729 19:49:31.664265 1120587 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-024652"
	I0729 19:49:31.664286 1120587 config.go:182] Loaded profile config "default-k8s-diff-port-024652": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 19:49:31.664293 1120587 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-024652"
	I0729 19:49:31.664313 1120587 host.go:66] Checking if "default-k8s-diff-port-024652" exists ...
	I0729 19:49:31.664318 1120587 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-024652"
	W0729 19:49:31.664330 1120587 addons.go:243] addon metrics-server should already be in state true
	I0729 19:49:31.664370 1120587 host.go:66] Checking if "default-k8s-diff-port-024652" exists ...
	I0729 19:49:31.664689 1120587 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:49:31.664724 1120587 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:49:31.664775 1120587 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:49:31.664778 1120587 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:49:31.664817 1120587 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:49:31.664827 1120587 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:49:31.665472 1120587 out.go:177] * Verifying Kubernetes components...
	I0729 19:49:31.666773 1120587 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 19:49:31.684886 1120587 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36885
	I0729 19:49:31.684948 1120587 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40365
	I0729 19:49:31.685049 1120587 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46525
	I0729 19:49:31.685394 1120587 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:49:31.685443 1120587 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:49:31.685506 1120587 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:49:31.685916 1120587 main.go:141] libmachine: Using API Version  1
	I0729 19:49:31.685936 1120587 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:49:31.685961 1120587 main.go:141] libmachine: Using API Version  1
	I0729 19:49:31.685982 1120587 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:49:31.686343 1120587 main.go:141] libmachine: Using API Version  1
	I0729 19:49:31.686363 1120587 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:49:31.686378 1120587 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:49:31.686367 1120587 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:49:31.686564 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetState
	I0729 19:49:31.686713 1120587 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:49:31.687028 1120587 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:49:31.687071 1120587 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:49:31.687291 1120587 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:49:31.687340 1120587 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:49:31.690159 1120587 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-024652"
	W0729 19:49:31.690177 1120587 addons.go:243] addon default-storageclass should already be in state true
	I0729 19:49:31.690208 1120587 host.go:66] Checking if "default-k8s-diff-port-024652" exists ...
	I0729 19:49:31.690543 1120587 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:49:31.690586 1120587 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:49:31.705387 1120587 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41375
	I0729 19:49:31.705778 1120587 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34099
	I0729 19:49:31.706027 1120587 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:49:31.706144 1120587 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:49:31.706207 1120587 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33381
	I0729 19:49:31.706633 1120587 main.go:141] libmachine: Using API Version  1
	I0729 19:49:31.706652 1120587 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:49:31.706730 1120587 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:49:31.706990 1120587 main.go:141] libmachine: Using API Version  1
	I0729 19:49:31.707009 1120587 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:49:31.707198 1120587 main.go:141] libmachine: Using API Version  1
	I0729 19:49:31.707218 1120587 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:49:31.707376 1120587 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:49:31.707429 1120587 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:49:31.707627 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetState
	I0729 19:49:31.707689 1120587 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:49:31.707861 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetState
	I0729 19:49:31.708016 1120587 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:49:31.708065 1120587 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:49:31.710254 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .DriverName
	I0729 19:49:31.710315 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .DriverName
	I0729 19:49:31.711981 1120587 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 19:49:31.711996 1120587 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0729 19:49:31.713155 1120587 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0729 19:49:31.713179 1120587 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0729 19:49:31.713201 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHHostname
	I0729 19:49:31.713255 1120587 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 19:49:31.713270 1120587 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0729 19:49:31.713289 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHHostname
	I0729 19:49:31.717458 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:49:31.718017 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:73:cb", ip: ""} in network mk-default-k8s-diff-port-024652: {Iface:virbr4 ExpiryTime:2024-07-29 20:44:03 +0000 UTC Type:0 Mac:52:54:00:4c:73:cb Iaid: IPaddr:192.168.72.100 Prefix:24 Hostname:default-k8s-diff-port-024652 Clientid:01:52:54:00:4c:73:cb}
	I0729 19:49:31.718042 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined IP address 192.168.72.100 and MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:49:31.718355 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHPort
	I0729 19:49:31.718503 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHKeyPath
	I0729 19:49:31.718555 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:49:31.718750 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHUsername
	I0729 19:49:31.718888 1120587 sshutil.go:53] new ssh client: &{IP:192.168.72.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/default-k8s-diff-port-024652/id_rsa Username:docker}
	I0729 19:49:31.719190 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHPort
	I0729 19:49:31.719242 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:73:cb", ip: ""} in network mk-default-k8s-diff-port-024652: {Iface:virbr4 ExpiryTime:2024-07-29 20:44:03 +0000 UTC Type:0 Mac:52:54:00:4c:73:cb Iaid: IPaddr:192.168.72.100 Prefix:24 Hostname:default-k8s-diff-port-024652 Clientid:01:52:54:00:4c:73:cb}
	I0729 19:49:31.719255 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined IP address 192.168.72.100 and MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:49:31.719400 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHKeyPath
	I0729 19:49:31.719536 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHUsername
	I0729 19:49:31.719630 1120587 sshutil.go:53] new ssh client: &{IP:192.168.72.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/default-k8s-diff-port-024652/id_rsa Username:docker}
	I0729 19:49:31.726052 1120587 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42897
	I0729 19:49:31.726530 1120587 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:49:31.727089 1120587 main.go:141] libmachine: Using API Version  1
	I0729 19:49:31.727106 1120587 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:49:31.727404 1120587 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:49:31.727585 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetState
	I0729 19:49:31.729111 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .DriverName
	I0729 19:49:31.729730 1120587 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0729 19:49:31.729832 1120587 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0729 19:49:31.729853 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHHostname
	I0729 19:49:31.733855 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:49:31.734290 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:73:cb", ip: ""} in network mk-default-k8s-diff-port-024652: {Iface:virbr4 ExpiryTime:2024-07-29 20:44:03 +0000 UTC Type:0 Mac:52:54:00:4c:73:cb Iaid: IPaddr:192.168.72.100 Prefix:24 Hostname:default-k8s-diff-port-024652 Clientid:01:52:54:00:4c:73:cb}
	I0729 19:49:31.734307 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined IP address 192.168.72.100 and MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:49:31.734528 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHPort
	I0729 19:49:31.734735 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHKeyPath
	I0729 19:49:31.734923 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHUsername
	I0729 19:49:31.735104 1120587 sshutil.go:53] new ssh client: &{IP:192.168.72.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/default-k8s-diff-port-024652/id_rsa Username:docker}
	I0729 19:49:31.896299 1120587 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 19:49:31.916363 1120587 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-024652" to be "Ready" ...
	I0729 19:49:31.946258 1120587 node_ready.go:49] node "default-k8s-diff-port-024652" has status "Ready":"True"
	I0729 19:49:31.946286 1120587 node_ready.go:38] duration metric: took 29.887552ms for node "default-k8s-diff-port-024652" to be "Ready" ...
	I0729 19:49:31.946297 1120587 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 19:49:31.986320 1120587 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0729 19:49:31.986901 1120587 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-wqbpm" in "kube-system" namespace to be "Ready" ...
	I0729 19:49:32.008401 1120587 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0729 19:49:32.008420 1120587 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0729 19:49:32.033950 1120587 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 19:49:32.060771 1120587 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0729 19:49:32.060808 1120587 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0729 19:49:32.108557 1120587 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 19:49:32.108587 1120587 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0729 19:49:32.153081 1120587 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 19:49:32.234814 1120587 main.go:141] libmachine: Making call to close driver server
	I0729 19:49:32.234854 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .Close
	I0729 19:49:32.235187 1120587 main.go:141] libmachine: Successfully made call to close driver server
	I0729 19:49:32.235247 1120587 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 19:49:32.235260 1120587 main.go:141] libmachine: Making call to close driver server
	I0729 19:49:32.235259 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | Closing plugin on server side
	I0729 19:49:32.235270 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .Close
	I0729 19:49:32.235530 1120587 main.go:141] libmachine: Successfully made call to close driver server
	I0729 19:49:32.235546 1120587 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 19:49:32.240556 1120587 main.go:141] libmachine: Making call to close driver server
	I0729 19:49:32.240572 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .Close
	I0729 19:49:32.240859 1120587 main.go:141] libmachine: Successfully made call to close driver server
	I0729 19:49:32.240880 1120587 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 19:49:32.240887 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | Closing plugin on server side
	I0729 19:49:32.510172 1120587 main.go:141] libmachine: Making call to close driver server
	I0729 19:49:32.510201 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .Close
	I0729 19:49:32.510518 1120587 main.go:141] libmachine: Successfully made call to close driver server
	I0729 19:49:32.510535 1120587 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 19:49:32.510558 1120587 main.go:141] libmachine: Making call to close driver server
	I0729 19:49:32.510566 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .Close
	I0729 19:49:32.511002 1120587 main.go:141] libmachine: Successfully made call to close driver server
	I0729 19:49:32.511031 1120587 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 19:49:32.511053 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | Closing plugin on server side
	I0729 19:49:32.755803 1120587 main.go:141] libmachine: Making call to close driver server
	I0729 19:49:32.755828 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .Close
	I0729 19:49:32.756119 1120587 main.go:141] libmachine: Successfully made call to close driver server
	I0729 19:49:32.756135 1120587 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 19:49:32.756144 1120587 main.go:141] libmachine: Making call to close driver server
	I0729 19:49:32.756151 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .Close
	I0729 19:49:32.756432 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | Closing plugin on server side
	I0729 19:49:32.756476 1120587 main.go:141] libmachine: Successfully made call to close driver server
	I0729 19:49:32.756488 1120587 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 19:49:32.756502 1120587 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-024652"
	I0729 19:49:32.758693 1120587 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0729 19:49:29.689616 1119948 pod_ready.go:81] duration metric: took 4m0.001003902s for pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace to be "Ready" ...
	E0729 19:49:29.689644 1119948 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace to be "Ready" (will not retry!)
	I0729 19:49:29.689670 1119948 pod_ready.go:38] duration metric: took 4m12.210774413s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 19:49:29.689724 1119948 kubeadm.go:597] duration metric: took 4m20.557808792s to restartPrimaryControlPlane
	W0729 19:49:29.689815 1119948 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0729 19:49:29.689855 1119948 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0729 19:49:32.759744 1120587 addons.go:510] duration metric: took 1.095628452s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0729 19:49:33.998542 1120587 pod_ready.go:102] pod "coredns-7db6d8ff4d-wqbpm" in "kube-system" namespace has status "Ready":"False"
	I0729 19:49:34.993504 1120587 pod_ready.go:92] pod "coredns-7db6d8ff4d-wqbpm" in "kube-system" namespace has status "Ready":"True"
	I0729 19:49:34.993529 1120587 pod_ready.go:81] duration metric: took 3.006601304s for pod "coredns-7db6d8ff4d-wqbpm" in "kube-system" namespace to be "Ready" ...
	I0729 19:49:34.993538 1120587 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-z8mxw" in "kube-system" namespace to be "Ready" ...
	I0729 19:49:34.999514 1120587 pod_ready.go:92] pod "coredns-7db6d8ff4d-z8mxw" in "kube-system" namespace has status "Ready":"True"
	I0729 19:49:34.999543 1120587 pod_ready.go:81] duration metric: took 5.998397ms for pod "coredns-7db6d8ff4d-z8mxw" in "kube-system" namespace to be "Ready" ...
	I0729 19:49:34.999556 1120587 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-024652" in "kube-system" namespace to be "Ready" ...
	I0729 19:49:35.004591 1120587 pod_ready.go:92] pod "etcd-default-k8s-diff-port-024652" in "kube-system" namespace has status "Ready":"True"
	I0729 19:49:35.004615 1120587 pod_ready.go:81] duration metric: took 5.050736ms for pod "etcd-default-k8s-diff-port-024652" in "kube-system" namespace to be "Ready" ...
	I0729 19:49:35.004626 1120587 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-024652" in "kube-system" namespace to be "Ready" ...
	I0729 19:49:35.009617 1120587 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-024652" in "kube-system" namespace has status "Ready":"True"
	I0729 19:49:35.009639 1120587 pod_ready.go:81] duration metric: took 5.004922ms for pod "kube-apiserver-default-k8s-diff-port-024652" in "kube-system" namespace to be "Ready" ...
	I0729 19:49:35.009649 1120587 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-024652" in "kube-system" namespace to be "Ready" ...
	I0729 19:49:35.015860 1120587 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-024652" in "kube-system" namespace has status "Ready":"True"
	I0729 19:49:35.015879 1120587 pod_ready.go:81] duration metric: took 6.221932ms for pod "kube-controller-manager-default-k8s-diff-port-024652" in "kube-system" namespace to be "Ready" ...
	I0729 19:49:35.015887 1120587 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-wfr8f" in "kube-system" namespace to be "Ready" ...
	I0729 19:49:35.392558 1120587 pod_ready.go:92] pod "kube-proxy-wfr8f" in "kube-system" namespace has status "Ready":"True"
	I0729 19:49:35.392595 1120587 pod_ready.go:81] duration metric: took 376.701757ms for pod "kube-proxy-wfr8f" in "kube-system" namespace to be "Ready" ...
	I0729 19:49:35.392604 1120587 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-024652" in "kube-system" namespace to be "Ready" ...
	I0729 19:49:35.791324 1120587 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-024652" in "kube-system" namespace has status "Ready":"True"
	I0729 19:49:35.791357 1120587 pod_ready.go:81] duration metric: took 398.744718ms for pod "kube-scheduler-default-k8s-diff-port-024652" in "kube-system" namespace to be "Ready" ...
	I0729 19:49:35.791368 1120587 pod_ready.go:38] duration metric: took 3.84505744s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 19:49:35.791389 1120587 api_server.go:52] waiting for apiserver process to appear ...
	I0729 19:49:35.791451 1120587 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:49:35.808765 1120587 api_server.go:72] duration metric: took 4.144664884s to wait for apiserver process to appear ...
	I0729 19:49:35.808795 1120587 api_server.go:88] waiting for apiserver healthz status ...
	I0729 19:49:35.808816 1120587 api_server.go:253] Checking apiserver healthz at https://192.168.72.100:8444/healthz ...
	I0729 19:49:35.813053 1120587 api_server.go:279] https://192.168.72.100:8444/healthz returned 200:
	ok
	I0729 19:49:35.814108 1120587 api_server.go:141] control plane version: v1.30.3
	I0729 19:49:35.814129 1120587 api_server.go:131] duration metric: took 5.326691ms to wait for apiserver health ...
	I0729 19:49:35.814135 1120587 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 19:49:35.994230 1120587 system_pods.go:59] 9 kube-system pods found
	I0729 19:49:35.994267 1120587 system_pods.go:61] "coredns-7db6d8ff4d-wqbpm" [96db74e9-67ca-4065-8758-a27a14b6d3d5] Running
	I0729 19:49:35.994274 1120587 system_pods.go:61] "coredns-7db6d8ff4d-z8mxw" [12aa4a13-f4af-4cda-b099-5e0e44836300] Running
	I0729 19:49:35.994280 1120587 system_pods.go:61] "etcd-default-k8s-diff-port-024652" [6c733608-bc36-40a8-a6d1-2fa10ee45ef7] Running
	I0729 19:49:35.994285 1120587 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-024652" [755ccaaa-70fc-4d21-bf24-55638ea6778a] Running
	I0729 19:49:35.994293 1120587 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-024652" [1ed4cda3-7de9-4562-be52-b2a5f3490979] Running
	I0729 19:49:35.994300 1120587 system_pods.go:61] "kube-proxy-wfr8f" [86699d3a-0843-4b82-b772-23c8f5b7c88a] Running
	I0729 19:49:35.994305 1120587 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-024652" [d51619f9-c388-4ca5-a3e7-2028f0f76d9a] Running
	I0729 19:49:35.994314 1120587 system_pods.go:61] "metrics-server-569cc877fc-rp2fk" [826ffadd-1c1c-4666-8c09-f43a82262912] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 19:49:35.994318 1120587 system_pods.go:61] "storage-provisioner" [ce612854-895f-44d4-8c33-30c3a7eff802] Running
	I0729 19:49:35.994329 1120587 system_pods.go:74] duration metric: took 180.186983ms to wait for pod list to return data ...
	I0729 19:49:35.994339 1120587 default_sa.go:34] waiting for default service account to be created ...
	I0729 19:49:36.191025 1120587 default_sa.go:45] found service account: "default"
	I0729 19:49:36.191057 1120587 default_sa.go:55] duration metric: took 196.710231ms for default service account to be created ...
	I0729 19:49:36.191066 1120587 system_pods.go:116] waiting for k8s-apps to be running ...
	I0729 19:49:36.395188 1120587 system_pods.go:86] 9 kube-system pods found
	I0729 19:49:36.395218 1120587 system_pods.go:89] "coredns-7db6d8ff4d-wqbpm" [96db74e9-67ca-4065-8758-a27a14b6d3d5] Running
	I0729 19:49:36.395224 1120587 system_pods.go:89] "coredns-7db6d8ff4d-z8mxw" [12aa4a13-f4af-4cda-b099-5e0e44836300] Running
	I0729 19:49:36.395229 1120587 system_pods.go:89] "etcd-default-k8s-diff-port-024652" [6c733608-bc36-40a8-a6d1-2fa10ee45ef7] Running
	I0729 19:49:36.395233 1120587 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-024652" [755ccaaa-70fc-4d21-bf24-55638ea6778a] Running
	I0729 19:49:36.395237 1120587 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-024652" [1ed4cda3-7de9-4562-be52-b2a5f3490979] Running
	I0729 19:49:36.395241 1120587 system_pods.go:89] "kube-proxy-wfr8f" [86699d3a-0843-4b82-b772-23c8f5b7c88a] Running
	I0729 19:49:36.395245 1120587 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-024652" [d51619f9-c388-4ca5-a3e7-2028f0f76d9a] Running
	I0729 19:49:36.395257 1120587 system_pods.go:89] "metrics-server-569cc877fc-rp2fk" [826ffadd-1c1c-4666-8c09-f43a82262912] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 19:49:36.395262 1120587 system_pods.go:89] "storage-provisioner" [ce612854-895f-44d4-8c33-30c3a7eff802] Running
	I0729 19:49:36.395272 1120587 system_pods.go:126] duration metric: took 204.199685ms to wait for k8s-apps to be running ...
	I0729 19:49:36.395280 1120587 system_svc.go:44] waiting for kubelet service to be running ....
	I0729 19:49:36.395327 1120587 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 19:49:36.414410 1120587 system_svc.go:56] duration metric: took 19.116999ms WaitForService to wait for kubelet
	I0729 19:49:36.414442 1120587 kubeadm.go:582] duration metric: took 4.750347675s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 19:49:36.414470 1120587 node_conditions.go:102] verifying NodePressure condition ...
	I0729 19:49:36.591019 1120587 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 19:49:36.591045 1120587 node_conditions.go:123] node cpu capacity is 2
	I0729 19:49:36.591058 1120587 node_conditions.go:105] duration metric: took 176.580075ms to run NodePressure ...
	I0729 19:49:36.591069 1120587 start.go:241] waiting for startup goroutines ...
	I0729 19:49:36.591076 1120587 start.go:246] waiting for cluster config update ...
	I0729 19:49:36.591086 1120587 start.go:255] writing updated cluster config ...
	I0729 19:49:36.591330 1120587 ssh_runner.go:195] Run: rm -f paused
	I0729 19:49:36.641571 1120587 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0729 19:49:36.643324 1120587 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-024652" cluster and "default" namespace by default
	I0729 19:49:55.819640 1119948 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.129754186s)
	I0729 19:49:55.819736 1119948 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 19:49:55.857245 1119948 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 19:49:55.874823 1119948 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 19:49:55.887767 1119948 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 19:49:55.887786 1119948 kubeadm.go:157] found existing configuration files:
	
	I0729 19:49:55.887826 1119948 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 19:49:55.898598 1119948 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 19:49:55.898659 1119948 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 19:49:55.919811 1119948 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 19:49:55.929490 1119948 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 19:49:55.929557 1119948 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 19:49:55.938832 1119948 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 19:49:55.952638 1119948 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 19:49:55.952698 1119948 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 19:49:55.965512 1119948 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 19:49:55.975116 1119948 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 19:49:55.975180 1119948 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 19:49:55.984448 1119948 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 19:49:56.040488 1119948 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0-beta.0
	I0729 19:49:56.040619 1119948 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 19:49:56.161648 1119948 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 19:49:56.161792 1119948 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 19:49:56.161913 1119948 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0729 19:49:56.171626 1119948 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 19:49:56.173709 1119948 out.go:204]   - Generating certificates and keys ...
	I0729 19:49:56.173830 1119948 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 19:49:56.173928 1119948 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 19:49:56.174047 1119948 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0729 19:49:56.174143 1119948 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0729 19:49:56.174232 1119948 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0729 19:49:56.174302 1119948 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0729 19:49:56.174382 1119948 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0729 19:49:56.174453 1119948 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0729 19:49:56.174572 1119948 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0729 19:49:56.174694 1119948 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0729 19:49:56.174750 1119948 kubeadm.go:310] [certs] Using the existing "sa" key
	I0729 19:49:56.174830 1119948 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 19:49:56.246122 1119948 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 19:49:56.355960 1119948 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0729 19:49:56.420777 1119948 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 19:49:56.496969 1119948 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 19:49:56.583932 1119948 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 19:49:56.584470 1119948 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 19:49:56.587115 1119948 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 19:49:56.588779 1119948 out.go:204]   - Booting up control plane ...
	I0729 19:49:56.588912 1119948 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 19:49:56.588986 1119948 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 19:49:56.589041 1119948 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 19:49:56.608126 1119948 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 19:49:56.614632 1119948 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 19:49:56.614696 1119948 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 19:49:56.754879 1119948 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0729 19:49:56.754999 1119948 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0729 19:49:57.257324 1119948 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.327954ms
	I0729 19:49:57.257465 1119948 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0729 19:50:02.762738 1119948 kubeadm.go:310] [api-check] The API server is healthy after 5.503528666s
	I0729 19:50:02.774459 1119948 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0729 19:50:02.788865 1119948 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0729 19:50:02.826192 1119948 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0729 19:50:02.826457 1119948 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-843792 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0729 19:50:02.839359 1119948 kubeadm.go:310] [bootstrap-token] Using token: yaj2k6.6nijnxczu3nl8yfv
	I0729 19:50:02.840952 1119948 out.go:204]   - Configuring RBAC rules ...
	I0729 19:50:02.841087 1119948 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0729 19:50:02.846969 1119948 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0729 19:50:02.861696 1119948 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0729 19:50:02.866680 1119948 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0729 19:50:02.871113 1119948 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0729 19:50:02.875148 1119948 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0729 19:50:03.170084 1119948 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0729 19:50:03.622188 1119948 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0729 19:50:04.170979 1119948 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0729 19:50:04.171916 1119948 kubeadm.go:310] 
	I0729 19:50:04.172017 1119948 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0729 19:50:04.172027 1119948 kubeadm.go:310] 
	I0729 19:50:04.172139 1119948 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0729 19:50:04.172149 1119948 kubeadm.go:310] 
	I0729 19:50:04.172183 1119948 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0729 19:50:04.172258 1119948 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0729 19:50:04.172337 1119948 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0729 19:50:04.172356 1119948 kubeadm.go:310] 
	I0729 19:50:04.172451 1119948 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0729 19:50:04.172480 1119948 kubeadm.go:310] 
	I0729 19:50:04.172570 1119948 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0729 19:50:04.172581 1119948 kubeadm.go:310] 
	I0729 19:50:04.172652 1119948 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0729 19:50:04.172755 1119948 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0729 19:50:04.172861 1119948 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0729 19:50:04.172876 1119948 kubeadm.go:310] 
	I0729 19:50:04.172944 1119948 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0729 19:50:04.173046 1119948 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0729 19:50:04.173056 1119948 kubeadm.go:310] 
	I0729 19:50:04.173171 1119948 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token yaj2k6.6nijnxczu3nl8yfv \
	I0729 19:50:04.173307 1119948 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:53e118302d1159bf0acd8cfe005edb508199276288ff17d6630ff7f7307f10b1 \
	I0729 19:50:04.173330 1119948 kubeadm.go:310] 	--control-plane 
	I0729 19:50:04.173334 1119948 kubeadm.go:310] 
	I0729 19:50:04.173405 1119948 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0729 19:50:04.173411 1119948 kubeadm.go:310] 
	I0729 19:50:04.173493 1119948 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token yaj2k6.6nijnxczu3nl8yfv \
	I0729 19:50:04.173666 1119948 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:53e118302d1159bf0acd8cfe005edb508199276288ff17d6630ff7f7307f10b1 
	I0729 19:50:04.175016 1119948 kubeadm.go:310] W0729 19:49:56.020841    2986 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0729 19:50:04.175395 1119948 kubeadm.go:310] W0729 19:49:56.021779    2986 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0729 19:50:04.175537 1119948 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 19:50:04.175567 1119948 cni.go:84] Creating CNI manager for ""
	I0729 19:50:04.175577 1119948 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 19:50:04.177050 1119948 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 19:50:04.178074 1119948 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 19:50:04.189753 1119948 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
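For reference, and not the actual 496-byte file written above: a representative bridge CNI conflist of the kind minikube generates for the kvm2 + crio combination. The field values below (subnet, bridge name, plugin options) are illustrative assumptions, not the test's exact configuration.

	# Illustrative only: write a minimal bridge + portmap CNI config.
	sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<-'EOF'
	{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}
	EOF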
	I0729 19:50:04.212891 1119948 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0729 19:50:04.213003 1119948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:50:04.213014 1119948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-843792 minikube.k8s.io/updated_at=2024_07_29T19_50_04_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=ff26f50543a99741e8a988a34136ffea2b41dbf0 minikube.k8s.io/name=no-preload-843792 minikube.k8s.io/primary=true
	I0729 19:50:04.241948 1119948 ops.go:34] apiserver oom_adj: -16
	I0729 19:50:04.470011 1119948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:50:04.970139 1119948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:50:05.470618 1119948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:50:05.970968 1119948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:50:06.471036 1119948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:50:06.970260 1119948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:50:07.470060 1119948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:50:07.970455 1119948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:50:08.091380 1119948 kubeadm.go:1113] duration metric: took 3.878454801s to wait for elevateKubeSystemPrivileges
	I0729 19:50:08.091420 1119948 kubeadm.go:394] duration metric: took 4m59.009669918s to StartCluster
	I0729 19:50:08.091442 1119948 settings.go:142] acquiring lock: {Name:mk8657322241b3b1f65443d6cee1b2ccb99f315e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 19:50:08.091531 1119948 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19312-1055011/kubeconfig
	I0729 19:50:08.093926 1119948 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-1055011/kubeconfig: {Name:mkf834b33d9b214f3561db5b8f8958d26700afbb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 19:50:08.094254 1119948 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.248 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 19:50:08.094349 1119948 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0729 19:50:08.094445 1119948 addons.go:69] Setting storage-provisioner=true in profile "no-preload-843792"
	I0729 19:50:08.094490 1119948 addons.go:234] Setting addon storage-provisioner=true in "no-preload-843792"
	I0729 19:50:08.094489 1119948 config.go:182] Loaded profile config "no-preload-843792": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	W0729 19:50:08.094502 1119948 addons.go:243] addon storage-provisioner should already be in state true
	I0729 19:50:08.094506 1119948 addons.go:69] Setting default-storageclass=true in profile "no-preload-843792"
	I0729 19:50:08.094537 1119948 host.go:66] Checking if "no-preload-843792" exists ...
	I0729 19:50:08.094545 1119948 addons.go:69] Setting metrics-server=true in profile "no-preload-843792"
	I0729 19:50:08.094555 1119948 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-843792"
	I0729 19:50:08.094567 1119948 addons.go:234] Setting addon metrics-server=true in "no-preload-843792"
	W0729 19:50:08.094576 1119948 addons.go:243] addon metrics-server should already be in state true
	I0729 19:50:08.094606 1119948 host.go:66] Checking if "no-preload-843792" exists ...
	I0729 19:50:08.094992 1119948 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:50:08.095014 1119948 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:50:08.094991 1119948 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:50:08.095032 1119948 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:50:08.095032 1119948 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:50:08.095053 1119948 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:50:08.095990 1119948 out.go:177] * Verifying Kubernetes components...
	I0729 19:50:08.097297 1119948 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 19:50:08.111086 1119948 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39031
	I0729 19:50:08.111172 1119948 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35951
	I0729 19:50:08.111530 1119948 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:50:08.111611 1119948 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:50:08.112076 1119948 main.go:141] libmachine: Using API Version  1
	I0729 19:50:08.112096 1119948 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:50:08.112212 1119948 main.go:141] libmachine: Using API Version  1
	I0729 19:50:08.112236 1119948 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:50:08.112601 1119948 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:50:08.112598 1119948 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:50:08.113192 1119948 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:50:08.113222 1119948 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:50:08.113195 1119948 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:50:08.113331 1119948 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:50:08.113688 1119948 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43039
	I0729 19:50:08.114065 1119948 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:50:08.114550 1119948 main.go:141] libmachine: Using API Version  1
	I0729 19:50:08.114573 1119948 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:50:08.115130 1119948 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:50:08.115340 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetState
	I0729 19:50:08.118967 1119948 addons.go:234] Setting addon default-storageclass=true in "no-preload-843792"
	W0729 19:50:08.118988 1119948 addons.go:243] addon default-storageclass should already be in state true
	I0729 19:50:08.119018 1119948 host.go:66] Checking if "no-preload-843792" exists ...
	I0729 19:50:08.119367 1119948 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:50:08.119391 1119948 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:50:08.131330 1119948 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34509
	I0729 19:50:08.131868 1119948 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:50:08.132155 1119948 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44961
	I0729 19:50:08.132404 1119948 main.go:141] libmachine: Using API Version  1
	I0729 19:50:08.132427 1119948 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:50:08.132485 1119948 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:50:08.132795 1119948 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:50:08.133148 1119948 main.go:141] libmachine: Using API Version  1
	I0729 19:50:08.133167 1119948 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:50:08.133169 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetState
	I0729 19:50:08.133541 1119948 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:50:08.133802 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetState
	I0729 19:50:08.135456 1119948 main.go:141] libmachine: (no-preload-843792) Calling .DriverName
	I0729 19:50:08.135939 1119948 main.go:141] libmachine: (no-preload-843792) Calling .DriverName
	I0729 19:50:08.137341 1119948 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 19:50:08.137345 1119948 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0729 19:50:08.139247 1119948 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0729 19:50:08.139281 1119948 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0729 19:50:08.139303 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHHostname
	I0729 19:50:08.139373 1119948 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 19:50:08.139393 1119948 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0729 19:50:08.139411 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHHostname
	I0729 19:50:08.143427 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:50:08.143462 1119948 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40183
	I0729 19:50:08.143636 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:50:08.143916 1119948 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:50:08.143982 1119948 main.go:141] libmachine: (no-preload-843792) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:0e:8c", ip: ""} in network mk-no-preload-843792: {Iface:virbr2 ExpiryTime:2024-07-29 20:44:43 +0000 UTC Type:0 Mac:52:54:00:ae:0e:8c Iaid: IPaddr:192.168.50.248 Prefix:24 Hostname:no-preload-843792 Clientid:01:52:54:00:ae:0e:8c}
	I0729 19:50:08.143994 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined IP address 192.168.50.248 and MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:50:08.144028 1119948 main.go:141] libmachine: (no-preload-843792) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:0e:8c", ip: ""} in network mk-no-preload-843792: {Iface:virbr2 ExpiryTime:2024-07-29 20:44:43 +0000 UTC Type:0 Mac:52:54:00:ae:0e:8c Iaid: IPaddr:192.168.50.248 Prefix:24 Hostname:no-preload-843792 Clientid:01:52:54:00:ae:0e:8c}
	I0729 19:50:08.144061 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined IP address 192.168.50.248 and MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:50:08.144375 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHPort
	I0729 19:50:08.144420 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHPort
	I0729 19:50:08.144425 1119948 main.go:141] libmachine: Using API Version  1
	I0729 19:50:08.144437 1119948 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:50:08.144564 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHKeyPath
	I0729 19:50:08.144608 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHKeyPath
	I0729 19:50:08.144771 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHUsername
	I0729 19:50:08.144802 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHUsername
	I0729 19:50:08.144836 1119948 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:50:08.144947 1119948 sshutil.go:53] new ssh client: &{IP:192.168.50.248 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/no-preload-843792/id_rsa Username:docker}
	I0729 19:50:08.144951 1119948 sshutil.go:53] new ssh client: &{IP:192.168.50.248 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/no-preload-843792/id_rsa Username:docker}
	I0729 19:50:08.145438 1119948 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:50:08.145468 1119948 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:50:08.162100 1119948 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46055
	I0729 19:50:08.162705 1119948 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:50:08.163290 1119948 main.go:141] libmachine: Using API Version  1
	I0729 19:50:08.163312 1119948 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:50:08.163700 1119948 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:50:08.163887 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetState
	I0729 19:50:08.165757 1119948 main.go:141] libmachine: (no-preload-843792) Calling .DriverName
	I0729 19:50:08.165967 1119948 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0729 19:50:08.165983 1119948 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0729 19:50:08.166000 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHHostname
	I0729 19:50:08.169065 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:50:08.169515 1119948 main.go:141] libmachine: (no-preload-843792) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:0e:8c", ip: ""} in network mk-no-preload-843792: {Iface:virbr2 ExpiryTime:2024-07-29 20:44:43 +0000 UTC Type:0 Mac:52:54:00:ae:0e:8c Iaid: IPaddr:192.168.50.248 Prefix:24 Hostname:no-preload-843792 Clientid:01:52:54:00:ae:0e:8c}
	I0729 19:50:08.169535 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined IP address 192.168.50.248 and MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:50:08.169694 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHPort
	I0729 19:50:08.169850 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHKeyPath
	I0729 19:50:08.170030 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHUsername
	I0729 19:50:08.170144 1119948 sshutil.go:53] new ssh client: &{IP:192.168.50.248 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/no-preload-843792/id_rsa Username:docker}
	I0729 19:50:08.279563 1119948 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 19:50:08.297004 1119948 node_ready.go:35] waiting up to 6m0s for node "no-preload-843792" to be "Ready" ...
	I0729 19:50:08.308403 1119948 node_ready.go:49] node "no-preload-843792" has status "Ready":"True"
	I0729 19:50:08.308428 1119948 node_ready.go:38] duration metric: took 11.381814ms for node "no-preload-843792" to be "Ready" ...
	I0729 19:50:08.308437 1119948 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 19:50:08.326920 1119948 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5cfdc65f69-ck5zf" in "kube-system" namespace to be "Ready" ...
	I0729 19:50:08.394482 1119948 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0729 19:50:08.394511 1119948 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0729 19:50:08.431819 1119948 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0729 19:50:08.431850 1119948 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0729 19:50:08.432280 1119948 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 19:50:08.452951 1119948 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0729 19:50:08.512078 1119948 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 19:50:08.512110 1119948 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0729 19:50:08.636490 1119948 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 19:50:09.357187 1119948 main.go:141] libmachine: Making call to close driver server
	I0729 19:50:09.357212 1119948 main.go:141] libmachine: (no-preload-843792) Calling .Close
	I0729 19:50:09.357248 1119948 main.go:141] libmachine: Making call to close driver server
	I0729 19:50:09.357274 1119948 main.go:141] libmachine: (no-preload-843792) Calling .Close
	I0729 19:50:09.357564 1119948 main.go:141] libmachine: (no-preload-843792) DBG | Closing plugin on server side
	I0729 19:50:09.357633 1119948 main.go:141] libmachine: (no-preload-843792) DBG | Closing plugin on server side
	I0729 19:50:09.357646 1119948 main.go:141] libmachine: Successfully made call to close driver server
	I0729 19:50:09.357646 1119948 main.go:141] libmachine: Successfully made call to close driver server
	I0729 19:50:09.357659 1119948 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 19:50:09.357662 1119948 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 19:50:09.357671 1119948 main.go:141] libmachine: Making call to close driver server
	I0729 19:50:09.357679 1119948 main.go:141] libmachine: (no-preload-843792) Calling .Close
	I0729 19:50:09.357682 1119948 main.go:141] libmachine: Making call to close driver server
	I0729 19:50:09.357690 1119948 main.go:141] libmachine: (no-preload-843792) Calling .Close
	I0729 19:50:09.358945 1119948 main.go:141] libmachine: (no-preload-843792) DBG | Closing plugin on server side
	I0729 19:50:09.358969 1119948 main.go:141] libmachine: Successfully made call to close driver server
	I0729 19:50:09.359019 1119948 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 19:50:09.359042 1119948 main.go:141] libmachine: (no-preload-843792) DBG | Closing plugin on server side
	I0729 19:50:09.358989 1119948 main.go:141] libmachine: Successfully made call to close driver server
	I0729 19:50:09.359074 1119948 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 19:50:09.419421 1119948 main.go:141] libmachine: Making call to close driver server
	I0729 19:50:09.419445 1119948 main.go:141] libmachine: (no-preload-843792) Calling .Close
	I0729 19:50:09.419864 1119948 main.go:141] libmachine: (no-preload-843792) DBG | Closing plugin on server side
	I0729 19:50:09.419868 1119948 main.go:141] libmachine: Successfully made call to close driver server
	I0729 19:50:09.419905 1119948 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 19:50:09.938758 1119948 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.302197805s)
	I0729 19:50:09.938827 1119948 main.go:141] libmachine: Making call to close driver server
	I0729 19:50:09.938854 1119948 main.go:141] libmachine: (no-preload-843792) Calling .Close
	I0729 19:50:09.939241 1119948 main.go:141] libmachine: Successfully made call to close driver server
	I0729 19:50:09.939260 1119948 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 19:50:09.939270 1119948 main.go:141] libmachine: (no-preload-843792) DBG | Closing plugin on server side
	I0729 19:50:09.939273 1119948 main.go:141] libmachine: Making call to close driver server
	I0729 19:50:09.939284 1119948 main.go:141] libmachine: (no-preload-843792) Calling .Close
	I0729 19:50:09.939509 1119948 main.go:141] libmachine: Successfully made call to close driver server
	I0729 19:50:09.939526 1119948 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 19:50:09.939540 1119948 addons.go:475] Verifying addon metrics-server=true in "no-preload-843792"
	I0729 19:50:09.939558 1119948 main.go:141] libmachine: (no-preload-843792) DBG | Closing plugin on server side
	I0729 19:50:09.941050 1119948 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0729 19:50:09.942006 1119948 addons.go:510] duration metric: took 1.847661826s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
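Not captured in this log, but the addons just enabled can be confirmed by hand against the same profile (profile, addon, and label names taken from the surrounding log lines; run from the same minikube home):

	# List the enabled addons and check that metrics-server actually comes up.
	minikube -p no-preload-843792 addons list | grep -E 'storage-provisioner|metrics-server'
	kubectl --context no-preload-843792 -n kube-system get deployment metrics-server
	kubectl --context no-preload-843792 -n kube-system get pods -l k8s-app=metrics-server
	kubectl --context no-preload-843792 top nodes   # succeeds only once metrics are being served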
	I0729 19:50:10.334878 1119948 pod_ready.go:102] pod "coredns-5cfdc65f69-ck5zf" in "kube-system" namespace has status "Ready":"False"
	I0729 19:50:12.834554 1119948 pod_ready.go:102] pod "coredns-5cfdc65f69-ck5zf" in "kube-system" namespace has status "Ready":"False"
	I0729 19:50:15.334388 1119948 pod_ready.go:102] pod "coredns-5cfdc65f69-ck5zf" in "kube-system" namespace has status "Ready":"False"
	I0729 19:50:16.843448 1119948 pod_ready.go:92] pod "coredns-5cfdc65f69-ck5zf" in "kube-system" namespace has status "Ready":"True"
	I0729 19:50:16.843480 1119948 pod_ready.go:81] duration metric: took 8.516527239s for pod "coredns-5cfdc65f69-ck5zf" in "kube-system" namespace to be "Ready" ...
	I0729 19:50:16.843494 1119948 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-843792" in "kube-system" namespace to be "Ready" ...
	I0729 19:50:16.847567 1119948 pod_ready.go:92] pod "etcd-no-preload-843792" in "kube-system" namespace has status "Ready":"True"
	I0729 19:50:16.847588 1119948 pod_ready.go:81] duration metric: took 4.086961ms for pod "etcd-no-preload-843792" in "kube-system" namespace to be "Ready" ...
	I0729 19:50:16.847597 1119948 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-843792" in "kube-system" namespace to be "Ready" ...
	I0729 19:50:16.857374 1119948 pod_ready.go:92] pod "kube-apiserver-no-preload-843792" in "kube-system" namespace has status "Ready":"True"
	I0729 19:50:16.857395 1119948 pod_ready.go:81] duration metric: took 9.790628ms for pod "kube-apiserver-no-preload-843792" in "kube-system" namespace to be "Ready" ...
	I0729 19:50:16.857403 1119948 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-843792" in "kube-system" namespace to be "Ready" ...
	I0729 19:50:16.861971 1119948 pod_ready.go:92] pod "kube-controller-manager-no-preload-843792" in "kube-system" namespace has status "Ready":"True"
	I0729 19:50:16.861990 1119948 pod_ready.go:81] duration metric: took 4.580287ms for pod "kube-controller-manager-no-preload-843792" in "kube-system" namespace to be "Ready" ...
	I0729 19:50:16.861998 1119948 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-843792" in "kube-system" namespace to be "Ready" ...
	I0729 19:50:16.865992 1119948 pod_ready.go:92] pod "kube-scheduler-no-preload-843792" in "kube-system" namespace has status "Ready":"True"
	I0729 19:50:16.866006 1119948 pod_ready.go:81] duration metric: took 4.002585ms for pod "kube-scheduler-no-preload-843792" in "kube-system" namespace to be "Ready" ...
	I0729 19:50:16.866012 1119948 pod_ready.go:38] duration metric: took 8.557565808s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 19:50:16.866026 1119948 api_server.go:52] waiting for apiserver process to appear ...
	I0729 19:50:16.866069 1119948 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:50:16.881797 1119948 api_server.go:72] duration metric: took 8.787509233s to wait for apiserver process to appear ...
	I0729 19:50:16.881817 1119948 api_server.go:88] waiting for apiserver healthz status ...
	I0729 19:50:16.881835 1119948 api_server.go:253] Checking apiserver healthz at https://192.168.50.248:8443/healthz ...
	I0729 19:50:16.886007 1119948 api_server.go:279] https://192.168.50.248:8443/healthz returned 200:
	ok
	I0729 19:50:16.886862 1119948 api_server.go:141] control plane version: v1.31.0-beta.0
	I0729 19:50:16.886882 1119948 api_server.go:131] duration metric: took 5.057536ms to wait for apiserver health ...
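The healthz probe logged above can be reproduced by hand against the same endpoint (address and port from the log). On a default-configured apiserver, /healthz and /version are readable without client credentials, so plain curl with TLS verification skipped is enough:

	curl -sk https://192.168.50.248:8443/healthz    # expected body: ok
	curl -sk https://192.168.50.248:8443/version    # apiserver build info as JSON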
	I0729 19:50:16.886891 1119948 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 19:50:17.034651 1119948 system_pods.go:59] 9 kube-system pods found
	I0729 19:50:17.034684 1119948 system_pods.go:61] "coredns-5cfdc65f69-bk2nx" [662b0879-7c15-4ec3-a6b6-e49fd9597dcf] Running
	I0729 19:50:17.034689 1119948 system_pods.go:61] "coredns-5cfdc65f69-ck5zf" [ad6c9c9b-740c-464d-85c2-a9ae44663f63] Running
	I0729 19:50:17.034693 1119948 system_pods.go:61] "etcd-no-preload-843792" [e4cba264-21e2-499e-9768-417b316f6a04] Running
	I0729 19:50:17.034696 1119948 system_pods.go:61] "kube-apiserver-no-preload-843792" [24c2bd0e-2029-4985-836a-599ad2a2a7ab] Running
	I0729 19:50:17.034700 1119948 system_pods.go:61] "kube-controller-manager-no-preload-843792" [fb7ec8d7-5d48-428a-af99-f031d747fe2b] Running
	I0729 19:50:17.034704 1119948 system_pods.go:61] "kube-proxy-8hbrf" [3b64c7b2-cbed-4c0e-bc1b-2cef107b115c] Running
	I0729 19:50:17.034706 1119948 system_pods.go:61] "kube-scheduler-no-preload-843792" [fc166fdd-59e8-41f0-909c-71044da69f34] Running
	I0729 19:50:17.034712 1119948 system_pods.go:61] "metrics-server-78fcd8795b-fzt2k" [180acfb0-ec43-4f2e-b04a-048253d4b79e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 19:50:17.034716 1119948 system_pods.go:61] "storage-provisioner" [ee09516d-7ef7-4d66-9acf-7fd4cde3c673] Running
	I0729 19:50:17.034723 1119948 system_pods.go:74] duration metric: took 147.826766ms to wait for pod list to return data ...
	I0729 19:50:17.034731 1119948 default_sa.go:34] waiting for default service account to be created ...
	I0729 19:50:17.231811 1119948 default_sa.go:45] found service account: "default"
	I0729 19:50:17.231841 1119948 default_sa.go:55] duration metric: took 197.103306ms for default service account to be created ...
	I0729 19:50:17.231852 1119948 system_pods.go:116] waiting for k8s-apps to be running ...
	I0729 19:50:17.435766 1119948 system_pods.go:86] 9 kube-system pods found
	I0729 19:50:17.435801 1119948 system_pods.go:89] "coredns-5cfdc65f69-bk2nx" [662b0879-7c15-4ec3-a6b6-e49fd9597dcf] Running
	I0729 19:50:17.435809 1119948 system_pods.go:89] "coredns-5cfdc65f69-ck5zf" [ad6c9c9b-740c-464d-85c2-a9ae44663f63] Running
	I0729 19:50:17.435816 1119948 system_pods.go:89] "etcd-no-preload-843792" [e4cba264-21e2-499e-9768-417b316f6a04] Running
	I0729 19:50:17.435822 1119948 system_pods.go:89] "kube-apiserver-no-preload-843792" [24c2bd0e-2029-4985-836a-599ad2a2a7ab] Running
	I0729 19:50:17.435828 1119948 system_pods.go:89] "kube-controller-manager-no-preload-843792" [fb7ec8d7-5d48-428a-af99-f031d747fe2b] Running
	I0729 19:50:17.435835 1119948 system_pods.go:89] "kube-proxy-8hbrf" [3b64c7b2-cbed-4c0e-bc1b-2cef107b115c] Running
	I0729 19:50:17.435841 1119948 system_pods.go:89] "kube-scheduler-no-preload-843792" [fc166fdd-59e8-41f0-909c-71044da69f34] Running
	I0729 19:50:17.435849 1119948 system_pods.go:89] "metrics-server-78fcd8795b-fzt2k" [180acfb0-ec43-4f2e-b04a-048253d4b79e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 19:50:17.435856 1119948 system_pods.go:89] "storage-provisioner" [ee09516d-7ef7-4d66-9acf-7fd4cde3c673] Running
	I0729 19:50:17.435867 1119948 system_pods.go:126] duration metric: took 204.008054ms to wait for k8s-apps to be running ...
	I0729 19:50:17.435875 1119948 system_svc.go:44] waiting for kubelet service to be running ....
	I0729 19:50:17.435926 1119948 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 19:50:17.451816 1119948 system_svc.go:56] duration metric: took 15.929502ms WaitForService to wait for kubelet
	I0729 19:50:17.451848 1119948 kubeadm.go:582] duration metric: took 9.357563402s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 19:50:17.451872 1119948 node_conditions.go:102] verifying NodePressure condition ...
	I0729 19:50:17.632427 1119948 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 19:50:17.632465 1119948 node_conditions.go:123] node cpu capacity is 2
	I0729 19:50:17.632481 1119948 node_conditions.go:105] duration metric: took 180.602976ms to run NodePressure ...
	I0729 19:50:17.632497 1119948 start.go:241] waiting for startup goroutines ...
	I0729 19:50:17.632506 1119948 start.go:246] waiting for cluster config update ...
	I0729 19:50:17.632525 1119948 start.go:255] writing updated cluster config ...
	I0729 19:50:17.632908 1119948 ssh_runner.go:195] Run: rm -f paused
	I0729 19:50:17.687540 1119948 start.go:600] kubectl: 1.30.3, cluster: 1.31.0-beta.0 (minor skew: 1)
	I0729 19:50:17.689409 1119948 out.go:177] * Done! kubectl is now configured to use "no-preload-843792" cluster and "default" namespace by default
	I0729 19:50:40.036000 1120970 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0729 19:50:40.036324 1120970 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0729 19:50:40.038447 1120970 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0729 19:50:40.038603 1120970 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 19:50:40.038790 1120970 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 19:50:40.039225 1120970 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 19:50:40.039617 1120970 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0729 19:50:40.039731 1120970 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 19:50:40.041420 1120970 out.go:204]   - Generating certificates and keys ...
	I0729 19:50:40.041522 1120970 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 19:50:40.041589 1120970 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 19:50:40.041712 1120970 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0729 19:50:40.041810 1120970 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0729 19:50:40.041935 1120970 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0729 19:50:40.042019 1120970 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0729 19:50:40.042111 1120970 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0729 19:50:40.042190 1120970 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0729 19:50:40.042285 1120970 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0729 19:50:40.042401 1120970 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0729 19:50:40.042465 1120970 kubeadm.go:310] [certs] Using the existing "sa" key
	I0729 19:50:40.042535 1120970 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 19:50:40.042581 1120970 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 19:50:40.042628 1120970 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 19:50:40.042698 1120970 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 19:50:40.042781 1120970 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 19:50:40.042934 1120970 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 19:50:40.043061 1120970 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 19:50:40.043128 1120970 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 19:50:40.043208 1120970 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 19:50:40.044637 1120970 out.go:204]   - Booting up control plane ...
	I0729 19:50:40.044750 1120970 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 19:50:40.044847 1120970 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 19:50:40.044908 1120970 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 19:50:40.044976 1120970 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 19:50:40.045145 1120970 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0729 19:50:40.045212 1120970 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0729 19:50:40.045276 1120970 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 19:50:40.045442 1120970 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 19:50:40.045511 1120970 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 19:50:40.045697 1120970 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 19:50:40.045797 1120970 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 19:50:40.046043 1120970 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 19:50:40.046153 1120970 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 19:50:40.046441 1120970 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 19:50:40.046567 1120970 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 19:50:40.046878 1120970 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 19:50:40.046894 1120970 kubeadm.go:310] 
	I0729 19:50:40.046945 1120970 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0729 19:50:40.047019 1120970 kubeadm.go:310] 		timed out waiting for the condition
	I0729 19:50:40.047039 1120970 kubeadm.go:310] 
	I0729 19:50:40.047104 1120970 kubeadm.go:310] 	This error is likely caused by:
	I0729 19:50:40.047158 1120970 kubeadm.go:310] 		- The kubelet is not running
	I0729 19:50:40.047301 1120970 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0729 19:50:40.047312 1120970 kubeadm.go:310] 
	I0729 19:50:40.047465 1120970 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0729 19:50:40.047513 1120970 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0729 19:50:40.047558 1120970 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0729 19:50:40.047567 1120970 kubeadm.go:310] 
	I0729 19:50:40.047728 1120970 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0729 19:50:40.047859 1120970 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0729 19:50:40.047870 1120970 kubeadm.go:310] 
	I0729 19:50:40.048028 1120970 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0729 19:50:40.048161 1120970 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0729 19:50:40.048274 1120970 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0729 19:50:40.048387 1120970 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0729 19:50:40.048422 1120970 kubeadm.go:310] 
	W0729 19:50:40.048546 1120970 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
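The troubleshooting steps kubeadm recommends above amount to the following commands on the node itself (for example over 'minikube ssh'); they are collected here for convenience and were not run as part of this test:

	sudo systemctl status kubelet
	sudo journalctl -xeu kubelet --no-pager | tail -n 100
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID   # replace CONTAINERID with the failing container's id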
	
	I0729 19:50:40.048632 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0729 19:50:40.512123 1120970 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 19:50:40.526973 1120970 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 19:50:40.540285 1120970 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 19:50:40.540322 1120970 kubeadm.go:157] found existing configuration files:
	
	I0729 19:50:40.540390 1120970 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 19:50:40.550130 1120970 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 19:50:40.550188 1120970 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 19:50:40.560312 1120970 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 19:50:40.570460 1120970 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 19:50:40.570513 1120970 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 19:50:40.579979 1120970 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 19:50:40.589806 1120970 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 19:50:40.589848 1120970 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 19:50:40.599351 1120970 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 19:50:40.609134 1120970 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 19:50:40.609190 1120970 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 19:50:40.618767 1120970 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 19:50:40.686644 1120970 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0729 19:50:40.686775 1120970 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 19:50:40.844131 1120970 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 19:50:40.844252 1120970 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 19:50:40.844357 1120970 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0729 19:50:41.018497 1120970 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 19:50:41.020295 1120970 out.go:204]   - Generating certificates and keys ...
	I0729 19:50:41.020404 1120970 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 19:50:41.020471 1120970 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 19:50:41.020559 1120970 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0729 19:50:41.020614 1120970 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0729 19:50:41.020675 1120970 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0729 19:50:41.020720 1120970 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0729 19:50:41.021041 1120970 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0729 19:50:41.021463 1120970 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0729 19:50:41.021868 1120970 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0729 19:50:41.022329 1120970 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0729 19:50:41.022411 1120970 kubeadm.go:310] [certs] Using the existing "sa" key
	I0729 19:50:41.022503 1120970 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 19:50:41.204952 1120970 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 19:50:41.438572 1120970 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 19:50:41.878587 1120970 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 19:50:42.428806 1120970 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 19:50:42.447931 1120970 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 19:50:42.448990 1120970 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 19:50:42.449131 1120970 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 19:50:42.580942 1120970 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 19:50:42.582493 1120970 out.go:204]   - Booting up control plane ...
	I0729 19:50:42.582600 1120970 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 19:50:42.589862 1120970 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 19:50:42.590833 1120970 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 19:50:42.591685 1120970 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 19:50:42.594079 1120970 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0729 19:51:22.596326 1120970 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0729 19:51:22.596639 1120970 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 19:51:22.596846 1120970 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 19:51:27.597439 1120970 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 19:51:27.597671 1120970 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 19:51:37.598638 1120970 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 19:51:37.598811 1120970 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 19:51:57.599401 1120970 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 19:51:57.599704 1120970 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 19:52:37.597710 1120970 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 19:52:37.597992 1120970 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 19:52:37.598034 1120970 kubeadm.go:310] 
	I0729 19:52:37.598090 1120970 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0729 19:52:37.598166 1120970 kubeadm.go:310] 		timed out waiting for the condition
	I0729 19:52:37.598179 1120970 kubeadm.go:310] 
	I0729 19:52:37.598228 1120970 kubeadm.go:310] 	This error is likely caused by:
	I0729 19:52:37.598326 1120970 kubeadm.go:310] 		- The kubelet is not running
	I0729 19:52:37.598515 1120970 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0729 19:52:37.598528 1120970 kubeadm.go:310] 
	I0729 19:52:37.598671 1120970 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0729 19:52:37.598715 1120970 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0729 19:52:37.598777 1120970 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0729 19:52:37.598806 1120970 kubeadm.go:310] 
	I0729 19:52:37.598984 1120970 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0729 19:52:37.599100 1120970 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0729 19:52:37.599114 1120970 kubeadm.go:310] 
	I0729 19:52:37.599266 1120970 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0729 19:52:37.599393 1120970 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0729 19:52:37.599499 1120970 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0729 19:52:37.599617 1120970 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0729 19:52:37.599637 1120970 kubeadm.go:310] 
	I0729 19:52:37.600349 1120970 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 19:52:37.600471 1120970 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0729 19:52:37.600641 1120970 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0729 19:52:37.600707 1120970 kubeadm.go:394] duration metric: took 7m57.951835284s to StartCluster
	I0729 19:52:37.600799 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:52:37.600929 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:52:37.643870 1120970 cri.go:89] found id: ""
	I0729 19:52:37.643913 1120970 logs.go:276] 0 containers: []
	W0729 19:52:37.643921 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:52:37.643928 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:52:37.643993 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:52:37.679484 1120970 cri.go:89] found id: ""
	I0729 19:52:37.679519 1120970 logs.go:276] 0 containers: []
	W0729 19:52:37.679529 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:52:37.679535 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:52:37.679602 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:52:37.716326 1120970 cri.go:89] found id: ""
	I0729 19:52:37.716358 1120970 logs.go:276] 0 containers: []
	W0729 19:52:37.716366 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:52:37.716372 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:52:37.716427 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:52:37.751441 1120970 cri.go:89] found id: ""
	I0729 19:52:37.751468 1120970 logs.go:276] 0 containers: []
	W0729 19:52:37.751477 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:52:37.751483 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:52:37.751555 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:52:37.791309 1120970 cri.go:89] found id: ""
	I0729 19:52:37.791334 1120970 logs.go:276] 0 containers: []
	W0729 19:52:37.791343 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:52:37.791354 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:52:37.791409 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:52:37.824637 1120970 cri.go:89] found id: ""
	I0729 19:52:37.824664 1120970 logs.go:276] 0 containers: []
	W0729 19:52:37.824674 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:52:37.824682 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:52:37.824749 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:52:37.863031 1120970 cri.go:89] found id: ""
	I0729 19:52:37.863060 1120970 logs.go:276] 0 containers: []
	W0729 19:52:37.863068 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:52:37.863074 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:52:37.863134 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:52:37.905864 1120970 cri.go:89] found id: ""
	I0729 19:52:37.905918 1120970 logs.go:276] 0 containers: []
	W0729 19:52:37.905931 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:52:37.905945 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:52:37.905965 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:52:37.958561 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:52:37.958601 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:52:37.983602 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:52:37.983635 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:52:38.080775 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:52:38.080810 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:52:38.080827 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:52:38.185475 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:52:38.185512 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0729 19:52:38.227581 1120970 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0729 19:52:38.227653 1120970 out.go:239] * 
	W0729 19:52:38.227722 1120970 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0729 19:52:38.227748 1120970 out.go:239] * 
	W0729 19:52:38.228777 1120970 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 19:52:38.231684 1120970 out.go:177] 
	W0729 19:52:38.232618 1120970 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0729 19:52:38.232683 1120970 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0729 19:52:38.232710 1120970 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0729 19:52:38.234472 1120970 out.go:177] 
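	The repeated kubelet-check failures above (the kubelet never answering on 127.0.0.1:10248 while kubeadm waits for the control plane) line up with the two hints minikube itself prints: a possible cgroup-driver mismatch and a kubelet service that was never enabled. As a rough sketch only, a retry along the lines of those hints could look like the commands below; <profile> is a placeholder for whichever cluster profile this test was starting, and the flags are taken from the suggestion above rather than being a verified fix for this failure.
	
		# hint from the K8S_KUBELET_NOT_RUNNING suggestion above
		minikube start -p <profile> --driver=kvm2 --container-runtime=crio \
		  --extra-config=kubelet.cgroup-driver=systemd
		# addresses the "[WARNING Service-Kubelet]" line in the kubeadm output
		minikube ssh -p <profile> "sudo systemctl enable kubelet.service"
		# inspect why the kubelet is not serving the :10248 healthz endpoint
		minikube ssh -p <profile> "sudo journalctl -xeu kubelet | tail -n 100"
	
	If the kubelet still does not come up, the crictl commands quoted in the kubeadm output above are the next step for inspecting any control-plane container that did start.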
	
	
	==> CRI-O <==
	Jul 29 19:58:16 embed-certs-358053 crio[729]: time="2024-07-29 19:58:16.696653638Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722283096696601779,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e33f9270-28a4-47ce-b3cb-ff48359fa971 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 19:58:16 embed-certs-358053 crio[729]: time="2024-07-29 19:58:16.697102470Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3e50e671-3a1a-41aa-8515-92fa4b2bcf40 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 19:58:16 embed-certs-358053 crio[729]: time="2024-07-29 19:58:16.697170484Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3e50e671-3a1a-41aa-8515-92fa4b2bcf40 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 19:58:16 embed-certs-358053 crio[729]: time="2024-07-29 19:58:16.697408017Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1281c537c6df1d88b22bdc206c5ab613efa97b1d395992f2f616d7745a58eb77,PodSandboxId:44a921ee0c2d664f1e9e95884be87a5447982a25b7a8266cc5c7ffacd694f1f8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722282552075821569,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c484501-fa8b-4d2d-b7c7-faea3b6b0891,},Annotations:map[string]string{io.kubernetes.container.hash: 48235422,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9de6d84f7d47e58b1bd321cd36210fdb789f353ebbb1c496b6431f968da98f55,PodSandboxId:c1048e65290aafc14295729559229fa4e00f73c0d8217e3fe3152ed74a19924c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722282551994882669,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-phmxr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 73020161-bb80-445c-ae4f-d1486e18a32e,},Annotations:map[string]string{io.kubernetes.container.hash: ebf7f36,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /de
v/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cce8dbbbfa9e7f5d2c375cc93e0ddfb4aa19a070bb36de2d1b93c9000a1b9609,PodSandboxId:ee62d1e0dc3720347c0a27e9a4d9cf9e058fa3479b27e101aea673444eb02029,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722282551543094536,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rnpqh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd0f6d7f-a55a-4556-b5e3-8ed4e555aaea,},Annotations:map[string]string{io.kubernetes.container.hash: 842f8725,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"conta
inerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4205bd7d485010d54329826a74257b1cdd7fe4b35223a6d236086dfaa12282a,PodSandboxId:1f78dc9468bafb44fe97894af39996605511981bf3804da23b64673d3288dc92,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722282551454833938,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-62wzl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0cf63a3-98a8-4107-8b51-3b9a39695a6
c,},Annotations:map[string]string{io.kubernetes.container.hash: c9a6ded5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aee4a8eb8429506cd6a40a23568ae6fdeb332abcf88402f02b124f8b6e53678b,PodSandboxId:1375928e902a66a31cbca2b1c8ed2b21bbce3a356834beace6c0b992e451aaf4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722282530529688816,Labels:ma
p[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-358053,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41449673d5f25016910d76931724b851,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bfa838a5a4f41e81f9d8cbbd5d5b931b2eb9342d201d22141ee26d00c11be9b4,PodSandboxId:e4c720b5d85637c05297d94da15f125c948adf03da5d47f457a92a32e15ca2c2,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722282530500629428,Labels:map[string]string{io.kub
ernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-358053,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2493765d9dfce0eab5d73d69da98de00,},Annotations:map[string]string{io.kubernetes.container.hash: 793a486f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3f1b2259bc6f29cc226d1e45dc2f2cc4afa8db01e58b6097724a3108fa83551,PodSandboxId:3188d7c2d42501409f0d49b6d321a48578f3933ff755b770c8fa150cb99ebe1c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722282530453328221,Labels:map[string]string{io.kubernetes.container.name:
kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-358053,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b5276400f50ad207147bfd9245e9e7a,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:556c56bb813dcb0e9fe6c39388e409948a2f82151ffd03085641374a44cecc06,PodSandboxId:c4ef49cafc0f8fce748c92ce00dff391468d3be84d256deba94f9eb616d271a2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722282530403249916,Labels:map[string]string{io.kubernetes.container
.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-358053,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 977d36a2ce2b1f645445d678c5b902af,},Annotations:map[string]string{io.kubernetes.container.hash: 29650fbf,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3e50e671-3a1a-41aa-8515-92fa4b2bcf40 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 19:58:16 embed-certs-358053 crio[729]: time="2024-07-29 19:58:16.731782588Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=36c28739-31cc-4a48-b566-c919c8df1239 name=/runtime.v1.RuntimeService/Version
	Jul 29 19:58:16 embed-certs-358053 crio[729]: time="2024-07-29 19:58:16.731867784Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=36c28739-31cc-4a48-b566-c919c8df1239 name=/runtime.v1.RuntimeService/Version
	Jul 29 19:58:16 embed-certs-358053 crio[729]: time="2024-07-29 19:58:16.733557456Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=640ab4f4-d2f6-4b71-8ecf-0b45d2963ae0 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 19:58:16 embed-certs-358053 crio[729]: time="2024-07-29 19:58:16.734353960Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722283096734263811,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=640ab4f4-d2f6-4b71-8ecf-0b45d2963ae0 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 19:58:16 embed-certs-358053 crio[729]: time="2024-07-29 19:58:16.734901403Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9db3132b-d06d-4838-9904-7ec28063bc31 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 19:58:16 embed-certs-358053 crio[729]: time="2024-07-29 19:58:16.734973951Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9db3132b-d06d-4838-9904-7ec28063bc31 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 19:58:16 embed-certs-358053 crio[729]: time="2024-07-29 19:58:16.735151736Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1281c537c6df1d88b22bdc206c5ab613efa97b1d395992f2f616d7745a58eb77,PodSandboxId:44a921ee0c2d664f1e9e95884be87a5447982a25b7a8266cc5c7ffacd694f1f8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722282552075821569,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c484501-fa8b-4d2d-b7c7-faea3b6b0891,},Annotations:map[string]string{io.kubernetes.container.hash: 48235422,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9de6d84f7d47e58b1bd321cd36210fdb789f353ebbb1c496b6431f968da98f55,PodSandboxId:c1048e65290aafc14295729559229fa4e00f73c0d8217e3fe3152ed74a19924c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722282551994882669,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-phmxr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 73020161-bb80-445c-ae4f-d1486e18a32e,},Annotations:map[string]string{io.kubernetes.container.hash: ebf7f36,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /de
v/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cce8dbbbfa9e7f5d2c375cc93e0ddfb4aa19a070bb36de2d1b93c9000a1b9609,PodSandboxId:ee62d1e0dc3720347c0a27e9a4d9cf9e058fa3479b27e101aea673444eb02029,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722282551543094536,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rnpqh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd0f6d7f-a55a-4556-b5e3-8ed4e555aaea,},Annotations:map[string]string{io.kubernetes.container.hash: 842f8725,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"conta
inerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4205bd7d485010d54329826a74257b1cdd7fe4b35223a6d236086dfaa12282a,PodSandboxId:1f78dc9468bafb44fe97894af39996605511981bf3804da23b64673d3288dc92,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722282551454833938,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-62wzl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0cf63a3-98a8-4107-8b51-3b9a39695a6
c,},Annotations:map[string]string{io.kubernetes.container.hash: c9a6ded5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aee4a8eb8429506cd6a40a23568ae6fdeb332abcf88402f02b124f8b6e53678b,PodSandboxId:1375928e902a66a31cbca2b1c8ed2b21bbce3a356834beace6c0b992e451aaf4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722282530529688816,Labels:ma
p[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-358053,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41449673d5f25016910d76931724b851,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bfa838a5a4f41e81f9d8cbbd5d5b931b2eb9342d201d22141ee26d00c11be9b4,PodSandboxId:e4c720b5d85637c05297d94da15f125c948adf03da5d47f457a92a32e15ca2c2,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722282530500629428,Labels:map[string]string{io.kub
ernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-358053,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2493765d9dfce0eab5d73d69da98de00,},Annotations:map[string]string{io.kubernetes.container.hash: 793a486f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3f1b2259bc6f29cc226d1e45dc2f2cc4afa8db01e58b6097724a3108fa83551,PodSandboxId:3188d7c2d42501409f0d49b6d321a48578f3933ff755b770c8fa150cb99ebe1c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722282530453328221,Labels:map[string]string{io.kubernetes.container.name:
kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-358053,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b5276400f50ad207147bfd9245e9e7a,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:556c56bb813dcb0e9fe6c39388e409948a2f82151ffd03085641374a44cecc06,PodSandboxId:c4ef49cafc0f8fce748c92ce00dff391468d3be84d256deba94f9eb616d271a2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722282530403249916,Labels:map[string]string{io.kubernetes.container
.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-358053,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 977d36a2ce2b1f645445d678c5b902af,},Annotations:map[string]string{io.kubernetes.container.hash: 29650fbf,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9db3132b-d06d-4838-9904-7ec28063bc31 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 19:58:16 embed-certs-358053 crio[729]: time="2024-07-29 19:58:16.770665081Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4588766f-ee6e-4088-86d7-d95d9c87c442 name=/runtime.v1.RuntimeService/Version
	Jul 29 19:58:16 embed-certs-358053 crio[729]: time="2024-07-29 19:58:16.770758281Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4588766f-ee6e-4088-86d7-d95d9c87c442 name=/runtime.v1.RuntimeService/Version
	Jul 29 19:58:16 embed-certs-358053 crio[729]: time="2024-07-29 19:58:16.778425831Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=695fa3da-9ce4-464d-97f9-e34a27d5a2fb name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 19:58:16 embed-certs-358053 crio[729]: time="2024-07-29 19:58:16.778824757Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722283096778806986,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=695fa3da-9ce4-464d-97f9-e34a27d5a2fb name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 19:58:16 embed-certs-358053 crio[729]: time="2024-07-29 19:58:16.779254893Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2c77152c-0ea7-4b76-a9d1-64e238a25c5d name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 19:58:16 embed-certs-358053 crio[729]: time="2024-07-29 19:58:16.779377178Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2c77152c-0ea7-4b76-a9d1-64e238a25c5d name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 19:58:16 embed-certs-358053 crio[729]: time="2024-07-29 19:58:16.779546513Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1281c537c6df1d88b22bdc206c5ab613efa97b1d395992f2f616d7745a58eb77,PodSandboxId:44a921ee0c2d664f1e9e95884be87a5447982a25b7a8266cc5c7ffacd694f1f8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722282552075821569,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c484501-fa8b-4d2d-b7c7-faea3b6b0891,},Annotations:map[string]string{io.kubernetes.container.hash: 48235422,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9de6d84f7d47e58b1bd321cd36210fdb789f353ebbb1c496b6431f968da98f55,PodSandboxId:c1048e65290aafc14295729559229fa4e00f73c0d8217e3fe3152ed74a19924c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722282551994882669,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-phmxr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 73020161-bb80-445c-ae4f-d1486e18a32e,},Annotations:map[string]string{io.kubernetes.container.hash: ebf7f36,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /de
v/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cce8dbbbfa9e7f5d2c375cc93e0ddfb4aa19a070bb36de2d1b93c9000a1b9609,PodSandboxId:ee62d1e0dc3720347c0a27e9a4d9cf9e058fa3479b27e101aea673444eb02029,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722282551543094536,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rnpqh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd0f6d7f-a55a-4556-b5e3-8ed4e555aaea,},Annotations:map[string]string{io.kubernetes.container.hash: 842f8725,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"conta
inerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4205bd7d485010d54329826a74257b1cdd7fe4b35223a6d236086dfaa12282a,PodSandboxId:1f78dc9468bafb44fe97894af39996605511981bf3804da23b64673d3288dc92,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722282551454833938,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-62wzl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0cf63a3-98a8-4107-8b51-3b9a39695a6
c,},Annotations:map[string]string{io.kubernetes.container.hash: c9a6ded5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aee4a8eb8429506cd6a40a23568ae6fdeb332abcf88402f02b124f8b6e53678b,PodSandboxId:1375928e902a66a31cbca2b1c8ed2b21bbce3a356834beace6c0b992e451aaf4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722282530529688816,Labels:ma
p[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-358053,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41449673d5f25016910d76931724b851,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bfa838a5a4f41e81f9d8cbbd5d5b931b2eb9342d201d22141ee26d00c11be9b4,PodSandboxId:e4c720b5d85637c05297d94da15f125c948adf03da5d47f457a92a32e15ca2c2,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722282530500629428,Labels:map[string]string{io.kub
ernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-358053,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2493765d9dfce0eab5d73d69da98de00,},Annotations:map[string]string{io.kubernetes.container.hash: 793a486f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3f1b2259bc6f29cc226d1e45dc2f2cc4afa8db01e58b6097724a3108fa83551,PodSandboxId:3188d7c2d42501409f0d49b6d321a48578f3933ff755b770c8fa150cb99ebe1c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722282530453328221,Labels:map[string]string{io.kubernetes.container.name:
kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-358053,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b5276400f50ad207147bfd9245e9e7a,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:556c56bb813dcb0e9fe6c39388e409948a2f82151ffd03085641374a44cecc06,PodSandboxId:c4ef49cafc0f8fce748c92ce00dff391468d3be84d256deba94f9eb616d271a2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722282530403249916,Labels:map[string]string{io.kubernetes.container
.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-358053,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 977d36a2ce2b1f645445d678c5b902af,},Annotations:map[string]string{io.kubernetes.container.hash: 29650fbf,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2c77152c-0ea7-4b76-a9d1-64e238a25c5d name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 19:58:16 embed-certs-358053 crio[729]: time="2024-07-29 19:58:16.812784834Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d3ad5426-88e7-4d43-9aa9-652c054d4885 name=/runtime.v1.RuntimeService/Version
	Jul 29 19:58:16 embed-certs-358053 crio[729]: time="2024-07-29 19:58:16.812874711Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d3ad5426-88e7-4d43-9aa9-652c054d4885 name=/runtime.v1.RuntimeService/Version
	Jul 29 19:58:16 embed-certs-358053 crio[729]: time="2024-07-29 19:58:16.814471541Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c1708c71-ef17-4733-99f5-f5a245e85bd9 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 19:58:16 embed-certs-358053 crio[729]: time="2024-07-29 19:58:16.814970421Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722283096814945706,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c1708c71-ef17-4733-99f5-f5a245e85bd9 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 19:58:16 embed-certs-358053 crio[729]: time="2024-07-29 19:58:16.816166230Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=416214c4-3643-49e2-9094-07bf8a64d8ac name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 19:58:16 embed-certs-358053 crio[729]: time="2024-07-29 19:58:16.816352940Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=416214c4-3643-49e2-9094-07bf8a64d8ac name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 19:58:16 embed-certs-358053 crio[729]: time="2024-07-29 19:58:16.816690196Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1281c537c6df1d88b22bdc206c5ab613efa97b1d395992f2f616d7745a58eb77,PodSandboxId:44a921ee0c2d664f1e9e95884be87a5447982a25b7a8266cc5c7ffacd694f1f8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722282552075821569,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c484501-fa8b-4d2d-b7c7-faea3b6b0891,},Annotations:map[string]string{io.kubernetes.container.hash: 48235422,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9de6d84f7d47e58b1bd321cd36210fdb789f353ebbb1c496b6431f968da98f55,PodSandboxId:c1048e65290aafc14295729559229fa4e00f73c0d8217e3fe3152ed74a19924c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722282551994882669,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-phmxr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 73020161-bb80-445c-ae4f-d1486e18a32e,},Annotations:map[string]string{io.kubernetes.container.hash: ebf7f36,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /de
v/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cce8dbbbfa9e7f5d2c375cc93e0ddfb4aa19a070bb36de2d1b93c9000a1b9609,PodSandboxId:ee62d1e0dc3720347c0a27e9a4d9cf9e058fa3479b27e101aea673444eb02029,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722282551543094536,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rnpqh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd0f6d7f-a55a-4556-b5e3-8ed4e555aaea,},Annotations:map[string]string{io.kubernetes.container.hash: 842f8725,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"conta
inerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4205bd7d485010d54329826a74257b1cdd7fe4b35223a6d236086dfaa12282a,PodSandboxId:1f78dc9468bafb44fe97894af39996605511981bf3804da23b64673d3288dc92,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722282551454833938,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-62wzl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0cf63a3-98a8-4107-8b51-3b9a39695a6
c,},Annotations:map[string]string{io.kubernetes.container.hash: c9a6ded5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aee4a8eb8429506cd6a40a23568ae6fdeb332abcf88402f02b124f8b6e53678b,PodSandboxId:1375928e902a66a31cbca2b1c8ed2b21bbce3a356834beace6c0b992e451aaf4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722282530529688816,Labels:ma
p[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-358053,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41449673d5f25016910d76931724b851,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bfa838a5a4f41e81f9d8cbbd5d5b931b2eb9342d201d22141ee26d00c11be9b4,PodSandboxId:e4c720b5d85637c05297d94da15f125c948adf03da5d47f457a92a32e15ca2c2,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722282530500629428,Labels:map[string]string{io.kub
ernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-358053,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2493765d9dfce0eab5d73d69da98de00,},Annotations:map[string]string{io.kubernetes.container.hash: 793a486f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3f1b2259bc6f29cc226d1e45dc2f2cc4afa8db01e58b6097724a3108fa83551,PodSandboxId:3188d7c2d42501409f0d49b6d321a48578f3933ff755b770c8fa150cb99ebe1c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722282530453328221,Labels:map[string]string{io.kubernetes.container.name:
kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-358053,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b5276400f50ad207147bfd9245e9e7a,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:556c56bb813dcb0e9fe6c39388e409948a2f82151ffd03085641374a44cecc06,PodSandboxId:c4ef49cafc0f8fce748c92ce00dff391468d3be84d256deba94f9eb616d271a2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722282530403249916,Labels:map[string]string{io.kubernetes.container
.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-358053,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 977d36a2ce2b1f645445d678c5b902af,},Annotations:map[string]string{io.kubernetes.container.hash: 29650fbf,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=416214c4-3643-49e2-9094-07bf8a64d8ac name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	1281c537c6df1       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   44a921ee0c2d6       storage-provisioner
	9de6d84f7d47e       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1   9 minutes ago       Running             kube-proxy                0                   c1048e65290aa       kube-proxy-phmxr
	cce8dbbbfa9e7       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   ee62d1e0dc372       coredns-7db6d8ff4d-rnpqh
	b4205bd7d4850       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   1f78dc9468baf       coredns-7db6d8ff4d-62wzl
	aee4a8eb84295       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2   9 minutes ago       Running             kube-scheduler            2                   1375928e902a6       kube-scheduler-embed-certs-358053
	bfa838a5a4f41       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   9 minutes ago       Running             etcd                      2                   e4c720b5d8563       etcd-embed-certs-358053
	b3f1b2259bc6f       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e   9 minutes ago       Running             kube-controller-manager   2                   3188d7c2d4250       kube-controller-manager-embed-certs-358053
	556c56bb813dc       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d   9 minutes ago       Running             kube-apiserver            2                   c4ef49cafc0f8       kube-apiserver-embed-certs-358053
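	The listing above is the CRI runtime's view of the node. For reference, a roughly equivalent listing can be pulled over SSH with crictl (a sketch using the profile name taken from these logs; crictl is assumed to be available on the guest, as it is for the cri-o runtime exercised here):
	  out/minikube-linux-amd64 -p embed-certs-358053 ssh "sudo crictl ps -a"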
	
	
	==> coredns [b4205bd7d485010d54329826a74257b1cdd7fe4b35223a6d236086dfaa12282a] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [cce8dbbbfa9e7f5d2c375cc93e0ddfb4aa19a070bb36de2d1b93c9000a1b9609] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               embed-certs-358053
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-358053
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ff26f50543a99741e8a988a34136ffea2b41dbf0
	                    minikube.k8s.io/name=embed-certs-358053
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_29T19_48_56_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 19:48:53 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-358053
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 19:58:16 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 19:54:22 +0000   Mon, 29 Jul 2024 19:48:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 19:54:22 +0000   Mon, 29 Jul 2024 19:48:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 19:54:22 +0000   Mon, 29 Jul 2024 19:48:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 19:54:22 +0000   Mon, 29 Jul 2024 19:48:53 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.201
	  Hostname:    embed-certs-358053
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 919a77fc406c42cbb736d1f4923e4fb9
	  System UUID:                919a77fc-406c-42cb-b736-d1f4923e4fb9
	  Boot ID:                    3e28f549-6640-4789-bb10-01996f19b359
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-62wzl                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m7s
	  kube-system                 coredns-7db6d8ff4d-rnpqh                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m7s
	  kube-system                 etcd-embed-certs-358053                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m22s
	  kube-system                 kube-apiserver-embed-certs-358053             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m22s
	  kube-system                 kube-controller-manager-embed-certs-358053    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m22s
	  kube-system                 kube-proxy-phmxr                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m8s
	  kube-system                 kube-scheduler-embed-certs-358053             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m22s
	  kube-system                 metrics-server-569cc877fc-gpz72               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m6s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m6s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m4s                   kube-proxy       
	  Normal  Starting                 9m28s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m28s (x8 over 9m28s)  kubelet          Node embed-certs-358053 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m28s (x8 over 9m28s)  kubelet          Node embed-certs-358053 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m28s (x7 over 9m28s)  kubelet          Node embed-certs-358053 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m28s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 9m22s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  9m22s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m22s                  kubelet          Node embed-certs-358053 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m22s                  kubelet          Node embed-certs-358053 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m22s                  kubelet          Node embed-certs-358053 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m8s                   node-controller  Node embed-certs-358053 event: Registered Node embed-certs-358053 in Controller
	
	
	==> dmesg <==
	[  +0.050101] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040142] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.752717] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.409276] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.578997] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000013] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.963264] systemd-fstab-generator[647]: Ignoring "noauto" option for root device
	[  +0.059140] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.067037] systemd-fstab-generator[659]: Ignoring "noauto" option for root device
	[  +0.223536] systemd-fstab-generator[673]: Ignoring "noauto" option for root device
	[  +0.135166] systemd-fstab-generator[685]: Ignoring "noauto" option for root device
	[  +0.307770] systemd-fstab-generator[714]: Ignoring "noauto" option for root device
	[  +4.346772] systemd-fstab-generator[810]: Ignoring "noauto" option for root device
	[  +0.064090] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.787843] systemd-fstab-generator[936]: Ignoring "noauto" option for root device
	[Jul29 19:44] kauditd_printk_skb: 97 callbacks suppressed
	[  +7.788197] kauditd_printk_skb: 84 callbacks suppressed
	[Jul29 19:48] kauditd_printk_skb: 9 callbacks suppressed
	[  +1.298091] systemd-fstab-generator[3604]: Ignoring "noauto" option for root device
	[  +4.540629] kauditd_printk_skb: 53 callbacks suppressed
	[  +1.503562] systemd-fstab-generator[3927]: Ignoring "noauto" option for root device
	[Jul29 19:49] systemd-fstab-generator[4154]: Ignoring "noauto" option for root device
	[  +0.118311] kauditd_printk_skb: 14 callbacks suppressed
	[Jul29 19:50] kauditd_printk_skb: 84 callbacks suppressed
	
	
	==> etcd [bfa838a5a4f41e81f9d8cbbd5d5b931b2eb9342d201d22141ee26d00c11be9b4] <==
	{"level":"info","ts":"2024-07-29T19:48:50.948849Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f000dedbcae268ef switched to configuration voters=(17294067604685744367)"}
	{"level":"info","ts":"2024-07-29T19:48:50.949078Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"334af0e9e11f35f3","local-member-id":"f000dedbcae268ef","added-peer-id":"f000dedbcae268ef","added-peer-peer-urls":["https://192.168.61.201:2380"]}
	{"level":"info","ts":"2024-07-29T19:48:50.957586Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-29T19:48:50.97154Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"f000dedbcae268ef","initial-advertise-peer-urls":["https://192.168.61.201:2380"],"listen-peer-urls":["https://192.168.61.201:2380"],"advertise-client-urls":["https://192.168.61.201:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.61.201:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-29T19:48:50.973328Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-29T19:48:50.959434Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.61.201:2380"}
	{"level":"info","ts":"2024-07-29T19:48:50.976446Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.61.201:2380"}
	{"level":"info","ts":"2024-07-29T19:48:51.268019Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f000dedbcae268ef is starting a new election at term 1"}
	{"level":"info","ts":"2024-07-29T19:48:51.268165Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f000dedbcae268ef became pre-candidate at term 1"}
	{"level":"info","ts":"2024-07-29T19:48:51.268258Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f000dedbcae268ef received MsgPreVoteResp from f000dedbcae268ef at term 1"}
	{"level":"info","ts":"2024-07-29T19:48:51.268357Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f000dedbcae268ef became candidate at term 2"}
	{"level":"info","ts":"2024-07-29T19:48:51.268382Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f000dedbcae268ef received MsgVoteResp from f000dedbcae268ef at term 2"}
	{"level":"info","ts":"2024-07-29T19:48:51.268461Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f000dedbcae268ef became leader at term 2"}
	{"level":"info","ts":"2024-07-29T19:48:51.268487Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f000dedbcae268ef elected leader f000dedbcae268ef at term 2"}
	{"level":"info","ts":"2024-07-29T19:48:51.272916Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"f000dedbcae268ef","local-member-attributes":"{Name:embed-certs-358053 ClientURLs:[https://192.168.61.201:2379]}","request-path":"/0/members/f000dedbcae268ef/attributes","cluster-id":"334af0e9e11f35f3","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-29T19:48:51.273047Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T19:48:51.276092Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T19:48:51.278366Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T19:48:51.293489Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-29T19:48:51.301351Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-29T19:48:51.318373Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"334af0e9e11f35f3","local-member-id":"f000dedbcae268ef","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T19:48:51.31873Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T19:48:51.319531Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T19:48:51.320098Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.201:2379"}
	{"level":"info","ts":"2024-07-29T19:48:51.320321Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 19:58:17 up 14 min,  0 users,  load average: 0.20, 0.24, 0.26
	Linux embed-certs-358053 5.10.207 #1 SMP Tue Jul 23 04:25:44 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [556c56bb813dcb0e9fe6c39388e409948a2f82151ffd03085641374a44cecc06] <==
	I0729 19:52:12.289975       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0729 19:53:53.006021       1 handler_proxy.go:93] no RequestInfo found in the context
	E0729 19:53:53.006185       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0729 19:53:54.006925       1 handler_proxy.go:93] no RequestInfo found in the context
	W0729 19:53:54.007062       1 handler_proxy.go:93] no RequestInfo found in the context
	E0729 19:53:54.007197       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0729 19:53:54.007208       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0729 19:53:54.007153       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0729 19:53:54.009417       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0729 19:54:54.007359       1 handler_proxy.go:93] no RequestInfo found in the context
	E0729 19:54:54.007420       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0729 19:54:54.007428       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0729 19:54:54.009723       1 handler_proxy.go:93] no RequestInfo found in the context
	E0729 19:54:54.009850       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0729 19:54:54.009877       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0729 19:56:54.008472       1 handler_proxy.go:93] no RequestInfo found in the context
	E0729 19:56:54.008560       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0729 19:56:54.008568       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0729 19:56:54.010695       1 handler_proxy.go:93] no RequestInfo found in the context
	E0729 19:56:54.010820       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0729 19:56:54.010849       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
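	The 503s above indicate that the aggregated metrics API has no healthy backend: the metrics-server pod never becomes ready (see the kubelet log below). Assuming the kubeconfig context carries the profile name, the state can be confirmed from outside the node with:
	  kubectl --context embed-certs-358053 get apiservice v1beta1.metrics.k8s.io
	  kubectl --context embed-certs-358053 -n kube-system describe pod metrics-server-569cc877fc-gpz72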
	
	
	==> kube-controller-manager [b3f1b2259bc6f29cc226d1e45dc2f2cc4afa8db01e58b6097724a3108fa83551] <==
	I0729 19:52:39.747509       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 19:53:09.120088       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 19:53:09.755492       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 19:53:39.125693       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 19:53:39.763771       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 19:54:09.131083       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 19:54:09.773521       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 19:54:39.135224       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 19:54:39.780807       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0729 19:55:04.655075       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="309.95µs"
	E0729 19:55:09.140343       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 19:55:09.788854       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0729 19:55:17.652652       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="95.991µs"
	E0729 19:55:39.145781       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 19:55:39.799089       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 19:56:09.151744       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 19:56:09.807696       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 19:56:39.157623       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 19:56:39.816254       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 19:57:09.167697       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 19:57:09.824151       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 19:57:39.172846       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 19:57:39.832635       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 19:58:09.179605       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 19:58:09.841855       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [9de6d84f7d47e58b1bd321cd36210fdb789f353ebbb1c496b6431f968da98f55] <==
	I0729 19:49:12.312326       1 server_linux.go:69] "Using iptables proxy"
	I0729 19:49:12.323877       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.61.201"]
	I0729 19:49:12.370503       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0729 19:49:12.370596       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0729 19:49:12.370632       1 server_linux.go:165] "Using iptables Proxier"
	I0729 19:49:12.373556       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0729 19:49:12.374017       1 server.go:872] "Version info" version="v1.30.3"
	I0729 19:49:12.374084       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 19:49:12.376186       1 config.go:192] "Starting service config controller"
	I0729 19:49:12.376545       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0729 19:49:12.376663       1 config.go:101] "Starting endpoint slice config controller"
	I0729 19:49:12.376745       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0729 19:49:12.378529       1 config.go:319] "Starting node config controller"
	I0729 19:49:12.378556       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0729 19:49:12.477502       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0729 19:49:12.477562       1 shared_informer.go:320] Caches are synced for service config
	I0729 19:49:12.478966       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [aee4a8eb8429506cd6a40a23568ae6fdeb332abcf88402f02b124f8b6e53678b] <==
	E0729 19:48:53.034051       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0729 19:48:53.034058       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0729 19:48:53.034083       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0729 19:48:53.034098       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0729 19:48:53.034126       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0729 19:48:53.034143       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0729 19:48:53.851023       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0729 19:48:53.851072       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0729 19:48:53.885520       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0729 19:48:53.885568       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0729 19:48:54.034922       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0729 19:48:54.035033       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0729 19:48:54.042621       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0729 19:48:54.042740       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0729 19:48:54.057441       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0729 19:48:54.057771       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0729 19:48:54.076694       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0729 19:48:54.078011       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0729 19:48:54.080404       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0729 19:48:54.080471       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0729 19:48:54.172265       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0729 19:48:54.172384       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0729 19:48:54.178049       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0729 19:48:54.178096       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0729 19:48:56.427556       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 29 19:55:55 embed-certs-358053 kubelet[3934]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 19:55:55 embed-certs-358053 kubelet[3934]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 19:55:55 embed-certs-358053 kubelet[3934]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 19:55:56 embed-certs-358053 kubelet[3934]: E0729 19:55:56.637954    3934 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-gpz72" podUID="cb992ca6-11f3-4826-b701-6789d3e3e9c0"
	Jul 29 19:56:09 embed-certs-358053 kubelet[3934]: E0729 19:56:09.638152    3934 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-gpz72" podUID="cb992ca6-11f3-4826-b701-6789d3e3e9c0"
	Jul 29 19:56:21 embed-certs-358053 kubelet[3934]: E0729 19:56:21.636723    3934 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-gpz72" podUID="cb992ca6-11f3-4826-b701-6789d3e3e9c0"
	Jul 29 19:56:32 embed-certs-358053 kubelet[3934]: E0729 19:56:32.637798    3934 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-gpz72" podUID="cb992ca6-11f3-4826-b701-6789d3e3e9c0"
	Jul 29 19:56:43 embed-certs-358053 kubelet[3934]: E0729 19:56:43.636660    3934 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-gpz72" podUID="cb992ca6-11f3-4826-b701-6789d3e3e9c0"
	Jul 29 19:56:55 embed-certs-358053 kubelet[3934]: E0729 19:56:55.660595    3934 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 19:56:55 embed-certs-358053 kubelet[3934]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 19:56:55 embed-certs-358053 kubelet[3934]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 19:56:55 embed-certs-358053 kubelet[3934]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 19:56:55 embed-certs-358053 kubelet[3934]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 19:56:58 embed-certs-358053 kubelet[3934]: E0729 19:56:58.637467    3934 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-gpz72" podUID="cb992ca6-11f3-4826-b701-6789d3e3e9c0"
	Jul 29 19:57:11 embed-certs-358053 kubelet[3934]: E0729 19:57:11.638827    3934 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-gpz72" podUID="cb992ca6-11f3-4826-b701-6789d3e3e9c0"
	Jul 29 19:57:23 embed-certs-358053 kubelet[3934]: E0729 19:57:23.637366    3934 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-gpz72" podUID="cb992ca6-11f3-4826-b701-6789d3e3e9c0"
	Jul 29 19:57:34 embed-certs-358053 kubelet[3934]: E0729 19:57:34.637329    3934 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-gpz72" podUID="cb992ca6-11f3-4826-b701-6789d3e3e9c0"
	Jul 29 19:57:46 embed-certs-358053 kubelet[3934]: E0729 19:57:46.636804    3934 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-gpz72" podUID="cb992ca6-11f3-4826-b701-6789d3e3e9c0"
	Jul 29 19:57:55 embed-certs-358053 kubelet[3934]: E0729 19:57:55.660567    3934 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 19:57:55 embed-certs-358053 kubelet[3934]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 19:57:55 embed-certs-358053 kubelet[3934]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 19:57:55 embed-certs-358053 kubelet[3934]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 19:57:55 embed-certs-358053 kubelet[3934]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 19:58:01 embed-certs-358053 kubelet[3934]: E0729 19:58:01.639489    3934 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-gpz72" podUID="cb992ca6-11f3-4826-b701-6789d3e3e9c0"
	Jul 29 19:58:15 embed-certs-358053 kubelet[3934]: E0729 19:58:15.637531    3934 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-gpz72" podUID="cb992ca6-11f3-4826-b701-6789d3e3e9c0"
	
	
	==> storage-provisioner [1281c537c6df1d88b22bdc206c5ab613efa97b1d395992f2f616d7745a58eb77] <==
	I0729 19:49:12.261091       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0729 19:49:12.271312       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0729 19:49:12.271395       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0729 19:49:12.282494       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0729 19:49:12.282655       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-358053_2c9f1bf5-6151-42a9-81bc-bc1424d29abf!
	I0729 19:49:12.283660       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"b8d00ce0-d8bf-4c95-9f65-334fbcbb3efa", APIVersion:"v1", ResourceVersion:"445", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-358053_2c9f1bf5-6151-42a9-81bc-bc1424d29abf became leader
	I0729 19:49:12.383511       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-358053_2c9f1bf5-6151-42a9-81bc-bc1424d29abf!
	

-- /stdout --
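The storage-provisioner log above ends with the provisioner acquiring the kube-system/k8s.io-minikube-hostpath leader lease and only then starting its controller. Below is a minimal sketch of that leader-election pattern with client-go, under stated assumptions: in-cluster configuration, a Lease-based lock (the provisioner's own log records an Endpoints-based lock), and a hypothetical runProvisioner stand-in for the controller loop.

// leaderelect.go - sketch of the leader-election step seen in the
// storage-provisioner log above (assumptions noted in the lead-in).
package main

import (
	"context"
	"log"
	"os"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		log.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	hostname, _ := os.Hostname()
	// Lease lock on the same name/namespace the log shows; the real
	// provisioner uses an Endpoints-based lock.
	lock := &resourcelock.LeaseLock{
		LeaseMeta:  metav1.ObjectMeta{Name: "k8s.io-minikube-hostpath", Namespace: "kube-system"},
		Client:     client.CoordinationV1(),
		LockConfig: resourcelock.ResourceLockConfig{Identity: hostname},
	}

	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:          lock,
		LeaseDuration: 15 * time.Second,
		RenewDeadline: 10 * time.Second,
		RetryPeriod:   2 * time.Second,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) { runProvisioner(ctx) }, // hypothetical callback
			OnStoppedLeading: func() { log.Println("lost lease") },
		},
	})
}

// runProvisioner is a hypothetical stand-in for the controller started after
// the lease is acquired ("Starting provisioner controller ..." in the log).
func runProvisioner(ctx context.Context) { <-ctx.Done() }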
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-358053 -n embed-certs-358053
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-358053 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-569cc877fc-gpz72
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-358053 describe pod metrics-server-569cc877fc-gpz72
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-358053 describe pod metrics-server-569cc877fc-gpz72: exit status 1 (65.204623ms)
** stderr ** 
	Error from server (NotFound): pods "metrics-server-569cc877fc-gpz72" not found
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-358053 describe pod metrics-server-569cc877fc-gpz72: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (544.13s)
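The post-mortem above locates the failed pod with kubectl get po -A --field-selector=status.phase!=Running before describing it. A minimal client-go equivalent of that query is sketched below; it assumes a kubeconfig at the default ~/.kube/config path and is illustrative only, not the test helper's actual code.

// nonrunning.go - sketch of the post-mortem query above
// (kubectl get po -A --field-selector=status.phase!=Running).
package main

import (
	"context"
	"fmt"
	"log"
	"path/filepath"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	// Assumed kubeconfig location; the CI run uses its own profile paths.
	kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		log.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Same filter the helper uses: every pod whose phase is not Running,
	// across all namespaces.
	pods, err := client.CoreV1().Pods(metav1.NamespaceAll).List(context.Background(),
		metav1.ListOptions{FieldSelector: "status.phase!=Running"})
	if err != nil {
		log.Fatal(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s/%s\t%s\n", p.Namespace, p.Name, p.Status.Phase)
	}
}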

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (544.18s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0729 19:50:01.508639 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/flannel-184620/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-024652 -n default-k8s-diff-port-024652
start_stop_delete_test.go:274: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-07-29 19:58:37.181396011 +0000 UTC m=+6089.416239397
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
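The failure above is the test's 9m0s wait for a Running pod labelled k8s-app=kubernetes-dashboard expiring. Below is a minimal sketch of that kind of label-selector poll with client-go; the kubeconfig path, poll interval, and timeout are illustrative assumptions, not the test's actual helper.

// waitpods.go - sketch of the wait performed by the test above: poll until at
// least one pod matching k8s-app=kubernetes-dashboard is Running, or a
// deadline is hit.
package main

import (
	"context"
	"fmt"
	"log"
	"path/filepath"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("",
		filepath.Join(homedir.HomeDir(), ".kube", "config")) // assumed kubeconfig location
	if err != nil {
		log.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Overall deadline, mirroring the 9m0s wait in the test output.
	ctx, cancel := context.WithTimeout(context.Background(), 9*time.Minute)
	defer cancel()

	for {
		pods, err := client.CoreV1().Pods("kubernetes-dashboard").List(ctx,
			metav1.ListOptions{LabelSelector: "k8s-app=kubernetes-dashboard"})
		if err == nil {
			for _, p := range pods.Items {
				if p.Status.Phase == corev1.PodRunning {
					fmt.Printf("pod %s is Running\n", p.Name)
					return
				}
			}
		}
		select {
		case <-ctx.Done():
			log.Fatalf("timed out waiting for k8s-app=kubernetes-dashboard: %v", ctx.Err())
		case <-time.After(5 * time.Second): // illustrative poll interval
		}
	}
}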
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-024652 -n default-k8s-diff-port-024652
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-024652 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-024652 logs -n 25: (2.068484516s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-184620 sudo cat                              | bridge-184620                | jenkins | v1.33.1 | 29 Jul 24 19:36 UTC | 29 Jul 24 19:36 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p bridge-184620 sudo                                  | bridge-184620                | jenkins | v1.33.1 | 29 Jul 24 19:36 UTC | 29 Jul 24 19:36 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p bridge-184620 sudo                                  | bridge-184620                | jenkins | v1.33.1 | 29 Jul 24 19:36 UTC | 29 Jul 24 19:36 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-184620 sudo                                  | bridge-184620                | jenkins | v1.33.1 | 29 Jul 24 19:36 UTC | 29 Jul 24 19:36 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-184620 sudo find                             | bridge-184620                | jenkins | v1.33.1 | 29 Jul 24 19:36 UTC | 29 Jul 24 19:36 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-184620 sudo crio                             | bridge-184620                | jenkins | v1.33.1 | 29 Jul 24 19:36 UTC | 29 Jul 24 19:36 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-184620                                       | bridge-184620                | jenkins | v1.33.1 | 29 Jul 24 19:36 UTC | 29 Jul 24 19:36 UTC |
	| delete  | -p                                                     | disable-driver-mounts-251895 | jenkins | v1.33.1 | 29 Jul 24 19:36 UTC | 29 Jul 24 19:36 UTC |
	|         | disable-driver-mounts-251895                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-024652 | jenkins | v1.33.1 | 29 Jul 24 19:36 UTC | 29 Jul 24 19:37 UTC |
	|         | default-k8s-diff-port-024652                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-843792             | no-preload-843792            | jenkins | v1.33.1 | 29 Jul 24 19:36 UTC | 29 Jul 24 19:36 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-843792                                   | no-preload-843792            | jenkins | v1.33.1 | 29 Jul 24 19:36 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-358053            | embed-certs-358053           | jenkins | v1.33.1 | 29 Jul 24 19:36 UTC | 29 Jul 24 19:36 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-358053                                  | embed-certs-358053           | jenkins | v1.33.1 | 29 Jul 24 19:36 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-024652  | default-k8s-diff-port-024652 | jenkins | v1.33.1 | 29 Jul 24 19:37 UTC | 29 Jul 24 19:37 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-024652 | jenkins | v1.33.1 | 29 Jul 24 19:37 UTC |                     |
	|         | default-k8s-diff-port-024652                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-843792                  | no-preload-843792            | jenkins | v1.33.1 | 29 Jul 24 19:38 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-843792 --memory=2200                     | no-preload-843792            | jenkins | v1.33.1 | 29 Jul 24 19:38 UTC | 29 Jul 24 19:50 UTC |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-021528        | old-k8s-version-021528       | jenkins | v1.33.1 | 29 Jul 24 19:39 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-358053                 | embed-certs-358053           | jenkins | v1.33.1 | 29 Jul 24 19:39 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-358053                                  | embed-certs-358053           | jenkins | v1.33.1 | 29 Jul 24 19:39 UTC | 29 Jul 24 19:49 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-024652       | default-k8s-diff-port-024652 | jenkins | v1.33.1 | 29 Jul 24 19:39 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-024652 | jenkins | v1.33.1 | 29 Jul 24 19:39 UTC | 29 Jul 24 19:49 UTC |
	|         | default-k8s-diff-port-024652                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-021528                              | old-k8s-version-021528       | jenkins | v1.33.1 | 29 Jul 24 19:40 UTC | 29 Jul 24 19:40 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-021528             | old-k8s-version-021528       | jenkins | v1.33.1 | 29 Jul 24 19:40 UTC | 29 Jul 24 19:40 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-021528                              | old-k8s-version-021528       | jenkins | v1.33.1 | 29 Jul 24 19:40 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 19:40:57
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 19:40:57.978681 1120970 out.go:291] Setting OutFile to fd 1 ...
	I0729 19:40:57.978791 1120970 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 19:40:57.978802 1120970 out.go:304] Setting ErrFile to fd 2...
	I0729 19:40:57.978806 1120970 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 19:40:57.979026 1120970 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19312-1055011/.minikube/bin
	I0729 19:40:57.979596 1120970 out.go:298] Setting JSON to false
	I0729 19:40:57.980589 1120970 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":12210,"bootTime":1722269848,"procs":198,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 19:40:57.980644 1120970 start.go:139] virtualization: kvm guest
	I0729 19:40:57.982865 1120970 out.go:177] * [old-k8s-version-021528] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 19:40:57.984265 1120970 out.go:177]   - MINIKUBE_LOCATION=19312
	I0729 19:40:57.984290 1120970 notify.go:220] Checking for updates...
	I0729 19:40:57.986747 1120970 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 19:40:57.987926 1120970 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19312-1055011/kubeconfig
	I0729 19:40:57.989034 1120970 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19312-1055011/.minikube
	I0729 19:40:57.990155 1120970 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 19:40:57.991151 1120970 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 19:40:57.992788 1120970 config.go:182] Loaded profile config "old-k8s-version-021528": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0729 19:40:57.993431 1120970 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:40:57.993513 1120970 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:40:58.008423 1120970 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35781
	I0729 19:40:58.008809 1120970 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:40:58.009278 1120970 main.go:141] libmachine: Using API Version  1
	I0729 19:40:58.009298 1120970 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:40:58.009623 1120970 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:40:58.009801 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .DriverName
	I0729 19:40:58.011523 1120970 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0729 19:40:58.012638 1120970 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 19:40:58.012915 1120970 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:40:58.012949 1120970 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:40:58.027302 1120970 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38245
	I0729 19:40:58.027641 1120970 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:40:58.028112 1120970 main.go:141] libmachine: Using API Version  1
	I0729 19:40:58.028144 1120970 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:40:58.028470 1120970 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:40:58.028677 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .DriverName
	I0729 19:40:58.062833 1120970 out.go:177] * Using the kvm2 driver based on existing profile
	I0729 19:40:58.064034 1120970 start.go:297] selected driver: kvm2
	I0729 19:40:58.064048 1120970 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-021528 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-021528 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.65 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 19:40:58.064180 1120970 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 19:40:58.065210 1120970 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 19:40:58.065308 1120970 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19312-1055011/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 19:40:58.079987 1120970 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0729 19:40:58.080369 1120970 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 19:40:58.080432 1120970 cni.go:84] Creating CNI manager for ""
	I0729 19:40:58.080446 1120970 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 19:40:58.080487 1120970 start.go:340] cluster config:
	{Name:old-k8s-version-021528 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-021528 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.65 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 19:40:58.080598 1120970 iso.go:125] acquiring lock: {Name:mk0af61c0fec1fd47930e548d03010a532c687b7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 19:40:58.082281 1120970 out.go:177] * Starting "old-k8s-version-021528" primary control-plane node in "old-k8s-version-021528" cluster
	I0729 19:40:58.083538 1120970 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0729 19:40:58.083567 1120970 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0729 19:40:58.083574 1120970 cache.go:56] Caching tarball of preloaded images
	I0729 19:40:58.083648 1120970 preload.go:172] Found /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 19:40:58.083657 1120970 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0729 19:40:58.083744 1120970 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/old-k8s-version-021528/config.json ...
	I0729 19:40:58.083909 1120970 start.go:360] acquireMachinesLock for old-k8s-version-021528: {Name:mk0d8d947666df844b5fc2c0e0eebbfed69b4140 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 19:40:58.743070 1119948 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.248:22: connect: no route to host
	I0729 19:41:01.815162 1119948 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.248:22: connect: no route to host
	I0729 19:41:07.895109 1119948 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.248:22: connect: no route to host
	I0729 19:41:10.967163 1119948 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.248:22: connect: no route to host
	I0729 19:41:17.047104 1119948 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.248:22: connect: no route to host
	I0729 19:41:20.119110 1119948 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.248:22: connect: no route to host
	I0729 19:41:26.199071 1119948 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.248:22: connect: no route to host
	I0729 19:41:29.271169 1119948 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.248:22: connect: no route to host
	I0729 19:41:35.351112 1119948 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.248:22: connect: no route to host
	I0729 19:41:38.423168 1119948 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.248:22: connect: no route to host
	I0729 19:41:44.503138 1119948 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.248:22: connect: no route to host
	I0729 19:41:47.575152 1119948 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.248:22: connect: no route to host
	I0729 19:41:53.655149 1119948 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.248:22: connect: no route to host
	I0729 19:41:56.727131 1119948 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.248:22: connect: no route to host
	I0729 19:42:02.807132 1119948 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.248:22: connect: no route to host
	I0729 19:42:05.879122 1119948 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.248:22: connect: no route to host
	I0729 19:42:11.959162 1119948 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.248:22: connect: no route to host
	I0729 19:42:15.031086 1119948 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.248:22: connect: no route to host
	I0729 19:42:21.111136 1119948 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.248:22: connect: no route to host
	I0729 19:42:24.183135 1119948 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.248:22: connect: no route to host
	I0729 19:42:30.263164 1119948 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.248:22: connect: no route to host
	I0729 19:42:33.335133 1119948 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.248:22: connect: no route to host
	I0729 19:42:39.415119 1119948 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.248:22: connect: no route to host
	I0729 19:42:42.487148 1119948 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.248:22: connect: no route to host
	I0729 19:42:48.567136 1119948 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.248:22: connect: no route to host
	I0729 19:42:51.639137 1119948 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.248:22: connect: no route to host
	I0729 19:42:57.719135 1119948 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.248:22: connect: no route to host
	I0729 19:43:00.791072 1119948 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.248:22: connect: no route to host
	I0729 19:43:06.871163 1119948 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.248:22: connect: no route to host
	I0729 19:43:09.943159 1119948 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.248:22: connect: no route to host
	I0729 19:43:16.023117 1119948 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.248:22: connect: no route to host
	I0729 19:43:19.095170 1119948 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.248:22: connect: no route to host
	I0729 19:43:25.175078 1119948 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.248:22: connect: no route to host
	I0729 19:43:28.247100 1119948 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.248:22: connect: no route to host
	I0729 19:43:31.250338 1120280 start.go:364] duration metric: took 4m11.087175718s to acquireMachinesLock for "embed-certs-358053"
	I0729 19:43:31.250404 1120280 start.go:96] Skipping create...Using existing machine configuration
	I0729 19:43:31.250411 1120280 fix.go:54] fixHost starting: 
	I0729 19:43:31.250743 1120280 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:43:31.250772 1120280 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:43:31.266386 1120280 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36427
	I0729 19:43:31.266811 1120280 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:43:31.267264 1120280 main.go:141] libmachine: Using API Version  1
	I0729 19:43:31.267290 1120280 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:43:31.267606 1120280 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:43:31.267776 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .DriverName
	I0729 19:43:31.267930 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetState
	I0729 19:43:31.269434 1120280 fix.go:112] recreateIfNeeded on embed-certs-358053: state=Stopped err=<nil>
	I0729 19:43:31.269469 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .DriverName
	W0729 19:43:31.269649 1120280 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 19:43:31.271498 1120280 out.go:177] * Restarting existing kvm2 VM for "embed-certs-358053" ...
	I0729 19:43:31.248030 1119948 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 19:43:31.248063 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetMachineName
	I0729 19:43:31.248357 1119948 buildroot.go:166] provisioning hostname "no-preload-843792"
	I0729 19:43:31.248385 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetMachineName
	I0729 19:43:31.248542 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHHostname
	I0729 19:43:31.250201 1119948 machine.go:97] duration metric: took 4m37.426219796s to provisionDockerMachine
	I0729 19:43:31.250243 1119948 fix.go:56] duration metric: took 4m37.44720731s for fixHost
	I0729 19:43:31.250251 1119948 start.go:83] releasing machines lock for "no-preload-843792", held for 4m37.4472306s
	W0729 19:43:31.250275 1119948 start.go:714] error starting host: provision: host is not running
	W0729 19:43:31.250399 1119948 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0729 19:43:31.250411 1119948 start.go:729] Will try again in 5 seconds ...
	I0729 19:43:31.272835 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .Start
	I0729 19:43:31.272957 1120280 main.go:141] libmachine: (embed-certs-358053) Ensuring networks are active...
	I0729 19:43:31.273784 1120280 main.go:141] libmachine: (embed-certs-358053) Ensuring network default is active
	I0729 19:43:31.274173 1120280 main.go:141] libmachine: (embed-certs-358053) Ensuring network mk-embed-certs-358053 is active
	I0729 19:43:31.274533 1120280 main.go:141] libmachine: (embed-certs-358053) Getting domain xml...
	I0729 19:43:31.275353 1120280 main.go:141] libmachine: (embed-certs-358053) Creating domain...
	I0729 19:43:32.452915 1120280 main.go:141] libmachine: (embed-certs-358053) Waiting to get IP...
	I0729 19:43:32.453981 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:32.454389 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | unable to find current IP address of domain embed-certs-358053 in network mk-embed-certs-358053
	I0729 19:43:32.454483 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | I0729 19:43:32.454365 1121493 retry.go:31] will retry after 241.453693ms: waiting for machine to come up
	I0729 19:43:32.697915 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:32.698300 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | unable to find current IP address of domain embed-certs-358053 in network mk-embed-certs-358053
	I0729 19:43:32.698331 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | I0729 19:43:32.698251 1121493 retry.go:31] will retry after 239.33532ms: waiting for machine to come up
	I0729 19:43:32.939708 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:32.940293 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | unable to find current IP address of domain embed-certs-358053 in network mk-embed-certs-358053
	I0729 19:43:32.940318 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | I0729 19:43:32.940236 1121493 retry.go:31] will retry after 446.993297ms: waiting for machine to come up
	I0729 19:43:33.388724 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:33.389127 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | unable to find current IP address of domain embed-certs-358053 in network mk-embed-certs-358053
	I0729 19:43:33.389158 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | I0729 19:43:33.389070 1121493 retry.go:31] will retry after 422.446887ms: waiting for machine to come up
	I0729 19:43:33.812596 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:33.813022 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | unable to find current IP address of domain embed-certs-358053 in network mk-embed-certs-358053
	I0729 19:43:33.813051 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | I0729 19:43:33.812969 1121493 retry.go:31] will retry after 539.971993ms: waiting for machine to come up
	I0729 19:43:34.354683 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:34.355036 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | unable to find current IP address of domain embed-certs-358053 in network mk-embed-certs-358053
	I0729 19:43:34.355070 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | I0729 19:43:34.354984 1121493 retry.go:31] will retry after 804.005911ms: waiting for machine to come up
	I0729 19:43:36.252290 1119948 start.go:360] acquireMachinesLock for no-preload-843792: {Name:mk0d8d947666df844b5fc2c0e0eebbfed69b4140 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 19:43:35.161115 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:35.161468 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | unable to find current IP address of domain embed-certs-358053 in network mk-embed-certs-358053
	I0729 19:43:35.161505 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | I0729 19:43:35.161430 1121493 retry.go:31] will retry after 1.057061094s: waiting for machine to come up
	I0729 19:43:36.220062 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:36.220425 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | unable to find current IP address of domain embed-certs-358053 in network mk-embed-certs-358053
	I0729 19:43:36.220450 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | I0729 19:43:36.220375 1121493 retry.go:31] will retry after 1.460606435s: waiting for machine to come up
	I0729 19:43:37.683178 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:37.683636 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | unable to find current IP address of domain embed-certs-358053 in network mk-embed-certs-358053
	I0729 19:43:37.683655 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | I0729 19:43:37.683597 1121493 retry.go:31] will retry after 1.732527981s: waiting for machine to come up
	I0729 19:43:39.418519 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:39.418954 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | unable to find current IP address of domain embed-certs-358053 in network mk-embed-certs-358053
	I0729 19:43:39.418977 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | I0729 19:43:39.418904 1121493 retry.go:31] will retry after 2.125686576s: waiting for machine to come up
	I0729 19:43:41.547132 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:41.547733 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | unable to find current IP address of domain embed-certs-358053 in network mk-embed-certs-358053
	I0729 19:43:41.547761 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | I0729 19:43:41.547675 1121493 retry.go:31] will retry after 2.335461887s: waiting for machine to come up
	I0729 19:43:43.884901 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:43.885306 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | unable to find current IP address of domain embed-certs-358053 in network mk-embed-certs-358053
	I0729 19:43:43.885329 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | I0729 19:43:43.885251 1121493 retry.go:31] will retry after 2.493920061s: waiting for machine to come up
	I0729 19:43:46.380895 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:46.381249 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | unable to find current IP address of domain embed-certs-358053 in network mk-embed-certs-358053
	I0729 19:43:46.381283 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | I0729 19:43:46.381209 1121493 retry.go:31] will retry after 4.001159351s: waiting for machine to come up
	I0729 19:43:51.915678 1120587 start.go:364] duration metric: took 3m55.652628622s to acquireMachinesLock for "default-k8s-diff-port-024652"
	I0729 19:43:51.915763 1120587 start.go:96] Skipping create...Using existing machine configuration
	I0729 19:43:51.915773 1120587 fix.go:54] fixHost starting: 
	I0729 19:43:51.916253 1120587 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:43:51.916303 1120587 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:43:51.933248 1120587 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36959
	I0729 19:43:51.933631 1120587 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:43:51.934146 1120587 main.go:141] libmachine: Using API Version  1
	I0729 19:43:51.934178 1120587 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:43:51.934512 1120587 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:43:51.934710 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .DriverName
	I0729 19:43:51.934882 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetState
	I0729 19:43:51.936266 1120587 fix.go:112] recreateIfNeeded on default-k8s-diff-port-024652: state=Stopped err=<nil>
	I0729 19:43:51.936294 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .DriverName
	W0729 19:43:51.936471 1120587 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 19:43:51.938542 1120587 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-024652" ...
	I0729 19:43:50.387313 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:50.387631 1120280 main.go:141] libmachine: (embed-certs-358053) Found IP for machine: 192.168.61.201
	I0729 19:43:50.387649 1120280 main.go:141] libmachine: (embed-certs-358053) Reserving static IP address...
	I0729 19:43:50.387673 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has current primary IP address 192.168.61.201 and MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:50.388059 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | found host DHCP lease matching {name: "embed-certs-358053", mac: "52:54:00:b7:9e:78", ip: "192.168.61.201"} in network mk-embed-certs-358053: {Iface:virbr3 ExpiryTime:2024-07-29 20:43:42 +0000 UTC Type:0 Mac:52:54:00:b7:9e:78 Iaid: IPaddr:192.168.61.201 Prefix:24 Hostname:embed-certs-358053 Clientid:01:52:54:00:b7:9e:78}
	I0729 19:43:50.388088 1120280 main.go:141] libmachine: (embed-certs-358053) Reserved static IP address: 192.168.61.201
	I0729 19:43:50.388122 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | skip adding static IP to network mk-embed-certs-358053 - found existing host DHCP lease matching {name: "embed-certs-358053", mac: "52:54:00:b7:9e:78", ip: "192.168.61.201"}
	I0729 19:43:50.388140 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | Getting to WaitForSSH function...
	I0729 19:43:50.388153 1120280 main.go:141] libmachine: (embed-certs-358053) Waiting for SSH to be available...
	I0729 19:43:50.389891 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:50.390221 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:9e:78", ip: ""} in network mk-embed-certs-358053: {Iface:virbr3 ExpiryTime:2024-07-29 20:43:42 +0000 UTC Type:0 Mac:52:54:00:b7:9e:78 Iaid: IPaddr:192.168.61.201 Prefix:24 Hostname:embed-certs-358053 Clientid:01:52:54:00:b7:9e:78}
	I0729 19:43:50.390251 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined IP address 192.168.61.201 and MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:50.390327 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | Using SSH client type: external
	I0729 19:43:50.390358 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | Using SSH private key: /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/embed-certs-358053/id_rsa (-rw-------)
	I0729 19:43:50.390384 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.201 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/embed-certs-358053/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 19:43:50.390394 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | About to run SSH command:
	I0729 19:43:50.390403 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | exit 0
	I0729 19:43:50.519000 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | SSH cmd err, output: <nil>: 
	I0729 19:43:50.519409 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetConfigRaw
	I0729 19:43:50.520046 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetIP
	I0729 19:43:50.522297 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:50.522663 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:9e:78", ip: ""} in network mk-embed-certs-358053: {Iface:virbr3 ExpiryTime:2024-07-29 20:43:42 +0000 UTC Type:0 Mac:52:54:00:b7:9e:78 Iaid: IPaddr:192.168.61.201 Prefix:24 Hostname:embed-certs-358053 Clientid:01:52:54:00:b7:9e:78}
	I0729 19:43:50.522692 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined IP address 192.168.61.201 and MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:50.522946 1120280 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/embed-certs-358053/config.json ...
	I0729 19:43:50.523145 1120280 machine.go:94] provisionDockerMachine start ...
	I0729 19:43:50.523164 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .DriverName
	I0729 19:43:50.523346 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHHostname
	I0729 19:43:50.525235 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:50.525608 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:9e:78", ip: ""} in network mk-embed-certs-358053: {Iface:virbr3 ExpiryTime:2024-07-29 20:43:42 +0000 UTC Type:0 Mac:52:54:00:b7:9e:78 Iaid: IPaddr:192.168.61.201 Prefix:24 Hostname:embed-certs-358053 Clientid:01:52:54:00:b7:9e:78}
	I0729 19:43:50.525625 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined IP address 192.168.61.201 and MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:50.525729 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHPort
	I0729 19:43:50.525897 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHKeyPath
	I0729 19:43:50.526188 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHKeyPath
	I0729 19:43:50.526332 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHUsername
	I0729 19:43:50.526523 1120280 main.go:141] libmachine: Using SSH client type: native
	I0729 19:43:50.526751 1120280 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.201 22 <nil> <nil>}
	I0729 19:43:50.526765 1120280 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 19:43:50.639176 1120280 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0729 19:43:50.639206 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetMachineName
	I0729 19:43:50.639463 1120280 buildroot.go:166] provisioning hostname "embed-certs-358053"
	I0729 19:43:50.639489 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetMachineName
	I0729 19:43:50.639652 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHHostname
	I0729 19:43:50.642218 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:50.642546 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:9e:78", ip: ""} in network mk-embed-certs-358053: {Iface:virbr3 ExpiryTime:2024-07-29 20:43:42 +0000 UTC Type:0 Mac:52:54:00:b7:9e:78 Iaid: IPaddr:192.168.61.201 Prefix:24 Hostname:embed-certs-358053 Clientid:01:52:54:00:b7:9e:78}
	I0729 19:43:50.642573 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined IP address 192.168.61.201 and MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:50.642704 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHPort
	I0729 19:43:50.642896 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHKeyPath
	I0729 19:43:50.643034 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHKeyPath
	I0729 19:43:50.643188 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHUsername
	I0729 19:43:50.643396 1120280 main.go:141] libmachine: Using SSH client type: native
	I0729 19:43:50.643599 1120280 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.201 22 <nil> <nil>}
	I0729 19:43:50.643615 1120280 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-358053 && echo "embed-certs-358053" | sudo tee /etc/hostname
	I0729 19:43:50.775163 1120280 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-358053
	
	I0729 19:43:50.775200 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHHostname
	I0729 19:43:50.777834 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:50.778140 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:9e:78", ip: ""} in network mk-embed-certs-358053: {Iface:virbr3 ExpiryTime:2024-07-29 20:43:42 +0000 UTC Type:0 Mac:52:54:00:b7:9e:78 Iaid: IPaddr:192.168.61.201 Prefix:24 Hostname:embed-certs-358053 Clientid:01:52:54:00:b7:9e:78}
	I0729 19:43:50.778166 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined IP address 192.168.61.201 and MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:50.778337 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHPort
	I0729 19:43:50.778536 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHKeyPath
	I0729 19:43:50.778687 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHKeyPath
	I0729 19:43:50.778818 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHUsername
	I0729 19:43:50.778984 1120280 main.go:141] libmachine: Using SSH client type: native
	I0729 19:43:50.779150 1120280 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.201 22 <nil> <nil>}
	I0729 19:43:50.779164 1120280 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-358053' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-358053/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-358053' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 19:43:50.899709 1120280 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 19:43:50.899756 1120280 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19312-1055011/.minikube CaCertPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19312-1055011/.minikube}
	I0729 19:43:50.899791 1120280 buildroot.go:174] setting up certificates
	I0729 19:43:50.899806 1120280 provision.go:84] configureAuth start
	I0729 19:43:50.899821 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetMachineName
	I0729 19:43:50.900090 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetIP
	I0729 19:43:50.902304 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:50.902663 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:9e:78", ip: ""} in network mk-embed-certs-358053: {Iface:virbr3 ExpiryTime:2024-07-29 20:43:42 +0000 UTC Type:0 Mac:52:54:00:b7:9e:78 Iaid: IPaddr:192.168.61.201 Prefix:24 Hostname:embed-certs-358053 Clientid:01:52:54:00:b7:9e:78}
	I0729 19:43:50.902695 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined IP address 192.168.61.201 and MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:50.902787 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHHostname
	I0729 19:43:50.904815 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:50.905150 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:9e:78", ip: ""} in network mk-embed-certs-358053: {Iface:virbr3 ExpiryTime:2024-07-29 20:43:42 +0000 UTC Type:0 Mac:52:54:00:b7:9e:78 Iaid: IPaddr:192.168.61.201 Prefix:24 Hostname:embed-certs-358053 Clientid:01:52:54:00:b7:9e:78}
	I0729 19:43:50.905170 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined IP address 192.168.61.201 and MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:50.905279 1120280 provision.go:143] copyHostCerts
	I0729 19:43:50.905350 1120280 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.pem, removing ...
	I0729 19:43:50.905366 1120280 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.pem
	I0729 19:43:50.905446 1120280 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.pem (1082 bytes)
	I0729 19:43:50.905561 1120280 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-1055011/.minikube/cert.pem, removing ...
	I0729 19:43:50.905573 1120280 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-1055011/.minikube/cert.pem
	I0729 19:43:50.905626 1120280 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19312-1055011/.minikube/cert.pem (1123 bytes)
	I0729 19:43:50.905704 1120280 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-1055011/.minikube/key.pem, removing ...
	I0729 19:43:50.905713 1120280 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-1055011/.minikube/key.pem
	I0729 19:43:50.905746 1120280 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19312-1055011/.minikube/key.pem (1679 bytes)
	I0729 19:43:50.905815 1120280 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca-key.pem org=jenkins.embed-certs-358053 san=[127.0.0.1 192.168.61.201 embed-certs-358053 localhost minikube]
	I0729 19:43:51.198616 1120280 provision.go:177] copyRemoteCerts
	I0729 19:43:51.198692 1120280 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 19:43:51.198734 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHHostname
	I0729 19:43:51.201272 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:51.201527 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:9e:78", ip: ""} in network mk-embed-certs-358053: {Iface:virbr3 ExpiryTime:2024-07-29 20:43:42 +0000 UTC Type:0 Mac:52:54:00:b7:9e:78 Iaid: IPaddr:192.168.61.201 Prefix:24 Hostname:embed-certs-358053 Clientid:01:52:54:00:b7:9e:78}
	I0729 19:43:51.201556 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined IP address 192.168.61.201 and MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:51.201681 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHPort
	I0729 19:43:51.201876 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHKeyPath
	I0729 19:43:51.202054 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHUsername
	I0729 19:43:51.202170 1120280 sshutil.go:53] new ssh client: &{IP:192.168.61.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/embed-certs-358053/id_rsa Username:docker}
	I0729 19:43:51.290007 1120280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0729 19:43:51.316649 1120280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0729 19:43:51.340617 1120280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0729 19:43:51.363465 1120280 provision.go:87] duration metric: took 463.642377ms to configureAuth
	I0729 19:43:51.363495 1120280 buildroot.go:189] setting minikube options for container-runtime
	I0729 19:43:51.363700 1120280 config.go:182] Loaded profile config "embed-certs-358053": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 19:43:51.363813 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHHostname
	I0729 19:43:51.366478 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:51.366931 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:9e:78", ip: ""} in network mk-embed-certs-358053: {Iface:virbr3 ExpiryTime:2024-07-29 20:43:42 +0000 UTC Type:0 Mac:52:54:00:b7:9e:78 Iaid: IPaddr:192.168.61.201 Prefix:24 Hostname:embed-certs-358053 Clientid:01:52:54:00:b7:9e:78}
	I0729 19:43:51.366973 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined IP address 192.168.61.201 and MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:51.367080 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHPort
	I0729 19:43:51.367280 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHKeyPath
	I0729 19:43:51.367445 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHKeyPath
	I0729 19:43:51.367619 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHUsername
	I0729 19:43:51.367818 1120280 main.go:141] libmachine: Using SSH client type: native
	I0729 19:43:51.368013 1120280 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.201 22 <nil> <nil>}
	I0729 19:43:51.368034 1120280 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 19:43:51.670667 1120280 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 19:43:51.670700 1120280 machine.go:97] duration metric: took 1.147540887s to provisionDockerMachine
	I0729 19:43:51.670716 1120280 start.go:293] postStartSetup for "embed-certs-358053" (driver="kvm2")
	I0729 19:43:51.670728 1120280 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 19:43:51.670746 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .DriverName
	I0729 19:43:51.671114 1120280 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 19:43:51.671146 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHHostname
	I0729 19:43:51.673820 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:51.674154 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:9e:78", ip: ""} in network mk-embed-certs-358053: {Iface:virbr3 ExpiryTime:2024-07-29 20:43:42 +0000 UTC Type:0 Mac:52:54:00:b7:9e:78 Iaid: IPaddr:192.168.61.201 Prefix:24 Hostname:embed-certs-358053 Clientid:01:52:54:00:b7:9e:78}
	I0729 19:43:51.674218 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined IP address 192.168.61.201 and MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:51.674406 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHPort
	I0729 19:43:51.674602 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHKeyPath
	I0729 19:43:51.674761 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHUsername
	I0729 19:43:51.674918 1120280 sshutil.go:53] new ssh client: &{IP:192.168.61.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/embed-certs-358053/id_rsa Username:docker}
	I0729 19:43:51.762013 1120280 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 19:43:51.766211 1120280 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 19:43:51.766238 1120280 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-1055011/.minikube/addons for local assets ...
	I0729 19:43:51.766308 1120280 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-1055011/.minikube/files for local assets ...
	I0729 19:43:51.766408 1120280 filesync.go:149] local asset: /home/jenkins/minikube-integration/19312-1055011/.minikube/files/etc/ssl/certs/10622722.pem -> 10622722.pem in /etc/ssl/certs
	I0729 19:43:51.766506 1120280 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 19:43:51.776086 1120280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/files/etc/ssl/certs/10622722.pem --> /etc/ssl/certs/10622722.pem (1708 bytes)
	I0729 19:43:51.800248 1120280 start.go:296] duration metric: took 129.516946ms for postStartSetup
	I0729 19:43:51.800288 1120280 fix.go:56] duration metric: took 20.54987709s for fixHost
	I0729 19:43:51.800332 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHHostname
	I0729 19:43:51.802828 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:51.803134 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:9e:78", ip: ""} in network mk-embed-certs-358053: {Iface:virbr3 ExpiryTime:2024-07-29 20:43:42 +0000 UTC Type:0 Mac:52:54:00:b7:9e:78 Iaid: IPaddr:192.168.61.201 Prefix:24 Hostname:embed-certs-358053 Clientid:01:52:54:00:b7:9e:78}
	I0729 19:43:51.803155 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined IP address 192.168.61.201 and MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:51.803324 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHPort
	I0729 19:43:51.803552 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHKeyPath
	I0729 19:43:51.803729 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHKeyPath
	I0729 19:43:51.803867 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHUsername
	I0729 19:43:51.804024 1120280 main.go:141] libmachine: Using SSH client type: native
	I0729 19:43:51.804205 1120280 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.201 22 <nil> <nil>}
	I0729 19:43:51.804216 1120280 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0729 19:43:51.915515 1120280 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722282231.873780587
	
	I0729 19:43:51.915538 1120280 fix.go:216] guest clock: 1722282231.873780587
	I0729 19:43:51.915546 1120280 fix.go:229] Guest: 2024-07-29 19:43:51.873780587 +0000 UTC Remote: 2024-07-29 19:43:51.800292219 +0000 UTC m=+271.768915474 (delta=73.488368ms)
	I0729 19:43:51.915567 1120280 fix.go:200] guest clock delta is within tolerance: 73.488368ms
	I0729 19:43:51.915573 1120280 start.go:83] releasing machines lock for "embed-certs-358053", held for 20.665188917s
	I0729 19:43:51.915605 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .DriverName
	I0729 19:43:51.915924 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetIP
	I0729 19:43:51.918637 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:51.919022 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:9e:78", ip: ""} in network mk-embed-certs-358053: {Iface:virbr3 ExpiryTime:2024-07-29 20:43:42 +0000 UTC Type:0 Mac:52:54:00:b7:9e:78 Iaid: IPaddr:192.168.61.201 Prefix:24 Hostname:embed-certs-358053 Clientid:01:52:54:00:b7:9e:78}
	I0729 19:43:51.919050 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined IP address 192.168.61.201 and MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:51.919227 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .DriverName
	I0729 19:43:51.919791 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .DriverName
	I0729 19:43:51.920007 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .DriverName
	I0729 19:43:51.920098 1120280 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 19:43:51.920165 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHHostname
	I0729 19:43:51.920246 1120280 ssh_runner.go:195] Run: cat /version.json
	I0729 19:43:51.920267 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHHostname
	I0729 19:43:51.922800 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:51.923102 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:51.923134 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:9e:78", ip: ""} in network mk-embed-certs-358053: {Iface:virbr3 ExpiryTime:2024-07-29 20:43:42 +0000 UTC Type:0 Mac:52:54:00:b7:9e:78 Iaid: IPaddr:192.168.61.201 Prefix:24 Hostname:embed-certs-358053 Clientid:01:52:54:00:b7:9e:78}
	I0729 19:43:51.923173 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined IP address 192.168.61.201 and MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:51.923250 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHPort
	I0729 19:43:51.923437 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHKeyPath
	I0729 19:43:51.923595 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:9e:78", ip: ""} in network mk-embed-certs-358053: {Iface:virbr3 ExpiryTime:2024-07-29 20:43:42 +0000 UTC Type:0 Mac:52:54:00:b7:9e:78 Iaid: IPaddr:192.168.61.201 Prefix:24 Hostname:embed-certs-358053 Clientid:01:52:54:00:b7:9e:78}
	I0729 19:43:51.923615 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined IP address 192.168.61.201 and MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:51.923720 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHUsername
	I0729 19:43:51.923798 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHPort
	I0729 19:43:51.923873 1120280 sshutil.go:53] new ssh client: &{IP:192.168.61.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/embed-certs-358053/id_rsa Username:docker}
	I0729 19:43:51.923942 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHKeyPath
	I0729 19:43:51.924064 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHUsername
	I0729 19:43:51.924215 1120280 sshutil.go:53] new ssh client: &{IP:192.168.61.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/embed-certs-358053/id_rsa Username:docker}
	I0729 19:43:52.004661 1120280 ssh_runner.go:195] Run: systemctl --version
	I0729 19:43:52.032553 1120280 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 19:43:52.185919 1120280 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 19:43:52.191975 1120280 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 19:43:52.192059 1120280 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 19:43:52.210254 1120280 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 19:43:52.210276 1120280 start.go:495] detecting cgroup driver to use...
	I0729 19:43:52.210351 1120280 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 19:43:52.225580 1120280 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 19:43:52.238434 1120280 docker.go:217] disabling cri-docker service (if available) ...
	I0729 19:43:52.238501 1120280 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 19:43:52.252395 1120280 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 19:43:52.265503 1120280 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 19:43:52.376377 1120280 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 19:43:52.561796 1120280 docker.go:233] disabling docker service ...
	I0729 19:43:52.561859 1120280 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 19:43:52.579022 1120280 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 19:43:52.594679 1120280 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 19:43:52.734891 1120280 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 19:43:52.870161 1120280 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 19:43:52.884258 1120280 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 19:43:52.903923 1120280 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0729 19:43:52.903986 1120280 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:43:52.914530 1120280 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 19:43:52.914598 1120280 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:43:52.925740 1120280 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:43:52.936722 1120280 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:43:52.947290 1120280 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 19:43:52.959757 1120280 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:43:52.971452 1120280 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:43:52.990080 1120280 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:43:53.000701 1120280 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 19:43:53.010165 1120280 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 19:43:53.010271 1120280 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 19:43:53.023594 1120280 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 19:43:53.034500 1120280 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 19:43:53.173490 1120280 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 19:43:53.327789 1120280 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 19:43:53.327894 1120280 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 19:43:53.332682 1120280 start.go:563] Will wait 60s for crictl version
	I0729 19:43:53.332738 1120280 ssh_runner.go:195] Run: which crictl
	I0729 19:43:53.337397 1120280 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 19:43:53.387722 1120280 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 19:43:53.387824 1120280 ssh_runner.go:195] Run: crio --version
	I0729 19:43:53.416029 1120280 ssh_runner.go:195] Run: crio --version
	I0729 19:43:53.447686 1120280 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0729 19:43:53.448960 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetIP
	I0729 19:43:53.451993 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:53.452334 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:9e:78", ip: ""} in network mk-embed-certs-358053: {Iface:virbr3 ExpiryTime:2024-07-29 20:43:42 +0000 UTC Type:0 Mac:52:54:00:b7:9e:78 Iaid: IPaddr:192.168.61.201 Prefix:24 Hostname:embed-certs-358053 Clientid:01:52:54:00:b7:9e:78}
	I0729 19:43:53.452360 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined IP address 192.168.61.201 and MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:53.452626 1120280 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0729 19:43:53.456620 1120280 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 19:43:53.469521 1120280 kubeadm.go:883] updating cluster {Name:embed-certs-358053 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-358053 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.201 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 19:43:53.469668 1120280 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 19:43:53.469726 1120280 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 19:43:53.510724 1120280 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0729 19:43:53.510793 1120280 ssh_runner.go:195] Run: which lz4
	I0729 19:43:53.515039 1120280 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0729 19:43:53.519349 1120280 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0729 19:43:53.519386 1120280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0729 19:43:54.962294 1120280 crio.go:462] duration metric: took 1.447300807s to copy over tarball
	I0729 19:43:54.962368 1120280 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0729 19:43:51.939977 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .Start
	I0729 19:43:51.940180 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Ensuring networks are active...
	I0729 19:43:51.940939 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Ensuring network default is active
	I0729 19:43:51.941232 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Ensuring network mk-default-k8s-diff-port-024652 is active
	I0729 19:43:51.941605 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Getting domain xml...
	I0729 19:43:51.942289 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Creating domain...
	I0729 19:43:53.197317 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Waiting to get IP...
	I0729 19:43:53.198285 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:43:53.198646 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | unable to find current IP address of domain default-k8s-diff-port-024652 in network mk-default-k8s-diff-port-024652
	I0729 19:43:53.198704 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | I0729 19:43:53.198613 1121645 retry.go:31] will retry after 305.319923ms: waiting for machine to come up
	I0729 19:43:53.505183 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:43:53.505680 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | unable to find current IP address of domain default-k8s-diff-port-024652 in network mk-default-k8s-diff-port-024652
	I0729 19:43:53.505711 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | I0729 19:43:53.505645 1121645 retry.go:31] will retry after 271.282913ms: waiting for machine to come up
	I0729 19:43:53.778388 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:43:53.778870 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | unable to find current IP address of domain default-k8s-diff-port-024652 in network mk-default-k8s-diff-port-024652
	I0729 19:43:53.778902 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | I0729 19:43:53.778815 1121645 retry.go:31] will retry after 407.395474ms: waiting for machine to come up
	I0729 19:43:54.187668 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:43:54.188110 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | unable to find current IP address of domain default-k8s-diff-port-024652 in network mk-default-k8s-diff-port-024652
	I0729 19:43:54.188135 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | I0729 19:43:54.188063 1121645 retry.go:31] will retry after 515.272845ms: waiting for machine to come up
	I0729 19:43:54.704843 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:43:54.705358 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | unable to find current IP address of domain default-k8s-diff-port-024652 in network mk-default-k8s-diff-port-024652
	I0729 19:43:54.705386 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | I0729 19:43:54.705310 1121645 retry.go:31] will retry after 509.684919ms: waiting for machine to come up
	I0729 19:43:55.217156 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:43:55.217667 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | unable to find current IP address of domain default-k8s-diff-port-024652 in network mk-default-k8s-diff-port-024652
	I0729 19:43:55.217698 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | I0729 19:43:55.217604 1121645 retry.go:31] will retry after 728.323851ms: waiting for machine to come up
	I0729 19:43:55.947597 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:43:55.948121 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | unable to find current IP address of domain default-k8s-diff-port-024652 in network mk-default-k8s-diff-port-024652
	I0729 19:43:55.948155 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | I0729 19:43:55.948059 1121645 retry.go:31] will retry after 957.165998ms: waiting for machine to come up
	I0729 19:43:57.178620 1120280 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.216195072s)
	I0729 19:43:57.178653 1120280 crio.go:469] duration metric: took 2.216329763s to extract the tarball
	I0729 19:43:57.178660 1120280 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0729 19:43:57.216574 1120280 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 19:43:57.258341 1120280 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 19:43:57.258366 1120280 cache_images.go:84] Images are preloaded, skipping loading
	I0729 19:43:57.258376 1120280 kubeadm.go:934] updating node { 192.168.61.201 8443 v1.30.3 crio true true} ...
	I0729 19:43:57.258500 1120280 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-358053 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.201
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:embed-certs-358053 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 19:43:57.258563 1120280 ssh_runner.go:195] Run: crio config
	I0729 19:43:57.304755 1120280 cni.go:84] Creating CNI manager for ""
	I0729 19:43:57.304779 1120280 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 19:43:57.304793 1120280 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 19:43:57.304818 1120280 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.201 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-358053 NodeName:embed-certs-358053 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.201"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.201 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 19:43:57.304975 1120280 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.201
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-358053"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.201
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.201"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0729 19:43:57.305058 1120280 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0729 19:43:57.314803 1120280 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 19:43:57.314914 1120280 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 19:43:57.324133 1120280 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0729 19:43:57.339975 1120280 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 19:43:57.355571 1120280 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0729 19:43:57.371806 1120280 ssh_runner.go:195] Run: grep 192.168.61.201	control-plane.minikube.internal$ /etc/hosts
	I0729 19:43:57.375459 1120280 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.201	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 19:43:57.386809 1120280 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 19:43:57.520182 1120280 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 19:43:57.536218 1120280 certs.go:68] Setting up /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/embed-certs-358053 for IP: 192.168.61.201
	I0729 19:43:57.536243 1120280 certs.go:194] generating shared ca certs ...
	I0729 19:43:57.536266 1120280 certs.go:226] acquiring lock for ca certs: {Name:mkd1f0b3d7e82ac23e713dd6b75409e103935b02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 19:43:57.536463 1120280 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.key
	I0729 19:43:57.536525 1120280 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/proxy-client-ca.key
	I0729 19:43:57.536539 1120280 certs.go:256] generating profile certs ...
	I0729 19:43:57.536702 1120280 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/embed-certs-358053/client.key
	I0729 19:43:57.536777 1120280 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/embed-certs-358053/apiserver.key.05ccddd9
	I0729 19:43:57.536836 1120280 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/embed-certs-358053/proxy-client.key
	I0729 19:43:57.537011 1120280 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/1062272.pem (1338 bytes)
	W0729 19:43:57.537060 1120280 certs.go:480] ignoring /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/1062272_empty.pem, impossibly tiny 0 bytes
	I0729 19:43:57.537074 1120280 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 19:43:57.537109 1120280 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem (1082 bytes)
	I0729 19:43:57.537147 1120280 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/cert.pem (1123 bytes)
	I0729 19:43:57.537184 1120280 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/key.pem (1679 bytes)
	I0729 19:43:57.537257 1120280 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/files/etc/ssl/certs/10622722.pem (1708 bytes)
	I0729 19:43:57.538120 1120280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 19:43:57.579679 1120280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0729 19:43:57.610390 1120280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 19:43:57.646234 1120280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0729 19:43:57.680120 1120280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/embed-certs-358053/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0729 19:43:57.709780 1120280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/embed-certs-358053/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0729 19:43:57.737251 1120280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/embed-certs-358053/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 19:43:57.760519 1120280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/embed-certs-358053/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0729 19:43:57.782760 1120280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/1062272.pem --> /usr/share/ca-certificates/1062272.pem (1338 bytes)
	I0729 19:43:57.806628 1120280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/files/etc/ssl/certs/10622722.pem --> /usr/share/ca-certificates/10622722.pem (1708 bytes)
	I0729 19:43:57.831360 1120280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 19:43:57.855485 1120280 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 19:43:57.873493 1120280 ssh_runner.go:195] Run: openssl version
	I0729 19:43:57.879376 1120280 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 19:43:57.891126 1120280 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 19:43:57.895458 1120280 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 18:17 /usr/share/ca-certificates/minikubeCA.pem
	I0729 19:43:57.895501 1120280 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 19:43:57.901015 1120280 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 19:43:57.911165 1120280 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1062272.pem && ln -fs /usr/share/ca-certificates/1062272.pem /etc/ssl/certs/1062272.pem"
	I0729 19:43:57.921336 1120280 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1062272.pem
	I0729 19:43:57.925539 1120280 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 18:30 /usr/share/ca-certificates/1062272.pem
	I0729 19:43:57.925601 1120280 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1062272.pem
	I0729 19:43:57.930932 1120280 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1062272.pem /etc/ssl/certs/51391683.0"
	I0729 19:43:57.941138 1120280 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10622722.pem && ln -fs /usr/share/ca-certificates/10622722.pem /etc/ssl/certs/10622722.pem"
	I0729 19:43:57.951312 1120280 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10622722.pem
	I0729 19:43:57.955655 1120280 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 18:30 /usr/share/ca-certificates/10622722.pem
	I0729 19:43:57.955699 1120280 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10622722.pem
	I0729 19:43:57.961057 1120280 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10622722.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 19:43:57.972742 1120280 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 19:43:57.977115 1120280 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 19:43:57.982921 1120280 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 19:43:57.988708 1120280 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 19:43:57.994618 1120280 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 19:43:58.000330 1120280 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 19:43:58.006024 1120280 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0729 19:43:58.011547 1120280 kubeadm.go:392] StartCluster: {Name:embed-certs-358053 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-358053 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.201 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 19:43:58.011676 1120280 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 19:43:58.011740 1120280 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 19:43:58.053520 1120280 cri.go:89] found id: ""
	I0729 19:43:58.053606 1120280 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 19:43:58.063799 1120280 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0729 19:43:58.063820 1120280 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0729 19:43:58.063881 1120280 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0729 19:43:58.073374 1120280 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0729 19:43:58.074705 1120280 kubeconfig.go:125] found "embed-certs-358053" server: "https://192.168.61.201:8443"
	I0729 19:43:58.077590 1120280 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0729 19:43:58.086714 1120280 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.201
	I0729 19:43:58.086751 1120280 kubeadm.go:1160] stopping kube-system containers ...
	I0729 19:43:58.086761 1120280 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0729 19:43:58.086809 1120280 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 19:43:58.119740 1120280 cri.go:89] found id: ""
	I0729 19:43:58.119800 1120280 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0729 19:43:58.136919 1120280 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 19:43:58.146634 1120280 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 19:43:58.146655 1120280 kubeadm.go:157] found existing configuration files:
	
	I0729 19:43:58.146732 1120280 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 19:43:58.155526 1120280 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 19:43:58.155590 1120280 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 19:43:58.165016 1120280 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 19:43:58.173988 1120280 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 19:43:58.174042 1120280 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 19:43:58.183138 1120280 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 19:43:58.191680 1120280 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 19:43:58.191733 1120280 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 19:43:58.200557 1120280 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 19:43:58.209338 1120280 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 19:43:58.209390 1120280 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 19:43:58.218439 1120280 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 19:43:58.227653 1120280 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 19:43:58.340033 1120280 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 19:43:59.181947 1120280 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0729 19:43:59.381372 1120280 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 19:43:59.452293 1120280 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0729 19:43:59.570731 1120280 api_server.go:52] waiting for apiserver process to appear ...
	I0729 19:43:59.570823 1120280 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:43:56.907408 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:43:56.907923 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | unable to find current IP address of domain default-k8s-diff-port-024652 in network mk-default-k8s-diff-port-024652
	I0729 19:43:56.907953 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | I0729 19:43:56.907850 1121645 retry.go:31] will retry after 1.254959813s: waiting for machine to come up
	I0729 19:43:58.163969 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:43:58.164402 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | unable to find current IP address of domain default-k8s-diff-port-024652 in network mk-default-k8s-diff-port-024652
	I0729 19:43:58.164435 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | I0729 19:43:58.164335 1121645 retry.go:31] will retry after 1.194411522s: waiting for machine to come up
	I0729 19:43:59.360034 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:43:59.360409 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | unable to find current IP address of domain default-k8s-diff-port-024652 in network mk-default-k8s-diff-port-024652
	I0729 19:43:59.360444 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | I0729 19:43:59.360350 1121645 retry.go:31] will retry after 1.691293374s: waiting for machine to come up
	I0729 19:44:01.054480 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:44:01.054922 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | unable to find current IP address of domain default-k8s-diff-port-024652 in network mk-default-k8s-diff-port-024652
	I0729 19:44:01.054993 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | I0729 19:44:01.054899 1121645 retry.go:31] will retry after 2.655959151s: waiting for machine to come up
	I0729 19:44:00.071291 1120280 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:00.571192 1120280 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:01.071004 1120280 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:01.086646 1120280 api_server.go:72] duration metric: took 1.515912855s to wait for apiserver process to appear ...
	I0729 19:44:01.086683 1120280 api_server.go:88] waiting for apiserver healthz status ...
	I0729 19:44:01.086713 1120280 api_server.go:253] Checking apiserver healthz at https://192.168.61.201:8443/healthz ...
	I0729 19:44:01.087274 1120280 api_server.go:269] stopped: https://192.168.61.201:8443/healthz: Get "https://192.168.61.201:8443/healthz": dial tcp 192.168.61.201:8443: connect: connection refused
	I0729 19:44:01.587598 1120280 api_server.go:253] Checking apiserver healthz at https://192.168.61.201:8443/healthz ...
	I0729 19:44:03.986744 1120280 api_server.go:279] https://192.168.61.201:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 19:44:03.986799 1120280 api_server.go:103] status: https://192.168.61.201:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 19:44:03.986814 1120280 api_server.go:253] Checking apiserver healthz at https://192.168.61.201:8443/healthz ...
	I0729 19:44:04.029552 1120280 api_server.go:279] https://192.168.61.201:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 19:44:04.029601 1120280 api_server.go:103] status: https://192.168.61.201:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 19:44:04.087847 1120280 api_server.go:253] Checking apiserver healthz at https://192.168.61.201:8443/healthz ...
	I0729 19:44:04.093457 1120280 api_server.go:279] https://192.168.61.201:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 19:44:04.093489 1120280 api_server.go:103] status: https://192.168.61.201:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 19:44:04.586941 1120280 api_server.go:253] Checking apiserver healthz at https://192.168.61.201:8443/healthz ...
	I0729 19:44:04.609655 1120280 api_server.go:279] https://192.168.61.201:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 19:44:04.609700 1120280 api_server.go:103] status: https://192.168.61.201:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 19:44:05.087081 1120280 api_server.go:253] Checking apiserver healthz at https://192.168.61.201:8443/healthz ...
	I0729 19:44:05.095282 1120280 api_server.go:279] https://192.168.61.201:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 19:44:05.095311 1120280 api_server.go:103] status: https://192.168.61.201:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 19:44:05.587782 1120280 api_server.go:253] Checking apiserver healthz at https://192.168.61.201:8443/healthz ...
	I0729 19:44:05.593073 1120280 api_server.go:279] https://192.168.61.201:8443/healthz returned 200:
	ok
	I0729 19:44:05.599042 1120280 api_server.go:141] control plane version: v1.30.3
	I0729 19:44:05.599067 1120280 api_server.go:131] duration metric: took 4.512376511s to wait for apiserver health ...
	I0729 19:44:05.599076 1120280 cni.go:84] Creating CNI manager for ""
	I0729 19:44:05.599082 1120280 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 19:44:05.600932 1120280 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 19:44:03.713856 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:44:03.714306 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | unable to find current IP address of domain default-k8s-diff-port-024652 in network mk-default-k8s-diff-port-024652
	I0729 19:44:03.714363 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | I0729 19:44:03.714249 1121645 retry.go:31] will retry after 2.793831058s: waiting for machine to come up
	I0729 19:44:05.602066 1120280 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 19:44:05.612274 1120280 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0729 19:44:05.633293 1120280 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 19:44:05.646103 1120280 system_pods.go:59] 8 kube-system pods found
	I0729 19:44:05.646143 1120280 system_pods.go:61] "coredns-7db6d8ff4d-q6jm9" [a0770baf-766d-4903-a21f-6a4c1b74fb9e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0729 19:44:05.646153 1120280 system_pods.go:61] "etcd-embed-certs-358053" [cc03bfb3-c1d6-480a-b169-599b7599a5d1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0729 19:44:05.646163 1120280 system_pods.go:61] "kube-apiserver-embed-certs-358053" [8c45c66a-c954-4a84-9639-68210ad51a53] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0729 19:44:05.646174 1120280 system_pods.go:61] "kube-controller-manager-embed-certs-358053" [70266c42-fa7c-4936-b256-1eea65c57669] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0729 19:44:05.646181 1120280 system_pods.go:61] "kube-proxy-lb7hb" [e542b623-3db2-4be0-adf1-669932e6ac3d] Running
	I0729 19:44:05.646193 1120280 system_pods.go:61] "kube-scheduler-embed-certs-358053" [be79c03d-1e5a-46f5-a43a-671c37dea7d0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0729 19:44:05.646201 1120280 system_pods.go:61] "metrics-server-569cc877fc-jsvnd" [0494cc85-12fa-4afa-ab39-5c1fafcc45f8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 19:44:05.646209 1120280 system_pods.go:61] "storage-provisioner" [493de5d9-e761-49cb-b5f0-17d116b1a985] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0729 19:44:05.646221 1120280 system_pods.go:74] duration metric: took 12.906683ms to wait for pod list to return data ...
	I0729 19:44:05.646231 1120280 node_conditions.go:102] verifying NodePressure condition ...
	I0729 19:44:05.653103 1120280 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 19:44:05.653131 1120280 node_conditions.go:123] node cpu capacity is 2
	I0729 19:44:05.653161 1120280 node_conditions.go:105] duration metric: took 6.923325ms to run NodePressure ...
	I0729 19:44:05.653187 1120280 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 19:44:05.916138 1120280 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0729 19:44:05.920383 1120280 kubeadm.go:739] kubelet initialised
	I0729 19:44:05.920402 1120280 kubeadm.go:740] duration metric: took 4.239377ms waiting for restarted kubelet to initialise ...
	I0729 19:44:05.920410 1120280 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 19:44:05.925752 1120280 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-q6jm9" in "kube-system" namespace to be "Ready" ...
	I0729 19:44:07.932667 1120280 pod_ready.go:102] pod "coredns-7db6d8ff4d-q6jm9" in "kube-system" namespace has status "Ready":"False"
	I0729 19:44:06.511186 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:44:06.511552 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | unable to find current IP address of domain default-k8s-diff-port-024652 in network mk-default-k8s-diff-port-024652
	I0729 19:44:06.511583 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | I0729 19:44:06.511497 1121645 retry.go:31] will retry after 3.610819354s: waiting for machine to come up
	I0729 19:44:10.126488 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:44:10.126889 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Found IP for machine: 192.168.72.100
	I0729 19:44:10.126914 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Reserving static IP address...
	I0729 19:44:10.126927 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has current primary IP address 192.168.72.100 and MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:44:10.127289 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Reserved static IP address: 192.168.72.100
	I0729 19:44:10.127313 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Waiting for SSH to be available...
	I0729 19:44:10.127342 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-024652", mac: "52:54:00:4c:73:cb", ip: "192.168.72.100"} in network mk-default-k8s-diff-port-024652: {Iface:virbr4 ExpiryTime:2024-07-29 20:44:03 +0000 UTC Type:0 Mac:52:54:00:4c:73:cb Iaid: IPaddr:192.168.72.100 Prefix:24 Hostname:default-k8s-diff-port-024652 Clientid:01:52:54:00:4c:73:cb}
	I0729 19:44:10.127390 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | skip adding static IP to network mk-default-k8s-diff-port-024652 - found existing host DHCP lease matching {name: "default-k8s-diff-port-024652", mac: "52:54:00:4c:73:cb", ip: "192.168.72.100"}
	I0729 19:44:10.127406 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | Getting to WaitForSSH function...
	I0729 19:44:10.129180 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:44:10.129499 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:73:cb", ip: ""} in network mk-default-k8s-diff-port-024652: {Iface:virbr4 ExpiryTime:2024-07-29 20:44:03 +0000 UTC Type:0 Mac:52:54:00:4c:73:cb Iaid: IPaddr:192.168.72.100 Prefix:24 Hostname:default-k8s-diff-port-024652 Clientid:01:52:54:00:4c:73:cb}
	I0729 19:44:10.129528 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined IP address 192.168.72.100 and MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:44:10.129613 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | Using SSH client type: external
	I0729 19:44:10.129633 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | Using SSH private key: /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/default-k8s-diff-port-024652/id_rsa (-rw-------)
	I0729 19:44:10.129676 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.100 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/default-k8s-diff-port-024652/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 19:44:10.129688 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | About to run SSH command:
	I0729 19:44:10.129700 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | exit 0
	I0729 19:44:10.254662 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | SSH cmd err, output: <nil>: 
	I0729 19:44:10.255021 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetConfigRaw
	I0729 19:44:10.255656 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetIP
	I0729 19:44:10.257855 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:44:10.258219 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:73:cb", ip: ""} in network mk-default-k8s-diff-port-024652: {Iface:virbr4 ExpiryTime:2024-07-29 20:44:03 +0000 UTC Type:0 Mac:52:54:00:4c:73:cb Iaid: IPaddr:192.168.72.100 Prefix:24 Hostname:default-k8s-diff-port-024652 Clientid:01:52:54:00:4c:73:cb}
	I0729 19:44:10.258251 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined IP address 192.168.72.100 and MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:44:10.258526 1120587 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/default-k8s-diff-port-024652/config.json ...
	I0729 19:44:10.258713 1120587 machine.go:94] provisionDockerMachine start ...
	I0729 19:44:10.258733 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .DriverName
	I0729 19:44:10.258968 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHHostname
	I0729 19:44:10.260864 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:44:10.261120 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:73:cb", ip: ""} in network mk-default-k8s-diff-port-024652: {Iface:virbr4 ExpiryTime:2024-07-29 20:44:03 +0000 UTC Type:0 Mac:52:54:00:4c:73:cb Iaid: IPaddr:192.168.72.100 Prefix:24 Hostname:default-k8s-diff-port-024652 Clientid:01:52:54:00:4c:73:cb}
	I0729 19:44:10.261149 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined IP address 192.168.72.100 and MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:44:10.261275 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHPort
	I0729 19:44:10.261460 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHKeyPath
	I0729 19:44:10.261635 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHKeyPath
	I0729 19:44:10.261778 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHUsername
	I0729 19:44:10.261932 1120587 main.go:141] libmachine: Using SSH client type: native
	I0729 19:44:10.262111 1120587 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.100 22 <nil> <nil>}
	I0729 19:44:10.262121 1120587 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 19:44:10.371225 1120587 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0729 19:44:10.371261 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetMachineName
	I0729 19:44:10.371516 1120587 buildroot.go:166] provisioning hostname "default-k8s-diff-port-024652"
	I0729 19:44:10.371545 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetMachineName
	I0729 19:44:10.371756 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHHostname
	I0729 19:44:10.374071 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:44:10.374356 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:73:cb", ip: ""} in network mk-default-k8s-diff-port-024652: {Iface:virbr4 ExpiryTime:2024-07-29 20:44:03 +0000 UTC Type:0 Mac:52:54:00:4c:73:cb Iaid: IPaddr:192.168.72.100 Prefix:24 Hostname:default-k8s-diff-port-024652 Clientid:01:52:54:00:4c:73:cb}
	I0729 19:44:10.374391 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined IP address 192.168.72.100 and MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:44:10.374479 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHPort
	I0729 19:44:10.374654 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHKeyPath
	I0729 19:44:10.374808 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHKeyPath
	I0729 19:44:10.374933 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHUsername
	I0729 19:44:10.375126 1120587 main.go:141] libmachine: Using SSH client type: native
	I0729 19:44:10.375324 1120587 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.100 22 <nil> <nil>}
	I0729 19:44:10.375338 1120587 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-024652 && echo "default-k8s-diff-port-024652" | sudo tee /etc/hostname
	I0729 19:44:10.499041 1120587 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-024652
	
	I0729 19:44:10.499075 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHHostname
	I0729 19:44:10.501635 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:44:10.501943 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:73:cb", ip: ""} in network mk-default-k8s-diff-port-024652: {Iface:virbr4 ExpiryTime:2024-07-29 20:44:03 +0000 UTC Type:0 Mac:52:54:00:4c:73:cb Iaid: IPaddr:192.168.72.100 Prefix:24 Hostname:default-k8s-diff-port-024652 Clientid:01:52:54:00:4c:73:cb}
	I0729 19:44:10.501973 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined IP address 192.168.72.100 and MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:44:10.502136 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHPort
	I0729 19:44:10.502318 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHKeyPath
	I0729 19:44:10.502494 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHKeyPath
	I0729 19:44:10.502669 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHUsername
	I0729 19:44:10.502826 1120587 main.go:141] libmachine: Using SSH client type: native
	I0729 19:44:10.503019 1120587 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.100 22 <nil> <nil>}
	I0729 19:44:10.503042 1120587 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-024652' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-024652/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-024652' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 19:44:10.619637 1120587 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 19:44:10.619673 1120587 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19312-1055011/.minikube CaCertPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19312-1055011/.minikube}
	I0729 19:44:10.619708 1120587 buildroot.go:174] setting up certificates
	I0729 19:44:10.619719 1120587 provision.go:84] configureAuth start
	I0729 19:44:10.619728 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetMachineName
	I0729 19:44:10.620036 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetIP
	I0729 19:44:10.622502 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:44:10.622810 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:73:cb", ip: ""} in network mk-default-k8s-diff-port-024652: {Iface:virbr4 ExpiryTime:2024-07-29 20:44:03 +0000 UTC Type:0 Mac:52:54:00:4c:73:cb Iaid: IPaddr:192.168.72.100 Prefix:24 Hostname:default-k8s-diff-port-024652 Clientid:01:52:54:00:4c:73:cb}
	I0729 19:44:10.622841 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined IP address 192.168.72.100 and MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:44:10.622932 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHHostname
	I0729 19:44:10.625181 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:44:10.625508 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:73:cb", ip: ""} in network mk-default-k8s-diff-port-024652: {Iface:virbr4 ExpiryTime:2024-07-29 20:44:03 +0000 UTC Type:0 Mac:52:54:00:4c:73:cb Iaid: IPaddr:192.168.72.100 Prefix:24 Hostname:default-k8s-diff-port-024652 Clientid:01:52:54:00:4c:73:cb}
	I0729 19:44:10.625531 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined IP address 192.168.72.100 and MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:44:10.625681 1120587 provision.go:143] copyHostCerts
	I0729 19:44:10.625743 1120587 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.pem, removing ...
	I0729 19:44:10.625755 1120587 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.pem
	I0729 19:44:10.625825 1120587 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.pem (1082 bytes)
	I0729 19:44:10.625929 1120587 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-1055011/.minikube/cert.pem, removing ...
	I0729 19:44:10.625937 1120587 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-1055011/.minikube/cert.pem
	I0729 19:44:10.625960 1120587 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19312-1055011/.minikube/cert.pem (1123 bytes)
	I0729 19:44:10.626015 1120587 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-1055011/.minikube/key.pem, removing ...
	I0729 19:44:10.626021 1120587 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-1055011/.minikube/key.pem
	I0729 19:44:10.626042 1120587 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19312-1055011/.minikube/key.pem (1679 bytes)
	I0729 19:44:10.626089 1120587 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-024652 san=[127.0.0.1 192.168.72.100 default-k8s-diff-port-024652 localhost minikube]
	I0729 19:44:10.750576 1120587 provision.go:177] copyRemoteCerts
	I0729 19:44:10.750651 1120587 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 19:44:10.750713 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHHostname
	I0729 19:44:10.753390 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:44:10.753745 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:73:cb", ip: ""} in network mk-default-k8s-diff-port-024652: {Iface:virbr4 ExpiryTime:2024-07-29 20:44:03 +0000 UTC Type:0 Mac:52:54:00:4c:73:cb Iaid: IPaddr:192.168.72.100 Prefix:24 Hostname:default-k8s-diff-port-024652 Clientid:01:52:54:00:4c:73:cb}
	I0729 19:44:10.753791 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined IP address 192.168.72.100 and MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:44:10.753942 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHPort
	I0729 19:44:10.754149 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHKeyPath
	I0729 19:44:10.754330 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHUsername
	I0729 19:44:10.754514 1120587 sshutil.go:53] new ssh client: &{IP:192.168.72.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/default-k8s-diff-port-024652/id_rsa Username:docker}
	I0729 19:44:10.836524 1120587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0729 19:44:10.861913 1120587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0729 19:44:10.885539 1120587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0729 19:44:10.909851 1120587 provision.go:87] duration metric: took 290.118473ms to configureAuth
	I0729 19:44:10.909880 1120587 buildroot.go:189] setting minikube options for container-runtime
	I0729 19:44:10.910051 1120587 config.go:182] Loaded profile config "default-k8s-diff-port-024652": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 19:44:10.910127 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHHostname
	I0729 19:44:10.912662 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:44:10.912962 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:73:cb", ip: ""} in network mk-default-k8s-diff-port-024652: {Iface:virbr4 ExpiryTime:2024-07-29 20:44:03 +0000 UTC Type:0 Mac:52:54:00:4c:73:cb Iaid: IPaddr:192.168.72.100 Prefix:24 Hostname:default-k8s-diff-port-024652 Clientid:01:52:54:00:4c:73:cb}
	I0729 19:44:10.912993 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined IP address 192.168.72.100 and MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:44:10.913224 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHPort
	I0729 19:44:10.913429 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHKeyPath
	I0729 19:44:10.913601 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHKeyPath
	I0729 19:44:10.913744 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHUsername
	I0729 19:44:10.913882 1120587 main.go:141] libmachine: Using SSH client type: native
	I0729 19:44:10.914096 1120587 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.100 22 <nil> <nil>}
	I0729 19:44:10.914112 1120587 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 19:44:11.419483 1120970 start.go:364] duration metric: took 3m13.335541366s to acquireMachinesLock for "old-k8s-version-021528"
	I0729 19:44:11.419549 1120970 start.go:96] Skipping create...Using existing machine configuration
	I0729 19:44:11.419560 1120970 fix.go:54] fixHost starting: 
	I0729 19:44:11.419981 1120970 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:44:11.420020 1120970 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:44:11.437552 1120970 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44419
	I0729 19:44:11.437927 1120970 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:44:11.438424 1120970 main.go:141] libmachine: Using API Version  1
	I0729 19:44:11.438449 1120970 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:44:11.438787 1120970 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:44:11.438995 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .DriverName
	I0729 19:44:11.439201 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetState
	I0729 19:44:11.440476 1120970 fix.go:112] recreateIfNeeded on old-k8s-version-021528: state=Stopped err=<nil>
	I0729 19:44:11.440514 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .DriverName
	W0729 19:44:11.440692 1120970 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 19:44:11.442528 1120970 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-021528" ...
	I0729 19:44:11.181850 1120587 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 19:44:11.181877 1120587 machine.go:97] duration metric: took 923.15162ms to provisionDockerMachine
	I0729 19:44:11.181889 1120587 start.go:293] postStartSetup for "default-k8s-diff-port-024652" (driver="kvm2")
	I0729 19:44:11.181899 1120587 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 19:44:11.181914 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .DriverName
	I0729 19:44:11.182289 1120587 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 19:44:11.182322 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHHostname
	I0729 19:44:11.185275 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:44:11.185761 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:73:cb", ip: ""} in network mk-default-k8s-diff-port-024652: {Iface:virbr4 ExpiryTime:2024-07-29 20:44:03 +0000 UTC Type:0 Mac:52:54:00:4c:73:cb Iaid: IPaddr:192.168.72.100 Prefix:24 Hostname:default-k8s-diff-port-024652 Clientid:01:52:54:00:4c:73:cb}
	I0729 19:44:11.185791 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined IP address 192.168.72.100 and MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:44:11.186002 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHPort
	I0729 19:44:11.186282 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHKeyPath
	I0729 19:44:11.186467 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHUsername
	I0729 19:44:11.186620 1120587 sshutil.go:53] new ssh client: &{IP:192.168.72.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/default-k8s-diff-port-024652/id_rsa Username:docker}
	I0729 19:44:11.268993 1120587 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 19:44:11.273072 1120587 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 19:44:11.273093 1120587 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-1055011/.minikube/addons for local assets ...
	I0729 19:44:11.273161 1120587 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-1055011/.minikube/files for local assets ...
	I0729 19:44:11.273244 1120587 filesync.go:149] local asset: /home/jenkins/minikube-integration/19312-1055011/.minikube/files/etc/ssl/certs/10622722.pem -> 10622722.pem in /etc/ssl/certs
	I0729 19:44:11.273353 1120587 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 19:44:11.282258 1120587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/files/etc/ssl/certs/10622722.pem --> /etc/ssl/certs/10622722.pem (1708 bytes)
	I0729 19:44:11.305957 1120587 start.go:296] duration metric: took 124.053991ms for postStartSetup
	I0729 19:44:11.305998 1120587 fix.go:56] duration metric: took 19.39022657s for fixHost
	I0729 19:44:11.306024 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHHostname
	I0729 19:44:11.308452 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:44:11.308881 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:73:cb", ip: ""} in network mk-default-k8s-diff-port-024652: {Iface:virbr4 ExpiryTime:2024-07-29 20:44:03 +0000 UTC Type:0 Mac:52:54:00:4c:73:cb Iaid: IPaddr:192.168.72.100 Prefix:24 Hostname:default-k8s-diff-port-024652 Clientid:01:52:54:00:4c:73:cb}
	I0729 19:44:11.308902 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined IP address 192.168.72.100 and MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:44:11.309099 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHPort
	I0729 19:44:11.309321 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHKeyPath
	I0729 19:44:11.309507 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHKeyPath
	I0729 19:44:11.309646 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHUsername
	I0729 19:44:11.309836 1120587 main.go:141] libmachine: Using SSH client type: native
	I0729 19:44:11.310009 1120587 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.100 22 <nil> <nil>}
	I0729 19:44:11.310021 1120587 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0729 19:44:11.419338 1120587 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722282251.371238734
	
	I0729 19:44:11.419359 1120587 fix.go:216] guest clock: 1722282251.371238734
	I0729 19:44:11.419366 1120587 fix.go:229] Guest: 2024-07-29 19:44:11.371238734 +0000 UTC Remote: 2024-07-29 19:44:11.306004097 +0000 UTC m=+255.178971379 (delta=65.234637ms)
	I0729 19:44:11.419386 1120587 fix.go:200] guest clock delta is within tolerance: 65.234637ms
	I0729 19:44:11.419394 1120587 start.go:83] releasing machines lock for "default-k8s-diff-port-024652", held for 19.503660828s
	I0729 19:44:11.419418 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .DriverName
	I0729 19:44:11.419749 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetIP
	I0729 19:44:11.422054 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:44:11.422377 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:73:cb", ip: ""} in network mk-default-k8s-diff-port-024652: {Iface:virbr4 ExpiryTime:2024-07-29 20:44:03 +0000 UTC Type:0 Mac:52:54:00:4c:73:cb Iaid: IPaddr:192.168.72.100 Prefix:24 Hostname:default-k8s-diff-port-024652 Clientid:01:52:54:00:4c:73:cb}
	I0729 19:44:11.422421 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined IP address 192.168.72.100 and MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:44:11.422552 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .DriverName
	I0729 19:44:11.423087 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .DriverName
	I0729 19:44:11.423284 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .DriverName
	I0729 19:44:11.423358 1120587 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 19:44:11.423410 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHHostname
	I0729 19:44:11.423511 1120587 ssh_runner.go:195] Run: cat /version.json
	I0729 19:44:11.423540 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHHostname
	I0729 19:44:11.426070 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:44:11.426323 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:44:11.426440 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:73:cb", ip: ""} in network mk-default-k8s-diff-port-024652: {Iface:virbr4 ExpiryTime:2024-07-29 20:44:03 +0000 UTC Type:0 Mac:52:54:00:4c:73:cb Iaid: IPaddr:192.168.72.100 Prefix:24 Hostname:default-k8s-diff-port-024652 Clientid:01:52:54:00:4c:73:cb}
	I0729 19:44:11.426471 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined IP address 192.168.72.100 and MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:44:11.426579 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHPort
	I0729 19:44:11.426774 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHKeyPath
	I0729 19:44:11.426918 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHUsername
	I0729 19:44:11.426957 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:73:cb", ip: ""} in network mk-default-k8s-diff-port-024652: {Iface:virbr4 ExpiryTime:2024-07-29 20:44:03 +0000 UTC Type:0 Mac:52:54:00:4c:73:cb Iaid: IPaddr:192.168.72.100 Prefix:24 Hostname:default-k8s-diff-port-024652 Clientid:01:52:54:00:4c:73:cb}
	I0729 19:44:11.426981 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined IP address 192.168.72.100 and MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:44:11.427069 1120587 sshutil.go:53] new ssh client: &{IP:192.168.72.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/default-k8s-diff-port-024652/id_rsa Username:docker}
	I0729 19:44:11.427176 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHPort
	I0729 19:44:11.427343 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHKeyPath
	I0729 19:44:11.427534 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHUsername
	I0729 19:44:11.427700 1120587 sshutil.go:53] new ssh client: &{IP:192.168.72.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/default-k8s-diff-port-024652/id_rsa Username:docker}
	I0729 19:44:11.536440 1120587 ssh_runner.go:195] Run: systemctl --version
	I0729 19:44:11.542493 1120587 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 19:44:11.688795 1120587 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 19:44:11.696783 1120587 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 19:44:11.696855 1120587 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 19:44:11.717067 1120587 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
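The find/mv command above renames any bridge or podman CNI config with a ".mk_disabled" suffix so that only the runtime's own CNI config stays active. A rough Go equivalent of the same rename pass (illustration only; assumes it runs as root on the guest, not minikube's cni.go):

    package main

    import (
    	"fmt"
    	"os"
    	"path/filepath"
    	"strings"
    )

    func main() {
    	const dir = "/etc/cni/net.d"
    	entries, err := os.ReadDir(dir)
    	if err != nil {
    		fmt.Println("skipping:", err)
    		return
    	}
    	for _, e := range entries {
    		name := e.Name()
    		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
    			continue
    		}
    		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
    			src := filepath.Join(dir, name)
    			// Rename rather than delete, so the config can be restored later.
    			if err := os.Rename(src, src+".mk_disabled"); err != nil {
    				fmt.Println("rename failed:", err)
    				continue
    			}
    			fmt.Println("disabled", src)
    		}
    	}
    }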
	I0729 19:44:11.717091 1120587 start.go:495] detecting cgroup driver to use...
	I0729 19:44:11.717157 1120587 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 19:44:11.735056 1120587 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 19:44:11.748999 1120587 docker.go:217] disabling cri-docker service (if available) ...
	I0729 19:44:11.749061 1120587 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 19:44:11.764244 1120587 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 19:44:11.778072 1120587 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 19:44:11.893008 1120587 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 19:44:12.053939 1120587 docker.go:233] disabling docker service ...
	I0729 19:44:12.054035 1120587 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 19:44:12.068666 1120587 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 19:44:12.085766 1120587 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 19:44:12.232278 1120587 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 19:44:12.356403 1120587 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 19:44:12.370085 1120587 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 19:44:12.388817 1120587 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0729 19:44:12.388879 1120587 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:44:12.399945 1120587 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 19:44:12.400017 1120587 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:44:12.410117 1120587 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:44:12.422162 1120587 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:44:12.433170 1120587 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 19:44:12.444386 1120587 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:44:12.455009 1120587 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:44:12.472279 1120587 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:44:12.482431 1120587 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 19:44:12.492028 1120587 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 19:44:12.492097 1120587 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 19:44:12.505966 1120587 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
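The sequence above is a fallback pattern: probe the bridge netfilter sysctl, load br_netfilter if the proc entry is missing, then enable IPv4 forwarding. A compact sketch of the same logic (paths and commands are the ones in the log; requires root, illustration only):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    func main() {
    	// Equivalent of "sysctl net.bridge.bridge-nf-call-iptables" failing with status 255.
    	if _, err := os.Stat("/proc/sys/net/bridge/bridge-nf-call-iptables"); err != nil {
    		fmt.Println("bridge netfilter not available, loading br_netfilter:", err)
    		if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
    			fmt.Printf("modprobe failed: %v: %s\n", err, out)
    		}
    	}
    	// Equivalent of: echo 1 > /proc/sys/net/ipv4/ip_forward
    	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0644); err != nil {
    		fmt.Println("enabling ip_forward failed:", err)
    	}
    }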
	I0729 19:44:12.515505 1120587 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 19:44:12.639691 1120587 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 19:44:12.781358 1120587 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 19:44:12.781427 1120587 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 19:44:12.786218 1120587 start.go:563] Will wait 60s for crictl version
	I0729 19:44:12.786312 1120587 ssh_runner.go:195] Run: which crictl
	I0729 19:44:12.790056 1120587 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 19:44:12.830355 1120587 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 19:44:12.830451 1120587 ssh_runner.go:195] Run: crio --version
	I0729 19:44:12.859119 1120587 ssh_runner.go:195] Run: crio --version
	I0729 19:44:12.892473 1120587 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
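After restarting CRI-O, the log waits up to 60s for /var/run/crio/crio.sock before querying crictl. A small sketch of that wait loop, assuming a simple poll-with-deadline (not minikube's start.go implementation):

    package main

    import (
    	"fmt"
    	"os"
    	"time"
    )

    // waitForPath polls until path exists or the timeout expires.
    func waitForPath(path string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		if _, err := os.Stat(path); err == nil {
    			return nil
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("timed out waiting for %s", path)
    }

    func main() {
    	if err := waitForPath("/var/run/crio/crio.sock", 60*time.Second); err != nil {
    		fmt.Println(err)
    		return
    	}
    	fmt.Println("crio socket is ready; safe to run crictl version")
    }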
	I0729 19:44:11.443772 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .Start
	I0729 19:44:11.443926 1120970 main.go:141] libmachine: (old-k8s-version-021528) Ensuring networks are active...
	I0729 19:44:11.444570 1120970 main.go:141] libmachine: (old-k8s-version-021528) Ensuring network default is active
	I0729 19:44:11.444890 1120970 main.go:141] libmachine: (old-k8s-version-021528) Ensuring network mk-old-k8s-version-021528 is active
	I0729 19:44:11.445234 1120970 main.go:141] libmachine: (old-k8s-version-021528) Getting domain xml...
	I0729 19:44:11.445994 1120970 main.go:141] libmachine: (old-k8s-version-021528) Creating domain...
	I0729 19:44:12.696734 1120970 main.go:141] libmachine: (old-k8s-version-021528) Waiting to get IP...
	I0729 19:44:12.697599 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:12.697967 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | unable to find current IP address of domain old-k8s-version-021528 in network mk-old-k8s-version-021528
	I0729 19:44:12.698075 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | I0729 19:44:12.697953 1121841 retry.go:31] will retry after 228.228482ms: waiting for machine to come up
	I0729 19:44:12.927713 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:12.928250 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | unable to find current IP address of domain old-k8s-version-021528 in network mk-old-k8s-version-021528
	I0729 19:44:12.928280 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | I0729 19:44:12.928204 1121841 retry.go:31] will retry after 241.659418ms: waiting for machine to come up
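The "will retry after ..." lines above come from a retry loop that keeps asking the driver for the machine's IP with a growing, jittered delay until the DHCP lease appears. A sketch of that shape under stated assumptions (getIP is a stand-in for the driver lookup, not the libmachine API):

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    var errNoIP = errors.New("unable to find current IP address")

    // getIP simulates a lookup that fails until the lease shows up.
    func getIP(attempt int) (string, error) {
    	if attempt < 4 {
    		return "", errNoIP
    	}
    	return "192.168.72.100", nil
    }

    func main() {
    	deadline := time.Now().Add(2 * time.Minute)
    	for attempt := 0; time.Now().Before(deadline); attempt++ {
    		ip, err := getIP(attempt)
    		if err == nil {
    			fmt.Println("machine is up with IP", ip)
    			return
    		}
    		// Jittered, roughly increasing backoff, like the retry.go messages above.
    		wait := time.Duration(200+rand.Intn(400)*(attempt+1)) * time.Millisecond
    		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
    		time.Sleep(wait)
    	}
    	fmt.Println("gave up waiting for machine IP")
    }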
	I0729 19:44:10.432255 1120280 pod_ready.go:102] pod "coredns-7db6d8ff4d-q6jm9" in "kube-system" namespace has status "Ready":"False"
	I0729 19:44:12.932761 1120280 pod_ready.go:102] pod "coredns-7db6d8ff4d-q6jm9" in "kube-system" namespace has status "Ready":"False"
	I0729 19:44:14.934282 1120280 pod_ready.go:102] pod "coredns-7db6d8ff4d-q6jm9" in "kube-system" namespace has status "Ready":"False"
	I0729 19:44:12.893725 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetIP
	I0729 19:44:12.897014 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:44:12.897401 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:73:cb", ip: ""} in network mk-default-k8s-diff-port-024652: {Iface:virbr4 ExpiryTime:2024-07-29 20:44:03 +0000 UTC Type:0 Mac:52:54:00:4c:73:cb Iaid: IPaddr:192.168.72.100 Prefix:24 Hostname:default-k8s-diff-port-024652 Clientid:01:52:54:00:4c:73:cb}
	I0729 19:44:12.897431 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined IP address 192.168.72.100 and MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:44:12.897621 1120587 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0729 19:44:12.902155 1120587 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 19:44:12.915460 1120587 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-024652 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.30.3 ClusterName:default-k8s-diff-port-024652 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.100 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirat
ion:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 19:44:12.915581 1120587 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 19:44:12.915631 1120587 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 19:44:12.956377 1120587 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0729 19:44:12.956444 1120587 ssh_runner.go:195] Run: which lz4
	I0729 19:44:12.960415 1120587 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0729 19:44:12.964785 1120587 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0729 19:44:12.964819 1120587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0729 19:44:14.422427 1120587 crio.go:462] duration metric: took 1.462052598s to copy over tarball
	I0729 19:44:14.422514 1120587 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
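The preload step copies the cached image tarball to the guest and untars it into /var, and the "duration metric" lines time both phases. A minimal sketch that runs the same tar invocation and times it (illustration only; assumes the tarball is already at /preloaded.tar.lz4):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    func main() {
    	start := time.Now()
    	cmd := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
    		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
    	if out, err := cmd.CombinedOutput(); err != nil {
    		fmt.Printf("extract failed: %v: %s\n", err, out)
    		return
    	}
    	fmt.Printf("duration metric: took %s to extract the tarball\n", time.Since(start))
    }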
	I0729 19:44:13.171713 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:13.172206 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | unable to find current IP address of domain old-k8s-version-021528 in network mk-old-k8s-version-021528
	I0729 19:44:13.172234 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | I0729 19:44:13.172165 1121841 retry.go:31] will retry after 475.69466ms: waiting for machine to come up
	I0729 19:44:13.649741 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:13.650180 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | unable to find current IP address of domain old-k8s-version-021528 in network mk-old-k8s-version-021528
	I0729 19:44:13.650210 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | I0729 19:44:13.650126 1121841 retry.go:31] will retry after 556.03832ms: waiting for machine to come up
	I0729 19:44:14.207549 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:14.208045 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | unable to find current IP address of domain old-k8s-version-021528 in network mk-old-k8s-version-021528
	I0729 19:44:14.208080 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | I0729 19:44:14.207996 1121841 retry.go:31] will retry after 699.802636ms: waiting for machine to come up
	I0729 19:44:14.909153 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:14.909708 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | unable to find current IP address of domain old-k8s-version-021528 in network mk-old-k8s-version-021528
	I0729 19:44:14.909736 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | I0729 19:44:14.909677 1121841 retry.go:31] will retry after 756.053302ms: waiting for machine to come up
	I0729 19:44:15.667015 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:15.667487 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | unable to find current IP address of domain old-k8s-version-021528 in network mk-old-k8s-version-021528
	I0729 19:44:15.667518 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | I0729 19:44:15.667434 1121841 retry.go:31] will retry after 729.442111ms: waiting for machine to come up
	I0729 19:44:16.398540 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:16.399139 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | unable to find current IP address of domain old-k8s-version-021528 in network mk-old-k8s-version-021528
	I0729 19:44:16.399191 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | I0729 19:44:16.399060 1121841 retry.go:31] will retry after 1.131574034s: waiting for machine to come up
	I0729 19:44:17.531966 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:17.532448 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | unable to find current IP address of domain old-k8s-version-021528 in network mk-old-k8s-version-021528
	I0729 19:44:17.532480 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | I0729 19:44:17.532380 1121841 retry.go:31] will retry after 1.546547994s: waiting for machine to come up
	I0729 19:44:15.433310 1120280 pod_ready.go:92] pod "coredns-7db6d8ff4d-q6jm9" in "kube-system" namespace has status "Ready":"True"
	I0729 19:44:15.433336 1120280 pod_ready.go:81] duration metric: took 9.507558167s for pod "coredns-7db6d8ff4d-q6jm9" in "kube-system" namespace to be "Ready" ...
	I0729 19:44:15.433353 1120280 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-358053" in "kube-system" namespace to be "Ready" ...
	I0729 19:44:15.438725 1120280 pod_ready.go:92] pod "etcd-embed-certs-358053" in "kube-system" namespace has status "Ready":"True"
	I0729 19:44:15.438747 1120280 pod_ready.go:81] duration metric: took 5.385786ms for pod "etcd-embed-certs-358053" in "kube-system" namespace to be "Ready" ...
	I0729 19:44:15.438758 1120280 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-358053" in "kube-system" namespace to be "Ready" ...
	I0729 19:44:15.444196 1120280 pod_ready.go:92] pod "kube-apiserver-embed-certs-358053" in "kube-system" namespace has status "Ready":"True"
	I0729 19:44:15.444214 1120280 pod_ready.go:81] duration metric: took 5.447798ms for pod "kube-apiserver-embed-certs-358053" in "kube-system" namespace to be "Ready" ...
	I0729 19:44:15.444228 1120280 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-358053" in "kube-system" namespace to be "Ready" ...
	I0729 19:44:16.452748 1120280 pod_ready.go:92] pod "kube-controller-manager-embed-certs-358053" in "kube-system" namespace has status "Ready":"True"
	I0729 19:44:16.452772 1120280 pod_ready.go:81] duration metric: took 1.00853566s for pod "kube-controller-manager-embed-certs-358053" in "kube-system" namespace to be "Ready" ...
	I0729 19:44:16.452784 1120280 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-lb7hb" in "kube-system" namespace to be "Ready" ...
	I0729 19:44:16.458635 1120280 pod_ready.go:92] pod "kube-proxy-lb7hb" in "kube-system" namespace has status "Ready":"True"
	I0729 19:44:16.458653 1120280 pod_ready.go:81] duration metric: took 5.862242ms for pod "kube-proxy-lb7hb" in "kube-system" namespace to be "Ready" ...
	I0729 19:44:16.458662 1120280 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-358053" in "kube-system" namespace to be "Ready" ...
	I0729 19:44:16.631200 1120280 pod_ready.go:92] pod "kube-scheduler-embed-certs-358053" in "kube-system" namespace has status "Ready":"True"
	I0729 19:44:16.631229 1120280 pod_ready.go:81] duration metric: took 172.559322ms for pod "kube-scheduler-embed-certs-358053" in "kube-system" namespace to be "Ready" ...
	I0729 19:44:16.631242 1120280 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace to be "Ready" ...
	I0729 19:44:18.638680 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
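The pod_ready.go lines above poll each kube-system pod until its Ready condition is True (or the 4m0s budget runs out; metrics-server above never gets there). A sketch of the same wait using client-go, assuming a kubeconfig at the default path and client-go available as a dependency; this is not minikube's internal helper:

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // podReady reports whether the pod's PodReady condition is True.
    func podReady(pod *corev1.Pod) bool {
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    func main() {
    	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	client, err := kubernetes.NewForConfig(config)
    	if err != nil {
    		panic(err)
    	}
    	deadline := time.Now().Add(4 * time.Minute)
    	for time.Now().Before(deadline) {
    		pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "etcd-embed-certs-358053", metav1.GetOptions{})
    		if err == nil && podReady(pod) {
    			fmt.Println("pod is Ready")
    			return
    		}
    		time.Sleep(2 * time.Second)
    	}
    	fmt.Println("timed out waiting for pod to be Ready")
    }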
	I0729 19:44:16.739626 1120587 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.317075688s)
	I0729 19:44:16.739689 1120587 crio.go:469] duration metric: took 2.317215237s to extract the tarball
	I0729 19:44:16.739702 1120587 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0729 19:44:16.777698 1120587 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 19:44:16.825740 1120587 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 19:44:16.825768 1120587 cache_images.go:84] Images are preloaded, skipping loading
	I0729 19:44:16.825777 1120587 kubeadm.go:934] updating node { 192.168.72.100 8444 v1.30.3 crio true true} ...
	I0729 19:44:16.825933 1120587 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-024652 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.100
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-024652 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 19:44:16.826030 1120587 ssh_runner.go:195] Run: crio config
	I0729 19:44:16.873727 1120587 cni.go:84] Creating CNI manager for ""
	I0729 19:44:16.873752 1120587 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 19:44:16.873764 1120587 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 19:44:16.873791 1120587 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.100 APIServerPort:8444 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-024652 NodeName:default-k8s-diff-port-024652 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.100"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.100 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/cer
ts/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 19:44:16.873929 1120587 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.100
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-024652"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.100
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.100"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0729 19:44:16.873990 1120587 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0729 19:44:16.884036 1120587 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 19:44:16.884126 1120587 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 19:44:16.893332 1120587 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0729 19:44:16.911950 1120587 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 19:44:16.930305 1120587 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0729 19:44:16.948353 1120587 ssh_runner.go:195] Run: grep 192.168.72.100	control-plane.minikube.internal$ /etc/hosts
	I0729 19:44:16.952431 1120587 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.100	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
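The bash one-liner above rewrites /etc/hosts: it strips any existing control-plane.minikube.internal entry and appends the current one. A sketch of the same idempotent update in Go (written to a temp file instead of sudo-copying over /etc/hosts; illustration only):

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    func main() {
    	const entry = "192.168.72.100\tcontrol-plane.minikube.internal"
    	data, err := os.ReadFile("/etc/hosts")
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	var kept []string
    	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
    		// Drop any stale entry for the same hostname.
    		if strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
    			continue
    		}
    		kept = append(kept, line)
    	}
    	kept = append(kept, entry)
    	if err := os.WriteFile("/tmp/hosts.new", []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
    		fmt.Println(err)
    	}
    }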
	I0729 19:44:16.964743 1120587 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 19:44:17.072244 1120587 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 19:44:17.088224 1120587 certs.go:68] Setting up /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/default-k8s-diff-port-024652 for IP: 192.168.72.100
	I0729 19:44:17.088256 1120587 certs.go:194] generating shared ca certs ...
	I0729 19:44:17.088280 1120587 certs.go:226] acquiring lock for ca certs: {Name:mkd1f0b3d7e82ac23e713dd6b75409e103935b02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 19:44:17.088482 1120587 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.key
	I0729 19:44:17.088563 1120587 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/proxy-client-ca.key
	I0729 19:44:17.088579 1120587 certs.go:256] generating profile certs ...
	I0729 19:44:17.088738 1120587 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/default-k8s-diff-port-024652/client.key
	I0729 19:44:17.088823 1120587 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/default-k8s-diff-port-024652/apiserver.key.4c9c937f
	I0729 19:44:17.088876 1120587 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/default-k8s-diff-port-024652/proxy-client.key
	I0729 19:44:17.089049 1120587 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/1062272.pem (1338 bytes)
	W0729 19:44:17.089093 1120587 certs.go:480] ignoring /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/1062272_empty.pem, impossibly tiny 0 bytes
	I0729 19:44:17.089109 1120587 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 19:44:17.089135 1120587 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem (1082 bytes)
	I0729 19:44:17.089156 1120587 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/cert.pem (1123 bytes)
	I0729 19:44:17.089180 1120587 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/key.pem (1679 bytes)
	I0729 19:44:17.089218 1120587 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/files/etc/ssl/certs/10622722.pem (1708 bytes)
	I0729 19:44:17.089954 1120587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 19:44:17.144094 1120587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0729 19:44:17.191515 1120587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 19:44:17.220210 1120587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0729 19:44:17.252381 1120587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/default-k8s-diff-port-024652/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0729 19:44:17.291881 1120587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/default-k8s-diff-port-024652/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0729 19:44:17.334114 1120587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/default-k8s-diff-port-024652/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 19:44:17.363726 1120587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/default-k8s-diff-port-024652/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0729 19:44:17.389190 1120587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 19:44:17.413683 1120587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/1062272.pem --> /usr/share/ca-certificates/1062272.pem (1338 bytes)
	I0729 19:44:17.441739 1120587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/files/etc/ssl/certs/10622722.pem --> /usr/share/ca-certificates/10622722.pem (1708 bytes)
	I0729 19:44:17.472609 1120587 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 19:44:17.489059 1120587 ssh_runner.go:195] Run: openssl version
	I0729 19:44:17.495020 1120587 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 19:44:17.507133 1120587 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 19:44:17.511759 1120587 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 18:17 /usr/share/ca-certificates/minikubeCA.pem
	I0729 19:44:17.511850 1120587 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 19:44:17.518120 1120587 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 19:44:17.528867 1120587 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1062272.pem && ln -fs /usr/share/ca-certificates/1062272.pem /etc/ssl/certs/1062272.pem"
	I0729 19:44:17.539695 1120587 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1062272.pem
	I0729 19:44:17.544063 1120587 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 18:30 /usr/share/ca-certificates/1062272.pem
	I0729 19:44:17.544113 1120587 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1062272.pem
	I0729 19:44:17.549785 1120587 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1062272.pem /etc/ssl/certs/51391683.0"
	I0729 19:44:17.560562 1120587 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10622722.pem && ln -fs /usr/share/ca-certificates/10622722.pem /etc/ssl/certs/10622722.pem"
	I0729 19:44:17.573597 1120587 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10622722.pem
	I0729 19:44:17.578089 1120587 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 18:30 /usr/share/ca-certificates/10622722.pem
	I0729 19:44:17.578137 1120587 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10622722.pem
	I0729 19:44:17.583614 1120587 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10622722.pem /etc/ssl/certs/3ec20f2e.0"
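The ls/openssl/ln sequence above installs each extra CA into the system trust store under its OpenSSL subject-hash name (e.g. b5213941.0). A compact sketch of the same idea, shelling out to openssl exactly as the log does (illustration only; the log's ln -fs also overwrites an existing link, which this sketch skips):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    func main() {
    	cert := "/usr/share/ca-certificates/minikubeCA.pem"
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	hash := strings.TrimSpace(string(out))
    	link := "/etc/ssl/certs/" + hash + ".0"
    	if _, err := os.Lstat(link); os.IsNotExist(err) {
    		if err := os.Symlink(cert, link); err != nil {
    			fmt.Println(err)
    			return
    		}
    	}
    	fmt.Println("trusted as", link)
    }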
	I0729 19:44:17.594903 1120587 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 19:44:17.599449 1120587 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 19:44:17.605325 1120587 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 19:44:17.611495 1120587 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 19:44:17.617663 1120587 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 19:44:17.623715 1120587 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 19:44:17.629845 1120587 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
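Each `openssl x509 -checkend 86400` call above asks whether a certificate will expire within the next 24 hours, deciding whether the existing certs can be reused. A minimal Go equivalent using crypto/x509 (illustrative; minikube shells out to openssl on the guest instead):

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // expiresWithin reports whether the PEM cert at path expires within d.
    func expiresWithin(path string, d time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("no PEM data in %s", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
    	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	fmt.Println("expires within 24h:", soon)
    }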
	I0729 19:44:17.637607 1120587 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-024652 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.30.3 ClusterName:default-k8s-diff-port-024652 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.100 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration
:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 19:44:17.637725 1120587 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 19:44:17.637778 1120587 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 19:44:17.685777 1120587 cri.go:89] found id: ""
	I0729 19:44:17.685877 1120587 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 19:44:17.703296 1120587 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0729 19:44:17.703320 1120587 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0729 19:44:17.703387 1120587 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0729 19:44:17.715928 1120587 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0729 19:44:17.717371 1120587 kubeconfig.go:125] found "default-k8s-diff-port-024652" server: "https://192.168.72.100:8444"
	I0729 19:44:17.720536 1120587 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0729 19:44:17.732125 1120587 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.100
	I0729 19:44:17.732165 1120587 kubeadm.go:1160] stopping kube-system containers ...
	I0729 19:44:17.732207 1120587 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0729 19:44:17.732284 1120587 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 19:44:17.786419 1120587 cri.go:89] found id: ""
	I0729 19:44:17.786502 1120587 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0729 19:44:17.804866 1120587 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 19:44:17.815092 1120587 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 19:44:17.815113 1120587 kubeadm.go:157] found existing configuration files:
	
	I0729 19:44:17.815189 1120587 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0729 19:44:17.824963 1120587 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 19:44:17.825020 1120587 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 19:44:17.835349 1120587 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0729 19:44:17.846227 1120587 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 19:44:17.846290 1120587 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 19:44:17.859231 1120587 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0729 19:44:17.870794 1120587 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 19:44:17.870883 1120587 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 19:44:17.882317 1120587 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0729 19:44:17.891702 1120587 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 19:44:17.891757 1120587 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
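The grep/rm pairs above implement stale-config cleanup: each kubeconfig under /etc/kubernetes is kept only if it already points at the expected control-plane endpoint, otherwise it is removed so `kubeadm init phase kubeconfig` regenerates it. A sketch of that loop (illustration only, not minikube's kubeadm.go):

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    func main() {
    	const endpoint = "https://control-plane.minikube.internal:8444"
    	files := []string{
    		"/etc/kubernetes/admin.conf",
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	}
    	for _, f := range files {
    		data, err := os.ReadFile(f)
    		if err != nil || !strings.Contains(string(data), endpoint) {
    			// Missing or pointing elsewhere: remove so kubeadm regenerates it.
    			if rmErr := os.Remove(f); rmErr != nil && !os.IsNotExist(rmErr) {
    				fmt.Println("remove failed:", rmErr)
    			}
    			continue
    		}
    		fmt.Println("keeping", f)
    	}
    }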
	I0729 19:44:17.901153 1120587 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 19:44:17.911253 1120587 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 19:44:18.040695 1120587 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 19:44:19.054689 1120587 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.013955991s)
	I0729 19:44:19.054724 1120587 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0729 19:44:19.255112 1120587 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 19:44:19.346186 1120587 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0729 19:44:19.462795 1120587 api_server.go:52] waiting for apiserver process to appear ...
	I0729 19:44:19.462938 1120587 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:19.963927 1120587 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:20.463691 1120587 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:20.504478 1120587 api_server.go:72] duration metric: took 1.041683096s to wait for apiserver process to appear ...
	I0729 19:44:20.504523 1120587 api_server.go:88] waiting for apiserver healthz status ...
	I0729 19:44:20.504552 1120587 api_server.go:253] Checking apiserver healthz at https://192.168.72.100:8444/healthz ...
	I0729 19:44:20.505202 1120587 api_server.go:269] stopped: https://192.168.72.100:8444/healthz: Get "https://192.168.72.100:8444/healthz": dial tcp 192.168.72.100:8444: connect: connection refused
	I0729 19:44:21.004771 1120587 api_server.go:253] Checking apiserver healthz at https://192.168.72.100:8444/healthz ...
	I0729 19:44:19.081196 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:19.081719 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | unable to find current IP address of domain old-k8s-version-021528 in network mk-old-k8s-version-021528
	I0729 19:44:19.081749 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | I0729 19:44:19.081668 1121841 retry.go:31] will retry after 2.079913941s: waiting for machine to come up
	I0729 19:44:21.163461 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:21.163980 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | unable to find current IP address of domain old-k8s-version-021528 in network mk-old-k8s-version-021528
	I0729 19:44:21.164066 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | I0729 19:44:21.163965 1121841 retry.go:31] will retry after 2.355802923s: waiting for machine to come up
	I0729 19:44:20.638745 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:44:22.638835 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:44:23.789983 1120587 api_server.go:279] https://192.168.72.100:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 19:44:23.790018 1120587 api_server.go:103] status: https://192.168.72.100:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 19:44:23.790033 1120587 api_server.go:253] Checking apiserver healthz at https://192.168.72.100:8444/healthz ...
	I0729 19:44:23.843047 1120587 api_server.go:279] https://192.168.72.100:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 19:44:23.843090 1120587 api_server.go:103] status: https://192.168.72.100:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 19:44:24.005370 1120587 api_server.go:253] Checking apiserver healthz at https://192.168.72.100:8444/healthz ...
	I0729 19:44:24.009941 1120587 api_server.go:279] https://192.168.72.100:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 19:44:24.009973 1120587 api_server.go:103] status: https://192.168.72.100:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 19:44:24.505118 1120587 api_server.go:253] Checking apiserver healthz at https://192.168.72.100:8444/healthz ...
	I0729 19:44:24.512838 1120587 api_server.go:279] https://192.168.72.100:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 19:44:24.512874 1120587 api_server.go:103] status: https://192.168.72.100:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 19:44:25.005014 1120587 api_server.go:253] Checking apiserver healthz at https://192.168.72.100:8444/healthz ...
	I0729 19:44:25.023222 1120587 api_server.go:279] https://192.168.72.100:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 19:44:25.023264 1120587 api_server.go:103] status: https://192.168.72.100:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 19:44:25.504748 1120587 api_server.go:253] Checking apiserver healthz at https://192.168.72.100:8444/healthz ...
	I0729 19:44:25.511449 1120587 api_server.go:279] https://192.168.72.100:8444/healthz returned 200:
	ok
	I0729 19:44:25.521987 1120587 api_server.go:141] control plane version: v1.30.3
	I0729 19:44:25.522018 1120587 api_server.go:131] duration metric: took 5.017487159s to wait for apiserver health ...
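
The block above shows api_server.go polling https://192.168.72.100:8444/healthz roughly every half second until the 500 responses (rbac/bootstrap-roles and the scheduling poststart hook still pending) turn into a 200 "ok". A minimal sketch of such a poll loop, assuming a self-signed apiserver certificate; the helper name, interval, and timeout are illustrative, not minikube's own code:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns HTTP 200 or the timeout expires.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		// The test apiserver uses a self-signed CA, so verification is skipped here.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz answered "ok"
			}
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.72.100:8444/healthz", 5*time.Minute); err != nil {
		fmt.Println(err)
	}
}
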
	I0729 19:44:25.522029 1120587 cni.go:84] Creating CNI manager for ""
	I0729 19:44:25.522038 1120587 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 19:44:25.523778 1120587 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 19:44:25.524925 1120587 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 19:44:25.541108 1120587 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0729 19:44:25.564225 1120587 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 19:44:25.574600 1120587 system_pods.go:59] 8 kube-system pods found
	I0729 19:44:25.574643 1120587 system_pods.go:61] "coredns-7db6d8ff4d-8mccr" [ce2eb102-1016-4a2d-8dee-561920c01b5a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0729 19:44:25.574664 1120587 system_pods.go:61] "etcd-default-k8s-diff-port-024652" [f3c68e2f-7cef-4afc-bd26-3705afd16f01] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0729 19:44:25.574676 1120587 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-024652" [656786e6-4ca6-45dc-9274-89ca8540c707] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0729 19:44:25.574697 1120587 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-024652" [10b805dd-238a-49a8-8c3f-1c31004d56dd] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0729 19:44:25.574710 1120587 system_pods.go:61] "kube-proxy-l4g78" [c24c5bc0-131b-4d02-a0f1-d398723292eb] Running
	I0729 19:44:25.574717 1120587 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-024652" [5bb2daf3-9a22-4f80-95b6-ded3c31e872e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0729 19:44:25.574725 1120587 system_pods.go:61] "metrics-server-569cc877fc-bvkv6" [247c5a96-5bb3-4174-9219-a96591f53cbb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 19:44:25.574734 1120587 system_pods.go:61] "storage-provisioner" [a4f216b0-055a-4305-a93f-910a9a10e725] Running
	I0729 19:44:25.574744 1120587 system_pods.go:74] duration metric: took 10.494475ms to wait for pod list to return data ...
	I0729 19:44:25.574757 1120587 node_conditions.go:102] verifying NodePressure condition ...
	I0729 19:44:25.577735 1120587 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 19:44:25.577757 1120587 node_conditions.go:123] node cpu capacity is 2
	I0729 19:44:25.577778 1120587 node_conditions.go:105] duration metric: took 3.012688ms to run NodePressure ...
	I0729 19:44:25.577795 1120587 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 19:44:25.851094 1120587 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0729 19:44:25.860023 1120587 kubeadm.go:739] kubelet initialised
	I0729 19:44:25.860050 1120587 kubeadm.go:740] duration metric: took 8.921765ms waiting for restarted kubelet to initialise ...
	I0729 19:44:25.860062 1120587 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 19:44:25.867130 1120587 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-8mccr" in "kube-system" namespace to be "Ready" ...
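
The kube-system pod listing and the per-pod "Ready" waits above can be reproduced against the same cluster with client-go. A rough sketch, assuming a kubeconfig for the profile is available (the path below is a placeholder, not taken from the log):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Placeholder kubeconfig path; point this at the profile's kubeconfig.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	// Report each pod's Ready condition, mirroring the pod_ready.go checks in the log.
	for _, p := range pods.Items {
		ready := false
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				ready = true
			}
		}
		fmt.Printf("%s Ready=%v\n", p.Name, ready)
	}
}
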
	I0729 19:44:23.523186 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:23.523741 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | unable to find current IP address of domain old-k8s-version-021528 in network mk-old-k8s-version-021528
	I0729 19:44:23.523783 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | I0729 19:44:23.523684 1121841 retry.go:31] will retry after 2.899059572s: waiting for machine to come up
	I0729 19:44:26.426805 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:26.427211 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | unable to find current IP address of domain old-k8s-version-021528 in network mk-old-k8s-version-021528
	I0729 19:44:26.427267 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | I0729 19:44:26.427152 1121841 retry.go:31] will retry after 3.723478189s: waiting for machine to come up
	I0729 19:44:25.138056 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:44:27.139419 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:44:29.638107 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:44:27.872221 1120587 pod_ready.go:102] pod "coredns-7db6d8ff4d-8mccr" in "kube-system" namespace has status "Ready":"False"
	I0729 19:44:29.873611 1120587 pod_ready.go:102] pod "coredns-7db6d8ff4d-8mccr" in "kube-system" namespace has status "Ready":"False"
	I0729 19:44:31.571895 1119948 start.go:364] duration metric: took 55.319517148s to acquireMachinesLock for "no-preload-843792"
	I0729 19:44:31.571969 1119948 start.go:96] Skipping create...Using existing machine configuration
	I0729 19:44:31.571988 1119948 fix.go:54] fixHost starting: 
	I0729 19:44:31.572421 1119948 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:44:31.572460 1119948 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:44:31.589868 1119948 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43017
	I0729 19:44:31.590253 1119948 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:44:31.590725 1119948 main.go:141] libmachine: Using API Version  1
	I0729 19:44:31.590752 1119948 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:44:31.591088 1119948 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:44:31.591274 1119948 main.go:141] libmachine: (no-preload-843792) Calling .DriverName
	I0729 19:44:31.591398 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetState
	I0729 19:44:31.592878 1119948 fix.go:112] recreateIfNeeded on no-preload-843792: state=Stopped err=<nil>
	I0729 19:44:31.592905 1119948 main.go:141] libmachine: (no-preload-843792) Calling .DriverName
	W0729 19:44:31.593054 1119948 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 19:44:31.594713 1119948 out.go:177] * Restarting existing kvm2 VM for "no-preload-843792" ...
	I0729 19:44:30.152545 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:30.153061 1120970 main.go:141] libmachine: (old-k8s-version-021528) Found IP for machine: 192.168.39.65
	I0729 19:44:30.153088 1120970 main.go:141] libmachine: (old-k8s-version-021528) Reserving static IP address...
	I0729 19:44:30.153101 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has current primary IP address 192.168.39.65 and MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:30.153518 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | found host DHCP lease matching {name: "old-k8s-version-021528", mac: "52:54:00:12:c7:d2", ip: "192.168.39.65"} in network mk-old-k8s-version-021528: {Iface:virbr1 ExpiryTime:2024-07-29 20:44:23 +0000 UTC Type:0 Mac:52:54:00:12:c7:d2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:old-k8s-version-021528 Clientid:01:52:54:00:12:c7:d2}
	I0729 19:44:30.153547 1120970 main.go:141] libmachine: (old-k8s-version-021528) Reserved static IP address: 192.168.39.65
	I0729 19:44:30.153567 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | skip adding static IP to network mk-old-k8s-version-021528 - found existing host DHCP lease matching {name: "old-k8s-version-021528", mac: "52:54:00:12:c7:d2", ip: "192.168.39.65"}
	I0729 19:44:30.153606 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | Getting to WaitForSSH function...
	I0729 19:44:30.153646 1120970 main.go:141] libmachine: (old-k8s-version-021528) Waiting for SSH to be available...
	I0729 19:44:30.155687 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:30.155938 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:c7:d2", ip: ""} in network mk-old-k8s-version-021528: {Iface:virbr1 ExpiryTime:2024-07-29 20:44:23 +0000 UTC Type:0 Mac:52:54:00:12:c7:d2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:old-k8s-version-021528 Clientid:01:52:54:00:12:c7:d2}
	I0729 19:44:30.155968 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined IP address 192.168.39.65 and MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:30.156104 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | Using SSH client type: external
	I0729 19:44:30.156126 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | Using SSH private key: /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/old-k8s-version-021528/id_rsa (-rw-------)
	I0729 19:44:30.156157 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.65 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/old-k8s-version-021528/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 19:44:30.156170 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | About to run SSH command:
	I0729 19:44:30.156179 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | exit 0
	I0729 19:44:30.286787 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | SSH cmd err, output: <nil>: 
	I0729 19:44:30.287161 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetConfigRaw
	I0729 19:44:30.287816 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetIP
	I0729 19:44:30.290268 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:30.290614 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:c7:d2", ip: ""} in network mk-old-k8s-version-021528: {Iface:virbr1 ExpiryTime:2024-07-29 20:44:23 +0000 UTC Type:0 Mac:52:54:00:12:c7:d2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:old-k8s-version-021528 Clientid:01:52:54:00:12:c7:d2}
	I0729 19:44:30.290645 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined IP address 192.168.39.65 and MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:30.290866 1120970 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/old-k8s-version-021528/config.json ...
	I0729 19:44:30.291054 1120970 machine.go:94] provisionDockerMachine start ...
	I0729 19:44:30.291074 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .DriverName
	I0729 19:44:30.291307 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHHostname
	I0729 19:44:30.293399 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:30.293729 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:c7:d2", ip: ""} in network mk-old-k8s-version-021528: {Iface:virbr1 ExpiryTime:2024-07-29 20:44:23 +0000 UTC Type:0 Mac:52:54:00:12:c7:d2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:old-k8s-version-021528 Clientid:01:52:54:00:12:c7:d2}
	I0729 19:44:30.293759 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined IP address 192.168.39.65 and MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:30.293872 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHPort
	I0729 19:44:30.294064 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHKeyPath
	I0729 19:44:30.294228 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHKeyPath
	I0729 19:44:30.294362 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHUsername
	I0729 19:44:30.294510 1120970 main.go:141] libmachine: Using SSH client type: native
	I0729 19:44:30.294729 1120970 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.65 22 <nil> <nil>}
	I0729 19:44:30.294741 1120970 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 19:44:30.406918 1120970 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
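
The "Using SSH client type: native" lines run single commands (hostname, tee, sed) on the guest over SSH with the machine's id_rsa key. An illustrative sketch of the same kind of call using golang.org/x/crypto/ssh; the helper name is invented, and host-key checking is skipped to match the StrictHostKeyChecking=no option shown earlier:

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

// runRemote runs one command on host:22 as user, authenticating with keyPath.
func runRemote(host, user, keyPath, cmd string) (string, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return "", err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return "", err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // matches StrictHostKeyChecking=no above
	}
	client, err := ssh.Dial("tcp", host+":22", cfg)
	if err != nil {
		return "", err
	}
	defer client.Close()
	session, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer session.Close()
	out, err := session.CombinedOutput(cmd)
	return string(out), err
}

func main() {
	out, err := runRemote("192.168.39.65", "docker",
		"/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/old-k8s-version-021528/id_rsa",
		"hostname")
	fmt.Println(out, err)
}
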
	
	I0729 19:44:30.406947 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetMachineName
	I0729 19:44:30.407214 1120970 buildroot.go:166] provisioning hostname "old-k8s-version-021528"
	I0729 19:44:30.407256 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetMachineName
	I0729 19:44:30.407478 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHHostname
	I0729 19:44:30.410077 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:30.410396 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:c7:d2", ip: ""} in network mk-old-k8s-version-021528: {Iface:virbr1 ExpiryTime:2024-07-29 20:44:23 +0000 UTC Type:0 Mac:52:54:00:12:c7:d2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:old-k8s-version-021528 Clientid:01:52:54:00:12:c7:d2}
	I0729 19:44:30.410421 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined IP address 192.168.39.65 and MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:30.410586 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHPort
	I0729 19:44:30.410766 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHKeyPath
	I0729 19:44:30.410932 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHKeyPath
	I0729 19:44:30.411068 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHUsername
	I0729 19:44:30.411245 1120970 main.go:141] libmachine: Using SSH client type: native
	I0729 19:44:30.411488 1120970 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.65 22 <nil> <nil>}
	I0729 19:44:30.411503 1120970 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-021528 && echo "old-k8s-version-021528" | sudo tee /etc/hostname
	I0729 19:44:30.541004 1120970 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-021528
	
	I0729 19:44:30.541037 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHHostname
	I0729 19:44:30.543946 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:30.544343 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:c7:d2", ip: ""} in network mk-old-k8s-version-021528: {Iface:virbr1 ExpiryTime:2024-07-29 20:44:23 +0000 UTC Type:0 Mac:52:54:00:12:c7:d2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:old-k8s-version-021528 Clientid:01:52:54:00:12:c7:d2}
	I0729 19:44:30.544372 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined IP address 192.168.39.65 and MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:30.544503 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHPort
	I0729 19:44:30.544694 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHKeyPath
	I0729 19:44:30.544856 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHKeyPath
	I0729 19:44:30.545032 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHUsername
	I0729 19:44:30.545233 1120970 main.go:141] libmachine: Using SSH client type: native
	I0729 19:44:30.545409 1120970 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.65 22 <nil> <nil>}
	I0729 19:44:30.545424 1120970 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-021528' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-021528/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-021528' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 19:44:30.665246 1120970 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 19:44:30.665281 1120970 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19312-1055011/.minikube CaCertPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19312-1055011/.minikube}
	I0729 19:44:30.665317 1120970 buildroot.go:174] setting up certificates
	I0729 19:44:30.665328 1120970 provision.go:84] configureAuth start
	I0729 19:44:30.665339 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetMachineName
	I0729 19:44:30.665621 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetIP
	I0729 19:44:30.668162 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:30.668540 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:c7:d2", ip: ""} in network mk-old-k8s-version-021528: {Iface:virbr1 ExpiryTime:2024-07-29 20:44:23 +0000 UTC Type:0 Mac:52:54:00:12:c7:d2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:old-k8s-version-021528 Clientid:01:52:54:00:12:c7:d2}
	I0729 19:44:30.668568 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined IP address 192.168.39.65 and MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:30.668743 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHHostname
	I0729 19:44:30.670898 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:30.671447 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:c7:d2", ip: ""} in network mk-old-k8s-version-021528: {Iface:virbr1 ExpiryTime:2024-07-29 20:44:23 +0000 UTC Type:0 Mac:52:54:00:12:c7:d2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:old-k8s-version-021528 Clientid:01:52:54:00:12:c7:d2}
	I0729 19:44:30.671471 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined IP address 192.168.39.65 and MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:30.671618 1120970 provision.go:143] copyHostCerts
	I0729 19:44:30.671691 1120970 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-1055011/.minikube/cert.pem, removing ...
	I0729 19:44:30.671710 1120970 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-1055011/.minikube/cert.pem
	I0729 19:44:30.671790 1120970 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19312-1055011/.minikube/cert.pem (1123 bytes)
	I0729 19:44:30.671907 1120970 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-1055011/.minikube/key.pem, removing ...
	I0729 19:44:30.671917 1120970 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-1055011/.minikube/key.pem
	I0729 19:44:30.671953 1120970 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19312-1055011/.minikube/key.pem (1679 bytes)
	I0729 19:44:30.672043 1120970 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.pem, removing ...
	I0729 19:44:30.672052 1120970 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.pem
	I0729 19:44:30.672085 1120970 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.pem (1082 bytes)
	I0729 19:44:30.672166 1120970 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-021528 san=[127.0.0.1 192.168.39.65 localhost minikube old-k8s-version-021528]
	I0729 19:44:30.888016 1120970 provision.go:177] copyRemoteCerts
	I0729 19:44:30.888072 1120970 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 19:44:30.888115 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHHostname
	I0729 19:44:30.890739 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:30.891115 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:c7:d2", ip: ""} in network mk-old-k8s-version-021528: {Iface:virbr1 ExpiryTime:2024-07-29 20:44:23 +0000 UTC Type:0 Mac:52:54:00:12:c7:d2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:old-k8s-version-021528 Clientid:01:52:54:00:12:c7:d2}
	I0729 19:44:30.891148 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined IP address 192.168.39.65 and MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:30.891288 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHPort
	I0729 19:44:30.891499 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHKeyPath
	I0729 19:44:30.891689 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHUsername
	I0729 19:44:30.891862 1120970 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/old-k8s-version-021528/id_rsa Username:docker}
	I0729 19:44:30.976898 1120970 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0729 19:44:31.000793 1120970 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0729 19:44:31.024837 1120970 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0729 19:44:31.048325 1120970 provision.go:87] duration metric: took 382.981006ms to configureAuth
	I0729 19:44:31.048358 1120970 buildroot.go:189] setting minikube options for container-runtime
	I0729 19:44:31.048560 1120970 config.go:182] Loaded profile config "old-k8s-version-021528": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0729 19:44:31.048640 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHHostname
	I0729 19:44:31.051230 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:31.051576 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:c7:d2", ip: ""} in network mk-old-k8s-version-021528: {Iface:virbr1 ExpiryTime:2024-07-29 20:44:23 +0000 UTC Type:0 Mac:52:54:00:12:c7:d2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:old-k8s-version-021528 Clientid:01:52:54:00:12:c7:d2}
	I0729 19:44:31.051605 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined IP address 192.168.39.65 and MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:31.051754 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHPort
	I0729 19:44:31.051994 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHKeyPath
	I0729 19:44:31.052191 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHKeyPath
	I0729 19:44:31.052368 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHUsername
	I0729 19:44:31.052568 1120970 main.go:141] libmachine: Using SSH client type: native
	I0729 19:44:31.052828 1120970 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.65 22 <nil> <nil>}
	I0729 19:44:31.052853 1120970 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 19:44:31.320227 1120970 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 19:44:31.320259 1120970 machine.go:97] duration metric: took 1.0291903s to provisionDockerMachine
	I0729 19:44:31.320276 1120970 start.go:293] postStartSetup for "old-k8s-version-021528" (driver="kvm2")
	I0729 19:44:31.320297 1120970 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 19:44:31.320335 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .DriverName
	I0729 19:44:31.320669 1120970 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 19:44:31.320702 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHHostname
	I0729 19:44:31.323379 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:31.323774 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:c7:d2", ip: ""} in network mk-old-k8s-version-021528: {Iface:virbr1 ExpiryTime:2024-07-29 20:44:23 +0000 UTC Type:0 Mac:52:54:00:12:c7:d2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:old-k8s-version-021528 Clientid:01:52:54:00:12:c7:d2}
	I0729 19:44:31.323807 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined IP address 192.168.39.65 and MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:31.323903 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHPort
	I0729 19:44:31.324112 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHKeyPath
	I0729 19:44:31.324291 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHUsername
	I0729 19:44:31.324431 1120970 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/old-k8s-version-021528/id_rsa Username:docker}
	I0729 19:44:31.415208 1120970 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 19:44:31.419884 1120970 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 19:44:31.419911 1120970 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-1055011/.minikube/addons for local assets ...
	I0729 19:44:31.419981 1120970 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-1055011/.minikube/files for local assets ...
	I0729 19:44:31.420093 1120970 filesync.go:149] local asset: /home/jenkins/minikube-integration/19312-1055011/.minikube/files/etc/ssl/certs/10622722.pem -> 10622722.pem in /etc/ssl/certs
	I0729 19:44:31.420214 1120970 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 19:44:31.431055 1120970 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/files/etc/ssl/certs/10622722.pem --> /etc/ssl/certs/10622722.pem (1708 bytes)
	I0729 19:44:31.454082 1120970 start.go:296] duration metric: took 133.793908ms for postStartSetup
	I0729 19:44:31.454120 1120970 fix.go:56] duration metric: took 20.034560069s for fixHost
	I0729 19:44:31.454147 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHHostname
	I0729 19:44:31.456757 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:31.457097 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:c7:d2", ip: ""} in network mk-old-k8s-version-021528: {Iface:virbr1 ExpiryTime:2024-07-29 20:44:23 +0000 UTC Type:0 Mac:52:54:00:12:c7:d2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:old-k8s-version-021528 Clientid:01:52:54:00:12:c7:d2}
	I0729 19:44:31.457130 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined IP address 192.168.39.65 and MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:31.457284 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHPort
	I0729 19:44:31.457528 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHKeyPath
	I0729 19:44:31.457737 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHKeyPath
	I0729 19:44:31.457853 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHUsername
	I0729 19:44:31.458027 1120970 main.go:141] libmachine: Using SSH client type: native
	I0729 19:44:31.458189 1120970 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.65 22 <nil> <nil>}
	I0729 19:44:31.458199 1120970 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0729 19:44:31.571713 1120970 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722282271.544930204
	
	I0729 19:44:31.571744 1120970 fix.go:216] guest clock: 1722282271.544930204
	I0729 19:44:31.571758 1120970 fix.go:229] Guest: 2024-07-29 19:44:31.544930204 +0000 UTC Remote: 2024-07-29 19:44:31.454125155 +0000 UTC m=+213.509073295 (delta=90.805049ms)
	I0729 19:44:31.571785 1120970 fix.go:200] guest clock delta is within tolerance: 90.805049ms
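
The guest/host clock comparison above parses the guest's "date +%s.%N" output and checks the delta against a tolerance. A small sketch of that arithmetic; the 2s tolerance is an assumption for the example, not minikube's actual constant:

package main

import (
	"fmt"
	"math"
	"strconv"
	"time"
)

func main() {
	// Guest clock as reported by `date +%s.%N` over SSH (value from the log above).
	guestOut := "1722282271.544930204"
	secs, err := strconv.ParseFloat(guestOut, 64)
	if err != nil {
		panic(err)
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	host := time.Now() // in the log, the host side is the time the command returned

	delta := guest.Sub(host)
	// Assumed tolerance for this example; the real threshold may differ.
	const tolerance = 2 * time.Second
	if math.Abs(float64(delta)) <= float64(tolerance) {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance, clock would need syncing\n", delta)
	}
}
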
	I0729 19:44:31.571791 1120970 start.go:83] releasing machines lock for "old-k8s-version-021528", held for 20.152267504s
	I0729 19:44:31.571817 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .DriverName
	I0729 19:44:31.572102 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetIP
	I0729 19:44:31.575385 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:31.575790 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:c7:d2", ip: ""} in network mk-old-k8s-version-021528: {Iface:virbr1 ExpiryTime:2024-07-29 20:44:23 +0000 UTC Type:0 Mac:52:54:00:12:c7:d2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:old-k8s-version-021528 Clientid:01:52:54:00:12:c7:d2}
	I0729 19:44:31.575815 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined IP address 192.168.39.65 and MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:31.576012 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .DriverName
	I0729 19:44:31.576508 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .DriverName
	I0729 19:44:31.576692 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .DriverName
	I0729 19:44:31.576786 1120970 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 19:44:31.576828 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHHostname
	I0729 19:44:31.576918 1120970 ssh_runner.go:195] Run: cat /version.json
	I0729 19:44:31.576940 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHHostname
	I0729 19:44:31.579737 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:31.579994 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:31.580091 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:c7:d2", ip: ""} in network mk-old-k8s-version-021528: {Iface:virbr1 ExpiryTime:2024-07-29 20:44:23 +0000 UTC Type:0 Mac:52:54:00:12:c7:d2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:old-k8s-version-021528 Clientid:01:52:54:00:12:c7:d2}
	I0729 19:44:31.580130 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined IP address 192.168.39.65 and MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:31.580379 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:c7:d2", ip: ""} in network mk-old-k8s-version-021528: {Iface:virbr1 ExpiryTime:2024-07-29 20:44:23 +0000 UTC Type:0 Mac:52:54:00:12:c7:d2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:old-k8s-version-021528 Clientid:01:52:54:00:12:c7:d2}
	I0729 19:44:31.580409 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined IP address 192.168.39.65 and MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:31.580418 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHPort
	I0729 19:44:31.580577 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHPort
	I0729 19:44:31.580661 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHKeyPath
	I0729 19:44:31.580838 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHKeyPath
	I0729 19:44:31.580879 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHUsername
	I0729 19:44:31.581025 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHUsername
	I0729 19:44:31.581021 1120970 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/old-k8s-version-021528/id_rsa Username:docker}
	I0729 19:44:31.581164 1120970 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/old-k8s-version-021528/id_rsa Username:docker}
	I0729 19:44:31.682902 1120970 ssh_runner.go:195] Run: systemctl --version
	I0729 19:44:31.688675 1120970 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 19:44:31.836374 1120970 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 19:44:31.844215 1120970 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 19:44:31.844275 1120970 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 19:44:31.864647 1120970 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 19:44:31.864671 1120970 start.go:495] detecting cgroup driver to use...
	I0729 19:44:31.864744 1120970 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 19:44:31.881197 1120970 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 19:44:31.895022 1120970 docker.go:217] disabling cri-docker service (if available) ...
	I0729 19:44:31.895085 1120970 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 19:44:31.908584 1120970 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 19:44:31.922321 1120970 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 19:44:32.039427 1120970 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 19:44:32.203236 1120970 docker.go:233] disabling docker service ...
	I0729 19:44:32.203335 1120970 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 19:44:32.217523 1120970 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 19:44:32.236065 1120970 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 19:44:32.355769 1120970 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 19:44:32.473160 1120970 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 19:44:32.486314 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 19:44:32.504270 1120970 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0729 19:44:32.504359 1120970 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:44:32.514928 1120970 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 19:44:32.514995 1120970 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:44:32.528822 1120970 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:44:32.543599 1120970 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:44:32.555853 1120970 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 19:44:32.568184 1120970 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 19:44:32.577443 1120970 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 19:44:32.577580 1120970 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 19:44:32.590636 1120970 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 19:44:32.600995 1120970 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 19:44:32.739544 1120970 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 19:44:32.886433 1120970 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 19:44:32.886507 1120970 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 19:44:32.892072 1120970 start.go:563] Will wait 60s for crictl version
	I0729 19:44:32.892137 1120970 ssh_runner.go:195] Run: which crictl
	I0729 19:44:32.896003 1120970 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 19:44:32.939843 1120970 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 19:44:32.939934 1120970 ssh_runner.go:195] Run: crio --version
	I0729 19:44:32.968301 1120970 ssh_runner.go:195] Run: crio --version
	I0729 19:44:32.995612 1120970 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0729 19:44:31.595855 1119948 main.go:141] libmachine: (no-preload-843792) Calling .Start
	I0729 19:44:31.596024 1119948 main.go:141] libmachine: (no-preload-843792) Ensuring networks are active...
	I0729 19:44:31.596802 1119948 main.go:141] libmachine: (no-preload-843792) Ensuring network default is active
	I0729 19:44:31.597159 1119948 main.go:141] libmachine: (no-preload-843792) Ensuring network mk-no-preload-843792 is active
	I0729 19:44:31.597570 1119948 main.go:141] libmachine: (no-preload-843792) Getting domain xml...
	I0729 19:44:31.598244 1119948 main.go:141] libmachine: (no-preload-843792) Creating domain...
	I0729 19:44:32.903649 1119948 main.go:141] libmachine: (no-preload-843792) Waiting to get IP...
	I0729 19:44:32.904535 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:32.905024 1119948 main.go:141] libmachine: (no-preload-843792) DBG | unable to find current IP address of domain no-preload-843792 in network mk-no-preload-843792
	I0729 19:44:32.905113 1119948 main.go:141] libmachine: (no-preload-843792) DBG | I0729 19:44:32.904992 1122027 retry.go:31] will retry after 213.578895ms: waiting for machine to come up
	I0729 19:44:33.120474 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:33.120922 1119948 main.go:141] libmachine: (no-preload-843792) DBG | unable to find current IP address of domain no-preload-843792 in network mk-no-preload-843792
	I0729 19:44:33.121007 1119948 main.go:141] libmachine: (no-preload-843792) DBG | I0729 19:44:33.120907 1122027 retry.go:31] will retry after 265.999253ms: waiting for machine to come up
	I0729 19:44:33.388577 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:33.389007 1119948 main.go:141] libmachine: (no-preload-843792) DBG | unable to find current IP address of domain no-preload-843792 in network mk-no-preload-843792
	I0729 19:44:33.389026 1119948 main.go:141] libmachine: (no-preload-843792) DBG | I0729 19:44:33.388967 1122027 retry.go:31] will retry after 393.491378ms: waiting for machine to come up
	I0729 19:44:31.639857 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:44:34.139327 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:44:31.874661 1120587 pod_ready.go:102] pod "coredns-7db6d8ff4d-8mccr" in "kube-system" namespace has status "Ready":"False"
	I0729 19:44:33.875758 1120587 pod_ready.go:102] pod "coredns-7db6d8ff4d-8mccr" in "kube-system" namespace has status "Ready":"False"
	I0729 19:44:35.875952 1120587 pod_ready.go:102] pod "coredns-7db6d8ff4d-8mccr" in "kube-system" namespace has status "Ready":"False"
	I0729 19:44:32.996971 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetIP
	I0729 19:44:33.000232 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:33.000668 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:c7:d2", ip: ""} in network mk-old-k8s-version-021528: {Iface:virbr1 ExpiryTime:2024-07-29 20:44:23 +0000 UTC Type:0 Mac:52:54:00:12:c7:d2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:old-k8s-version-021528 Clientid:01:52:54:00:12:c7:d2}
	I0729 19:44:33.000694 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined IP address 192.168.39.65 and MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:33.000856 1120970 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0729 19:44:33.005258 1120970 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 19:44:33.018698 1120970 kubeadm.go:883] updating cluster {Name:old-k8s-version-021528 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-021528 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.65 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 19:44:33.018840 1120970 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0729 19:44:33.018934 1120970 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 19:44:33.089122 1120970 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0729 19:44:33.089197 1120970 ssh_runner.go:195] Run: which lz4
	I0729 19:44:33.093346 1120970 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0729 19:44:33.097766 1120970 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0729 19:44:33.097802 1120970 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0729 19:44:34.739542 1120970 crio.go:462] duration metric: took 1.646235601s to copy over tarball
	I0729 19:44:34.739647 1120970 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0729 19:44:37.734665 1120970 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.994963407s)
	I0729 19:44:37.734702 1120970 crio.go:469] duration metric: took 2.995126134s to extract the tarball
	I0729 19:44:37.734712 1120970 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0729 19:44:37.781443 1120970 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 19:44:37.820392 1120970 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0729 19:44:37.820426 1120970 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0729 19:44:37.820531 1120970 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 19:44:37.820610 1120970 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0729 19:44:37.820708 1120970 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0729 19:44:37.820536 1120970 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 19:44:37.820560 1120970 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0729 19:44:37.820541 1120970 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0729 19:44:37.820573 1120970 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0729 19:44:37.820587 1120970 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0729 19:44:37.822301 1120970 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 19:44:37.822309 1120970 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0729 19:44:37.822313 1120970 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0729 19:44:37.822326 1120970 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0729 19:44:37.822397 1120970 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0729 19:44:37.822432 1120970 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0729 19:44:37.822438 1120970 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0729 19:44:37.822301 1120970 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
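
The "couldn't find preloaded image ... assuming images are not preloaded" decision above is based on the output of "sudo crictl images --output json". A hedged sketch of reproducing that check on the node itself (the JSON field names follow crictl's usual output; verify them against the installed crictl version):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// crictlImages models just the part of `crictl images --output json` used here.
type crictlImages struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

// hasImage reports whether the container runtime already has the given tag.
func hasImage(tag string) (bool, error) {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		return false, err
	}
	var imgs crictlImages
	if err := json.Unmarshal(out, &imgs); err != nil {
		return false, err
	}
	for _, img := range imgs.Images {
		for _, t := range img.RepoTags {
			if t == tag {
				return true, nil
			}
		}
	}
	return false, nil
}

func main() {
	ok, err := hasImage("registry.k8s.io/kube-apiserver:v1.20.0")
	fmt.Println(ok, err)
}
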
	I0729 19:44:33.785078 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:33.785626 1119948 main.go:141] libmachine: (no-preload-843792) DBG | unable to find current IP address of domain no-preload-843792 in network mk-no-preload-843792
	I0729 19:44:33.785654 1119948 main.go:141] libmachine: (no-preload-843792) DBG | I0729 19:44:33.785530 1122027 retry.go:31] will retry after 411.274676ms: waiting for machine to come up
	I0729 19:44:34.198884 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:34.199471 1119948 main.go:141] libmachine: (no-preload-843792) DBG | unable to find current IP address of domain no-preload-843792 in network mk-no-preload-843792
	I0729 19:44:34.199512 1119948 main.go:141] libmachine: (no-preload-843792) DBG | I0729 19:44:34.199421 1122027 retry.go:31] will retry after 600.076128ms: waiting for machine to come up
	I0729 19:44:34.801378 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:34.801839 1119948 main.go:141] libmachine: (no-preload-843792) DBG | unable to find current IP address of domain no-preload-843792 in network mk-no-preload-843792
	I0729 19:44:34.801869 1119948 main.go:141] libmachine: (no-preload-843792) DBG | I0729 19:44:34.801792 1122027 retry.go:31] will retry after 948.350912ms: waiting for machine to come up
	I0729 19:44:35.751533 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:35.752085 1119948 main.go:141] libmachine: (no-preload-843792) DBG | unable to find current IP address of domain no-preload-843792 in network mk-no-preload-843792
	I0729 19:44:35.752110 1119948 main.go:141] libmachine: (no-preload-843792) DBG | I0729 19:44:35.752021 1122027 retry.go:31] will retry after 1.166250352s: waiting for machine to come up
	I0729 19:44:36.919771 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:36.920240 1119948 main.go:141] libmachine: (no-preload-843792) DBG | unable to find current IP address of domain no-preload-843792 in network mk-no-preload-843792
	I0729 19:44:36.920271 1119948 main.go:141] libmachine: (no-preload-843792) DBG | I0729 19:44:36.920184 1122027 retry.go:31] will retry after 1.061620812s: waiting for machine to come up
	I0729 19:44:37.983051 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:37.983501 1119948 main.go:141] libmachine: (no-preload-843792) DBG | unable to find current IP address of domain no-preload-843792 in network mk-no-preload-843792
	I0729 19:44:37.983528 1119948 main.go:141] libmachine: (no-preload-843792) DBG | I0729 19:44:37.983453 1122027 retry.go:31] will retry after 1.814167152s: waiting for machine to come up
	I0729 19:44:36.140059 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:44:38.642436 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:44:37.873768 1120587 pod_ready.go:92] pod "coredns-7db6d8ff4d-8mccr" in "kube-system" namespace has status "Ready":"True"
	I0729 19:44:37.873792 1120587 pod_ready.go:81] duration metric: took 12.006637701s for pod "coredns-7db6d8ff4d-8mccr" in "kube-system" namespace to be "Ready" ...
	I0729 19:44:37.873804 1120587 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-024652" in "kube-system" namespace to be "Ready" ...
	I0729 19:44:37.879758 1120587 pod_ready.go:92] pod "etcd-default-k8s-diff-port-024652" in "kube-system" namespace has status "Ready":"True"
	I0729 19:44:37.879787 1120587 pod_ready.go:81] duration metric: took 5.974837ms for pod "etcd-default-k8s-diff-port-024652" in "kube-system" namespace to be "Ready" ...
	I0729 19:44:37.879799 1120587 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-024652" in "kube-system" namespace to be "Ready" ...
	I0729 19:44:37.885027 1120587 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-024652" in "kube-system" namespace has status "Ready":"True"
	I0729 19:44:37.885051 1120587 pod_ready.go:81] duration metric: took 5.244169ms for pod "kube-apiserver-default-k8s-diff-port-024652" in "kube-system" namespace to be "Ready" ...
	I0729 19:44:37.885064 1120587 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-024652" in "kube-system" namespace to be "Ready" ...
	I0729 19:44:37.890208 1120587 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-024652" in "kube-system" namespace has status "Ready":"True"
	I0729 19:44:37.890224 1120587 pod_ready.go:81] duration metric: took 5.152571ms for pod "kube-controller-manager-default-k8s-diff-port-024652" in "kube-system" namespace to be "Ready" ...
	I0729 19:44:37.890232 1120587 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-l4g78" in "kube-system" namespace to be "Ready" ...
	I0729 19:44:37.894663 1120587 pod_ready.go:92] pod "kube-proxy-l4g78" in "kube-system" namespace has status "Ready":"True"
	I0729 19:44:37.894682 1120587 pod_ready.go:81] duration metric: took 4.444758ms for pod "kube-proxy-l4g78" in "kube-system" namespace to be "Ready" ...
	I0729 19:44:37.894691 1120587 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-024652" in "kube-system" namespace to be "Ready" ...
	I0729 19:44:38.272098 1120587 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-024652" in "kube-system" namespace has status "Ready":"True"
	I0729 19:44:38.272127 1120587 pod_ready.go:81] duration metric: took 377.428879ms for pod "kube-scheduler-default-k8s-diff-port-024652" in "kube-system" namespace to be "Ready" ...
	I0729 19:44:38.272141 1120587 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace to be "Ready" ...
	I0729 19:44:40.279623 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:44:37.982782 1120970 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 19:44:37.994565 1120970 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0729 19:44:37.997227 1120970 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0729 19:44:37.997536 1120970 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0729 19:44:38.011221 1120970 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0729 19:44:38.028869 1120970 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0729 19:44:38.031221 1120970 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0729 19:44:38.054537 1120970 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0729 19:44:38.054599 1120970 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 19:44:38.054660 1120970 ssh_runner.go:195] Run: which crictl
	I0729 19:44:38.104843 1120970 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 19:44:38.182008 1120970 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0729 19:44:38.182064 1120970 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0729 19:44:38.182063 1120970 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0729 19:44:38.182113 1120970 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0729 19:44:38.182118 1120970 ssh_runner.go:195] Run: which crictl
	I0729 19:44:38.182161 1120970 ssh_runner.go:195] Run: which crictl
	I0729 19:44:38.190604 1120970 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0729 19:44:38.190629 1120970 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0729 19:44:38.190652 1120970 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0729 19:44:38.190663 1120970 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0729 19:44:38.190703 1120970 ssh_runner.go:195] Run: which crictl
	I0729 19:44:38.190710 1120970 ssh_runner.go:195] Run: which crictl
	I0729 19:44:38.197293 1120970 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0729 19:44:38.197328 1120970 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0729 19:44:38.197364 1120970 ssh_runner.go:195] Run: which crictl
	I0729 19:44:38.226035 1120970 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 19:44:38.228343 1120970 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0729 19:44:38.228420 1120970 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0729 19:44:38.228467 1120970 ssh_runner.go:195] Run: which crictl
	I0729 19:44:38.335524 1120970 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0729 19:44:38.335607 1120970 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0729 19:44:38.335627 1120970 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0729 19:44:38.335696 1120970 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0729 19:44:38.335705 1120970 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0729 19:44:38.335790 1120970 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 19:44:38.335866 1120970 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0729 19:44:38.483885 1120970 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0729 19:44:38.483976 1120970 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0729 19:44:38.483926 1120970 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0729 19:44:38.484028 1120970 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0729 19:44:38.487155 1120970 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0729 19:44:38.487223 1120970 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 19:44:38.487241 1120970 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0729 19:44:38.635433 1120970 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0729 19:44:38.649661 1120970 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0729 19:44:38.649751 1120970 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0729 19:44:38.649769 1120970 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0729 19:44:38.649831 1120970 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0729 19:44:38.649921 1120970 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0729 19:44:38.649958 1120970 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0729 19:44:38.783607 1120970 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0729 19:44:38.783694 1120970 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0729 19:44:38.783605 1120970 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0729 19:44:38.791756 1120970 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0729 19:44:38.791863 1120970 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0729 19:44:38.791892 1120970 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0729 19:44:38.791939 1120970 cache_images.go:92] duration metric: took 971.499203ms to LoadCachedImages
	W0729 19:44:38.792037 1120970 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
	I0729 19:44:38.792054 1120970 kubeadm.go:934] updating node { 192.168.39.65 8443 v1.20.0 crio true true} ...
	I0729 19:44:38.792200 1120970 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-021528 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.65
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-021528 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 19:44:38.792313 1120970 ssh_runner.go:195] Run: crio config
	I0729 19:44:38.841459 1120970 cni.go:84] Creating CNI manager for ""
	I0729 19:44:38.841484 1120970 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 19:44:38.841496 1120970 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 19:44:38.841515 1120970 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.65 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-021528 NodeName:old-k8s-version-021528 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.65"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.65 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0729 19:44:38.841678 1120970 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.65
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-021528"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.65
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.65"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
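	The kubeadm.yaml rendered above is the config that the later "kubeadm init phase ..." invocations consume once it has been copied to /var/tmp/minikube/kubeadm.yaml. As a rough sketch of that step (command shape copied from the Run lines further down in this log; the PATH prefix points kubeadm at the pinned v1.20.0 binaries):

	  sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" \
	    kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml
	  sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" \
	    kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml
	  sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" \
	    kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml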
	I0729 19:44:38.841743 1120970 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0729 19:44:38.852338 1120970 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 19:44:38.852412 1120970 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 19:44:38.862150 1120970 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0729 19:44:38.881108 1120970 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 19:44:38.899034 1120970 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0729 19:44:38.917965 1120970 ssh_runner.go:195] Run: grep 192.168.39.65	control-plane.minikube.internal$ /etc/hosts
	I0729 19:44:38.922064 1120970 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.65	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
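	The one-liner above, unrolled for readability (a minimal sketch of the same edit; /tmp/hosts.new is an arbitrary scratch name here, while the log shows minikube using /tmp/h.$$):

	  # Drop any stale control-plane.minikube.internal entry, append the current
	  # control-plane IP, then copy the result back over /etc/hosts.
	  grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts > /tmp/hosts.new
	  printf '192.168.39.65\tcontrol-plane.minikube.internal\n' >> /tmp/hosts.new
	  sudo cp /tmp/hosts.new /etc/hosts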
	I0729 19:44:38.935009 1120970 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 19:44:39.058886 1120970 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 19:44:39.078830 1120970 certs.go:68] Setting up /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/old-k8s-version-021528 for IP: 192.168.39.65
	I0729 19:44:39.078902 1120970 certs.go:194] generating shared ca certs ...
	I0729 19:44:39.078943 1120970 certs.go:226] acquiring lock for ca certs: {Name:mkd1f0b3d7e82ac23e713dd6b75409e103935b02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 19:44:39.079139 1120970 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.key
	I0729 19:44:39.079228 1120970 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/proxy-client-ca.key
	I0729 19:44:39.079243 1120970 certs.go:256] generating profile certs ...
	I0729 19:44:39.079418 1120970 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/old-k8s-version-021528/client.key
	I0729 19:44:39.079517 1120970 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/old-k8s-version-021528/apiserver.key.1bfec4c5
	I0729 19:44:39.079603 1120970 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/old-k8s-version-021528/proxy-client.key
	I0729 19:44:39.079814 1120970 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/1062272.pem (1338 bytes)
	W0729 19:44:39.079899 1120970 certs.go:480] ignoring /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/1062272_empty.pem, impossibly tiny 0 bytes
	I0729 19:44:39.079924 1120970 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 19:44:39.079974 1120970 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem (1082 bytes)
	I0729 19:44:39.080079 1120970 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/cert.pem (1123 bytes)
	I0729 19:44:39.080137 1120970 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/key.pem (1679 bytes)
	I0729 19:44:39.080230 1120970 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/files/etc/ssl/certs/10622722.pem (1708 bytes)
	I0729 19:44:39.081417 1120970 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 19:44:39.117623 1120970 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0729 19:44:39.163823 1120970 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 19:44:39.198978 1120970 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0729 19:44:39.229583 1120970 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/old-k8s-version-021528/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0729 19:44:39.270285 1120970 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/old-k8s-version-021528/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0729 19:44:39.320906 1120970 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/old-k8s-version-021528/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 19:44:39.358597 1120970 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/old-k8s-version-021528/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0729 19:44:39.384152 1120970 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/1062272.pem --> /usr/share/ca-certificates/1062272.pem (1338 bytes)
	I0729 19:44:39.409176 1120970 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/files/etc/ssl/certs/10622722.pem --> /usr/share/ca-certificates/10622722.pem (1708 bytes)
	I0729 19:44:39.434095 1120970 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 19:44:39.473901 1120970 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 19:44:39.493117 1120970 ssh_runner.go:195] Run: openssl version
	I0729 19:44:39.499390 1120970 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1062272.pem && ln -fs /usr/share/ca-certificates/1062272.pem /etc/ssl/certs/1062272.pem"
	I0729 19:44:39.513884 1120970 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1062272.pem
	I0729 19:44:39.519775 1120970 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 18:30 /usr/share/ca-certificates/1062272.pem
	I0729 19:44:39.519841 1120970 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1062272.pem
	I0729 19:44:39.526146 1120970 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1062272.pem /etc/ssl/certs/51391683.0"
	I0729 19:44:39.538303 1120970 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10622722.pem && ln -fs /usr/share/ca-certificates/10622722.pem /etc/ssl/certs/10622722.pem"
	I0729 19:44:39.549569 1120970 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10622722.pem
	I0729 19:44:39.554063 1120970 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 18:30 /usr/share/ca-certificates/10622722.pem
	I0729 19:44:39.554125 1120970 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10622722.pem
	I0729 19:44:39.560167 1120970 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10622722.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 19:44:39.572332 1120970 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 19:44:39.583635 1120970 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 19:44:39.588045 1120970 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 18:17 /usr/share/ca-certificates/minikubeCA.pem
	I0729 19:44:39.588126 1120970 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 19:44:39.594105 1120970 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
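	The /etc/ssl/certs/<hash>.0 link names in the commands above are the OpenSSL subject hashes of the respective certificates (b5213941 is the hash of minikubeCA.pem here), so an equivalent link could be created by hand along these lines (illustrative only):

	  # Derive the subject hash, then point the <hash>.0 symlink at the CA cert.
	  hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	  sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"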
	I0729 19:44:39.605557 1120970 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 19:44:39.610321 1120970 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 19:44:39.616786 1120970 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 19:44:39.622941 1120970 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 19:44:39.629109 1120970 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 19:44:39.636558 1120970 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 19:44:39.643073 1120970 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
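	openssl's -checkend N exits 0 only if the certificate remains valid for another N seconds, so each of the checks above is asking whether the cert survives the next 24 hours (86400 s). A hedged example of the same check run by hand:

	  # Succeeds (and prints the first message) only if the cert is valid for >24h.
	  sudo openssl x509 -noout -checkend 86400 \
	    -in /var/lib/minikube/certs/apiserver-kubelet-client.crt \
	    && echo "valid for >24h" || echo "expires within 24h"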
	I0729 19:44:39.648878 1120970 kubeadm.go:392] StartCluster: {Name:old-k8s-version-021528 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-021528 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.65 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 19:44:39.648982 1120970 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 19:44:39.649027 1120970 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 19:44:39.690983 1120970 cri.go:89] found id: ""
	I0729 19:44:39.691075 1120970 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 19:44:39.701985 1120970 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0729 19:44:39.702004 1120970 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0729 19:44:39.702052 1120970 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0729 19:44:39.712284 1120970 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0729 19:44:39.713416 1120970 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-021528" does not appear in /home/jenkins/minikube-integration/19312-1055011/kubeconfig
	I0729 19:44:39.714247 1120970 kubeconfig.go:62] /home/jenkins/minikube-integration/19312-1055011/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-021528" cluster setting kubeconfig missing "old-k8s-version-021528" context setting]
	I0729 19:44:39.715298 1120970 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-1055011/kubeconfig: {Name:mkf834b33d9b214f3561db5b8f8958d26700afbb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 19:44:39.762122 1120970 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0729 19:44:39.773851 1120970 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.65
	I0729 19:44:39.773894 1120970 kubeadm.go:1160] stopping kube-system containers ...
	I0729 19:44:39.773910 1120970 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0729 19:44:39.773968 1120970 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 19:44:39.820190 1120970 cri.go:89] found id: ""
	I0729 19:44:39.820273 1120970 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0729 19:44:39.838497 1120970 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 19:44:39.849060 1120970 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 19:44:39.849087 1120970 kubeadm.go:157] found existing configuration files:
	
	I0729 19:44:39.849142 1120970 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 19:44:39.858834 1120970 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 19:44:39.858920 1120970 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 19:44:39.869962 1120970 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 19:44:39.879690 1120970 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 19:44:39.879754 1120970 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 19:44:39.889334 1120970 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 19:44:39.900671 1120970 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 19:44:39.900789 1120970 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 19:44:39.910365 1120970 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 19:44:39.920056 1120970 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 19:44:39.920119 1120970 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 19:44:39.929792 1120970 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 19:44:39.939719 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 19:44:40.078003 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 19:44:40.827477 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0729 19:44:41.064614 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 19:44:41.168296 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0729 19:44:41.280875 1120970 api_server.go:52] waiting for apiserver process to appear ...
	I0729 19:44:41.280964 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:41.781878 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:42.281683 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:42.781105 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:39.799833 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:39.800226 1119948 main.go:141] libmachine: (no-preload-843792) DBG | unable to find current IP address of domain no-preload-843792 in network mk-no-preload-843792
	I0729 19:44:39.800256 1119948 main.go:141] libmachine: (no-preload-843792) DBG | I0729 19:44:39.800187 1122027 retry.go:31] will retry after 1.661406441s: waiting for machine to come up
	I0729 19:44:41.464164 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:41.464664 1119948 main.go:141] libmachine: (no-preload-843792) DBG | unable to find current IP address of domain no-preload-843792 in network mk-no-preload-843792
	I0729 19:44:41.464704 1119948 main.go:141] libmachine: (no-preload-843792) DBG | I0729 19:44:41.464586 1122027 retry.go:31] will retry after 2.292148862s: waiting for machine to come up
	I0729 19:44:41.139627 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:44:43.640525 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:44:42.780035 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:44:45.278957 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:44:43.281753 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:43.781580 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:44.281856 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:44.781202 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:45.281035 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:45.781637 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:46.281414 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:46.781327 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:47.281665 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:47.782033 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:43.759566 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:43.760021 1119948 main.go:141] libmachine: (no-preload-843792) DBG | unable to find current IP address of domain no-preload-843792 in network mk-no-preload-843792
	I0729 19:44:43.760080 1119948 main.go:141] libmachine: (no-preload-843792) DBG | I0729 19:44:43.759994 1122027 retry.go:31] will retry after 3.005985721s: waiting for machine to come up
	I0729 19:44:46.767337 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:46.767822 1119948 main.go:141] libmachine: (no-preload-843792) DBG | unable to find current IP address of domain no-preload-843792 in network mk-no-preload-843792
	I0729 19:44:46.767852 1119948 main.go:141] libmachine: (no-preload-843792) DBG | I0729 19:44:46.767767 1122027 retry.go:31] will retry after 3.516453969s: waiting for machine to come up
	I0729 19:44:46.138988 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:44:48.637828 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:44:47.778809 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:44:50.278817 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:44:48.281371 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:48.781991 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:49.281260 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:49.782025 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:50.281498 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:50.781863 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:51.281653 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:51.781015 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:52.281638 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:52.782023 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:50.287884 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:50.288381 1119948 main.go:141] libmachine: (no-preload-843792) Found IP for machine: 192.168.50.248
	I0729 19:44:50.288402 1119948 main.go:141] libmachine: (no-preload-843792) Reserving static IP address...
	I0729 19:44:50.288417 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has current primary IP address 192.168.50.248 and MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:50.288858 1119948 main.go:141] libmachine: (no-preload-843792) DBG | found host DHCP lease matching {name: "no-preload-843792", mac: "52:54:00:ae:0e:8c", ip: "192.168.50.248"} in network mk-no-preload-843792: {Iface:virbr2 ExpiryTime:2024-07-29 20:44:43 +0000 UTC Type:0 Mac:52:54:00:ae:0e:8c Iaid: IPaddr:192.168.50.248 Prefix:24 Hostname:no-preload-843792 Clientid:01:52:54:00:ae:0e:8c}
	I0729 19:44:50.288891 1119948 main.go:141] libmachine: (no-preload-843792) DBG | skip adding static IP to network mk-no-preload-843792 - found existing host DHCP lease matching {name: "no-preload-843792", mac: "52:54:00:ae:0e:8c", ip: "192.168.50.248"}
	I0729 19:44:50.288905 1119948 main.go:141] libmachine: (no-preload-843792) Reserved static IP address: 192.168.50.248
	I0729 19:44:50.288921 1119948 main.go:141] libmachine: (no-preload-843792) Waiting for SSH to be available...
	I0729 19:44:50.288937 1119948 main.go:141] libmachine: (no-preload-843792) DBG | Getting to WaitForSSH function...
	I0729 19:44:50.291447 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:50.291802 1119948 main.go:141] libmachine: (no-preload-843792) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:0e:8c", ip: ""} in network mk-no-preload-843792: {Iface:virbr2 ExpiryTime:2024-07-29 20:44:43 +0000 UTC Type:0 Mac:52:54:00:ae:0e:8c Iaid: IPaddr:192.168.50.248 Prefix:24 Hostname:no-preload-843792 Clientid:01:52:54:00:ae:0e:8c}
	I0729 19:44:50.291831 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined IP address 192.168.50.248 and MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:50.291992 1119948 main.go:141] libmachine: (no-preload-843792) DBG | Using SSH client type: external
	I0729 19:44:50.292026 1119948 main.go:141] libmachine: (no-preload-843792) DBG | Using SSH private key: /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/no-preload-843792/id_rsa (-rw-------)
	I0729 19:44:50.292056 1119948 main.go:141] libmachine: (no-preload-843792) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.248 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/no-preload-843792/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 19:44:50.292075 1119948 main.go:141] libmachine: (no-preload-843792) DBG | About to run SSH command:
	I0729 19:44:50.292089 1119948 main.go:141] libmachine: (no-preload-843792) DBG | exit 0
	I0729 19:44:50.419030 1119948 main.go:141] libmachine: (no-preload-843792) DBG | SSH cmd err, output: <nil>: 
	I0729 19:44:50.419420 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetConfigRaw
	I0729 19:44:50.420149 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetIP
	I0729 19:44:50.422461 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:50.422860 1119948 main.go:141] libmachine: (no-preload-843792) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:0e:8c", ip: ""} in network mk-no-preload-843792: {Iface:virbr2 ExpiryTime:2024-07-29 20:44:43 +0000 UTC Type:0 Mac:52:54:00:ae:0e:8c Iaid: IPaddr:192.168.50.248 Prefix:24 Hostname:no-preload-843792 Clientid:01:52:54:00:ae:0e:8c}
	I0729 19:44:50.422897 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined IP address 192.168.50.248 and MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:50.423068 1119948 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/no-preload-843792/config.json ...
	I0729 19:44:50.423254 1119948 machine.go:94] provisionDockerMachine start ...
	I0729 19:44:50.423273 1119948 main.go:141] libmachine: (no-preload-843792) Calling .DriverName
	I0729 19:44:50.423513 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHHostname
	I0729 19:44:50.425759 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:50.425996 1119948 main.go:141] libmachine: (no-preload-843792) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:0e:8c", ip: ""} in network mk-no-preload-843792: {Iface:virbr2 ExpiryTime:2024-07-29 20:44:43 +0000 UTC Type:0 Mac:52:54:00:ae:0e:8c Iaid: IPaddr:192.168.50.248 Prefix:24 Hostname:no-preload-843792 Clientid:01:52:54:00:ae:0e:8c}
	I0729 19:44:50.426033 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined IP address 192.168.50.248 and MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:50.426136 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHPort
	I0729 19:44:50.426323 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHKeyPath
	I0729 19:44:50.426493 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHKeyPath
	I0729 19:44:50.426682 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHUsername
	I0729 19:44:50.426889 1119948 main.go:141] libmachine: Using SSH client type: native
	I0729 19:44:50.427107 1119948 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.248 22 <nil> <nil>}
	I0729 19:44:50.427119 1119948 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 19:44:50.539215 1119948 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0729 19:44:50.539250 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetMachineName
	I0729 19:44:50.539523 1119948 buildroot.go:166] provisioning hostname "no-preload-843792"
	I0729 19:44:50.539553 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetMachineName
	I0729 19:44:50.539755 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHHostname
	I0729 19:44:50.542621 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:50.543007 1119948 main.go:141] libmachine: (no-preload-843792) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:0e:8c", ip: ""} in network mk-no-preload-843792: {Iface:virbr2 ExpiryTime:2024-07-29 20:44:43 +0000 UTC Type:0 Mac:52:54:00:ae:0e:8c Iaid: IPaddr:192.168.50.248 Prefix:24 Hostname:no-preload-843792 Clientid:01:52:54:00:ae:0e:8c}
	I0729 19:44:50.543036 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined IP address 192.168.50.248 and MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:50.543188 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHPort
	I0729 19:44:50.543365 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHKeyPath
	I0729 19:44:50.543574 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHKeyPath
	I0729 19:44:50.543751 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHUsername
	I0729 19:44:50.543900 1119948 main.go:141] libmachine: Using SSH client type: native
	I0729 19:44:50.544060 1119948 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.248 22 <nil> <nil>}
	I0729 19:44:50.544072 1119948 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-843792 && echo "no-preload-843792" | sudo tee /etc/hostname
	I0729 19:44:50.669012 1119948 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-843792
	
	I0729 19:44:50.669054 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHHostname
	I0729 19:44:50.671768 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:50.672075 1119948 main.go:141] libmachine: (no-preload-843792) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:0e:8c", ip: ""} in network mk-no-preload-843792: {Iface:virbr2 ExpiryTime:2024-07-29 20:44:43 +0000 UTC Type:0 Mac:52:54:00:ae:0e:8c Iaid: IPaddr:192.168.50.248 Prefix:24 Hostname:no-preload-843792 Clientid:01:52:54:00:ae:0e:8c}
	I0729 19:44:50.672105 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined IP address 192.168.50.248 and MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:50.672278 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHPort
	I0729 19:44:50.672481 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHKeyPath
	I0729 19:44:50.672647 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHKeyPath
	I0729 19:44:50.672734 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHUsername
	I0729 19:44:50.672904 1119948 main.go:141] libmachine: Using SSH client type: native
	I0729 19:44:50.673077 1119948 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.248 22 <nil> <nil>}
	I0729 19:44:50.673091 1119948 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-843792' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-843792/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-843792' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 19:44:50.796568 1119948 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 19:44:50.796605 1119948 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19312-1055011/.minikube CaCertPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19312-1055011/.minikube}
	I0729 19:44:50.796625 1119948 buildroot.go:174] setting up certificates
	I0729 19:44:50.796639 1119948 provision.go:84] configureAuth start
	I0729 19:44:50.796648 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetMachineName
	I0729 19:44:50.796934 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetIP
	I0729 19:44:50.799731 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:50.800044 1119948 main.go:141] libmachine: (no-preload-843792) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:0e:8c", ip: ""} in network mk-no-preload-843792: {Iface:virbr2 ExpiryTime:2024-07-29 20:44:43 +0000 UTC Type:0 Mac:52:54:00:ae:0e:8c Iaid: IPaddr:192.168.50.248 Prefix:24 Hostname:no-preload-843792 Clientid:01:52:54:00:ae:0e:8c}
	I0729 19:44:50.800071 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined IP address 192.168.50.248 and MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:50.800263 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHHostname
	I0729 19:44:50.802572 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:50.802922 1119948 main.go:141] libmachine: (no-preload-843792) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:0e:8c", ip: ""} in network mk-no-preload-843792: {Iface:virbr2 ExpiryTime:2024-07-29 20:44:43 +0000 UTC Type:0 Mac:52:54:00:ae:0e:8c Iaid: IPaddr:192.168.50.248 Prefix:24 Hostname:no-preload-843792 Clientid:01:52:54:00:ae:0e:8c}
	I0729 19:44:50.802955 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined IP address 192.168.50.248 and MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:50.803085 1119948 provision.go:143] copyHostCerts
	I0729 19:44:50.803156 1119948 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.pem, removing ...
	I0729 19:44:50.803170 1119948 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.pem
	I0729 19:44:50.803225 1119948 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.pem (1082 bytes)
	I0729 19:44:50.803347 1119948 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-1055011/.minikube/cert.pem, removing ...
	I0729 19:44:50.803355 1119948 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-1055011/.minikube/cert.pem
	I0729 19:44:50.803379 1119948 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19312-1055011/.minikube/cert.pem (1123 bytes)
	I0729 19:44:50.803438 1119948 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-1055011/.minikube/key.pem, removing ...
	I0729 19:44:50.803445 1119948 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-1055011/.minikube/key.pem
	I0729 19:44:50.803461 1119948 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19312-1055011/.minikube/key.pem (1679 bytes)
	I0729 19:44:50.803524 1119948 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca-key.pem org=jenkins.no-preload-843792 san=[127.0.0.1 192.168.50.248 localhost minikube no-preload-843792]
	I0729 19:44:51.214202 1119948 provision.go:177] copyRemoteCerts
	I0729 19:44:51.214287 1119948 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 19:44:51.214320 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHHostname
	I0729 19:44:51.216944 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:51.217214 1119948 main.go:141] libmachine: (no-preload-843792) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:0e:8c", ip: ""} in network mk-no-preload-843792: {Iface:virbr2 ExpiryTime:2024-07-29 20:44:43 +0000 UTC Type:0 Mac:52:54:00:ae:0e:8c Iaid: IPaddr:192.168.50.248 Prefix:24 Hostname:no-preload-843792 Clientid:01:52:54:00:ae:0e:8c}
	I0729 19:44:51.217237 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined IP address 192.168.50.248 and MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:51.217360 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHPort
	I0729 19:44:51.217563 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHKeyPath
	I0729 19:44:51.217732 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHUsername
	I0729 19:44:51.217891 1119948 sshutil.go:53] new ssh client: &{IP:192.168.50.248 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/no-preload-843792/id_rsa Username:docker}
	I0729 19:44:51.301968 1119948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0729 19:44:51.328160 1119948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0729 19:44:51.353256 1119948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0729 19:44:51.378426 1119948 provision.go:87] duration metric: took 581.77356ms to configureAuth
	I0729 19:44:51.378457 1119948 buildroot.go:189] setting minikube options for container-runtime
	I0729 19:44:51.378660 1119948 config.go:182] Loaded profile config "no-preload-843792": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0729 19:44:51.378746 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHHostname
	I0729 19:44:51.381760 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:51.382286 1119948 main.go:141] libmachine: (no-preload-843792) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:0e:8c", ip: ""} in network mk-no-preload-843792: {Iface:virbr2 ExpiryTime:2024-07-29 20:44:43 +0000 UTC Type:0 Mac:52:54:00:ae:0e:8c Iaid: IPaddr:192.168.50.248 Prefix:24 Hostname:no-preload-843792 Clientid:01:52:54:00:ae:0e:8c}
	I0729 19:44:51.382308 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined IP address 192.168.50.248 and MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:51.382555 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHPort
	I0729 19:44:51.382787 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHKeyPath
	I0729 19:44:51.383071 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHKeyPath
	I0729 19:44:51.383230 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHUsername
	I0729 19:44:51.383438 1119948 main.go:141] libmachine: Using SSH client type: native
	I0729 19:44:51.383649 1119948 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.248 22 <nil> <nil>}
	I0729 19:44:51.383673 1119948 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 19:44:51.650635 1119948 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 19:44:51.650669 1119948 machine.go:97] duration metric: took 1.227400866s to provisionDockerMachine
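A note on the "%!s(MISSING)" tokens that appear in logged commands here and later in this log (printf %!s(MISSING), date +%!s(MISSING).%!N(MISSING), stat -c "%!s(MISSING) %!y(MISSING)"): the commands actually sent to the guest contain literal shell and coreutils verbs such as %s, %N and %y; the (MISSING) decoration is how Go's fmt package renders an unmatched verb when the command string is echoed back through a printf-style logger. The outputs confirm this, for example "date +%s.%N" returns 1722282291.842464810 a few lines below. A one-line demonstration of the fmt behavior:

    package main

    import "fmt"

    func main() {
        // A format verb with no matching argument is rendered as %!s(MISSING),
        // which is exactly the decoration seen in the logged shell commands.
        fmt.Printf("sudo mkdir -p /etc/sysconfig && printf %s \"...\"\n")
        // Output: sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "..."
    }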
	I0729 19:44:51.650686 1119948 start.go:293] postStartSetup for "no-preload-843792" (driver="kvm2")
	I0729 19:44:51.650704 1119948 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 19:44:51.650733 1119948 main.go:141] libmachine: (no-preload-843792) Calling .DriverName
	I0729 19:44:51.651068 1119948 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 19:44:51.651098 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHHostname
	I0729 19:44:51.653656 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:51.654044 1119948 main.go:141] libmachine: (no-preload-843792) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:0e:8c", ip: ""} in network mk-no-preload-843792: {Iface:virbr2 ExpiryTime:2024-07-29 20:44:43 +0000 UTC Type:0 Mac:52:54:00:ae:0e:8c Iaid: IPaddr:192.168.50.248 Prefix:24 Hostname:no-preload-843792 Clientid:01:52:54:00:ae:0e:8c}
	I0729 19:44:51.654075 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined IP address 192.168.50.248 and MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:51.654215 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHPort
	I0729 19:44:51.654414 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHKeyPath
	I0729 19:44:51.654603 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHUsername
	I0729 19:44:51.654783 1119948 sshutil.go:53] new ssh client: &{IP:192.168.50.248 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/no-preload-843792/id_rsa Username:docker}
	I0729 19:44:51.738250 1119948 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 19:44:51.742463 1119948 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 19:44:51.742494 1119948 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-1055011/.minikube/addons for local assets ...
	I0729 19:44:51.742575 1119948 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-1055011/.minikube/files for local assets ...
	I0729 19:44:51.742670 1119948 filesync.go:149] local asset: /home/jenkins/minikube-integration/19312-1055011/.minikube/files/etc/ssl/certs/10622722.pem -> 10622722.pem in /etc/ssl/certs
	I0729 19:44:51.742762 1119948 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 19:44:51.752428 1119948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/files/etc/ssl/certs/10622722.pem --> /etc/ssl/certs/10622722.pem (1708 bytes)
	I0729 19:44:51.778026 1119948 start.go:296] duration metric: took 127.323599ms for postStartSetup
	I0729 19:44:51.778070 1119948 fix.go:56] duration metric: took 20.206081869s for fixHost
	I0729 19:44:51.778101 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHHostname
	I0729 19:44:51.780831 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:51.781222 1119948 main.go:141] libmachine: (no-preload-843792) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:0e:8c", ip: ""} in network mk-no-preload-843792: {Iface:virbr2 ExpiryTime:2024-07-29 20:44:43 +0000 UTC Type:0 Mac:52:54:00:ae:0e:8c Iaid: IPaddr:192.168.50.248 Prefix:24 Hostname:no-preload-843792 Clientid:01:52:54:00:ae:0e:8c}
	I0729 19:44:51.781264 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined IP address 192.168.50.248 and MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:51.781433 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHPort
	I0729 19:44:51.781634 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHKeyPath
	I0729 19:44:51.781807 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHKeyPath
	I0729 19:44:51.781978 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHUsername
	I0729 19:44:51.782165 1119948 main.go:141] libmachine: Using SSH client type: native
	I0729 19:44:51.782343 1119948 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.248 22 <nil> <nil>}
	I0729 19:44:51.782354 1119948 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0729 19:44:51.891547 1119948 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722282291.842464810
	
	I0729 19:44:51.891577 1119948 fix.go:216] guest clock: 1722282291.842464810
	I0729 19:44:51.891585 1119948 fix.go:229] Guest: 2024-07-29 19:44:51.84246481 +0000 UTC Remote: 2024-07-29 19:44:51.778076789 +0000 UTC m=+358.114888914 (delta=64.388021ms)
	I0729 19:44:51.891637 1119948 fix.go:200] guest clock delta is within tolerance: 64.388021ms
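The fix.go lines above read the guest clock over SSH, compare it with the host's wall clock, and skip resynchronization because the 64.388021ms delta is within tolerance. A small illustrative check in Go, using the values from the log; the one-second tolerance here is an assumption for the example, not minikube's documented threshold:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        guest := time.Unix(1722282291, 842464810)        // parsed from "date +%s.%N" on the guest
        remote := guest.Add(-64388021 * time.Nanosecond) // host-side timestamp from the log
        delta := guest.Sub(remote)
        if delta < 0 {
            delta = -delta
        }
        const tolerance = time.Second // assumed threshold, for illustration only
        fmt.Printf("delta=%v within tolerance=%v\n", delta, delta <= tolerance)
    }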
	I0729 19:44:51.891648 1119948 start.go:83] releasing machines lock for "no-preload-843792", held for 20.319710656s
	I0729 19:44:51.891677 1119948 main.go:141] libmachine: (no-preload-843792) Calling .DriverName
	I0729 19:44:51.891952 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetIP
	I0729 19:44:51.894800 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:51.895181 1119948 main.go:141] libmachine: (no-preload-843792) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:0e:8c", ip: ""} in network mk-no-preload-843792: {Iface:virbr2 ExpiryTime:2024-07-29 20:44:43 +0000 UTC Type:0 Mac:52:54:00:ae:0e:8c Iaid: IPaddr:192.168.50.248 Prefix:24 Hostname:no-preload-843792 Clientid:01:52:54:00:ae:0e:8c}
	I0729 19:44:51.895216 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined IP address 192.168.50.248 and MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:51.895390 1119948 main.go:141] libmachine: (no-preload-843792) Calling .DriverName
	I0729 19:44:51.895840 1119948 main.go:141] libmachine: (no-preload-843792) Calling .DriverName
	I0729 19:44:51.896042 1119948 main.go:141] libmachine: (no-preload-843792) Calling .DriverName
	I0729 19:44:51.896139 1119948 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 19:44:51.896192 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHHostname
	I0729 19:44:51.896258 1119948 ssh_runner.go:195] Run: cat /version.json
	I0729 19:44:51.896287 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHHostname
	I0729 19:44:51.898856 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:51.899180 1119948 main.go:141] libmachine: (no-preload-843792) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:0e:8c", ip: ""} in network mk-no-preload-843792: {Iface:virbr2 ExpiryTime:2024-07-29 20:44:43 +0000 UTC Type:0 Mac:52:54:00:ae:0e:8c Iaid: IPaddr:192.168.50.248 Prefix:24 Hostname:no-preload-843792 Clientid:01:52:54:00:ae:0e:8c}
	I0729 19:44:51.899208 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined IP address 192.168.50.248 and MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:51.899261 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:51.899313 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHPort
	I0729 19:44:51.899474 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHKeyPath
	I0729 19:44:51.899638 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHUsername
	I0729 19:44:51.899716 1119948 main.go:141] libmachine: (no-preload-843792) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:0e:8c", ip: ""} in network mk-no-preload-843792: {Iface:virbr2 ExpiryTime:2024-07-29 20:44:43 +0000 UTC Type:0 Mac:52:54:00:ae:0e:8c Iaid: IPaddr:192.168.50.248 Prefix:24 Hostname:no-preload-843792 Clientid:01:52:54:00:ae:0e:8c}
	I0729 19:44:51.899742 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined IP address 192.168.50.248 and MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:51.899815 1119948 sshutil.go:53] new ssh client: &{IP:192.168.50.248 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/no-preload-843792/id_rsa Username:docker}
	I0729 19:44:51.899865 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHPort
	I0729 19:44:51.900009 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHKeyPath
	I0729 19:44:51.900149 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHUsername
	I0729 19:44:51.900317 1119948 sshutil.go:53] new ssh client: &{IP:192.168.50.248 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/no-preload-843792/id_rsa Username:docker}
	I0729 19:44:51.979915 1119948 ssh_runner.go:195] Run: systemctl --version
	I0729 19:44:52.002705 1119948 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 19:44:52.146695 1119948 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 19:44:52.152507 1119948 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 19:44:52.152566 1119948 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 19:44:52.169058 1119948 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 19:44:52.169085 1119948 start.go:495] detecting cgroup driver to use...
	I0729 19:44:52.169148 1119948 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 19:44:52.185675 1119948 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 19:44:52.204654 1119948 docker.go:217] disabling cri-docker service (if available) ...
	I0729 19:44:52.204719 1119948 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 19:44:52.221485 1119948 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 19:44:52.235452 1119948 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 19:44:52.353806 1119948 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 19:44:52.504237 1119948 docker.go:233] disabling docker service ...
	I0729 19:44:52.504314 1119948 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 19:44:52.520145 1119948 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 19:44:52.533007 1119948 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 19:44:52.662886 1119948 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 19:44:52.795773 1119948 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 19:44:52.810135 1119948 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 19:44:52.829290 1119948 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0729 19:44:52.829356 1119948 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:44:52.840657 1119948 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 19:44:52.840718 1119948 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:44:52.851174 1119948 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:44:52.861565 1119948 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:44:52.871901 1119948 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 19:44:52.882929 1119948 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:44:52.893517 1119948 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:44:52.910321 1119948 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:44:52.920773 1119948 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 19:44:52.930425 1119948 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 19:44:52.930467 1119948 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 19:44:52.943382 1119948 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 19:44:52.953528 1119948 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 19:44:53.086573 1119948 ssh_runner.go:195] Run: sudo systemctl restart crio
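The preceding block edits /etc/crio/crio.conf.d/02-crio.conf in place (pause image, cgroup_manager, conmon_cgroup, the unprivileged-port sysctl), loads br_netfilter when the sysctl is missing, and then restarts CRI-O. An in-memory Go sketch of the same replace-or-leave substitutions the logged sed commands perform; the sample TOML content is illustrative only:

    package main

    import (
        "fmt"
        "regexp"
    )

    func main() {
        conf := "[crio.image]\n" +
            "pause_image = \"registry.k8s.io/pause:3.9\"\n\n" +
            "[crio.runtime]\n" +
            "cgroup_manager = \"systemd\"\n"

        // Same substitutions as the logged sed invocations.
        conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
            ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)
        conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
            ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)

        fmt.Print(conf)
    }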
	I0729 19:44:53.222264 1119948 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 19:44:53.222358 1119948 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 19:44:53.227019 1119948 start.go:563] Will wait 60s for crictl version
	I0729 19:44:53.227079 1119948 ssh_runner.go:195] Run: which crictl
	I0729 19:44:53.230920 1119948 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 19:44:53.271242 1119948 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 19:44:53.271338 1119948 ssh_runner.go:195] Run: crio --version
	I0729 19:44:53.301110 1119948 ssh_runner.go:195] Run: crio --version
	I0729 19:44:53.333725 1119948 out.go:177] * Preparing Kubernetes v1.31.0-beta.0 on CRI-O 1.29.1 ...
	I0729 19:44:53.334659 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetIP
	I0729 19:44:53.337115 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:53.337559 1119948 main.go:141] libmachine: (no-preload-843792) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:0e:8c", ip: ""} in network mk-no-preload-843792: {Iface:virbr2 ExpiryTime:2024-07-29 20:44:43 +0000 UTC Type:0 Mac:52:54:00:ae:0e:8c Iaid: IPaddr:192.168.50.248 Prefix:24 Hostname:no-preload-843792 Clientid:01:52:54:00:ae:0e:8c}
	I0729 19:44:53.337593 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined IP address 192.168.50.248 and MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:53.337844 1119948 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0729 19:44:53.341989 1119948 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 19:44:53.355060 1119948 kubeadm.go:883] updating cluster {Name:no-preload-843792 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-843792 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.248 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 19:44:53.355229 1119948 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0729 19:44:53.355288 1119948 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 19:44:53.388980 1119948 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0-beta.0". assuming images are not preloaded.
	I0729 19:44:53.389006 1119948 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0-beta.0 registry.k8s.io/kube-controller-manager:v1.31.0-beta.0 registry.k8s.io/kube-scheduler:v1.31.0-beta.0 registry.k8s.io/kube-proxy:v1.31.0-beta.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.14-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0729 19:44:53.389048 1119948 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 19:44:53.389101 1119948 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0729 19:44:53.389112 1119948 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0729 19:44:53.389137 1119948 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.14-0
	I0729 19:44:53.389119 1119948 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0729 19:44:53.389271 1119948 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0729 19:44:53.389350 1119948 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0729 19:44:53.389605 1119948 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0729 19:44:53.390514 1119948 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0729 19:44:53.390570 1119948 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0729 19:44:53.390602 1119948 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0729 19:44:53.390527 1119948 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 19:44:53.390706 1119948 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0729 19:44:53.390732 1119948 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0729 19:44:53.390767 1119948 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.14-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.14-0
	I0729 19:44:53.391084 1119948 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0729 19:44:53.549235 1119948 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0729 19:44:53.572353 1119948 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0729 19:44:53.579226 1119948 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0729 19:44:53.596966 1119948 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0729 19:44:53.609083 1119948 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0729 19:44:53.616167 1119948 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.14-0
	I0729 19:44:53.618946 1119948 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" does not exist at hash "63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5" in container runtime
	I0729 19:44:53.618985 1119948 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0729 19:44:53.619029 1119948 ssh_runner.go:195] Run: which crictl
	I0729 19:44:53.635187 1119948 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0729 19:44:53.670750 1119948 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0729 19:44:53.670796 1119948 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0729 19:44:53.670859 1119948 ssh_runner.go:195] Run: which crictl
	I0729 19:44:53.672585 1119948 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" does not exist at hash "d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b" in container runtime
	I0729 19:44:53.672626 1119948 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0729 19:44:53.672669 1119948 ssh_runner.go:195] Run: which crictl
	I0729 19:44:53.695596 1119948 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" does not exist at hash "f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938" in container runtime
	I0729 19:44:53.695640 1119948 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0729 19:44:53.695685 1119948 ssh_runner.go:195] Run: which crictl
	I0729 19:44:51.138015 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:44:53.638298 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:44:52.279881 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:44:54.778657 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:44:53.281345 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:53.781221 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:54.281939 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:54.781091 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:55.281282 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:55.781375 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:56.282072 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:56.781207 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:57.281436 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:57.781372 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:53.720675 1119948 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 19:44:53.840593 1119948 cache_images.go:116] "registry.k8s.io/etcd:3.5.14-0" needs transfer: "registry.k8s.io/etcd:3.5.14-0" does not exist at hash "cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa" in container runtime
	I0729 19:44:53.840643 1119948 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.14-0
	I0729 19:44:53.840672 1119948 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0729 19:44:53.840687 1119948 ssh_runner.go:195] Run: which crictl
	I0729 19:44:53.840775 1119948 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0-beta.0" does not exist at hash "c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899" in container runtime
	I0729 19:44:53.840812 1119948 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0729 19:44:53.840821 1119948 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0729 19:44:53.840857 1119948 ssh_runner.go:195] Run: which crictl
	I0729 19:44:53.840879 1119948 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0729 19:44:53.840923 1119948 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0729 19:44:53.840940 1119948 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 19:44:53.840957 1119948 ssh_runner.go:195] Run: which crictl
	I0729 19:44:53.840924 1119948 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0729 19:44:53.918733 1119948 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0729 19:44:53.918808 1119948 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.14-0
	I0729 19:44:53.918822 1119948 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0729 19:44:53.918738 1119948 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0729 19:44:53.918756 1119948 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 19:44:53.934123 1119948 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0729 19:44:53.934149 1119948 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0729 19:44:54.071240 1119948 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.14-0
	I0729 19:44:54.071240 1119948 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0729 19:44:54.071338 1119948 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0729 19:44:54.071326 1119948 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0729 19:44:54.071427 1119948 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 19:44:54.093839 1119948 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0729 19:44:54.093863 1119948 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0729 19:44:54.210655 1119948 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0
	I0729 19:44:54.210775 1119948 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0729 19:44:54.212134 1119948 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.14-0
	I0729 19:44:54.217809 1119948 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0
	I0729 19:44:54.217912 1119948 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 19:44:54.217935 1119948 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0729 19:44:54.218206 1119948 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0
	I0729 19:44:54.218301 1119948 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0729 19:44:54.260623 1119948 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0 (exists)
	I0729 19:44:54.260652 1119948 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0729 19:44:54.260652 1119948 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0729 19:44:54.260686 1119948 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0729 19:44:54.260778 1119948 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0729 19:44:54.260865 1119948 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0729 19:44:54.306379 1119948 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0729 19:44:54.306385 1119948 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0 (exists)
	I0729 19:44:54.306392 1119948 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0 (exists)
	I0729 19:44:54.306493 1119948 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0729 19:44:54.306689 1119948 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0
	I0729 19:44:54.306778 1119948 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.14-0
	I0729 19:44:56.574611 1119948 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0: (2.313899996s)
	I0729 19:44:56.574645 1119948 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 from cache
	I0729 19:44:56.574650 1119948 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1: (2.313771552s)
	I0729 19:44:56.574670 1119948 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0729 19:44:56.574611 1119948 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0-beta.0: (2.313935705s)
	I0729 19:44:56.574683 1119948 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0729 19:44:56.574705 1119948 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: (2.268197753s)
	I0729 19:44:56.574716 1119948 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0
	I0729 19:44:56.574719 1119948 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0729 19:44:56.574722 1119948 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0729 19:44:56.574739 1119948 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.14-0: (2.267948475s)
	I0729 19:44:56.574750 1119948 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.14-0 (exists)
	I0729 19:44:56.574796 1119948 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0729 19:44:58.641782 1119948 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0: (2.067036887s)
	I0729 19:44:58.641818 1119948 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 from cache
	I0729 19:44:58.641845 1119948 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0729 19:44:58.641846 1119948 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0: (2.0670173s)
	I0729 19:44:58.641878 1119948 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0 (exists)
	I0729 19:44:58.641896 1119948 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0729 19:44:56.140488 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:44:58.637284 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:44:57.279852 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:44:59.777891 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:44:58.281852 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:58.781637 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:59.281892 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:59.781645 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:00.281405 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:00.782060 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:01.281396 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:01.781327 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:02.281709 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:02.781786 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:00.096431 1119948 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0: (1.454505335s)
	I0729 19:45:00.096482 1119948 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 from cache
	I0729 19:45:00.096522 1119948 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0729 19:45:00.096568 1119948 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0729 19:45:01.962972 1119948 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.866379068s)
	I0729 19:45:01.963000 1119948 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0729 19:45:01.963026 1119948 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0729 19:45:01.963078 1119948 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0729 19:45:02.916627 1119948 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0729 19:45:02.916678 1119948 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.14-0
	I0729 19:45:02.916735 1119948 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0
	I0729 19:45:00.638676 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:03.137885 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:01.779615 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:04.279431 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:03.281567 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:03.781335 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:04.281681 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:04.781803 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:05.281115 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:05.781161 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:06.281699 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:06.781869 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:07.281182 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:07.781016 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:06.397189 1119948 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0: (3.480421154s)
	I0729 19:45:06.397236 1119948 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0 from cache
	I0729 19:45:06.397280 1119948 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0729 19:45:06.397357 1119948 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0729 19:45:08.272053 1119948 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0: (1.874662469s)
	I0729 19:45:08.272086 1119948 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 from cache
	I0729 19:45:08.272116 1119948 cache_images.go:123] Successfully loaded all cached images
	I0729 19:45:08.272123 1119948 cache_images.go:92] duration metric: took 14.883104578s to LoadCachedImages
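The image-cache phase above follows one pattern per image: inspect the runtime for the expected image, and if it is absent (or present under a different hash), remove the stale tag with crictl and load the cached tarball with podman. Below is a simplified Go sketch of that flow; the hash comparison and the scp transfer step are omitted, and while the paths and commands mirror the ones in the log, this is not minikube's source:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // ensureImage checks whether the runtime already has the image and, if not,
    // removes any stale tag and loads the cached tarball with podman.
    func ensureImage(image, tarball string) error {
        out, err := exec.Command("sudo", "podman", "image", "inspect",
            "--format", "{{.Id}}", image).Output()
        if err == nil && strings.TrimSpace(string(out)) != "" {
            return nil // already present in the container runtime
        }
        // Remove a possibly stale tag (errors ignored), then load from cache.
        _ = exec.Command("sudo", "crictl", "rmi", image).Run()
        if err := exec.Command("sudo", "podman", "load", "-i", tarball).Run(); err != nil {
            return fmt.Errorf("loading %s from %s: %w", image, tarball, err)
        }
        return nil
    }

    func main() {
        err := ensureImage("registry.k8s.io/etcd:3.5.14-0",
            "/var/lib/minikube/images/etcd_3.5.14-0")
        fmt.Println("ensureImage:", err)
    }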
	I0729 19:45:08.272135 1119948 kubeadm.go:934] updating node { 192.168.50.248 8443 v1.31.0-beta.0 crio true true} ...
	I0729 19:45:08.272293 1119948 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-843792 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.248
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-843792 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 19:45:08.272378 1119948 ssh_runner.go:195] Run: crio config
	I0729 19:45:08.340838 1119948 cni.go:84] Creating CNI manager for ""
	I0729 19:45:08.340864 1119948 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 19:45:08.340876 1119948 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 19:45:08.340905 1119948 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.248 APIServerPort:8443 KubernetesVersion:v1.31.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-843792 NodeName:no-preload-843792 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.248"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.248 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 19:45:08.341094 1119948 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.248
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-843792"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.248
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.248"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0729 19:45:08.341175 1119948 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0-beta.0
	I0729 19:45:08.353738 1119948 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 19:45:08.353819 1119948 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 19:45:08.365340 1119948 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (324 bytes)
	I0729 19:45:08.383516 1119948 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I0729 19:45:08.401060 1119948 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2168 bytes)
	I0729 19:45:08.419420 1119948 ssh_runner.go:195] Run: grep 192.168.50.248	control-plane.minikube.internal$ /etc/hosts
	I0729 19:45:08.423355 1119948 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.248	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 19:45:08.437286 1119948 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 19:45:08.569176 1119948 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 19:45:08.586925 1119948 certs.go:68] Setting up /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/no-preload-843792 for IP: 192.168.50.248
	I0729 19:45:08.586949 1119948 certs.go:194] generating shared ca certs ...
	I0729 19:45:08.586969 1119948 certs.go:226] acquiring lock for ca certs: {Name:mkd1f0b3d7e82ac23e713dd6b75409e103935b02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 19:45:08.587196 1119948 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.key
	I0729 19:45:08.587277 1119948 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/proxy-client-ca.key
	I0729 19:45:08.587294 1119948 certs.go:256] generating profile certs ...
	I0729 19:45:08.587388 1119948 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/no-preload-843792/client.key
	I0729 19:45:08.587476 1119948 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/no-preload-843792/apiserver.key.f52ec7e5
	I0729 19:45:08.587520 1119948 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/no-preload-843792/proxy-client.key
	I0729 19:45:08.587686 1119948 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/1062272.pem (1338 bytes)
	W0729 19:45:08.587731 1119948 certs.go:480] ignoring /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/1062272_empty.pem, impossibly tiny 0 bytes
	I0729 19:45:08.587741 1119948 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 19:45:08.587764 1119948 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem (1082 bytes)
	I0729 19:45:08.587788 1119948 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/cert.pem (1123 bytes)
	I0729 19:45:08.587807 1119948 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/key.pem (1679 bytes)
	I0729 19:45:08.587842 1119948 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/files/etc/ssl/certs/10622722.pem (1708 bytes)
	I0729 19:45:08.588560 1119948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 19:45:08.618457 1119948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0729 19:45:08.664632 1119948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 19:45:08.696094 1119948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0729 19:45:05.639914 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:08.138498 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:06.779766 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:08.781373 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:10.782303 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:08.281476 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:08.781100 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:09.281248 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:09.781661 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:10.281141 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:10.781357 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:11.281922 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:11.781751 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:12.281024 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:12.781942 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:08.732476 1119948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/no-preload-843792/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0729 19:45:08.761190 1119948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/no-preload-843792/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0729 19:45:08.792866 1119948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/no-preload-843792/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 19:45:08.819753 1119948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/no-preload-843792/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0729 19:45:08.844891 1119948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/1062272.pem --> /usr/share/ca-certificates/1062272.pem (1338 bytes)
	I0729 19:45:08.868688 1119948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/files/etc/ssl/certs/10622722.pem --> /usr/share/ca-certificates/10622722.pem (1708 bytes)
	I0729 19:45:08.893523 1119948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 19:45:08.917663 1119948 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 19:45:08.935488 1119948 ssh_runner.go:195] Run: openssl version
	I0729 19:45:08.941415 1119948 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1062272.pem && ln -fs /usr/share/ca-certificates/1062272.pem /etc/ssl/certs/1062272.pem"
	I0729 19:45:08.952713 1119948 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1062272.pem
	I0729 19:45:08.957226 1119948 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 18:30 /usr/share/ca-certificates/1062272.pem
	I0729 19:45:08.957288 1119948 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1062272.pem
	I0729 19:45:08.963014 1119948 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1062272.pem /etc/ssl/certs/51391683.0"
	I0729 19:45:08.974542 1119948 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10622722.pem && ln -fs /usr/share/ca-certificates/10622722.pem /etc/ssl/certs/10622722.pem"
	I0729 19:45:08.985605 1119948 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10622722.pem
	I0729 19:45:08.990121 1119948 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 18:30 /usr/share/ca-certificates/10622722.pem
	I0729 19:45:08.990170 1119948 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10622722.pem
	I0729 19:45:08.995715 1119948 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10622722.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 19:45:09.006949 1119948 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 19:45:09.018222 1119948 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 19:45:09.023160 1119948 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 18:17 /usr/share/ca-certificates/minikubeCA.pem
	I0729 19:45:09.023225 1119948 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 19:45:09.028770 1119948 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 19:45:09.039653 1119948 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 19:45:09.044577 1119948 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 19:45:09.050692 1119948 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 19:45:09.057177 1119948 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 19:45:09.063464 1119948 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 19:45:09.069732 1119948 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 19:45:09.075998 1119948 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
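Note: the `openssl x509 -checkend 86400` calls above succeed only if each control-plane certificate stays valid for at least another 24 hours. A minimal native sketch of the same check (the certificate path is taken from the log; the helper itself is only illustrative, not minikube's code):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// validFor reports whether the PEM certificate at path remains valid
// for at least the given duration (the -checkend semantics).
func validFor(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).Before(cert.NotAfter), nil
}

func main() {
	ok, err := validFor("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if !ok {
		os.Exit(1) // mirrors openssl's non-zero exit when the cert expires too soon
	}
}
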
	I0729 19:45:09.081759 1119948 kubeadm.go:392] StartCluster: {Name:no-preload-843792 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-843792 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.248 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 19:45:09.081855 1119948 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 19:45:09.081922 1119948 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 19:45:09.121153 1119948 cri.go:89] found id: ""
	I0729 19:45:09.121242 1119948 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 19:45:09.131866 1119948 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0729 19:45:09.131892 1119948 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0729 19:45:09.131951 1119948 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0729 19:45:09.142306 1119948 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0729 19:45:09.143769 1119948 kubeconfig.go:125] found "no-preload-843792" server: "https://192.168.50.248:8443"
	I0729 19:45:09.146733 1119948 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0729 19:45:09.156058 1119948 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.248
	I0729 19:45:09.156096 1119948 kubeadm.go:1160] stopping kube-system containers ...
	I0729 19:45:09.156113 1119948 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0729 19:45:09.156171 1119948 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 19:45:09.204791 1119948 cri.go:89] found id: ""
	I0729 19:45:09.204881 1119948 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0729 19:45:09.222988 1119948 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 19:45:09.234800 1119948 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 19:45:09.234825 1119948 kubeadm.go:157] found existing configuration files:
	
	I0729 19:45:09.234898 1119948 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 19:45:09.244868 1119948 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 19:45:09.244931 1119948 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 19:45:09.255368 1119948 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 19:45:09.265442 1119948 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 19:45:09.265515 1119948 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 19:45:09.276827 1119948 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 19:45:09.287989 1119948 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 19:45:09.288057 1119948 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 19:45:09.297736 1119948 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 19:45:09.307856 1119948 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 19:45:09.307923 1119948 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 19:45:09.318101 1119948 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 19:45:09.328189 1119948 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 19:45:09.441974 1119948 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 19:45:10.593961 1119948 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.151939649s)
	I0729 19:45:10.594045 1119948 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0729 19:45:10.807397 1119948 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 19:45:10.880145 1119948 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0729 19:45:10.962104 1119948 api_server.go:52] waiting for apiserver process to appear ...
	I0729 19:45:10.962209 1119948 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:11.462937 1119948 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:11.962909 1119948 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:12.006882 1119948 api_server.go:72] duration metric: took 1.044780287s to wait for apiserver process to appear ...
	I0729 19:45:12.006918 1119948 api_server.go:88] waiting for apiserver healthz status ...
	I0729 19:45:12.006945 1119948 api_server.go:253] Checking apiserver healthz at https://192.168.50.248:8443/healthz ...
	I0729 19:45:12.007577 1119948 api_server.go:269] stopped: https://192.168.50.248:8443/healthz: Get "https://192.168.50.248:8443/healthz": dial tcp 192.168.50.248:8443: connect: connection refused
	I0729 19:45:12.507374 1119948 api_server.go:253] Checking apiserver healthz at https://192.168.50.248:8443/healthz ...
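Note: the restart path above does not run a full `kubeadm init`; it replays individual init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the generated kubeadm.yaml. A rough sketch of driving that same sequence over a shell, with the PATH prefix and config path copied from the logged commands (error handling trimmed; the runner itself is illustrative, not minikube's code):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	phases := []string{
		"certs all",
		"kubeconfig all",
		"kubelet-start",
		"control-plane all",
		"etcd local",
	}
	for _, phase := range phases {
		// Same shape as the logged commands: run each phase with the pinned binaries on PATH.
		cmd := fmt.Sprintf(
			`sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml`,
			phase)
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		if err != nil {
			fmt.Printf("phase %q failed: %v\n%s", phase, err, out)
			return
		}
	}
}
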
	I0729 19:45:10.637684 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:12.638011 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:14.638569 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:13.278494 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:15.778675 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:15.042675 1119948 api_server.go:279] https://192.168.50.248:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 19:45:15.042710 1119948 api_server.go:103] status: https://192.168.50.248:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 19:45:15.042731 1119948 api_server.go:253] Checking apiserver healthz at https://192.168.50.248:8443/healthz ...
	I0729 19:45:15.090118 1119948 api_server.go:279] https://192.168.50.248:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 19:45:15.090151 1119948 api_server.go:103] status: https://192.168.50.248:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 19:45:15.507702 1119948 api_server.go:253] Checking apiserver healthz at https://192.168.50.248:8443/healthz ...
	I0729 19:45:15.512794 1119948 api_server.go:279] https://192.168.50.248:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 19:45:15.512822 1119948 api_server.go:103] status: https://192.168.50.248:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 19:45:16.008064 1119948 api_server.go:253] Checking apiserver healthz at https://192.168.50.248:8443/healthz ...
	I0729 19:45:16.018543 1119948 api_server.go:279] https://192.168.50.248:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 19:45:16.018578 1119948 api_server.go:103] status: https://192.168.50.248:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 19:45:16.508055 1119948 api_server.go:253] Checking apiserver healthz at https://192.168.50.248:8443/healthz ...
	I0729 19:45:16.519925 1119948 api_server.go:279] https://192.168.50.248:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 19:45:16.519954 1119948 api_server.go:103] status: https://192.168.50.248:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 19:45:17.007959 1119948 api_server.go:253] Checking apiserver healthz at https://192.168.50.248:8443/healthz ...
	I0729 19:45:17.013159 1119948 api_server.go:279] https://192.168.50.248:8443/healthz returned 200:
	ok
	I0729 19:45:17.022691 1119948 api_server.go:141] control plane version: v1.31.0-beta.0
	I0729 19:45:17.022726 1119948 api_server.go:131] duration metric: took 5.015799715s to wait for apiserver health ...
	I0729 19:45:17.022737 1119948 cni.go:84] Creating CNI manager for ""
	I0729 19:45:17.022746 1119948 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 19:45:17.024618 1119948 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
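Note: the healthz probing above keeps retrying through 403 (anonymous user) and 500 (post-start hooks such as rbac/bootstrap-roles still failing) responses until the endpoint returns 200 with body "ok". A minimal sketch of that polling pattern, assuming an unauthenticated client that skips TLS verification the way the logged probe does (URL and timeout are examples):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitHealthy polls the apiserver /healthz endpoint until it returns 200/ok
// or the timeout elapses; 403 and 500 responses mean "up but not ready yet".
func waitHealthy(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not report healthy within %s", timeout)
}

func main() {
	if err := waitHealthy("https://192.168.50.248:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}
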
	I0729 19:45:13.281834 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:13.781128 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:14.281372 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:14.781037 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:15.281715 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:15.781353 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:16.281845 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:16.781224 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:17.281710 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:17.781353 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:17.025951 1119948 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 19:45:17.037020 1119948 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0729 19:45:17.075438 1119948 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 19:45:17.098501 1119948 system_pods.go:59] 8 kube-system pods found
	I0729 19:45:17.098541 1119948 system_pods.go:61] "coredns-5cfdc65f69-j6m2k" [1fb28c80-116d-46b7-a939-6ff4ffa80883] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0729 19:45:17.098549 1119948 system_pods.go:61] "etcd-no-preload-843792" [68470ab3-9513-4504-9d1e-dbb896b8ae6b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0729 19:45:17.098557 1119948 system_pods.go:61] "kube-apiserver-no-preload-843792" [6cc37d70-bc14-4a06-987d-320a2a11b533] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0729 19:45:17.098563 1119948 system_pods.go:61] "kube-controller-manager-no-preload-843792" [5c115624-c9e9-4019-9783-35cc825fb1df] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0729 19:45:17.098570 1119948 system_pods.go:61] "kube-proxy-6kzvz" [4f0006c3-1172-48b6-8631-643090032c58] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0729 19:45:17.098579 1119948 system_pods.go:61] "kube-scheduler-no-preload-843792" [5c2a4c59-a525-4246-9d11-50fddef53815] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0729 19:45:17.098584 1119948 system_pods.go:61] "metrics-server-78fcd8795b-pcx9w" [7d138038-71ad-4279-9562-f3864d5a0024] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 19:45:17.098591 1119948 system_pods.go:61] "storage-provisioner" [289822fa-8ed4-4abe-970e-8b6d9a9fa51e] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0729 19:45:17.098598 1119948 system_pods.go:74] duration metric: took 23.126612ms to wait for pod list to return data ...
	I0729 19:45:17.098610 1119948 node_conditions.go:102] verifying NodePressure condition ...
	I0729 19:45:17.125364 1119948 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 19:45:17.125395 1119948 node_conditions.go:123] node cpu capacity is 2
	I0729 19:45:17.125405 1119948 node_conditions.go:105] duration metric: took 26.790642ms to run NodePressure ...
	I0729 19:45:17.125425 1119948 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 19:45:17.467261 1119948 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0729 19:45:17.478831 1119948 kubeadm.go:739] kubelet initialised
	I0729 19:45:17.478871 1119948 kubeadm.go:740] duration metric: took 11.576985ms waiting for restarted kubelet to initialise ...
	I0729 19:45:17.478883 1119948 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 19:45:17.483948 1119948 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5cfdc65f69-j6m2k" in "kube-system" namespace to be "Ready" ...
	I0729 19:45:16.639536 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:18.641996 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:18.279857 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:20.779054 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:18.281504 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:18.781826 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:19.281901 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:19.782011 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:20.281384 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:20.781352 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:21.281834 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:21.781603 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:22.281152 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:22.781351 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:19.493011 1119948 pod_ready.go:102] pod "coredns-5cfdc65f69-j6m2k" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:21.992979 1119948 pod_ready.go:102] pod "coredns-5cfdc65f69-j6m2k" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:21.139438 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:23.636771 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:22.779640 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:24.780814 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:23.281111 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:23.781931 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:24.281455 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:24.781346 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:25.281633 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:25.781092 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:26.281145 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:26.781235 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:27.281327 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:27.781099 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:24.491231 1119948 pod_ready.go:102] pod "coredns-5cfdc65f69-j6m2k" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:26.991237 1119948 pod_ready.go:102] pod "coredns-5cfdc65f69-j6m2k" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:28.490384 1119948 pod_ready.go:92] pod "coredns-5cfdc65f69-j6m2k" in "kube-system" namespace has status "Ready":"True"
	I0729 19:45:28.490413 1119948 pod_ready.go:81] duration metric: took 11.006435855s for pod "coredns-5cfdc65f69-j6m2k" in "kube-system" namespace to be "Ready" ...
	I0729 19:45:28.490425 1119948 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-843792" in "kube-system" namespace to be "Ready" ...
	I0729 19:45:28.495144 1119948 pod_ready.go:92] pod "etcd-no-preload-843792" in "kube-system" namespace has status "Ready":"True"
	I0729 19:45:28.495168 1119948 pod_ready.go:81] duration metric: took 4.736893ms for pod "etcd-no-preload-843792" in "kube-system" namespace to be "Ready" ...
	I0729 19:45:28.495177 1119948 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-843792" in "kube-system" namespace to be "Ready" ...
	I0729 19:45:28.499249 1119948 pod_ready.go:92] pod "kube-apiserver-no-preload-843792" in "kube-system" namespace has status "Ready":"True"
	I0729 19:45:28.499272 1119948 pod_ready.go:81] duration metric: took 4.089379ms for pod "kube-apiserver-no-preload-843792" in "kube-system" namespace to be "Ready" ...
	I0729 19:45:28.499280 1119948 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-843792" in "kube-system" namespace to be "Ready" ...
	I0729 19:45:25.637886 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:28.138043 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:27.279850 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:29.778397 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:28.281600 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:28.781033 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:29.281086 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:29.781358 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:30.281478 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:30.781094 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:31.281816 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:31.781092 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:32.281012 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:32.781266 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:29.505726 1119948 pod_ready.go:92] pod "kube-controller-manager-no-preload-843792" in "kube-system" namespace has status "Ready":"True"
	I0729 19:45:29.505752 1119948 pod_ready.go:81] duration metric: took 1.0064644s for pod "kube-controller-manager-no-preload-843792" in "kube-system" namespace to be "Ready" ...
	I0729 19:45:29.505764 1119948 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-6kzvz" in "kube-system" namespace to be "Ready" ...
	I0729 19:45:29.510705 1119948 pod_ready.go:92] pod "kube-proxy-6kzvz" in "kube-system" namespace has status "Ready":"True"
	I0729 19:45:29.510725 1119948 pod_ready.go:81] duration metric: took 4.953497ms for pod "kube-proxy-6kzvz" in "kube-system" namespace to be "Ready" ...
	I0729 19:45:29.510735 1119948 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-843792" in "kube-system" namespace to be "Ready" ...
	I0729 19:45:29.688555 1119948 pod_ready.go:92] pod "kube-scheduler-no-preload-843792" in "kube-system" namespace has status "Ready":"True"
	I0729 19:45:29.688579 1119948 pod_ready.go:81] duration metric: took 177.837031ms for pod "kube-scheduler-no-preload-843792" in "kube-system" namespace to be "Ready" ...
	I0729 19:45:29.688593 1119948 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace to be "Ready" ...
	I0729 19:45:31.695505 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
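Note: the pod_ready entries above come from repeatedly reading each system pod and inspecting its Ready condition until it reports True. A minimal client-go sketch of that single check (the kubeconfig path and pod name below are placeholders, not values from this run):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the named pod's Ready condition is True.
func podReady(client kubernetes.Interface, namespace, name string) (bool, error) {
	pod, err := client.CoreV1().Pods(namespace).Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)
	ready, err := podReady(client, "kube-system", "coredns-example")
	fmt.Println(ready, err)
}
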
	I0729 19:45:30.637213 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:32.638747 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:31.778641 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:34.277964 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:33.281410 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:33.781923 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:34.281471 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:34.781303 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:35.281404 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:35.781727 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:36.281960 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:36.781632 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:37.281624 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:37.781232 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:34.196033 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:36.697003 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:35.137135 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:37.137857 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:39.138563 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:36.278607 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:38.278960 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:40.280428 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:38.281103 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:38.781134 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:39.281907 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:39.781863 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:40.281104 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:40.781928 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:41.281757 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:45:41.281864 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:45:41.322903 1120970 cri.go:89] found id: ""
	I0729 19:45:41.322929 1120970 logs.go:276] 0 containers: []
	W0729 19:45:41.322938 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:45:41.322945 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:45:41.323016 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:45:41.359651 1120970 cri.go:89] found id: ""
	I0729 19:45:41.359679 1120970 logs.go:276] 0 containers: []
	W0729 19:45:41.359687 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:45:41.359692 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:45:41.359744 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:45:41.402317 1120970 cri.go:89] found id: ""
	I0729 19:45:41.402358 1120970 logs.go:276] 0 containers: []
	W0729 19:45:41.402370 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:45:41.402380 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:45:41.402454 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:45:41.438796 1120970 cri.go:89] found id: ""
	I0729 19:45:41.438823 1120970 logs.go:276] 0 containers: []
	W0729 19:45:41.438833 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:45:41.438839 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:45:41.438931 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:45:41.477648 1120970 cri.go:89] found id: ""
	I0729 19:45:41.477677 1120970 logs.go:276] 0 containers: []
	W0729 19:45:41.477685 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:45:41.477692 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:45:41.477761 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:45:41.517603 1120970 cri.go:89] found id: ""
	I0729 19:45:41.517635 1120970 logs.go:276] 0 containers: []
	W0729 19:45:41.517646 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:45:41.517654 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:45:41.517727 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:45:41.553106 1120970 cri.go:89] found id: ""
	I0729 19:45:41.553140 1120970 logs.go:276] 0 containers: []
	W0729 19:45:41.553151 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:45:41.553158 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:45:41.553226 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:45:41.595007 1120970 cri.go:89] found id: ""
	I0729 19:45:41.595035 1120970 logs.go:276] 0 containers: []
	W0729 19:45:41.595044 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:45:41.595054 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:45:41.595069 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:45:41.634927 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:45:41.634966 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:45:41.685871 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:45:41.685906 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:45:41.700701 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:45:41.700735 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:45:41.816575 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:45:41.816598 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:45:41.816611 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
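Note: each cri.go entry above lists containers through crictl with a name filter and treats an empty result as "component not running yet", which is why log gathering falls back to kubelet, dmesg, and CRI-O journals. A minimal sketch of that listing step, shelling out the same way the logged commands do (the component name is just an example):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainerIDs returns the IDs of all containers (running or not)
// whose name matches the given component, using the same crictl
// invocation that appears in the log.
func listContainerIDs(component string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+component).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	ids, err := listContainerIDs("kube-apiserver")
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	fmt.Printf("found %d containers: %v\n", len(ids), ids)
}
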
	I0729 19:45:39.199863 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:41.200303 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:43.695592 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:41.637651 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:44.138141 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:42.778550 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:44.779186 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:44.396592 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:44.410567 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:45:44.410644 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:45:44.447450 1120970 cri.go:89] found id: ""
	I0729 19:45:44.447487 1120970 logs.go:276] 0 containers: []
	W0729 19:45:44.447499 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:45:44.447507 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:45:44.447579 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:45:44.487679 1120970 cri.go:89] found id: ""
	I0729 19:45:44.487714 1120970 logs.go:276] 0 containers: []
	W0729 19:45:44.487725 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:45:44.487732 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:45:44.487806 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:45:44.527170 1120970 cri.go:89] found id: ""
	I0729 19:45:44.527211 1120970 logs.go:276] 0 containers: []
	W0729 19:45:44.527219 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:45:44.527226 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:45:44.527282 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:45:44.567585 1120970 cri.go:89] found id: ""
	I0729 19:45:44.567613 1120970 logs.go:276] 0 containers: []
	W0729 19:45:44.567622 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:45:44.567629 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:45:44.567680 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:45:44.605003 1120970 cri.go:89] found id: ""
	I0729 19:45:44.605031 1120970 logs.go:276] 0 containers: []
	W0729 19:45:44.605041 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:45:44.605049 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:45:44.605121 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:45:44.643862 1120970 cri.go:89] found id: ""
	I0729 19:45:44.643887 1120970 logs.go:276] 0 containers: []
	W0729 19:45:44.643894 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:45:44.643901 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:45:44.643950 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:45:44.679814 1120970 cri.go:89] found id: ""
	I0729 19:45:44.679845 1120970 logs.go:276] 0 containers: []
	W0729 19:45:44.679855 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:45:44.679862 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:45:44.679926 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:45:44.714679 1120970 cri.go:89] found id: ""
	I0729 19:45:44.714709 1120970 logs.go:276] 0 containers: []
	W0729 19:45:44.714719 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:45:44.714729 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:45:44.714747 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:45:44.766381 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:45:44.766424 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:45:44.782337 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:45:44.782369 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:45:44.854487 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:45:44.854509 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:45:44.854522 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:45:44.935043 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:45:44.935082 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:45:47.481158 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:47.496559 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:45:47.496649 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:45:47.531949 1120970 cri.go:89] found id: ""
	I0729 19:45:47.531981 1120970 logs.go:276] 0 containers: []
	W0729 19:45:47.531990 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:45:47.531996 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:45:47.532050 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:45:47.571424 1120970 cri.go:89] found id: ""
	I0729 19:45:47.571451 1120970 logs.go:276] 0 containers: []
	W0729 19:45:47.571459 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:45:47.571465 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:45:47.571517 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:45:47.610439 1120970 cri.go:89] found id: ""
	I0729 19:45:47.610474 1120970 logs.go:276] 0 containers: []
	W0729 19:45:47.610485 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:45:47.610494 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:45:47.610561 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:45:47.648351 1120970 cri.go:89] found id: ""
	I0729 19:45:47.648380 1120970 logs.go:276] 0 containers: []
	W0729 19:45:47.648388 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:45:47.648395 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:45:47.648458 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:45:47.686610 1120970 cri.go:89] found id: ""
	I0729 19:45:47.686646 1120970 logs.go:276] 0 containers: []
	W0729 19:45:47.686658 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:45:47.686667 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:45:47.686739 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:45:47.722870 1120970 cri.go:89] found id: ""
	I0729 19:45:47.722901 1120970 logs.go:276] 0 containers: []
	W0729 19:45:47.722909 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:45:47.722916 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:45:47.722978 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:45:47.757651 1120970 cri.go:89] found id: ""
	I0729 19:45:47.757690 1120970 logs.go:276] 0 containers: []
	W0729 19:45:47.757700 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:45:47.757709 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:45:47.757787 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:45:47.792737 1120970 cri.go:89] found id: ""
	I0729 19:45:47.792767 1120970 logs.go:276] 0 containers: []
	W0729 19:45:47.792776 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:45:47.792786 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:45:47.792799 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:45:47.867707 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:45:47.867734 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:45:47.867751 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:45:47.949876 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:45:47.949918 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:45:45.696302 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:48.194324 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:46.637438 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:48.637749 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:47.279986 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:49.778293 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:47.991014 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:45:47.991053 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:45:48.041713 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:45:48.041752 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:45:50.557028 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:50.571918 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:45:50.572012 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:45:50.608752 1120970 cri.go:89] found id: ""
	I0729 19:45:50.608783 1120970 logs.go:276] 0 containers: []
	W0729 19:45:50.608791 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:45:50.608798 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:45:50.608851 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:45:50.644225 1120970 cri.go:89] found id: ""
	I0729 19:45:50.644251 1120970 logs.go:276] 0 containers: []
	W0729 19:45:50.644261 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:45:50.644269 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:45:50.644357 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:45:50.680364 1120970 cri.go:89] found id: ""
	I0729 19:45:50.680400 1120970 logs.go:276] 0 containers: []
	W0729 19:45:50.680412 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:45:50.680420 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:45:50.680487 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:45:50.724418 1120970 cri.go:89] found id: ""
	I0729 19:45:50.724443 1120970 logs.go:276] 0 containers: []
	W0729 19:45:50.724451 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:45:50.724457 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:45:50.724513 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:45:50.768891 1120970 cri.go:89] found id: ""
	I0729 19:45:50.768924 1120970 logs.go:276] 0 containers: []
	W0729 19:45:50.768935 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:45:50.768943 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:45:50.769011 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:45:50.815814 1120970 cri.go:89] found id: ""
	I0729 19:45:50.815847 1120970 logs.go:276] 0 containers: []
	W0729 19:45:50.815858 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:45:50.815866 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:45:50.815935 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:45:50.856823 1120970 cri.go:89] found id: ""
	I0729 19:45:50.856856 1120970 logs.go:276] 0 containers: []
	W0729 19:45:50.856865 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:45:50.856871 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:45:50.856935 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:45:50.890567 1120970 cri.go:89] found id: ""
	I0729 19:45:50.890618 1120970 logs.go:276] 0 containers: []
	W0729 19:45:50.890631 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:45:50.890646 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:45:50.890662 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:45:50.944060 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:45:50.944095 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:45:50.957881 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:45:50.957912 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:45:51.036005 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:45:51.036033 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:45:51.036051 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:45:51.117269 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:45:51.117311 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:45:50.195926 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:52.197099 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:50.639185 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:53.138398 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:52.278704 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:54.279094 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:53.657518 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:53.671405 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:45:53.671499 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:45:53.713703 1120970 cri.go:89] found id: ""
	I0729 19:45:53.713734 1120970 logs.go:276] 0 containers: []
	W0729 19:45:53.713747 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:45:53.713755 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:45:53.713820 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:45:53.752821 1120970 cri.go:89] found id: ""
	I0729 19:45:53.752856 1120970 logs.go:276] 0 containers: []
	W0729 19:45:53.752867 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:45:53.752875 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:45:53.752930 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:45:53.792144 1120970 cri.go:89] found id: ""
	I0729 19:45:53.792172 1120970 logs.go:276] 0 containers: []
	W0729 19:45:53.792198 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:45:53.792204 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:45:53.792264 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:45:53.831123 1120970 cri.go:89] found id: ""
	I0729 19:45:53.831151 1120970 logs.go:276] 0 containers: []
	W0729 19:45:53.831161 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:45:53.831168 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:45:53.831223 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:45:53.870716 1120970 cri.go:89] found id: ""
	I0729 19:45:53.870747 1120970 logs.go:276] 0 containers: []
	W0729 19:45:53.870758 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:45:53.870766 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:45:53.870831 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:45:53.909567 1120970 cri.go:89] found id: ""
	I0729 19:45:53.909602 1120970 logs.go:276] 0 containers: []
	W0729 19:45:53.909611 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:45:53.909619 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:45:53.909679 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:45:53.944134 1120970 cri.go:89] found id: ""
	I0729 19:45:53.944167 1120970 logs.go:276] 0 containers: []
	W0729 19:45:53.944179 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:45:53.944188 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:45:53.944249 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:45:53.979274 1120970 cri.go:89] found id: ""
	I0729 19:45:53.979307 1120970 logs.go:276] 0 containers: []
	W0729 19:45:53.979319 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:45:53.979330 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:45:53.979347 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:45:54.027783 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:45:54.027822 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:45:54.079319 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:45:54.079368 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:45:54.094387 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:45:54.094420 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:45:54.170700 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:45:54.170723 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:45:54.170737 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:45:56.756947 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:56.775456 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:45:56.775539 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:45:56.830999 1120970 cri.go:89] found id: ""
	I0729 19:45:56.831035 1120970 logs.go:276] 0 containers: []
	W0729 19:45:56.831046 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:45:56.831054 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:45:56.831144 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:45:56.868006 1120970 cri.go:89] found id: ""
	I0729 19:45:56.868039 1120970 logs.go:276] 0 containers: []
	W0729 19:45:56.868057 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:45:56.868065 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:45:56.868145 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:45:56.905275 1120970 cri.go:89] found id: ""
	I0729 19:45:56.905311 1120970 logs.go:276] 0 containers: []
	W0729 19:45:56.905322 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:45:56.905330 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:45:56.905401 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:45:56.938507 1120970 cri.go:89] found id: ""
	I0729 19:45:56.938537 1120970 logs.go:276] 0 containers: []
	W0729 19:45:56.938546 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:45:56.938553 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:45:56.938607 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:45:56.974424 1120970 cri.go:89] found id: ""
	I0729 19:45:56.974456 1120970 logs.go:276] 0 containers: []
	W0729 19:45:56.974468 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:45:56.974476 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:45:56.974543 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:45:57.008152 1120970 cri.go:89] found id: ""
	I0729 19:45:57.008191 1120970 logs.go:276] 0 containers: []
	W0729 19:45:57.008203 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:45:57.008211 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:45:57.008281 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:45:57.043904 1120970 cri.go:89] found id: ""
	I0729 19:45:57.043942 1120970 logs.go:276] 0 containers: []
	W0729 19:45:57.043953 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:45:57.043961 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:45:57.044038 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:45:57.078239 1120970 cri.go:89] found id: ""
	I0729 19:45:57.078268 1120970 logs.go:276] 0 containers: []
	W0729 19:45:57.078277 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:45:57.078286 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:45:57.078299 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:45:57.125135 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:45:57.125170 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:45:57.177926 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:45:57.177968 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:45:57.192316 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:45:57.192354 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:45:57.267034 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:45:57.267059 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:45:57.267078 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:45:54.213977 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:56.695532 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:55.637424 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:58.137534 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:56.780087 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:59.278164 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:59.849254 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:59.863328 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:45:59.863437 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:45:59.900024 1120970 cri.go:89] found id: ""
	I0729 19:45:59.900051 1120970 logs.go:276] 0 containers: []
	W0729 19:45:59.900060 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:45:59.900067 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:45:59.900128 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:45:59.935272 1120970 cri.go:89] found id: ""
	I0729 19:45:59.935308 1120970 logs.go:276] 0 containers: []
	W0729 19:45:59.935319 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:45:59.935328 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:45:59.935404 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:45:59.967684 1120970 cri.go:89] found id: ""
	I0729 19:45:59.967712 1120970 logs.go:276] 0 containers: []
	W0729 19:45:59.967725 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:45:59.967733 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:45:59.967791 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:46:00.003354 1120970 cri.go:89] found id: ""
	I0729 19:46:00.003386 1120970 logs.go:276] 0 containers: []
	W0729 19:46:00.003397 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:46:00.003404 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:46:00.003479 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:46:00.042266 1120970 cri.go:89] found id: ""
	I0729 19:46:00.042311 1120970 logs.go:276] 0 containers: []
	W0729 19:46:00.042330 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:46:00.042344 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:46:00.042419 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:46:00.081056 1120970 cri.go:89] found id: ""
	I0729 19:46:00.081085 1120970 logs.go:276] 0 containers: []
	W0729 19:46:00.081095 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:46:00.081102 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:46:00.081179 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:46:00.114102 1120970 cri.go:89] found id: ""
	I0729 19:46:00.114138 1120970 logs.go:276] 0 containers: []
	W0729 19:46:00.114153 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:46:00.114161 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:46:00.114229 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:46:00.152891 1120970 cri.go:89] found id: ""
	I0729 19:46:00.152919 1120970 logs.go:276] 0 containers: []
	W0729 19:46:00.152930 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:46:00.152942 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:46:00.152961 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:46:00.225895 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:46:00.225926 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:46:00.225944 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:46:00.306359 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:46:00.306397 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:46:00.348266 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:46:00.348305 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:46:00.401402 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:46:00.401452 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:46:02.917392 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:46:02.931221 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:46:02.931308 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:46:02.965808 1120970 cri.go:89] found id: ""
	I0729 19:46:02.965839 1120970 logs.go:276] 0 containers: []
	W0729 19:46:02.965850 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:46:02.965857 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:46:02.965924 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:45:59.195460 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:01.195742 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:03.196310 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:00.138417 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:02.637927 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:01.278771 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:03.279480 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:05.778549 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:03.003125 1120970 cri.go:89] found id: ""
	I0729 19:46:03.003152 1120970 logs.go:276] 0 containers: []
	W0729 19:46:03.003161 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:46:03.003168 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:46:03.003222 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:46:03.042782 1120970 cri.go:89] found id: ""
	I0729 19:46:03.042816 1120970 logs.go:276] 0 containers: []
	W0729 19:46:03.042827 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:46:03.042835 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:46:03.042922 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:46:03.082857 1120970 cri.go:89] found id: ""
	I0729 19:46:03.082891 1120970 logs.go:276] 0 containers: []
	W0729 19:46:03.082910 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:46:03.082918 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:46:03.082975 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:46:03.118096 1120970 cri.go:89] found id: ""
	I0729 19:46:03.118127 1120970 logs.go:276] 0 containers: []
	W0729 19:46:03.118147 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:46:03.118156 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:46:03.118228 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:46:03.155950 1120970 cri.go:89] found id: ""
	I0729 19:46:03.155983 1120970 logs.go:276] 0 containers: []
	W0729 19:46:03.155995 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:46:03.156003 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:46:03.156076 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:46:03.192698 1120970 cri.go:89] found id: ""
	I0729 19:46:03.192729 1120970 logs.go:276] 0 containers: []
	W0729 19:46:03.192741 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:46:03.192749 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:46:03.192822 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:46:03.230228 1120970 cri.go:89] found id: ""
	I0729 19:46:03.230261 1120970 logs.go:276] 0 containers: []
	W0729 19:46:03.230275 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:46:03.230292 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:46:03.230310 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:46:03.269169 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:46:03.269204 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:46:03.325724 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:46:03.325765 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:46:03.339955 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:46:03.339986 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:46:03.415795 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:46:03.415823 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:46:03.415839 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:46:06.002947 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:46:06.017334 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:46:06.017422 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:46:06.051132 1120970 cri.go:89] found id: ""
	I0729 19:46:06.051161 1120970 logs.go:276] 0 containers: []
	W0729 19:46:06.051169 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:46:06.051182 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:46:06.051248 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:46:06.085156 1120970 cri.go:89] found id: ""
	I0729 19:46:06.085185 1120970 logs.go:276] 0 containers: []
	W0729 19:46:06.085194 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:46:06.085200 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:46:06.085252 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:46:06.122263 1120970 cri.go:89] found id: ""
	I0729 19:46:06.122296 1120970 logs.go:276] 0 containers: []
	W0729 19:46:06.122303 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:46:06.122309 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:46:06.122377 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:46:06.158066 1120970 cri.go:89] found id: ""
	I0729 19:46:06.158093 1120970 logs.go:276] 0 containers: []
	W0729 19:46:06.158102 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:46:06.158109 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:46:06.158161 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:46:06.193082 1120970 cri.go:89] found id: ""
	I0729 19:46:06.193109 1120970 logs.go:276] 0 containers: []
	W0729 19:46:06.193117 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:46:06.193125 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:46:06.193188 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:46:06.226239 1120970 cri.go:89] found id: ""
	I0729 19:46:06.226276 1120970 logs.go:276] 0 containers: []
	W0729 19:46:06.226285 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:46:06.226292 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:46:06.226346 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:46:06.262648 1120970 cri.go:89] found id: ""
	I0729 19:46:06.262686 1120970 logs.go:276] 0 containers: []
	W0729 19:46:06.262697 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:46:06.262703 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:46:06.262769 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:46:06.304018 1120970 cri.go:89] found id: ""
	I0729 19:46:06.304047 1120970 logs.go:276] 0 containers: []
	W0729 19:46:06.304056 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:46:06.304066 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:46:06.304078 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:46:06.345240 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:46:06.345269 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:46:06.399728 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:46:06.399768 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:46:06.415271 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:46:06.415312 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:46:06.492320 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:46:06.492342 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:46:06.492361 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:46:05.695149 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:08.196040 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:05.136979 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:07.137588 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:09.140728 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:08.278537 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:10.278751 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:09.070966 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:46:09.084876 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:46:09.084957 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:46:09.123177 1120970 cri.go:89] found id: ""
	I0729 19:46:09.123209 1120970 logs.go:276] 0 containers: []
	W0729 19:46:09.123220 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:46:09.123227 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:46:09.123300 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:46:09.162546 1120970 cri.go:89] found id: ""
	I0729 19:46:09.162593 1120970 logs.go:276] 0 containers: []
	W0729 19:46:09.162605 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:46:09.162614 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:46:09.162682 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:46:09.198047 1120970 cri.go:89] found id: ""
	I0729 19:46:09.198075 1120970 logs.go:276] 0 containers: []
	W0729 19:46:09.198084 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:46:09.198091 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:46:09.198165 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:46:09.231929 1120970 cri.go:89] found id: ""
	I0729 19:46:09.231962 1120970 logs.go:276] 0 containers: []
	W0729 19:46:09.231973 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:46:09.231982 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:46:09.232051 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:46:09.269543 1120970 cri.go:89] found id: ""
	I0729 19:46:09.269574 1120970 logs.go:276] 0 containers: []
	W0729 19:46:09.269585 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:46:09.269593 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:46:09.269665 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:46:09.304012 1120970 cri.go:89] found id: ""
	I0729 19:46:09.304042 1120970 logs.go:276] 0 containers: []
	W0729 19:46:09.304051 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:46:09.304057 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:46:09.304110 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:46:09.340266 1120970 cri.go:89] found id: ""
	I0729 19:46:09.340302 1120970 logs.go:276] 0 containers: []
	W0729 19:46:09.340315 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:46:09.340323 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:46:09.340402 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:46:09.373855 1120970 cri.go:89] found id: ""
	I0729 19:46:09.373884 1120970 logs.go:276] 0 containers: []
	W0729 19:46:09.373892 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:46:09.373902 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:46:09.373916 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:46:09.434007 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:46:09.434047 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:46:09.448138 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:46:09.448168 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:46:09.523836 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:46:09.523866 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:46:09.523884 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:46:09.605562 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:46:09.605602 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:46:12.147573 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:46:12.162219 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:46:12.162307 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:46:12.197420 1120970 cri.go:89] found id: ""
	I0729 19:46:12.197446 1120970 logs.go:276] 0 containers: []
	W0729 19:46:12.197454 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:46:12.197460 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:46:12.197511 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:46:12.236008 1120970 cri.go:89] found id: ""
	I0729 19:46:12.236042 1120970 logs.go:276] 0 containers: []
	W0729 19:46:12.236052 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:46:12.236058 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:46:12.236125 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:46:12.279184 1120970 cri.go:89] found id: ""
	I0729 19:46:12.279208 1120970 logs.go:276] 0 containers: []
	W0729 19:46:12.279216 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:46:12.279222 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:46:12.279273 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:46:12.319020 1120970 cri.go:89] found id: ""
	I0729 19:46:12.319061 1120970 logs.go:276] 0 containers: []
	W0729 19:46:12.319072 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:46:12.319083 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:46:12.319140 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:46:12.354552 1120970 cri.go:89] found id: ""
	I0729 19:46:12.354591 1120970 logs.go:276] 0 containers: []
	W0729 19:46:12.354600 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:46:12.354606 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:46:12.354664 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:46:12.389196 1120970 cri.go:89] found id: ""
	I0729 19:46:12.389232 1120970 logs.go:276] 0 containers: []
	W0729 19:46:12.389242 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:46:12.389251 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:46:12.389351 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:46:12.425713 1120970 cri.go:89] found id: ""
	I0729 19:46:12.425751 1120970 logs.go:276] 0 containers: []
	W0729 19:46:12.425767 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:46:12.425776 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:46:12.425851 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:46:12.461092 1120970 cri.go:89] found id: ""
	I0729 19:46:12.461123 1120970 logs.go:276] 0 containers: []
	W0729 19:46:12.461132 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:46:12.461142 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:46:12.461162 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:46:12.537550 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:46:12.537594 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:46:12.578558 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:46:12.578597 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:46:12.629269 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:46:12.629310 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:46:12.644176 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:46:12.644202 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:46:12.717070 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:46:10.695776 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:12.696260 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:11.637812 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:14.137356 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:12.778309 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:15.278853 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:15.218239 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:46:15.232163 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:46:15.232236 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:46:15.268490 1120970 cri.go:89] found id: ""
	I0729 19:46:15.268520 1120970 logs.go:276] 0 containers: []
	W0729 19:46:15.268532 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:46:15.268539 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:46:15.268621 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:46:15.303437 1120970 cri.go:89] found id: ""
	I0729 19:46:15.303473 1120970 logs.go:276] 0 containers: []
	W0729 19:46:15.303485 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:46:15.303493 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:46:15.303557 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:46:15.340676 1120970 cri.go:89] found id: ""
	I0729 19:46:15.340706 1120970 logs.go:276] 0 containers: []
	W0729 19:46:15.340717 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:46:15.340725 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:46:15.340798 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:46:15.376731 1120970 cri.go:89] found id: ""
	I0729 19:46:15.376764 1120970 logs.go:276] 0 containers: []
	W0729 19:46:15.376775 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:46:15.376783 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:46:15.376854 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:46:15.412493 1120970 cri.go:89] found id: ""
	I0729 19:46:15.412524 1120970 logs.go:276] 0 containers: []
	W0729 19:46:15.412533 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:46:15.412541 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:46:15.412614 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:46:15.448795 1120970 cri.go:89] found id: ""
	I0729 19:46:15.448830 1120970 logs.go:276] 0 containers: []
	W0729 19:46:15.448842 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:46:15.448850 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:46:15.448923 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:46:15.484048 1120970 cri.go:89] found id: ""
	I0729 19:46:15.484082 1120970 logs.go:276] 0 containers: []
	W0729 19:46:15.484100 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:46:15.484108 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:46:15.484172 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:46:15.520340 1120970 cri.go:89] found id: ""
	I0729 19:46:15.520370 1120970 logs.go:276] 0 containers: []
	W0729 19:46:15.520380 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:46:15.520389 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:46:15.520408 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:46:15.568837 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:46:15.568877 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:46:15.582958 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:46:15.582993 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:46:15.653880 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:46:15.653901 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:46:15.653920 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:46:15.732652 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:46:15.732691 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:46:15.194855 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:17.196069 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:16.137961 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:18.139896 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:17.779000 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:19.779635 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:18.273795 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:46:18.288991 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:46:18.289066 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:46:18.327583 1120970 cri.go:89] found id: ""
	I0729 19:46:18.327619 1120970 logs.go:276] 0 containers: []
	W0729 19:46:18.327631 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:46:18.327639 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:46:18.327716 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:46:18.361476 1120970 cri.go:89] found id: ""
	I0729 19:46:18.361504 1120970 logs.go:276] 0 containers: []
	W0729 19:46:18.361515 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:46:18.361523 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:46:18.361590 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:46:18.401842 1120970 cri.go:89] found id: ""
	I0729 19:46:18.401873 1120970 logs.go:276] 0 containers: []
	W0729 19:46:18.401884 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:46:18.401892 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:46:18.401965 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:46:18.439870 1120970 cri.go:89] found id: ""
	I0729 19:46:18.439905 1120970 logs.go:276] 0 containers: []
	W0729 19:46:18.439920 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:46:18.439929 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:46:18.440015 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:46:18.474916 1120970 cri.go:89] found id: ""
	I0729 19:46:18.474944 1120970 logs.go:276] 0 containers: []
	W0729 19:46:18.474953 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:46:18.474960 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:46:18.475033 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:46:18.509957 1120970 cri.go:89] found id: ""
	I0729 19:46:18.509984 1120970 logs.go:276] 0 containers: []
	W0729 19:46:18.509993 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:46:18.509999 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:46:18.510064 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:46:18.545521 1120970 cri.go:89] found id: ""
	I0729 19:46:18.545551 1120970 logs.go:276] 0 containers: []
	W0729 19:46:18.545564 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:46:18.545573 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:46:18.545646 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:46:18.579041 1120970 cri.go:89] found id: ""
	I0729 19:46:18.579072 1120970 logs.go:276] 0 containers: []
	W0729 19:46:18.579080 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:46:18.579091 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:46:18.579104 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:46:18.653041 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:46:18.653063 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:46:18.653077 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:46:18.732969 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:46:18.733035 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:46:18.773700 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:46:18.773735 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:46:18.826511 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:46:18.826553 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:46:21.340974 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:46:21.354608 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:46:21.354671 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:46:21.388765 1120970 cri.go:89] found id: ""
	I0729 19:46:21.388795 1120970 logs.go:276] 0 containers: []
	W0729 19:46:21.388806 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:46:21.388814 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:46:21.388909 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:46:21.426734 1120970 cri.go:89] found id: ""
	I0729 19:46:21.426764 1120970 logs.go:276] 0 containers: []
	W0729 19:46:21.426776 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:46:21.426784 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:46:21.426861 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:46:21.462965 1120970 cri.go:89] found id: ""
	I0729 19:46:21.462999 1120970 logs.go:276] 0 containers: []
	W0729 19:46:21.463010 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:46:21.463018 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:46:21.463087 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:46:21.496933 1120970 cri.go:89] found id: ""
	I0729 19:46:21.496961 1120970 logs.go:276] 0 containers: []
	W0729 19:46:21.496972 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:46:21.496980 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:46:21.497043 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:46:21.532648 1120970 cri.go:89] found id: ""
	I0729 19:46:21.532682 1120970 logs.go:276] 0 containers: []
	W0729 19:46:21.532695 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:46:21.532703 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:46:21.532777 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:46:21.566507 1120970 cri.go:89] found id: ""
	I0729 19:46:21.566545 1120970 logs.go:276] 0 containers: []
	W0729 19:46:21.566556 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:46:21.566567 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:46:21.566652 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:46:21.605591 1120970 cri.go:89] found id: ""
	I0729 19:46:21.605624 1120970 logs.go:276] 0 containers: []
	W0729 19:46:21.605635 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:46:21.605644 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:46:21.605711 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:46:21.639979 1120970 cri.go:89] found id: ""
	I0729 19:46:21.640004 1120970 logs.go:276] 0 containers: []
	W0729 19:46:21.640012 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:46:21.640020 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:46:21.640035 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:46:21.694405 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:46:21.694450 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:46:21.708616 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:46:21.708647 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:46:21.778528 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:46:21.778567 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:46:21.778583 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:46:21.859626 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:46:21.859661 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:46:19.696385 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:22.195265 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:20.638331 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:23.138907 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:21.779848 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:24.278815 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:24.397520 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:46:24.412579 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:46:24.412673 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:46:24.452586 1120970 cri.go:89] found id: ""
	I0729 19:46:24.452621 1120970 logs.go:276] 0 containers: []
	W0729 19:46:24.452633 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:46:24.452640 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:46:24.452856 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:46:24.487706 1120970 cri.go:89] found id: ""
	I0729 19:46:24.487739 1120970 logs.go:276] 0 containers: []
	W0729 19:46:24.487750 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:46:24.487758 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:46:24.487828 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:46:24.528798 1120970 cri.go:89] found id: ""
	I0729 19:46:24.528832 1120970 logs.go:276] 0 containers: []
	W0729 19:46:24.528844 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:46:24.528852 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:46:24.528926 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:46:24.566429 1120970 cri.go:89] found id: ""
	I0729 19:46:24.566464 1120970 logs.go:276] 0 containers: []
	W0729 19:46:24.566484 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:46:24.566497 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:46:24.566561 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:46:24.601216 1120970 cri.go:89] found id: ""
	I0729 19:46:24.601242 1120970 logs.go:276] 0 containers: []
	W0729 19:46:24.601249 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:46:24.601255 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:46:24.601318 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:46:24.635591 1120970 cri.go:89] found id: ""
	I0729 19:46:24.635636 1120970 logs.go:276] 0 containers: []
	W0729 19:46:24.635648 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:46:24.635655 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:46:24.635723 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:46:24.670674 1120970 cri.go:89] found id: ""
	I0729 19:46:24.670705 1120970 logs.go:276] 0 containers: []
	W0729 19:46:24.670717 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:46:24.670724 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:46:24.670795 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:46:24.704820 1120970 cri.go:89] found id: ""
	I0729 19:46:24.704850 1120970 logs.go:276] 0 containers: []
	W0729 19:46:24.704861 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:46:24.704873 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:46:24.704889 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:46:24.787954 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:46:24.787989 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:46:24.849396 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:46:24.849433 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:46:24.900920 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:46:24.900956 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:46:24.915540 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:46:24.915572 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:46:24.986146 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:46:27.487069 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:46:27.500718 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:46:27.500802 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:46:27.535156 1120970 cri.go:89] found id: ""
	I0729 19:46:27.535188 1120970 logs.go:276] 0 containers: []
	W0729 19:46:27.535199 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:46:27.535206 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:46:27.535272 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:46:27.570613 1120970 cri.go:89] found id: ""
	I0729 19:46:27.570647 1120970 logs.go:276] 0 containers: []
	W0729 19:46:27.570658 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:46:27.570666 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:46:27.570726 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:46:27.605503 1120970 cri.go:89] found id: ""
	I0729 19:46:27.605540 1120970 logs.go:276] 0 containers: []
	W0729 19:46:27.605552 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:46:27.605560 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:46:27.605628 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:46:27.638179 1120970 cri.go:89] found id: ""
	I0729 19:46:27.638202 1120970 logs.go:276] 0 containers: []
	W0729 19:46:27.638209 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:46:27.638215 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:46:27.638265 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:46:27.671019 1120970 cri.go:89] found id: ""
	I0729 19:46:27.671049 1120970 logs.go:276] 0 containers: []
	W0729 19:46:27.671059 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:46:27.671067 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:46:27.671133 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:46:27.704126 1120970 cri.go:89] found id: ""
	I0729 19:46:27.704148 1120970 logs.go:276] 0 containers: []
	W0729 19:46:27.704155 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:46:27.704161 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:46:27.704217 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:46:27.736106 1120970 cri.go:89] found id: ""
	I0729 19:46:27.736137 1120970 logs.go:276] 0 containers: []
	W0729 19:46:27.736148 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:46:27.736162 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:46:27.736234 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:46:27.775615 1120970 cri.go:89] found id: ""
	I0729 19:46:27.775644 1120970 logs.go:276] 0 containers: []
	W0729 19:46:27.775655 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:46:27.775666 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:46:27.775681 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:46:27.817852 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:46:27.817882 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:46:27.867280 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:46:27.867319 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:46:27.880533 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:46:27.880558 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:46:27.952098 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:46:27.952120 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:46:27.952138 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:46:24.195374 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:26.696327 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:25.637615 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:28.138222 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:26.779021 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:29.279227 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:30.534052 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:46:30.560617 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:46:30.560704 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:46:30.594317 1120970 cri.go:89] found id: ""
	I0729 19:46:30.594354 1120970 logs.go:276] 0 containers: []
	W0729 19:46:30.594365 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:46:30.594372 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:46:30.594438 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:46:30.629175 1120970 cri.go:89] found id: ""
	I0729 19:46:30.629202 1120970 logs.go:276] 0 containers: []
	W0729 19:46:30.629213 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:46:30.629278 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:46:30.629358 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:46:30.663173 1120970 cri.go:89] found id: ""
	I0729 19:46:30.663199 1120970 logs.go:276] 0 containers: []
	W0729 19:46:30.663207 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:46:30.663212 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:46:30.663271 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:46:30.695709 1120970 cri.go:89] found id: ""
	I0729 19:46:30.695729 1120970 logs.go:276] 0 containers: []
	W0729 19:46:30.695738 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:46:30.695745 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:46:30.695808 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:46:30.726555 1120970 cri.go:89] found id: ""
	I0729 19:46:30.726582 1120970 logs.go:276] 0 containers: []
	W0729 19:46:30.726589 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:46:30.726597 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:46:30.726658 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:46:30.759818 1120970 cri.go:89] found id: ""
	I0729 19:46:30.759847 1120970 logs.go:276] 0 containers: []
	W0729 19:46:30.759859 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:46:30.759865 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:46:30.759928 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:46:30.794006 1120970 cri.go:89] found id: ""
	I0729 19:46:30.794038 1120970 logs.go:276] 0 containers: []
	W0729 19:46:30.794051 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:46:30.794058 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:46:30.794127 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:46:30.825707 1120970 cri.go:89] found id: ""
	I0729 19:46:30.825735 1120970 logs.go:276] 0 containers: []
	W0729 19:46:30.825744 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:46:30.825753 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:46:30.825767 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:46:30.877517 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:46:30.877553 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:46:30.890777 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:46:30.890811 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:46:30.956702 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:46:30.956732 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:46:30.956747 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:46:31.039080 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:46:31.039118 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:46:29.195305 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:31.694814 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:33.696603 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:30.638472 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:33.138085 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:31.279889 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:33.779333 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:33.580120 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:46:33.595087 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:46:33.595152 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:46:33.636347 1120970 cri.go:89] found id: ""
	I0729 19:46:33.636374 1120970 logs.go:276] 0 containers: []
	W0729 19:46:33.636385 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:46:33.636392 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:46:33.636451 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:46:33.674180 1120970 cri.go:89] found id: ""
	I0729 19:46:33.674207 1120970 logs.go:276] 0 containers: []
	W0729 19:46:33.674215 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:46:33.674222 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:46:33.674281 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:46:33.709549 1120970 cri.go:89] found id: ""
	I0729 19:46:33.709572 1120970 logs.go:276] 0 containers: []
	W0729 19:46:33.709581 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:46:33.709593 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:46:33.709651 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:46:33.742803 1120970 cri.go:89] found id: ""
	I0729 19:46:33.742833 1120970 logs.go:276] 0 containers: []
	W0729 19:46:33.742854 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:46:33.742863 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:46:33.742931 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:46:33.776301 1120970 cri.go:89] found id: ""
	I0729 19:46:33.776329 1120970 logs.go:276] 0 containers: []
	W0729 19:46:33.776336 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:46:33.776342 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:46:33.776412 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:46:33.818972 1120970 cri.go:89] found id: ""
	I0729 19:46:33.819001 1120970 logs.go:276] 0 containers: []
	W0729 19:46:33.819009 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:46:33.819016 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:46:33.819084 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:46:33.857970 1120970 cri.go:89] found id: ""
	I0729 19:46:33.858002 1120970 logs.go:276] 0 containers: []
	W0729 19:46:33.858022 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:46:33.858028 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:46:33.858113 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:46:33.896207 1120970 cri.go:89] found id: ""
	I0729 19:46:33.896237 1120970 logs.go:276] 0 containers: []
	W0729 19:46:33.896248 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:46:33.896261 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:46:33.896276 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:46:33.976843 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:46:33.976879 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:46:34.015642 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:46:34.015671 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:46:34.066095 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:46:34.066133 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:46:34.079616 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:46:34.079649 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:46:34.150666 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:46:36.651722 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:46:36.665599 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:46:36.665673 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:46:36.702807 1120970 cri.go:89] found id: ""
	I0729 19:46:36.702872 1120970 logs.go:276] 0 containers: []
	W0729 19:46:36.702897 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:46:36.702907 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:46:36.702978 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:46:36.739552 1120970 cri.go:89] found id: ""
	I0729 19:46:36.739576 1120970 logs.go:276] 0 containers: []
	W0729 19:46:36.739585 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:46:36.739591 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:46:36.739643 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:46:36.774989 1120970 cri.go:89] found id: ""
	I0729 19:46:36.775017 1120970 logs.go:276] 0 containers: []
	W0729 19:46:36.775028 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:46:36.775036 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:46:36.775108 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:46:36.814984 1120970 cri.go:89] found id: ""
	I0729 19:46:36.815017 1120970 logs.go:276] 0 containers: []
	W0729 19:46:36.815034 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:46:36.815044 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:46:36.815113 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:46:36.848075 1120970 cri.go:89] found id: ""
	I0729 19:46:36.848116 1120970 logs.go:276] 0 containers: []
	W0729 19:46:36.848127 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:46:36.848136 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:46:36.848206 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:46:36.880504 1120970 cri.go:89] found id: ""
	I0729 19:46:36.880535 1120970 logs.go:276] 0 containers: []
	W0729 19:46:36.880544 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:46:36.880557 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:46:36.880615 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:46:36.914716 1120970 cri.go:89] found id: ""
	I0729 19:46:36.914744 1120970 logs.go:276] 0 containers: []
	W0729 19:46:36.914755 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:46:36.914763 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:46:36.914831 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:46:36.958975 1120970 cri.go:89] found id: ""
	I0729 19:46:36.959005 1120970 logs.go:276] 0 containers: []
	W0729 19:46:36.959016 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:46:36.959029 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:46:36.959046 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:46:37.018208 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:46:37.018244 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:46:37.042496 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:46:37.042537 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:46:37.112833 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:46:37.112861 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:46:37.112877 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:46:37.191572 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:46:37.191616 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:46:36.195356 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:38.694730 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:35.637513 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:38.137458 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:36.278153 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:38.778586 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:39.736044 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:46:39.749645 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:46:39.749719 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:46:39.786131 1120970 cri.go:89] found id: ""
	I0729 19:46:39.786155 1120970 logs.go:276] 0 containers: []
	W0729 19:46:39.786166 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:46:39.786174 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:46:39.786237 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:46:39.820470 1120970 cri.go:89] found id: ""
	I0729 19:46:39.820499 1120970 logs.go:276] 0 containers: []
	W0729 19:46:39.820509 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:46:39.820516 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:46:39.820583 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:46:39.854119 1120970 cri.go:89] found id: ""
	I0729 19:46:39.854148 1120970 logs.go:276] 0 containers: []
	W0729 19:46:39.854157 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:46:39.854163 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:46:39.854218 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:46:39.894676 1120970 cri.go:89] found id: ""
	I0729 19:46:39.894707 1120970 logs.go:276] 0 containers: []
	W0729 19:46:39.894714 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:46:39.894721 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:46:39.894789 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:46:39.932651 1120970 cri.go:89] found id: ""
	I0729 19:46:39.932685 1120970 logs.go:276] 0 containers: []
	W0729 19:46:39.932697 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:46:39.932705 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:46:39.932776 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:46:39.968119 1120970 cri.go:89] found id: ""
	I0729 19:46:39.968153 1120970 logs.go:276] 0 containers: []
	W0729 19:46:39.968165 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:46:39.968174 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:46:39.968242 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:46:40.004137 1120970 cri.go:89] found id: ""
	I0729 19:46:40.004167 1120970 logs.go:276] 0 containers: []
	W0729 19:46:40.004175 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:46:40.004181 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:46:40.004252 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:46:40.042519 1120970 cri.go:89] found id: ""
	I0729 19:46:40.042552 1120970 logs.go:276] 0 containers: []
	W0729 19:46:40.042563 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:46:40.042577 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:46:40.042601 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:46:40.118691 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:46:40.118720 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:46:40.118733 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:46:40.198249 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:46:40.198279 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:46:40.236828 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:46:40.236861 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:46:40.290890 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:46:40.290920 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:46:42.804834 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:46:42.818516 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:46:42.818608 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:46:42.855519 1120970 cri.go:89] found id: ""
	I0729 19:46:42.855553 1120970 logs.go:276] 0 containers: []
	W0729 19:46:42.855565 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:46:42.855573 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:46:42.855634 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:46:42.891795 1120970 cri.go:89] found id: ""
	I0729 19:46:42.891827 1120970 logs.go:276] 0 containers: []
	W0729 19:46:42.891837 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:46:42.891845 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:46:42.891912 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:46:42.925308 1120970 cri.go:89] found id: ""
	I0729 19:46:42.925341 1120970 logs.go:276] 0 containers: []
	W0729 19:46:42.925352 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:46:42.925359 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:46:42.925428 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:46:42.961943 1120970 cri.go:89] found id: ""
	I0729 19:46:42.961968 1120970 logs.go:276] 0 containers: []
	W0729 19:46:42.961976 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:46:42.961984 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:46:42.962034 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:46:41.194992 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:43.195814 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:40.138881 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:42.637095 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:44.637746 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:41.278451 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:43.279669 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:45.778954 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:42.994246 1120970 cri.go:89] found id: ""
	I0729 19:46:42.994276 1120970 logs.go:276] 0 containers: []
	W0729 19:46:42.994284 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:46:42.994290 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:46:42.994406 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:46:43.027914 1120970 cri.go:89] found id: ""
	I0729 19:46:43.027943 1120970 logs.go:276] 0 containers: []
	W0729 19:46:43.027953 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:46:43.027962 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:46:43.028029 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:46:43.064274 1120970 cri.go:89] found id: ""
	I0729 19:46:43.064308 1120970 logs.go:276] 0 containers: []
	W0729 19:46:43.064319 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:46:43.064328 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:46:43.064402 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:46:43.104273 1120970 cri.go:89] found id: ""
	I0729 19:46:43.104303 1120970 logs.go:276] 0 containers: []
	W0729 19:46:43.104313 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:46:43.104324 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:46:43.104342 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:46:43.175951 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:46:43.175978 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:46:43.175995 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:46:43.253386 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:46:43.253421 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:46:43.293276 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:46:43.293304 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:46:43.345865 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:46:43.345896 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:46:45.861099 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:46:45.875854 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:46:45.875925 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:46:45.914780 1120970 cri.go:89] found id: ""
	I0729 19:46:45.914815 1120970 logs.go:276] 0 containers: []
	W0729 19:46:45.914827 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:46:45.914837 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:46:45.914925 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:46:45.952575 1120970 cri.go:89] found id: ""
	I0729 19:46:45.952607 1120970 logs.go:276] 0 containers: []
	W0729 19:46:45.952616 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:46:45.952622 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:46:45.952676 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:46:45.993298 1120970 cri.go:89] found id: ""
	I0729 19:46:45.993331 1120970 logs.go:276] 0 containers: []
	W0729 19:46:45.993338 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:46:45.993344 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:46:45.993400 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:46:46.033190 1120970 cri.go:89] found id: ""
	I0729 19:46:46.033216 1120970 logs.go:276] 0 containers: []
	W0729 19:46:46.033225 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:46:46.033230 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:46:46.033283 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:46:46.068694 1120970 cri.go:89] found id: ""
	I0729 19:46:46.068728 1120970 logs.go:276] 0 containers: []
	W0729 19:46:46.068737 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:46:46.068743 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:46:46.068796 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:46:46.101678 1120970 cri.go:89] found id: ""
	I0729 19:46:46.101716 1120970 logs.go:276] 0 containers: []
	W0729 19:46:46.101726 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:46:46.101733 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:46:46.101788 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:46:46.141669 1120970 cri.go:89] found id: ""
	I0729 19:46:46.141702 1120970 logs.go:276] 0 containers: []
	W0729 19:46:46.141713 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:46:46.141721 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:46:46.141780 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:46:46.173182 1120970 cri.go:89] found id: ""
	I0729 19:46:46.173213 1120970 logs.go:276] 0 containers: []
	W0729 19:46:46.173224 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:46:46.173235 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:46:46.173252 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:46:46.224615 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:46:46.224660 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:46:46.237889 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:46:46.237915 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:46:46.312446 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:46:46.312473 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:46:46.312489 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:46:46.389168 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:46:46.389206 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:46:45.196687 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:47.694428 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:46.638398 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:48.639437 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:48.277740 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:50.278638 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:48.930620 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:46:48.944038 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:46:48.944101 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:46:48.979672 1120970 cri.go:89] found id: ""
	I0729 19:46:48.979710 1120970 logs.go:276] 0 containers: []
	W0729 19:46:48.979722 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:46:48.979730 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:46:48.979804 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:46:49.014931 1120970 cri.go:89] found id: ""
	I0729 19:46:49.014967 1120970 logs.go:276] 0 containers: []
	W0729 19:46:49.014980 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:46:49.015006 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:46:49.015078 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:46:49.050867 1120970 cri.go:89] found id: ""
	I0729 19:46:49.050903 1120970 logs.go:276] 0 containers: []
	W0729 19:46:49.050916 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:46:49.050924 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:46:49.050992 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:46:49.085479 1120970 cri.go:89] found id: ""
	I0729 19:46:49.085514 1120970 logs.go:276] 0 containers: []
	W0729 19:46:49.085521 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:46:49.085529 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:46:49.085604 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:46:49.118570 1120970 cri.go:89] found id: ""
	I0729 19:46:49.118597 1120970 logs.go:276] 0 containers: []
	W0729 19:46:49.118605 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:46:49.118611 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:46:49.118664 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:46:49.153581 1120970 cri.go:89] found id: ""
	I0729 19:46:49.153612 1120970 logs.go:276] 0 containers: []
	W0729 19:46:49.153624 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:46:49.153632 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:46:49.153702 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:46:49.187178 1120970 cri.go:89] found id: ""
	I0729 19:46:49.187207 1120970 logs.go:276] 0 containers: []
	W0729 19:46:49.187215 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:46:49.187221 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:46:49.187280 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:46:49.223132 1120970 cri.go:89] found id: ""
	I0729 19:46:49.223163 1120970 logs.go:276] 0 containers: []
	W0729 19:46:49.223173 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:46:49.223185 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:46:49.223200 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:46:49.274160 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:46:49.274189 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:46:49.288399 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:46:49.288431 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:46:49.358452 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:46:49.358478 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:46:49.358496 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:46:49.436711 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:46:49.436745 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:46:51.977377 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:46:51.991042 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:46:51.991102 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:46:52.031425 1120970 cri.go:89] found id: ""
	I0729 19:46:52.031467 1120970 logs.go:276] 0 containers: []
	W0729 19:46:52.031477 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:46:52.031482 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:46:52.031557 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:46:52.069014 1120970 cri.go:89] found id: ""
	I0729 19:46:52.069045 1120970 logs.go:276] 0 containers: []
	W0729 19:46:52.069056 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:46:52.069064 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:46:52.069137 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:46:52.101974 1120970 cri.go:89] found id: ""
	I0729 19:46:52.102000 1120970 logs.go:276] 0 containers: []
	W0729 19:46:52.102008 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:46:52.102014 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:46:52.102079 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:46:52.136232 1120970 cri.go:89] found id: ""
	I0729 19:46:52.136261 1120970 logs.go:276] 0 containers: []
	W0729 19:46:52.136271 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:46:52.136280 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:46:52.136344 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:46:52.173555 1120970 cri.go:89] found id: ""
	I0729 19:46:52.173585 1120970 logs.go:276] 0 containers: []
	W0729 19:46:52.173602 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:46:52.173611 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:46:52.173675 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:46:52.208764 1120970 cri.go:89] found id: ""
	I0729 19:46:52.208791 1120970 logs.go:276] 0 containers: []
	W0729 19:46:52.208799 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:46:52.208805 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:46:52.208863 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:46:52.241514 1120970 cri.go:89] found id: ""
	I0729 19:46:52.241541 1120970 logs.go:276] 0 containers: []
	W0729 19:46:52.241557 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:46:52.241564 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:46:52.241639 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:46:52.277726 1120970 cri.go:89] found id: ""
	I0729 19:46:52.277753 1120970 logs.go:276] 0 containers: []
	W0729 19:46:52.277764 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:46:52.277775 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:46:52.277789 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:46:52.344894 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:46:52.344916 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:46:52.344931 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:46:52.421492 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:46:52.421527 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:46:52.460896 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:46:52.460934 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:46:52.509921 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:46:52.509960 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:46:49.695616 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:51.696510 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:51.138012 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:53.138676 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:52.280019 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:54.778157 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:55.024046 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:46:55.037609 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:46:55.037681 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:46:55.071059 1120970 cri.go:89] found id: ""
	I0729 19:46:55.071086 1120970 logs.go:276] 0 containers: []
	W0729 19:46:55.071094 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:46:55.071102 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:46:55.071162 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:46:55.106634 1120970 cri.go:89] found id: ""
	I0729 19:46:55.106660 1120970 logs.go:276] 0 containers: []
	W0729 19:46:55.106669 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:46:55.106675 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:46:55.106737 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:46:55.138821 1120970 cri.go:89] found id: ""
	I0729 19:46:55.138858 1120970 logs.go:276] 0 containers: []
	W0729 19:46:55.138870 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:46:55.138878 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:46:55.138941 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:46:55.173846 1120970 cri.go:89] found id: ""
	I0729 19:46:55.173893 1120970 logs.go:276] 0 containers: []
	W0729 19:46:55.173904 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:46:55.173913 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:46:55.173978 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:46:55.211853 1120970 cri.go:89] found id: ""
	I0729 19:46:55.211878 1120970 logs.go:276] 0 containers: []
	W0729 19:46:55.211885 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:46:55.211891 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:46:55.211941 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:46:55.245432 1120970 cri.go:89] found id: ""
	I0729 19:46:55.245470 1120970 logs.go:276] 0 containers: []
	W0729 19:46:55.245481 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:46:55.245489 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:46:55.245557 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:46:55.286752 1120970 cri.go:89] found id: ""
	I0729 19:46:55.286777 1120970 logs.go:276] 0 containers: []
	W0729 19:46:55.286785 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:46:55.286791 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:46:55.286841 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:46:55.328070 1120970 cri.go:89] found id: ""
	I0729 19:46:55.328100 1120970 logs.go:276] 0 containers: []
	W0729 19:46:55.328119 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:46:55.328133 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:46:55.328151 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:46:55.341257 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:46:55.341285 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:46:55.410966 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:46:55.410989 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:46:55.411008 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:46:55.486615 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:46:55.486653 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:46:55.523615 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:46:55.523653 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:46:54.195887 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:56.703055 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:55.138951 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:57.638887 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:56.778215 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:59.278483 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:58.074596 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:46:58.088302 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:46:58.088396 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:46:58.124557 1120970 cri.go:89] found id: ""
	I0729 19:46:58.124589 1120970 logs.go:276] 0 containers: []
	W0729 19:46:58.124600 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:46:58.124608 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:46:58.124680 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:46:58.160107 1120970 cri.go:89] found id: ""
	I0729 19:46:58.160142 1120970 logs.go:276] 0 containers: []
	W0729 19:46:58.160151 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:46:58.160157 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:46:58.160214 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:46:58.195522 1120970 cri.go:89] found id: ""
	I0729 19:46:58.195553 1120970 logs.go:276] 0 containers: []
	W0729 19:46:58.195564 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:46:58.195572 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:46:58.195637 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:46:58.232307 1120970 cri.go:89] found id: ""
	I0729 19:46:58.232338 1120970 logs.go:276] 0 containers: []
	W0729 19:46:58.232348 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:46:58.232355 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:46:58.232419 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:46:58.271551 1120970 cri.go:89] found id: ""
	I0729 19:46:58.271602 1120970 logs.go:276] 0 containers: []
	W0729 19:46:58.271614 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:46:58.271622 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:46:58.271701 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:46:58.307833 1120970 cri.go:89] found id: ""
	I0729 19:46:58.307864 1120970 logs.go:276] 0 containers: []
	W0729 19:46:58.307875 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:46:58.307884 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:46:58.307951 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:46:58.341961 1120970 cri.go:89] found id: ""
	I0729 19:46:58.341989 1120970 logs.go:276] 0 containers: []
	W0729 19:46:58.341998 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:46:58.342004 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:46:58.342058 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:46:58.379923 1120970 cri.go:89] found id: ""
	I0729 19:46:58.379962 1120970 logs.go:276] 0 containers: []
	W0729 19:46:58.379972 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:46:58.379982 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:46:58.379997 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:46:58.423276 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:46:58.423310 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:46:58.479021 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:46:58.479063 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:46:58.493544 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:46:58.493578 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:46:58.562634 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:46:58.562663 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:46:58.562684 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:47:01.145327 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:47:01.158997 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:47:01.159060 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:47:01.196272 1120970 cri.go:89] found id: ""
	I0729 19:47:01.196298 1120970 logs.go:276] 0 containers: []
	W0729 19:47:01.196306 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:47:01.196312 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:47:01.196364 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:47:01.238138 1120970 cri.go:89] found id: ""
	I0729 19:47:01.238167 1120970 logs.go:276] 0 containers: []
	W0729 19:47:01.238177 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:47:01.238185 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:47:01.238249 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:47:01.276497 1120970 cri.go:89] found id: ""
	I0729 19:47:01.276525 1120970 logs.go:276] 0 containers: []
	W0729 19:47:01.276535 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:47:01.276543 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:47:01.276607 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:47:01.309092 1120970 cri.go:89] found id: ""
	I0729 19:47:01.309121 1120970 logs.go:276] 0 containers: []
	W0729 19:47:01.309130 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:47:01.309135 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:47:01.309189 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:47:01.340172 1120970 cri.go:89] found id: ""
	I0729 19:47:01.340202 1120970 logs.go:276] 0 containers: []
	W0729 19:47:01.340211 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:47:01.340217 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:47:01.340277 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:47:01.377905 1120970 cri.go:89] found id: ""
	I0729 19:47:01.377941 1120970 logs.go:276] 0 containers: []
	W0729 19:47:01.377953 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:47:01.377961 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:47:01.378034 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:47:01.414735 1120970 cri.go:89] found id: ""
	I0729 19:47:01.414767 1120970 logs.go:276] 0 containers: []
	W0729 19:47:01.414780 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:47:01.414789 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:47:01.414880 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:47:01.455743 1120970 cri.go:89] found id: ""
	I0729 19:47:01.455768 1120970 logs.go:276] 0 containers: []
	W0729 19:47:01.455776 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:47:01.455786 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:47:01.455799 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:47:01.507105 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:47:01.507141 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:47:01.520437 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:47:01.520465 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:47:01.590724 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:47:01.590746 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:47:01.590763 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:47:01.675343 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:47:01.675378 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:46:59.195744 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:01.695905 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:00.138760 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:02.139418 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:04.637243 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:01.278715 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:03.279321 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:05.778276 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:04.219800 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:47:04.234604 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:47:04.234684 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:47:04.267782 1120970 cri.go:89] found id: ""
	I0729 19:47:04.267810 1120970 logs.go:276] 0 containers: []
	W0729 19:47:04.267822 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:47:04.267830 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:47:04.267897 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:47:04.302373 1120970 cri.go:89] found id: ""
	I0729 19:47:04.302402 1120970 logs.go:276] 0 containers: []
	W0729 19:47:04.302413 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:47:04.302420 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:47:04.302484 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:47:04.334998 1120970 cri.go:89] found id: ""
	I0729 19:47:04.335030 1120970 logs.go:276] 0 containers: []
	W0729 19:47:04.335041 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:47:04.335049 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:47:04.335105 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:47:04.370596 1120970 cri.go:89] found id: ""
	I0729 19:47:04.370624 1120970 logs.go:276] 0 containers: []
	W0729 19:47:04.370631 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:47:04.370638 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:47:04.370695 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:47:04.405912 1120970 cri.go:89] found id: ""
	I0729 19:47:04.405945 1120970 logs.go:276] 0 containers: []
	W0729 19:47:04.405957 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:47:04.405966 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:47:04.406044 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:47:04.439856 1120970 cri.go:89] found id: ""
	I0729 19:47:04.439881 1120970 logs.go:276] 0 containers: []
	W0729 19:47:04.439898 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:47:04.439905 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:47:04.439976 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:47:04.473561 1120970 cri.go:89] found id: ""
	I0729 19:47:04.473587 1120970 logs.go:276] 0 containers: []
	W0729 19:47:04.473595 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:47:04.473601 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:47:04.473662 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:47:04.510181 1120970 cri.go:89] found id: ""
	I0729 19:47:04.510207 1120970 logs.go:276] 0 containers: []
	W0729 19:47:04.510217 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:47:04.510226 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:47:04.510239 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:47:04.559448 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:47:04.559485 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:47:04.573752 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:47:04.573782 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:47:04.641008 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:47:04.641030 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:47:04.641046 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:47:04.725252 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:47:04.725293 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:47:07.266379 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:47:07.280725 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:47:07.280801 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:47:07.321856 1120970 cri.go:89] found id: ""
	I0729 19:47:07.321886 1120970 logs.go:276] 0 containers: []
	W0729 19:47:07.321894 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:47:07.321900 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:47:07.321986 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:47:07.355102 1120970 cri.go:89] found id: ""
	I0729 19:47:07.355130 1120970 logs.go:276] 0 containers: []
	W0729 19:47:07.355138 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:47:07.355144 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:47:07.355203 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:47:07.394720 1120970 cri.go:89] found id: ""
	I0729 19:47:07.394749 1120970 logs.go:276] 0 containers: []
	W0729 19:47:07.394762 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:47:07.394771 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:47:07.394829 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:47:07.431002 1120970 cri.go:89] found id: ""
	I0729 19:47:07.431042 1120970 logs.go:276] 0 containers: []
	W0729 19:47:07.431055 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:47:07.431063 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:47:07.431132 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:47:07.467818 1120970 cri.go:89] found id: ""
	I0729 19:47:07.467855 1120970 logs.go:276] 0 containers: []
	W0729 19:47:07.467864 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:47:07.467873 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:47:07.467942 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:47:07.504285 1120970 cri.go:89] found id: ""
	I0729 19:47:07.504316 1120970 logs.go:276] 0 containers: []
	W0729 19:47:07.504327 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:47:07.504335 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:47:07.504411 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:47:07.538246 1120970 cri.go:89] found id: ""
	I0729 19:47:07.538276 1120970 logs.go:276] 0 containers: []
	W0729 19:47:07.538284 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:47:07.538291 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:47:07.538351 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:47:07.573911 1120970 cri.go:89] found id: ""
	I0729 19:47:07.573939 1120970 logs.go:276] 0 containers: []
	W0729 19:47:07.573948 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:47:07.573957 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:47:07.573970 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:47:07.588083 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:47:07.588129 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:47:07.656169 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:47:07.656198 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:47:07.656216 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:47:07.740230 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:47:07.740264 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:47:07.780822 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:47:07.780856 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:47:04.195230 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:06.695090 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:06.637479 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:08.638410 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:08.278522 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:10.782193 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:10.336208 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:47:10.350233 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:47:10.350307 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:47:10.389155 1120970 cri.go:89] found id: ""
	I0729 19:47:10.389190 1120970 logs.go:276] 0 containers: []
	W0729 19:47:10.389202 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:47:10.389210 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:47:10.389277 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:47:10.421432 1120970 cri.go:89] found id: ""
	I0729 19:47:10.421466 1120970 logs.go:276] 0 containers: []
	W0729 19:47:10.421476 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:47:10.421482 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:47:10.421552 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:47:10.462530 1120970 cri.go:89] found id: ""
	I0729 19:47:10.462563 1120970 logs.go:276] 0 containers: []
	W0729 19:47:10.462572 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:47:10.462577 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:47:10.462640 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:47:10.499899 1120970 cri.go:89] found id: ""
	I0729 19:47:10.499927 1120970 logs.go:276] 0 containers: []
	W0729 19:47:10.499935 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:47:10.499945 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:47:10.500007 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:47:10.534022 1120970 cri.go:89] found id: ""
	I0729 19:47:10.534051 1120970 logs.go:276] 0 containers: []
	W0729 19:47:10.534060 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:47:10.534066 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:47:10.534119 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:47:10.568136 1120970 cri.go:89] found id: ""
	I0729 19:47:10.568166 1120970 logs.go:276] 0 containers: []
	W0729 19:47:10.568174 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:47:10.568181 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:47:10.568246 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:47:10.603887 1120970 cri.go:89] found id: ""
	I0729 19:47:10.603919 1120970 logs.go:276] 0 containers: []
	W0729 19:47:10.603930 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:47:10.603938 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:47:10.604005 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:47:10.639947 1120970 cri.go:89] found id: ""
	I0729 19:47:10.639974 1120970 logs.go:276] 0 containers: []
	W0729 19:47:10.639981 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:47:10.639989 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:47:10.640001 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:47:10.693113 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:47:10.693146 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:47:10.708099 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:47:10.708138 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:47:10.777587 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:47:10.777618 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:47:10.777634 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:47:10.872453 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:47:10.872499 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:47:09.195301 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:11.695021 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:13.697025 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:11.137420 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:13.137553 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:13.278601 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:15.779974 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:13.412398 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:47:13.426246 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:47:13.426308 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:47:13.463170 1120970 cri.go:89] found id: ""
	I0729 19:47:13.463202 1120970 logs.go:276] 0 containers: []
	W0729 19:47:13.463213 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:47:13.463220 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:47:13.463287 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:47:13.499102 1120970 cri.go:89] found id: ""
	I0729 19:47:13.499137 1120970 logs.go:276] 0 containers: []
	W0729 19:47:13.499146 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:47:13.499151 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:47:13.499235 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:47:13.531462 1120970 cri.go:89] found id: ""
	I0729 19:47:13.531514 1120970 logs.go:276] 0 containers: []
	W0729 19:47:13.531526 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:47:13.531534 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:47:13.531606 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:47:13.564632 1120970 cri.go:89] found id: ""
	I0729 19:47:13.564670 1120970 logs.go:276] 0 containers: []
	W0729 19:47:13.564681 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:47:13.564689 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:47:13.564745 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:47:13.596564 1120970 cri.go:89] found id: ""
	I0729 19:47:13.596591 1120970 logs.go:276] 0 containers: []
	W0729 19:47:13.596602 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:47:13.596610 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:47:13.596686 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:47:13.629682 1120970 cri.go:89] found id: ""
	I0729 19:47:13.629711 1120970 logs.go:276] 0 containers: []
	W0729 19:47:13.629721 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:47:13.629729 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:47:13.629791 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:47:13.664666 1120970 cri.go:89] found id: ""
	I0729 19:47:13.664693 1120970 logs.go:276] 0 containers: []
	W0729 19:47:13.664701 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:47:13.664708 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:47:13.664777 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:47:13.699238 1120970 cri.go:89] found id: ""
	I0729 19:47:13.699267 1120970 logs.go:276] 0 containers: []
	W0729 19:47:13.699277 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:47:13.699289 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:47:13.699304 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:47:13.751555 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:47:13.751588 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:47:13.766769 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:47:13.766801 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:47:13.834898 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:47:13.834918 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:47:13.834932 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:47:13.913907 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:47:13.913944 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:47:16.457229 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:47:16.470138 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:47:16.470222 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:47:16.504643 1120970 cri.go:89] found id: ""
	I0729 19:47:16.504679 1120970 logs.go:276] 0 containers: []
	W0729 19:47:16.504688 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:47:16.504693 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:47:16.504763 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:47:16.539328 1120970 cri.go:89] found id: ""
	I0729 19:47:16.539368 1120970 logs.go:276] 0 containers: []
	W0729 19:47:16.539379 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:47:16.539385 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:47:16.539446 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:47:16.597867 1120970 cri.go:89] found id: ""
	I0729 19:47:16.597893 1120970 logs.go:276] 0 containers: []
	W0729 19:47:16.597904 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:47:16.597911 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:47:16.597976 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:47:16.631728 1120970 cri.go:89] found id: ""
	I0729 19:47:16.631755 1120970 logs.go:276] 0 containers: []
	W0729 19:47:16.631768 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:47:16.631780 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:47:16.631842 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:47:16.668337 1120970 cri.go:89] found id: ""
	I0729 19:47:16.668377 1120970 logs.go:276] 0 containers: []
	W0729 19:47:16.668387 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:47:16.668395 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:47:16.668461 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:47:16.704808 1120970 cri.go:89] found id: ""
	I0729 19:47:16.704834 1120970 logs.go:276] 0 containers: []
	W0729 19:47:16.704844 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:47:16.704851 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:47:16.704911 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:47:16.743919 1120970 cri.go:89] found id: ""
	I0729 19:47:16.743948 1120970 logs.go:276] 0 containers: []
	W0729 19:47:16.743955 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:47:16.743961 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:47:16.744022 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:47:16.785240 1120970 cri.go:89] found id: ""
	I0729 19:47:16.785268 1120970 logs.go:276] 0 containers: []
	W0729 19:47:16.785279 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:47:16.785290 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:47:16.785306 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:47:16.838247 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:47:16.838288 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:47:16.851766 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:47:16.851797 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:47:16.928960 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:47:16.928986 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:47:16.929002 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:47:17.008260 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:47:17.008296 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:47:16.194957 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:18.196333 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:15.138916 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:17.637392 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:19.638484 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:17.781105 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:20.279439 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:19.555108 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:47:19.569838 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:47:19.569917 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:47:19.608358 1120970 cri.go:89] found id: ""
	I0729 19:47:19.608393 1120970 logs.go:276] 0 containers: []
	W0729 19:47:19.608405 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:47:19.608414 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:47:19.608475 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:47:19.644144 1120970 cri.go:89] found id: ""
	I0729 19:47:19.644173 1120970 logs.go:276] 0 containers: []
	W0729 19:47:19.644183 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:47:19.644191 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:47:19.644259 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:47:19.686316 1120970 cri.go:89] found id: ""
	I0729 19:47:19.686342 1120970 logs.go:276] 0 containers: []
	W0729 19:47:19.686353 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:47:19.686359 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:47:19.686419 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:47:19.722006 1120970 cri.go:89] found id: ""
	I0729 19:47:19.722034 1120970 logs.go:276] 0 containers: []
	W0729 19:47:19.722044 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:47:19.722052 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:47:19.722127 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:47:19.762767 1120970 cri.go:89] found id: ""
	I0729 19:47:19.762799 1120970 logs.go:276] 0 containers: []
	W0729 19:47:19.762811 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:47:19.762818 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:47:19.762904 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:47:19.802185 1120970 cri.go:89] found id: ""
	I0729 19:47:19.802217 1120970 logs.go:276] 0 containers: []
	W0729 19:47:19.802228 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:47:19.802238 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:47:19.802311 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:47:19.840001 1120970 cri.go:89] found id: ""
	I0729 19:47:19.840036 1120970 logs.go:276] 0 containers: []
	W0729 19:47:19.840048 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:47:19.840056 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:47:19.840117 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:47:19.877627 1120970 cri.go:89] found id: ""
	I0729 19:47:19.877657 1120970 logs.go:276] 0 containers: []
	W0729 19:47:19.877668 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:47:19.877681 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:47:19.877698 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:47:19.920673 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:47:19.920708 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:47:19.980004 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:47:19.980045 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:47:19.994679 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:47:19.994714 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:47:20.064864 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:47:20.064892 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:47:20.064910 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:47:22.650763 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:47:22.664998 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:47:22.665079 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:47:22.701576 1120970 cri.go:89] found id: ""
	I0729 19:47:22.701611 1120970 logs.go:276] 0 containers: []
	W0729 19:47:22.701620 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:47:22.701630 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:47:22.701689 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:47:22.744238 1120970 cri.go:89] found id: ""
	I0729 19:47:22.744268 1120970 logs.go:276] 0 containers: []
	W0729 19:47:22.744275 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:47:22.744287 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:47:22.744358 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:47:22.785947 1120970 cri.go:89] found id: ""
	I0729 19:47:22.785974 1120970 logs.go:276] 0 containers: []
	W0729 19:47:22.785982 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:47:22.785988 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:47:22.786047 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:47:22.823352 1120970 cri.go:89] found id: ""
	I0729 19:47:22.823379 1120970 logs.go:276] 0 containers: []
	W0729 19:47:22.823387 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:47:22.823394 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:47:22.823462 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:47:22.855676 1120970 cri.go:89] found id: ""
	I0729 19:47:22.855704 1120970 logs.go:276] 0 containers: []
	W0729 19:47:22.855710 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:47:22.855716 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:47:22.855773 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:47:22.891910 1120970 cri.go:89] found id: ""
	I0729 19:47:22.891943 1120970 logs.go:276] 0 containers: []
	W0729 19:47:22.891956 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:47:22.891964 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:47:22.892025 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:47:22.928605 1120970 cri.go:89] found id: ""
	I0729 19:47:22.928638 1120970 logs.go:276] 0 containers: []
	W0729 19:47:22.928648 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:47:22.928658 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:47:22.928728 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:47:20.196567 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:22.694908 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:22.137177 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:24.137629 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:22.778638 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:25.279261 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:22.985022 1120970 cri.go:89] found id: ""
	I0729 19:47:22.985059 1120970 logs.go:276] 0 containers: []
	W0729 19:47:22.985068 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:47:22.985088 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:47:22.985101 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:47:23.073062 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:47:23.073098 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:47:23.114995 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:47:23.115024 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:47:23.171536 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:47:23.171570 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:47:23.185192 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:47:23.185219 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:47:23.259355 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:47:25.760046 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:47:25.774159 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:47:25.774245 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:47:25.808374 1120970 cri.go:89] found id: ""
	I0729 19:47:25.808406 1120970 logs.go:276] 0 containers: []
	W0729 19:47:25.808417 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:47:25.808424 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:47:25.808486 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:47:25.843623 1120970 cri.go:89] found id: ""
	I0729 19:47:25.843655 1120970 logs.go:276] 0 containers: []
	W0729 19:47:25.843666 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:47:25.843673 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:47:25.843774 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:47:25.880200 1120970 cri.go:89] found id: ""
	I0729 19:47:25.880233 1120970 logs.go:276] 0 containers: []
	W0729 19:47:25.880243 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:47:25.880250 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:47:25.880323 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:47:25.915349 1120970 cri.go:89] found id: ""
	I0729 19:47:25.915374 1120970 logs.go:276] 0 containers: []
	W0729 19:47:25.915381 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:47:25.915391 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:47:25.915444 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:47:25.948092 1120970 cri.go:89] found id: ""
	I0729 19:47:25.948134 1120970 logs.go:276] 0 containers: []
	W0729 19:47:25.948145 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:47:25.948153 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:47:25.948220 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:47:25.981836 1120970 cri.go:89] found id: ""
	I0729 19:47:25.981864 1120970 logs.go:276] 0 containers: []
	W0729 19:47:25.981874 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:47:25.981882 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:47:25.981967 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:47:26.014464 1120970 cri.go:89] found id: ""
	I0729 19:47:26.014494 1120970 logs.go:276] 0 containers: []
	W0729 19:47:26.014502 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:47:26.014515 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:47:26.014580 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:47:26.048607 1120970 cri.go:89] found id: ""
	I0729 19:47:26.048635 1120970 logs.go:276] 0 containers: []
	W0729 19:47:26.048646 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:47:26.048667 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:47:26.048683 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:47:26.100962 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:47:26.101002 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:47:26.116404 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:47:26.116434 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:47:26.183714 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:47:26.183734 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:47:26.183747 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:47:26.260308 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:47:26.260347 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:47:24.695393 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:27.195561 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:26.137714 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:28.637781 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:27.778603 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:30.278476 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:28.802593 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:47:28.815317 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:47:28.815380 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:47:28.849448 1120970 cri.go:89] found id: ""
	I0729 19:47:28.849473 1120970 logs.go:276] 0 containers: []
	W0729 19:47:28.849480 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:47:28.849486 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:47:28.849544 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:47:28.888305 1120970 cri.go:89] found id: ""
	I0729 19:47:28.888342 1120970 logs.go:276] 0 containers: []
	W0729 19:47:28.888353 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:47:28.888360 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:47:28.888421 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:47:28.921000 1120970 cri.go:89] found id: ""
	I0729 19:47:28.921034 1120970 logs.go:276] 0 containers: []
	W0729 19:47:28.921045 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:47:28.921054 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:47:28.921116 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:47:28.953546 1120970 cri.go:89] found id: ""
	I0729 19:47:28.953574 1120970 logs.go:276] 0 containers: []
	W0729 19:47:28.953583 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:47:28.953589 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:47:28.953652 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:47:28.991203 1120970 cri.go:89] found id: ""
	I0729 19:47:28.991236 1120970 logs.go:276] 0 containers: []
	W0729 19:47:28.991248 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:47:28.991256 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:47:28.991329 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:47:29.026151 1120970 cri.go:89] found id: ""
	I0729 19:47:29.026183 1120970 logs.go:276] 0 containers: []
	W0729 19:47:29.026195 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:47:29.026203 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:47:29.026271 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:47:29.059654 1120970 cri.go:89] found id: ""
	I0729 19:47:29.059687 1120970 logs.go:276] 0 containers: []
	W0729 19:47:29.059695 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:47:29.059702 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:47:29.059756 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:47:29.091952 1120970 cri.go:89] found id: ""
	I0729 19:47:29.092001 1120970 logs.go:276] 0 containers: []
	W0729 19:47:29.092012 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:47:29.092024 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:47:29.092043 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:47:29.143511 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:47:29.143543 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:47:29.157752 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:47:29.157781 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:47:29.225599 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:47:29.225621 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:47:29.225634 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:47:29.311329 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:47:29.311370 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:47:31.850921 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:47:31.864594 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:47:31.864675 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:47:31.898580 1120970 cri.go:89] found id: ""
	I0729 19:47:31.898622 1120970 logs.go:276] 0 containers: []
	W0729 19:47:31.898631 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:47:31.898638 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:47:31.898693 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:47:31.932481 1120970 cri.go:89] found id: ""
	I0729 19:47:31.932514 1120970 logs.go:276] 0 containers: []
	W0729 19:47:31.932525 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:47:31.932533 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:47:31.932595 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:47:31.964820 1120970 cri.go:89] found id: ""
	I0729 19:47:31.964857 1120970 logs.go:276] 0 containers: []
	W0729 19:47:31.964868 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:47:31.964876 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:47:31.964957 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:47:31.996854 1120970 cri.go:89] found id: ""
	I0729 19:47:31.996889 1120970 logs.go:276] 0 containers: []
	W0729 19:47:31.996900 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:47:31.996908 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:47:31.996975 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:47:32.031808 1120970 cri.go:89] found id: ""
	I0729 19:47:32.031843 1120970 logs.go:276] 0 containers: []
	W0729 19:47:32.031854 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:47:32.031864 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:47:32.031934 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:47:32.064563 1120970 cri.go:89] found id: ""
	I0729 19:47:32.064593 1120970 logs.go:276] 0 containers: []
	W0729 19:47:32.064608 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:47:32.064615 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:47:32.064677 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:47:32.102811 1120970 cri.go:89] found id: ""
	I0729 19:47:32.102859 1120970 logs.go:276] 0 containers: []
	W0729 19:47:32.102871 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:47:32.102879 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:47:32.102952 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:47:32.136770 1120970 cri.go:89] found id: ""
	I0729 19:47:32.136798 1120970 logs.go:276] 0 containers: []
	W0729 19:47:32.136808 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:47:32.136819 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:47:32.136838 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:47:32.189334 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:47:32.189371 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:47:32.204039 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:47:32.204076 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:47:32.274139 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:47:32.274172 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:47:32.274187 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:47:32.350191 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:47:32.350228 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:47:29.196922 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:31.200725 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:33.695374 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:30.637898 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:32.638342 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:34.639225 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:32.279116 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:34.780505 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:34.889718 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:47:34.903796 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:47:34.903877 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:47:34.938860 1120970 cri.go:89] found id: ""
	I0729 19:47:34.938893 1120970 logs.go:276] 0 containers: []
	W0729 19:47:34.938904 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:47:34.938912 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:47:34.938980 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:47:34.970501 1120970 cri.go:89] found id: ""
	I0729 19:47:34.970544 1120970 logs.go:276] 0 containers: []
	W0729 19:47:34.970553 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:47:34.970559 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:47:34.970626 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:47:35.006915 1120970 cri.go:89] found id: ""
	I0729 19:47:35.006943 1120970 logs.go:276] 0 containers: []
	W0729 19:47:35.006950 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:47:35.006957 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:47:35.007020 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:47:35.040827 1120970 cri.go:89] found id: ""
	I0729 19:47:35.040855 1120970 logs.go:276] 0 containers: []
	W0729 19:47:35.040862 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:47:35.040869 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:47:35.040918 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:47:35.075497 1120970 cri.go:89] found id: ""
	I0729 19:47:35.075527 1120970 logs.go:276] 0 containers: []
	W0729 19:47:35.075537 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:47:35.075544 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:47:35.075598 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:47:35.111265 1120970 cri.go:89] found id: ""
	I0729 19:47:35.111293 1120970 logs.go:276] 0 containers: []
	W0729 19:47:35.111302 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:47:35.111308 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:47:35.111363 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:47:35.145728 1120970 cri.go:89] found id: ""
	I0729 19:47:35.145756 1120970 logs.go:276] 0 containers: []
	W0729 19:47:35.145763 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:47:35.145769 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:47:35.145821 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:47:35.185050 1120970 cri.go:89] found id: ""
	I0729 19:47:35.185078 1120970 logs.go:276] 0 containers: []
	W0729 19:47:35.185088 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:47:35.185100 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:47:35.185117 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:47:35.236835 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:47:35.236867 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:47:35.251263 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:47:35.251290 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:47:35.325888 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:47:35.325912 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:47:35.325925 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:47:35.404779 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:47:35.404819 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:47:37.944941 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:47:37.960885 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:47:37.960954 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:47:35.695786 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:37.696015 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:37.136815 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:39.137763 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:37.278790 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:39.779285 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:38.007612 1120970 cri.go:89] found id: ""
	I0729 19:47:38.007639 1120970 logs.go:276] 0 containers: []
	W0729 19:47:38.007648 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:47:38.007655 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:47:38.007721 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:47:38.044568 1120970 cri.go:89] found id: ""
	I0729 19:47:38.044610 1120970 logs.go:276] 0 containers: []
	W0729 19:47:38.044621 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:47:38.044628 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:47:38.044698 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:47:38.085186 1120970 cri.go:89] found id: ""
	I0729 19:47:38.085217 1120970 logs.go:276] 0 containers: []
	W0729 19:47:38.085227 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:47:38.085235 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:47:38.085303 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:47:38.123039 1120970 cri.go:89] found id: ""
	I0729 19:47:38.123070 1120970 logs.go:276] 0 containers: []
	W0729 19:47:38.123082 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:47:38.123090 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:47:38.123158 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:47:38.166191 1120970 cri.go:89] found id: ""
	I0729 19:47:38.166220 1120970 logs.go:276] 0 containers: []
	W0729 19:47:38.166229 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:47:38.166237 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:47:38.166301 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:47:38.204138 1120970 cri.go:89] found id: ""
	I0729 19:47:38.204170 1120970 logs.go:276] 0 containers: []
	W0729 19:47:38.204179 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:47:38.204186 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:47:38.204286 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:47:38.241599 1120970 cri.go:89] found id: ""
	I0729 19:47:38.241629 1120970 logs.go:276] 0 containers: []
	W0729 19:47:38.241638 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:47:38.241643 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:47:38.241695 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:47:38.276986 1120970 cri.go:89] found id: ""
	I0729 19:47:38.277013 1120970 logs.go:276] 0 containers: []
	W0729 19:47:38.277021 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:47:38.277030 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:47:38.277042 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:47:38.330925 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:47:38.330971 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:47:38.345416 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:47:38.345455 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:47:38.420010 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:47:38.420041 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:47:38.420059 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:47:38.506198 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:47:38.506243 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:47:41.048957 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:47:41.062950 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:47:41.063027 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:47:41.108956 1120970 cri.go:89] found id: ""
	I0729 19:47:41.108987 1120970 logs.go:276] 0 containers: []
	W0729 19:47:41.108995 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:47:41.109002 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:47:41.109068 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:47:41.146952 1120970 cri.go:89] found id: ""
	I0729 19:47:41.146984 1120970 logs.go:276] 0 containers: []
	W0729 19:47:41.146994 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:47:41.147002 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:47:41.147068 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:47:41.190277 1120970 cri.go:89] found id: ""
	I0729 19:47:41.190310 1120970 logs.go:276] 0 containers: []
	W0729 19:47:41.190321 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:47:41.190329 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:47:41.190410 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:47:41.226733 1120970 cri.go:89] found id: ""
	I0729 19:47:41.226762 1120970 logs.go:276] 0 containers: []
	W0729 19:47:41.226770 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:47:41.226777 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:47:41.226862 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:47:41.260761 1120970 cri.go:89] found id: ""
	I0729 19:47:41.260790 1120970 logs.go:276] 0 containers: []
	W0729 19:47:41.260798 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:47:41.260804 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:47:41.260871 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:47:41.296325 1120970 cri.go:89] found id: ""
	I0729 19:47:41.296356 1120970 logs.go:276] 0 containers: []
	W0729 19:47:41.296367 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:47:41.296376 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:47:41.296435 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:47:41.329613 1120970 cri.go:89] found id: ""
	I0729 19:47:41.329642 1120970 logs.go:276] 0 containers: []
	W0729 19:47:41.329651 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:47:41.329657 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:47:41.329717 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:47:41.365182 1120970 cri.go:89] found id: ""
	I0729 19:47:41.365212 1120970 logs.go:276] 0 containers: []
	W0729 19:47:41.365220 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:47:41.365229 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:47:41.365243 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:47:41.416107 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:47:41.416143 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:47:41.429529 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:47:41.429562 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:47:41.499546 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:47:41.499568 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:47:41.499582 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:47:41.582010 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:47:41.582049 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:47:40.195271 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:42.698072 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:41.142911 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:43.637826 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:42.278481 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:44.278595 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:44.122162 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:47:44.136767 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:47:44.136850 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:47:44.171574 1120970 cri.go:89] found id: ""
	I0729 19:47:44.171610 1120970 logs.go:276] 0 containers: []
	W0729 19:47:44.171621 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:47:44.171629 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:47:44.171699 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:47:44.206974 1120970 cri.go:89] found id: ""
	I0729 19:47:44.207004 1120970 logs.go:276] 0 containers: []
	W0729 19:47:44.207013 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:47:44.207019 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:47:44.207068 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:47:44.240412 1120970 cri.go:89] found id: ""
	I0729 19:47:44.240438 1120970 logs.go:276] 0 containers: []
	W0729 19:47:44.240449 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:47:44.240457 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:47:44.240521 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:47:44.274434 1120970 cri.go:89] found id: ""
	I0729 19:47:44.274464 1120970 logs.go:276] 0 containers: []
	W0729 19:47:44.274475 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:47:44.274482 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:47:44.274553 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:47:44.313302 1120970 cri.go:89] found id: ""
	I0729 19:47:44.313330 1120970 logs.go:276] 0 containers: []
	W0729 19:47:44.313339 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:47:44.313354 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:47:44.313426 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:47:44.344853 1120970 cri.go:89] found id: ""
	I0729 19:47:44.344885 1120970 logs.go:276] 0 containers: []
	W0729 19:47:44.344895 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:47:44.344903 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:47:44.344970 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:47:44.378055 1120970 cri.go:89] found id: ""
	I0729 19:47:44.378089 1120970 logs.go:276] 0 containers: []
	W0729 19:47:44.378101 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:47:44.378109 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:47:44.378176 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:47:44.412734 1120970 cri.go:89] found id: ""
	I0729 19:47:44.412762 1120970 logs.go:276] 0 containers: []
	W0729 19:47:44.412772 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:47:44.412782 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:47:44.412795 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:47:44.468125 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:47:44.468157 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:47:44.482896 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:47:44.482923 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:47:44.551222 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:47:44.551249 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:47:44.551270 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:47:44.630413 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:47:44.630455 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:47:47.172322 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:47:47.186383 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:47:47.186463 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:47:47.221577 1120970 cri.go:89] found id: ""
	I0729 19:47:47.221610 1120970 logs.go:276] 0 containers: []
	W0729 19:47:47.221617 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:47:47.221623 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:47:47.221686 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:47:47.260164 1120970 cri.go:89] found id: ""
	I0729 19:47:47.260207 1120970 logs.go:276] 0 containers: []
	W0729 19:47:47.260227 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:47:47.260235 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:47:47.260303 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:47:47.297101 1120970 cri.go:89] found id: ""
	I0729 19:47:47.297130 1120970 logs.go:276] 0 containers: []
	W0729 19:47:47.297139 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:47:47.297148 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:47:47.297211 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:47:47.332429 1120970 cri.go:89] found id: ""
	I0729 19:47:47.332464 1120970 logs.go:276] 0 containers: []
	W0729 19:47:47.332474 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:47:47.332484 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:47:47.332538 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:47:47.366021 1120970 cri.go:89] found id: ""
	I0729 19:47:47.366055 1120970 logs.go:276] 0 containers: []
	W0729 19:47:47.366065 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:47:47.366074 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:47:47.366144 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:47:47.401278 1120970 cri.go:89] found id: ""
	I0729 19:47:47.401307 1120970 logs.go:276] 0 containers: []
	W0729 19:47:47.401315 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:47:47.401321 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:47:47.401395 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:47:47.435717 1120970 cri.go:89] found id: ""
	I0729 19:47:47.435748 1120970 logs.go:276] 0 containers: []
	W0729 19:47:47.435756 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:47:47.435770 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:47:47.435835 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:47:47.472120 1120970 cri.go:89] found id: ""
	I0729 19:47:47.472149 1120970 logs.go:276] 0 containers: []
	W0729 19:47:47.472157 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:47:47.472167 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:47:47.472181 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:47:47.529466 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:47:47.529503 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:47:47.544072 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:47:47.544102 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:47:47.614456 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:47:47.614478 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:47:47.614499 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:47:47.693271 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:47:47.693305 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:47:45.195129 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:47.196302 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:45.638102 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:47.639278 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:46.778610 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:48.778746 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:50.232417 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:47:50.246080 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:47:50.246154 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:47:50.285256 1120970 cri.go:89] found id: ""
	I0729 19:47:50.285284 1120970 logs.go:276] 0 containers: []
	W0729 19:47:50.285294 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:47:50.285302 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:47:50.285364 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:47:50.319443 1120970 cri.go:89] found id: ""
	I0729 19:47:50.319469 1120970 logs.go:276] 0 containers: []
	W0729 19:47:50.319476 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:47:50.319482 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:47:50.319555 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:47:50.356465 1120970 cri.go:89] found id: ""
	I0729 19:47:50.356495 1120970 logs.go:276] 0 containers: []
	W0729 19:47:50.356505 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:47:50.356512 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:47:50.356578 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:47:50.393920 1120970 cri.go:89] found id: ""
	I0729 19:47:50.393954 1120970 logs.go:276] 0 containers: []
	W0729 19:47:50.393965 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:47:50.393973 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:47:50.394052 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:47:50.430287 1120970 cri.go:89] found id: ""
	I0729 19:47:50.430320 1120970 logs.go:276] 0 containers: []
	W0729 19:47:50.430333 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:47:50.430341 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:47:50.430411 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:47:50.465501 1120970 cri.go:89] found id: ""
	I0729 19:47:50.465528 1120970 logs.go:276] 0 containers: []
	W0729 19:47:50.465536 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:47:50.465542 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:47:50.465595 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:47:50.504012 1120970 cri.go:89] found id: ""
	I0729 19:47:50.504042 1120970 logs.go:276] 0 containers: []
	W0729 19:47:50.504051 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:47:50.504063 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:47:50.504122 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:47:50.545117 1120970 cri.go:89] found id: ""
	I0729 19:47:50.545151 1120970 logs.go:276] 0 containers: []
	W0729 19:47:50.545163 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:47:50.545175 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:47:50.545198 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:47:50.618183 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:47:50.618213 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:47:50.618232 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:47:50.697577 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:47:50.697611 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:47:50.745910 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:47:50.745949 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:47:50.797458 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:47:50.797501 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:47:49.694395 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:51.697714 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:50.138539 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:52.143316 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:54.637975 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:51.279127 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:53.779610 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:53.311907 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:47:53.326666 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:47:53.326734 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:47:53.361564 1120970 cri.go:89] found id: ""
	I0729 19:47:53.361596 1120970 logs.go:276] 0 containers: []
	W0729 19:47:53.361614 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:47:53.361621 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:47:53.361685 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:47:53.397867 1120970 cri.go:89] found id: ""
	I0729 19:47:53.397899 1120970 logs.go:276] 0 containers: []
	W0729 19:47:53.397910 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:47:53.397918 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:47:53.398023 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:47:53.438721 1120970 cri.go:89] found id: ""
	I0729 19:47:53.438752 1120970 logs.go:276] 0 containers: []
	W0729 19:47:53.438764 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:47:53.438771 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:47:53.438840 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:47:53.477746 1120970 cri.go:89] found id: ""
	I0729 19:47:53.477776 1120970 logs.go:276] 0 containers: []
	W0729 19:47:53.477787 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:47:53.477794 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:47:53.477863 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:47:53.510899 1120970 cri.go:89] found id: ""
	I0729 19:47:53.510928 1120970 logs.go:276] 0 containers: []
	W0729 19:47:53.510936 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:47:53.510941 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:47:53.510994 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:47:53.545749 1120970 cri.go:89] found id: ""
	I0729 19:47:53.545786 1120970 logs.go:276] 0 containers: []
	W0729 19:47:53.545799 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:47:53.545807 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:47:53.545883 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:47:53.585542 1120970 cri.go:89] found id: ""
	I0729 19:47:53.585575 1120970 logs.go:276] 0 containers: []
	W0729 19:47:53.585586 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:47:53.585593 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:47:53.585666 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:47:53.617974 1120970 cri.go:89] found id: ""
	I0729 19:47:53.618006 1120970 logs.go:276] 0 containers: []
	W0729 19:47:53.618014 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:47:53.618024 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:47:53.618036 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:47:53.670860 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:47:53.670897 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:47:53.685089 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:47:53.685120 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:47:53.760570 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:47:53.760598 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:47:53.760611 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:47:53.848973 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:47:53.849018 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:47:56.394206 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:47:56.409087 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:47:56.409167 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:47:56.447553 1120970 cri.go:89] found id: ""
	I0729 19:47:56.447589 1120970 logs.go:276] 0 containers: []
	W0729 19:47:56.447607 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:47:56.447615 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:47:56.447694 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:47:56.485948 1120970 cri.go:89] found id: ""
	I0729 19:47:56.485978 1120970 logs.go:276] 0 containers: []
	W0729 19:47:56.485986 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:47:56.485992 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:47:56.486061 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:47:56.521722 1120970 cri.go:89] found id: ""
	I0729 19:47:56.521762 1120970 logs.go:276] 0 containers: []
	W0729 19:47:56.521784 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:47:56.521792 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:47:56.521855 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:47:56.557379 1120970 cri.go:89] found id: ""
	I0729 19:47:56.557414 1120970 logs.go:276] 0 containers: []
	W0729 19:47:56.557425 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:47:56.557433 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:47:56.557488 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:47:56.595198 1120970 cri.go:89] found id: ""
	I0729 19:47:56.595225 1120970 logs.go:276] 0 containers: []
	W0729 19:47:56.595233 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:47:56.595240 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:47:56.595306 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:47:56.629298 1120970 cri.go:89] found id: ""
	I0729 19:47:56.629330 1120970 logs.go:276] 0 containers: []
	W0729 19:47:56.629337 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:47:56.629344 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:47:56.629410 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:47:56.663401 1120970 cri.go:89] found id: ""
	I0729 19:47:56.663434 1120970 logs.go:276] 0 containers: []
	W0729 19:47:56.663445 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:47:56.663453 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:47:56.663519 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:47:56.699622 1120970 cri.go:89] found id: ""
	I0729 19:47:56.699651 1120970 logs.go:276] 0 containers: []
	W0729 19:47:56.699661 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:47:56.699672 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:47:56.699688 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:47:56.739680 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:47:56.739713 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:47:56.794605 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:47:56.794647 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:47:56.824479 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:47:56.824510 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:47:56.889186 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:47:56.889209 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:47:56.889224 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:47:54.196350 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:56.696572 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:57.137366 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:59.638403 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:56.278603 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:58.280193 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:00.778204 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:59.472943 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:47:59.488574 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:47:59.488657 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:47:59.528870 1120970 cri.go:89] found id: ""
	I0729 19:47:59.528910 1120970 logs.go:276] 0 containers: []
	W0729 19:47:59.528921 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:47:59.528930 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:47:59.529001 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:47:59.565299 1120970 cri.go:89] found id: ""
	I0729 19:47:59.565331 1120970 logs.go:276] 0 containers: []
	W0729 19:47:59.565343 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:47:59.565351 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:47:59.565419 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:47:59.604951 1120970 cri.go:89] found id: ""
	I0729 19:47:59.604985 1120970 logs.go:276] 0 containers: []
	W0729 19:47:59.604996 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:47:59.605005 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:47:59.605076 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:47:59.639094 1120970 cri.go:89] found id: ""
	I0729 19:47:59.639121 1120970 logs.go:276] 0 containers: []
	W0729 19:47:59.639130 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:47:59.639138 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:47:59.639205 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:47:59.674360 1120970 cri.go:89] found id: ""
	I0729 19:47:59.674392 1120970 logs.go:276] 0 containers: []
	W0729 19:47:59.674401 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:47:59.674407 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:47:59.674462 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:47:59.712926 1120970 cri.go:89] found id: ""
	I0729 19:47:59.712950 1120970 logs.go:276] 0 containers: []
	W0729 19:47:59.712959 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:47:59.712965 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:47:59.713026 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:47:59.750493 1120970 cri.go:89] found id: ""
	I0729 19:47:59.750524 1120970 logs.go:276] 0 containers: []
	W0729 19:47:59.750532 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:47:59.750539 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:47:59.750603 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:47:59.790635 1120970 cri.go:89] found id: ""
	I0729 19:47:59.790663 1120970 logs.go:276] 0 containers: []
	W0729 19:47:59.790672 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:47:59.790687 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:47:59.790703 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:47:59.844160 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:47:59.844194 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:47:59.858123 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:47:59.858152 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:47:59.931561 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:47:59.931592 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:47:59.931609 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:48:00.014902 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:48:00.014947 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:48:02.555856 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:48:02.572781 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:48:02.572852 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:48:02.611005 1120970 cri.go:89] found id: ""
	I0729 19:48:02.611033 1120970 logs.go:276] 0 containers: []
	W0729 19:48:02.611043 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:48:02.611049 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:48:02.611101 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:48:02.652844 1120970 cri.go:89] found id: ""
	I0729 19:48:02.652870 1120970 logs.go:276] 0 containers: []
	W0729 19:48:02.652876 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:48:02.652883 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:48:02.652937 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:48:02.694690 1120970 cri.go:89] found id: ""
	I0729 19:48:02.694719 1120970 logs.go:276] 0 containers: []
	W0729 19:48:02.694729 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:48:02.694738 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:48:02.694799 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:48:02.729527 1120970 cri.go:89] found id: ""
	I0729 19:48:02.729558 1120970 logs.go:276] 0 containers: []
	W0729 19:48:02.729569 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:48:02.729576 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:48:02.729649 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:48:02.763460 1120970 cri.go:89] found id: ""
	I0729 19:48:02.763488 1120970 logs.go:276] 0 containers: []
	W0729 19:48:02.763497 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:48:02.763503 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:48:02.763556 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:48:02.798268 1120970 cri.go:89] found id: ""
	I0729 19:48:02.798294 1120970 logs.go:276] 0 containers: []
	W0729 19:48:02.798302 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:48:02.798309 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:48:02.798360 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:48:02.837540 1120970 cri.go:89] found id: ""
	I0729 19:48:02.837579 1120970 logs.go:276] 0 containers: []
	W0729 19:48:02.837591 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:48:02.837605 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:48:02.837672 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:48:02.873574 1120970 cri.go:89] found id: ""
	I0729 19:48:02.873612 1120970 logs.go:276] 0 containers: []
	W0729 19:48:02.873624 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:48:02.873646 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:48:02.873663 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:48:02.926260 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:48:02.926296 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:48:02.940593 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:48:02.940618 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 19:47:59.195148 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:01.195230 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:03.196163 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:02.139034 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:04.637691 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:02.778540 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:04.781529 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	W0729 19:48:03.015778 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:48:03.015800 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:48:03.015818 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:48:03.099824 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:48:03.099859 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:48:05.639291 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:48:05.652370 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:48:05.652431 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:48:05.686594 1120970 cri.go:89] found id: ""
	I0729 19:48:05.686624 1120970 logs.go:276] 0 containers: []
	W0729 19:48:05.686633 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:48:05.686640 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:48:05.686701 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:48:05.722162 1120970 cri.go:89] found id: ""
	I0729 19:48:05.722192 1120970 logs.go:276] 0 containers: []
	W0729 19:48:05.722209 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:48:05.722216 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:48:05.722284 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:48:05.754309 1120970 cri.go:89] found id: ""
	I0729 19:48:05.754338 1120970 logs.go:276] 0 containers: []
	W0729 19:48:05.754349 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:48:05.754357 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:48:05.754449 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:48:05.786934 1120970 cri.go:89] found id: ""
	I0729 19:48:05.786962 1120970 logs.go:276] 0 containers: []
	W0729 19:48:05.786968 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:48:05.786974 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:48:05.787032 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:48:05.821454 1120970 cri.go:89] found id: ""
	I0729 19:48:05.821487 1120970 logs.go:276] 0 containers: []
	W0729 19:48:05.821498 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:48:05.821506 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:48:05.821575 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:48:05.855436 1120970 cri.go:89] found id: ""
	I0729 19:48:05.855467 1120970 logs.go:276] 0 containers: []
	W0729 19:48:05.855478 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:48:05.855486 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:48:05.855551 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:48:05.887414 1120970 cri.go:89] found id: ""
	I0729 19:48:05.887447 1120970 logs.go:276] 0 containers: []
	W0729 19:48:05.887466 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:48:05.887477 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:48:05.887549 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:48:05.924173 1120970 cri.go:89] found id: ""
	I0729 19:48:05.924200 1120970 logs.go:276] 0 containers: []
	W0729 19:48:05.924208 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:48:05.924218 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:48:05.924231 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:48:05.977839 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:48:05.977872 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:48:05.991324 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:48:05.991359 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:48:06.065904 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:48:06.065931 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:48:06.065949 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:48:06.149225 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:48:06.149258 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:48:05.196530 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:07.695302 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:06.640464 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:09.137577 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:07.277286 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:09.278994 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:08.689901 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:48:08.705008 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:48:08.705073 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:48:08.746191 1120970 cri.go:89] found id: ""
	I0729 19:48:08.746222 1120970 logs.go:276] 0 containers: []
	W0729 19:48:08.746232 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:48:08.746240 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:48:08.746306 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:48:08.792092 1120970 cri.go:89] found id: ""
	I0729 19:48:08.792120 1120970 logs.go:276] 0 containers: []
	W0729 19:48:08.792130 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:48:08.792137 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:48:08.792196 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:48:08.831535 1120970 cri.go:89] found id: ""
	I0729 19:48:08.831567 1120970 logs.go:276] 0 containers: []
	W0729 19:48:08.831577 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:48:08.831585 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:48:08.831650 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:48:08.871544 1120970 cri.go:89] found id: ""
	I0729 19:48:08.871576 1120970 logs.go:276] 0 containers: []
	W0729 19:48:08.871587 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:48:08.871594 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:48:08.871661 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:48:08.909562 1120970 cri.go:89] found id: ""
	I0729 19:48:08.909594 1120970 logs.go:276] 0 containers: []
	W0729 19:48:08.909611 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:48:08.909621 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:48:08.909698 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:48:08.953074 1120970 cri.go:89] found id: ""
	I0729 19:48:08.953109 1120970 logs.go:276] 0 containers: []
	W0729 19:48:08.953121 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:48:08.953130 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:48:08.953202 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:48:08.992361 1120970 cri.go:89] found id: ""
	I0729 19:48:08.992400 1120970 logs.go:276] 0 containers: []
	W0729 19:48:08.992412 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:48:08.992421 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:48:08.992488 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:48:09.046065 1120970 cri.go:89] found id: ""
	I0729 19:48:09.046093 1120970 logs.go:276] 0 containers: []
	W0729 19:48:09.046101 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:48:09.046113 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:48:09.046134 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:48:09.103453 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:48:09.103494 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:48:09.117220 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:48:09.117245 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:48:09.188222 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:48:09.188252 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:48:09.188270 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:48:09.271640 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:48:09.271677 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:48:11.812430 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:48:11.827291 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:48:11.827387 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:48:11.865062 1120970 cri.go:89] found id: ""
	I0729 19:48:11.865099 1120970 logs.go:276] 0 containers: []
	W0729 19:48:11.865111 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:48:11.865120 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:48:11.865212 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:48:11.899431 1120970 cri.go:89] found id: ""
	I0729 19:48:11.899465 1120970 logs.go:276] 0 containers: []
	W0729 19:48:11.899475 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:48:11.899483 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:48:11.899547 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:48:11.933796 1120970 cri.go:89] found id: ""
	I0729 19:48:11.933831 1120970 logs.go:276] 0 containers: []
	W0729 19:48:11.933843 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:48:11.933851 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:48:11.933920 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:48:11.976911 1120970 cri.go:89] found id: ""
	I0729 19:48:11.976941 1120970 logs.go:276] 0 containers: []
	W0729 19:48:11.976951 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:48:11.976958 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:48:11.977020 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:48:12.012692 1120970 cri.go:89] found id: ""
	I0729 19:48:12.012723 1120970 logs.go:276] 0 containers: []
	W0729 19:48:12.012732 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:48:12.012738 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:48:12.012801 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:48:12.049648 1120970 cri.go:89] found id: ""
	I0729 19:48:12.049684 1120970 logs.go:276] 0 containers: []
	W0729 19:48:12.049695 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:48:12.049704 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:48:12.049771 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:48:12.093629 1120970 cri.go:89] found id: ""
	I0729 19:48:12.093662 1120970 logs.go:276] 0 containers: []
	W0729 19:48:12.093673 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:48:12.093682 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:48:12.093752 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:48:12.130835 1120970 cri.go:89] found id: ""
	I0729 19:48:12.130887 1120970 logs.go:276] 0 containers: []
	W0729 19:48:12.130899 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:48:12.130912 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:48:12.130930 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:48:12.168464 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:48:12.168494 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:48:12.224722 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:48:12.224767 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:48:12.238454 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:48:12.238491 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:48:12.309122 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:48:12.309156 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:48:12.309171 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:48:10.195555 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:12.196093 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:11.638217 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:14.137267 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:11.778922 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:13.779268 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:14.892160 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:48:14.906036 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:48:14.906105 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:48:14.939106 1120970 cri.go:89] found id: ""
	I0729 19:48:14.939136 1120970 logs.go:276] 0 containers: []
	W0729 19:48:14.939144 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:48:14.939151 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:48:14.939218 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:48:14.973776 1120970 cri.go:89] found id: ""
	I0729 19:48:14.973806 1120970 logs.go:276] 0 containers: []
	W0729 19:48:14.973817 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:48:14.973825 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:48:14.973887 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:48:15.004448 1120970 cri.go:89] found id: ""
	I0729 19:48:15.004475 1120970 logs.go:276] 0 containers: []
	W0729 19:48:15.004483 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:48:15.004489 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:48:15.004556 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:48:15.038066 1120970 cri.go:89] found id: ""
	I0729 19:48:15.038093 1120970 logs.go:276] 0 containers: []
	W0729 19:48:15.038101 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:48:15.038110 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:48:15.038174 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:48:15.070539 1120970 cri.go:89] found id: ""
	I0729 19:48:15.070568 1120970 logs.go:276] 0 containers: []
	W0729 19:48:15.070577 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:48:15.070585 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:48:15.070646 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:48:15.103880 1120970 cri.go:89] found id: ""
	I0729 19:48:15.103922 1120970 logs.go:276] 0 containers: []
	W0729 19:48:15.103934 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:48:15.103943 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:48:15.104013 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:48:15.140762 1120970 cri.go:89] found id: ""
	I0729 19:48:15.140785 1120970 logs.go:276] 0 containers: []
	W0729 19:48:15.140792 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:48:15.140798 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:48:15.140850 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:48:15.174376 1120970 cri.go:89] found id: ""
	I0729 19:48:15.174411 1120970 logs.go:276] 0 containers: []
	W0729 19:48:15.174422 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:48:15.174434 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:48:15.174457 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:48:15.231283 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:48:15.231319 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:48:15.245103 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:48:15.245131 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:48:15.317664 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:48:15.317685 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:48:15.317701 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:48:15.404545 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:48:15.404600 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:48:17.949406 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:48:17.963001 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:48:17.963084 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:48:14.697767 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:17.194300 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:16.137773 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:16.632390 1120280 pod_ready.go:81] duration metric: took 4m0.001130574s for pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace to be "Ready" ...
	E0729 19:48:16.632416 1120280 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace to be "Ready" (will not retry!)
	I0729 19:48:16.632439 1120280 pod_ready.go:38] duration metric: took 4m10.712020611s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 19:48:16.632469 1120280 kubeadm.go:597] duration metric: took 4m18.568642855s to restartPrimaryControlPlane
	W0729 19:48:16.632566 1120280 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0729 19:48:16.632597 1120280 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0729 19:48:16.279567 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:18.280676 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:20.779399 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:18.003227 1120970 cri.go:89] found id: ""
	I0729 19:48:18.003263 1120970 logs.go:276] 0 containers: []
	W0729 19:48:18.003274 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:48:18.003284 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:48:18.003363 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:48:18.037680 1120970 cri.go:89] found id: ""
	I0729 19:48:18.037716 1120970 logs.go:276] 0 containers: []
	W0729 19:48:18.037727 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:48:18.037736 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:48:18.037804 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:48:18.081360 1120970 cri.go:89] found id: ""
	I0729 19:48:18.081393 1120970 logs.go:276] 0 containers: []
	W0729 19:48:18.081403 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:48:18.081412 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:48:18.081479 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:48:18.115582 1120970 cri.go:89] found id: ""
	I0729 19:48:18.115619 1120970 logs.go:276] 0 containers: []
	W0729 19:48:18.115630 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:48:18.115639 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:48:18.115708 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:48:18.159771 1120970 cri.go:89] found id: ""
	I0729 19:48:18.159807 1120970 logs.go:276] 0 containers: []
	W0729 19:48:18.159818 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:48:18.159826 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:48:18.159899 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:48:18.206073 1120970 cri.go:89] found id: ""
	I0729 19:48:18.206100 1120970 logs.go:276] 0 containers: []
	W0729 19:48:18.206107 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:48:18.206113 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:48:18.206173 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:48:18.241841 1120970 cri.go:89] found id: ""
	I0729 19:48:18.241880 1120970 logs.go:276] 0 containers: []
	W0729 19:48:18.241892 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:48:18.241900 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:48:18.241969 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:48:18.280068 1120970 cri.go:89] found id: ""
	I0729 19:48:18.280099 1120970 logs.go:276] 0 containers: []
	W0729 19:48:18.280110 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:48:18.280123 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:48:18.280143 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:48:18.360236 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:48:18.360268 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:48:18.360285 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:48:18.447648 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:48:18.447693 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:48:18.489625 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:48:18.489663 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:48:18.543428 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:48:18.543476 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:48:21.058220 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:48:21.073079 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:48:21.073168 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:48:21.111334 1120970 cri.go:89] found id: ""
	I0729 19:48:21.111377 1120970 logs.go:276] 0 containers: []
	W0729 19:48:21.111389 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:48:21.111398 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:48:21.111462 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:48:21.144757 1120970 cri.go:89] found id: ""
	I0729 19:48:21.144788 1120970 logs.go:276] 0 containers: []
	W0729 19:48:21.144798 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:48:21.144806 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:48:21.144872 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:48:21.178887 1120970 cri.go:89] found id: ""
	I0729 19:48:21.178919 1120970 logs.go:276] 0 containers: []
	W0729 19:48:21.178927 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:48:21.178934 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:48:21.179000 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:48:21.216561 1120970 cri.go:89] found id: ""
	I0729 19:48:21.216589 1120970 logs.go:276] 0 containers: []
	W0729 19:48:21.216605 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:48:21.216612 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:48:21.216679 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:48:21.252564 1120970 cri.go:89] found id: ""
	I0729 19:48:21.252601 1120970 logs.go:276] 0 containers: []
	W0729 19:48:21.252612 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:48:21.252621 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:48:21.252692 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:48:21.287372 1120970 cri.go:89] found id: ""
	I0729 19:48:21.287399 1120970 logs.go:276] 0 containers: []
	W0729 19:48:21.287410 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:48:21.287418 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:48:21.287482 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:48:21.325121 1120970 cri.go:89] found id: ""
	I0729 19:48:21.325159 1120970 logs.go:276] 0 containers: []
	W0729 19:48:21.325169 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:48:21.325177 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:48:21.325248 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:48:21.359113 1120970 cri.go:89] found id: ""
	I0729 19:48:21.359145 1120970 logs.go:276] 0 containers: []
	W0729 19:48:21.359156 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:48:21.359169 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:48:21.359185 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:48:21.416196 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:48:21.416233 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:48:21.430635 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:48:21.430668 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:48:21.498436 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:48:21.498461 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:48:21.498478 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:48:21.578602 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:48:21.578643 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
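At this point process 1120970 (apparently the old-k8s-version profile, given the v1.20.0 binaries) has no apiserver at all, so it repeats the same diagnostic cycle every few seconds: look for a kube-apiserver process, list each expected component's containers through CRI-O, then collect kubelet, dmesg, CRI-O and container-status logs for the report. The "connection to the server localhost:8443 was refused" errors from the describe-nodes step are expected while no apiserver container exists. The cycle boils down to commands like these, copied from the log lines above (a sketch of the probes, not minikube's internal code):

	# is any kube-apiserver process for this minikube profile running?
	sudo pgrep -xnf 'kube-apiserver.*minikube.*'
	# list kube-apiserver containers CRI-O knows about, running or exited
	sudo crictl ps -a --quiet --name=kube-apiserver
	# the same crictl query is repeated for etcd, coredns, kube-scheduler,
	# kube-proxy, kube-controller-manager, kindnet and kubernetes-dashboard
	sudo journalctl -u kubelet -n 400    # kubelet log tail gathered for the report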
	I0729 19:48:19.195857 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:21.202391 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:23.696778 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:23.278313 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:25.279270 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:24.117802 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:48:24.132716 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:48:24.132796 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:48:24.168658 1120970 cri.go:89] found id: ""
	I0729 19:48:24.168689 1120970 logs.go:276] 0 containers: []
	W0729 19:48:24.168698 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:48:24.168703 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:48:24.168763 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:48:24.211499 1120970 cri.go:89] found id: ""
	I0729 19:48:24.211533 1120970 logs.go:276] 0 containers: []
	W0729 19:48:24.211543 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:48:24.211551 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:48:24.211622 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:48:24.244579 1120970 cri.go:89] found id: ""
	I0729 19:48:24.244607 1120970 logs.go:276] 0 containers: []
	W0729 19:48:24.244616 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:48:24.244622 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:48:24.244680 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:48:24.278356 1120970 cri.go:89] found id: ""
	I0729 19:48:24.278386 1120970 logs.go:276] 0 containers: []
	W0729 19:48:24.278396 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:48:24.278404 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:48:24.278469 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:48:24.314725 1120970 cri.go:89] found id: ""
	I0729 19:48:24.314760 1120970 logs.go:276] 0 containers: []
	W0729 19:48:24.314771 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:48:24.314779 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:48:24.314870 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:48:24.349743 1120970 cri.go:89] found id: ""
	I0729 19:48:24.349772 1120970 logs.go:276] 0 containers: []
	W0729 19:48:24.349781 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:48:24.349788 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:48:24.349863 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:48:24.382484 1120970 cri.go:89] found id: ""
	I0729 19:48:24.382511 1120970 logs.go:276] 0 containers: []
	W0729 19:48:24.382521 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:48:24.382529 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:48:24.382606 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:48:24.418986 1120970 cri.go:89] found id: ""
	I0729 19:48:24.419013 1120970 logs.go:276] 0 containers: []
	W0729 19:48:24.419020 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:48:24.419030 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:48:24.419052 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:48:24.456725 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:48:24.456762 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:48:24.508592 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:48:24.508628 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:48:24.521610 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:48:24.521642 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:48:24.591015 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:48:24.591041 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:48:24.591058 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:48:27.170099 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:48:27.183543 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:48:27.183619 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:48:27.218044 1120970 cri.go:89] found id: ""
	I0729 19:48:27.218075 1120970 logs.go:276] 0 containers: []
	W0729 19:48:27.218083 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:48:27.218090 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:48:27.218154 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:48:27.251613 1120970 cri.go:89] found id: ""
	I0729 19:48:27.251638 1120970 logs.go:276] 0 containers: []
	W0729 19:48:27.251646 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:48:27.251651 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:48:27.251707 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:48:27.291540 1120970 cri.go:89] found id: ""
	I0729 19:48:27.291569 1120970 logs.go:276] 0 containers: []
	W0729 19:48:27.291578 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:48:27.291586 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:48:27.291650 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:48:27.322921 1120970 cri.go:89] found id: ""
	I0729 19:48:27.322956 1120970 logs.go:276] 0 containers: []
	W0729 19:48:27.322965 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:48:27.322973 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:48:27.323042 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:48:27.360337 1120970 cri.go:89] found id: ""
	I0729 19:48:27.360370 1120970 logs.go:276] 0 containers: []
	W0729 19:48:27.360381 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:48:27.360389 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:48:27.360448 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:48:27.398445 1120970 cri.go:89] found id: ""
	I0729 19:48:27.398490 1120970 logs.go:276] 0 containers: []
	W0729 19:48:27.398502 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:48:27.398510 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:48:27.398577 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:48:27.432147 1120970 cri.go:89] found id: ""
	I0729 19:48:27.432176 1120970 logs.go:276] 0 containers: []
	W0729 19:48:27.432184 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:48:27.432191 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:48:27.432260 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:48:27.471347 1120970 cri.go:89] found id: ""
	I0729 19:48:27.471380 1120970 logs.go:276] 0 containers: []
	W0729 19:48:27.471392 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:48:27.471404 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:48:27.471421 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:48:27.526997 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:48:27.527032 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:48:27.541189 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:48:27.541219 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:48:27.612270 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:48:27.612293 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:48:27.612310 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:48:27.688940 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:48:27.688979 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:48:26.195903 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:28.696936 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:27.778151 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:30.278900 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:30.228578 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:48:30.241827 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:48:30.241896 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:48:30.275201 1120970 cri.go:89] found id: ""
	I0729 19:48:30.275230 1120970 logs.go:276] 0 containers: []
	W0729 19:48:30.275241 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:48:30.275249 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:48:30.275305 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:48:30.313499 1120970 cri.go:89] found id: ""
	I0729 19:48:30.313526 1120970 logs.go:276] 0 containers: []
	W0729 19:48:30.313534 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:48:30.313540 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:48:30.313593 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:48:30.348036 1120970 cri.go:89] found id: ""
	I0729 19:48:30.348063 1120970 logs.go:276] 0 containers: []
	W0729 19:48:30.348072 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:48:30.348078 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:48:30.348148 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:48:30.383104 1120970 cri.go:89] found id: ""
	I0729 19:48:30.383135 1120970 logs.go:276] 0 containers: []
	W0729 19:48:30.383147 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:48:30.383155 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:48:30.383244 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:48:30.421367 1120970 cri.go:89] found id: ""
	I0729 19:48:30.421395 1120970 logs.go:276] 0 containers: []
	W0729 19:48:30.421404 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:48:30.421418 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:48:30.421484 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:48:30.460712 1120970 cri.go:89] found id: ""
	I0729 19:48:30.460746 1120970 logs.go:276] 0 containers: []
	W0729 19:48:30.460758 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:48:30.460767 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:48:30.460832 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:48:30.503728 1120970 cri.go:89] found id: ""
	I0729 19:48:30.503757 1120970 logs.go:276] 0 containers: []
	W0729 19:48:30.503769 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:48:30.503777 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:48:30.503842 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:48:30.544605 1120970 cri.go:89] found id: ""
	I0729 19:48:30.544639 1120970 logs.go:276] 0 containers: []
	W0729 19:48:30.544651 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:48:30.544663 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:48:30.544680 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:48:30.559616 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:48:30.559652 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:48:30.634554 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:48:30.634578 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:48:30.634599 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:48:30.717930 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:48:30.717968 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:48:30.759109 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:48:30.759140 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:48:31.194967 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:33.195033 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:32.777218 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:34.777917 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:33.313550 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:48:33.327425 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:48:33.327483 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:48:33.369009 1120970 cri.go:89] found id: ""
	I0729 19:48:33.369037 1120970 logs.go:276] 0 containers: []
	W0729 19:48:33.369047 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:48:33.369054 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:48:33.369121 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:48:33.406459 1120970 cri.go:89] found id: ""
	I0729 19:48:33.406491 1120970 logs.go:276] 0 containers: []
	W0729 19:48:33.406501 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:48:33.406509 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:48:33.406579 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:48:33.444176 1120970 cri.go:89] found id: ""
	I0729 19:48:33.444210 1120970 logs.go:276] 0 containers: []
	W0729 19:48:33.444222 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:48:33.444230 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:48:33.444297 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:48:33.482882 1120970 cri.go:89] found id: ""
	I0729 19:48:33.482977 1120970 logs.go:276] 0 containers: []
	W0729 19:48:33.482994 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:48:33.483002 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:48:33.483070 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:48:33.516972 1120970 cri.go:89] found id: ""
	I0729 19:48:33.516999 1120970 logs.go:276] 0 containers: []
	W0729 19:48:33.517009 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:48:33.517015 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:48:33.517077 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:48:33.557559 1120970 cri.go:89] found id: ""
	I0729 19:48:33.557598 1120970 logs.go:276] 0 containers: []
	W0729 19:48:33.557620 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:48:33.557629 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:48:33.557699 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:48:33.592756 1120970 cri.go:89] found id: ""
	I0729 19:48:33.592786 1120970 logs.go:276] 0 containers: []
	W0729 19:48:33.592793 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:48:33.592799 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:48:33.592858 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:48:33.626104 1120970 cri.go:89] found id: ""
	I0729 19:48:33.626136 1120970 logs.go:276] 0 containers: []
	W0729 19:48:33.626147 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:48:33.626158 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:48:33.626175 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:48:33.680456 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:48:33.680498 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:48:33.694700 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:48:33.694732 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:48:33.770833 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:48:33.770863 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:48:33.770881 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:48:33.847537 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:48:33.847571 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:48:36.390251 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:48:36.403265 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:48:36.403377 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:48:36.437189 1120970 cri.go:89] found id: ""
	I0729 19:48:36.437216 1120970 logs.go:276] 0 containers: []
	W0729 19:48:36.437227 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:48:36.437235 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:48:36.437296 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:48:36.471025 1120970 cri.go:89] found id: ""
	I0729 19:48:36.471056 1120970 logs.go:276] 0 containers: []
	W0729 19:48:36.471067 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:48:36.471083 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:48:36.471143 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:48:36.504736 1120970 cri.go:89] found id: ""
	I0729 19:48:36.504767 1120970 logs.go:276] 0 containers: []
	W0729 19:48:36.504779 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:48:36.504787 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:48:36.504852 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:48:36.537866 1120970 cri.go:89] found id: ""
	I0729 19:48:36.537893 1120970 logs.go:276] 0 containers: []
	W0729 19:48:36.537903 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:48:36.537911 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:48:36.537974 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:48:36.574083 1120970 cri.go:89] found id: ""
	I0729 19:48:36.574116 1120970 logs.go:276] 0 containers: []
	W0729 19:48:36.574127 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:48:36.574136 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:48:36.574199 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:48:36.613130 1120970 cri.go:89] found id: ""
	I0729 19:48:36.613160 1120970 logs.go:276] 0 containers: []
	W0729 19:48:36.613172 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:48:36.613179 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:48:36.613244 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:48:36.649617 1120970 cri.go:89] found id: ""
	I0729 19:48:36.649644 1120970 logs.go:276] 0 containers: []
	W0729 19:48:36.649655 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:48:36.649663 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:48:36.649731 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:48:36.688729 1120970 cri.go:89] found id: ""
	I0729 19:48:36.688765 1120970 logs.go:276] 0 containers: []
	W0729 19:48:36.688777 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:48:36.688790 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:48:36.688807 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:48:36.741483 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:48:36.741524 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:48:36.759730 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:48:36.759777 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:48:36.847102 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:48:36.847129 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:48:36.847148 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:48:36.928364 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:48:36.928403 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:48:35.695788 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:38.195691 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:36.780250 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:38.272543 1120587 pod_ready.go:81] duration metric: took 4m0.000382733s for pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace to be "Ready" ...
	E0729 19:48:38.272574 1120587 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace to be "Ready" (will not retry!)
	I0729 19:48:38.272595 1120587 pod_ready.go:38] duration metric: took 4m12.412522427s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 19:48:38.272622 1120587 kubeadm.go:597] duration metric: took 4m20.569295588s to restartPrimaryControlPlane
	W0729 19:48:38.272693 1120587 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0729 19:48:38.272722 1120587 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0729 19:48:39.468501 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:48:39.482102 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:48:39.482180 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:48:39.522722 1120970 cri.go:89] found id: ""
	I0729 19:48:39.522754 1120970 logs.go:276] 0 containers: []
	W0729 19:48:39.522763 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:48:39.522769 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:48:39.522824 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:48:39.561057 1120970 cri.go:89] found id: ""
	I0729 19:48:39.561088 1120970 logs.go:276] 0 containers: []
	W0729 19:48:39.561098 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:48:39.561106 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:48:39.561185 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:48:39.599802 1120970 cri.go:89] found id: ""
	I0729 19:48:39.599831 1120970 logs.go:276] 0 containers: []
	W0729 19:48:39.599840 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:48:39.599848 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:48:39.599920 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:48:39.634935 1120970 cri.go:89] found id: ""
	I0729 19:48:39.634966 1120970 logs.go:276] 0 containers: []
	W0729 19:48:39.634978 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:48:39.634986 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:48:39.635054 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:48:39.670682 1120970 cri.go:89] found id: ""
	I0729 19:48:39.670713 1120970 logs.go:276] 0 containers: []
	W0729 19:48:39.670721 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:48:39.670728 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:48:39.670798 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:48:39.705988 1120970 cri.go:89] found id: ""
	I0729 19:48:39.706024 1120970 logs.go:276] 0 containers: []
	W0729 19:48:39.706034 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:48:39.706042 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:48:39.706112 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:48:39.743886 1120970 cri.go:89] found id: ""
	I0729 19:48:39.743919 1120970 logs.go:276] 0 containers: []
	W0729 19:48:39.743931 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:48:39.743938 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:48:39.744007 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:48:39.781966 1120970 cri.go:89] found id: ""
	I0729 19:48:39.782000 1120970 logs.go:276] 0 containers: []
	W0729 19:48:39.782011 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:48:39.782023 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:48:39.782040 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:48:39.836034 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:48:39.836074 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:48:39.849330 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:48:39.849365 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:48:39.922803 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:48:39.922832 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:48:39.922860 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:48:40.006015 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:48:40.006061 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:48:42.556277 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:48:42.569657 1120970 kubeadm.go:597] duration metric: took 4m2.867642237s to restartPrimaryControlPlane
	W0729 19:48:42.569742 1120970 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0729 19:48:42.569773 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0729 19:48:40.695917 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:43.195442 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:43.033878 1120970 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 19:48:43.048499 1120970 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 19:48:43.058936 1120970 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 19:48:43.070746 1120970 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 19:48:43.070766 1120970 kubeadm.go:157] found existing configuration files:
	
	I0729 19:48:43.070814 1120970 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 19:48:43.079568 1120970 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 19:48:43.079631 1120970 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 19:48:43.088576 1120970 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 19:48:43.097654 1120970 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 19:48:43.097723 1120970 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 19:48:43.107155 1120970 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 19:48:43.117105 1120970 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 19:48:43.117152 1120970 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 19:48:43.126933 1120970 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 19:48:43.136114 1120970 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 19:48:43.136162 1120970 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
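None of the four kubeconfig files exist after the reset, so each grep for the control-plane endpoint exits with status 2 and minikube removes the (already absent) file before re-running kubeadm init. The four grep/rm pairs above condense to a loop like this (a sketch of the equivalent shell logic; minikube issues each command separately over SSH):

	for f in admin kubelet controller-manager scheduler; do
	  # keep the file only if it already points at the expected endpoint,
	  # otherwise remove it so kubeadm init can write a fresh one
	  sudo grep -q 'https://control-plane.minikube.internal:8443' "/etc/kubernetes/${f}.conf" \
	    || sudo rm -f "/etc/kubernetes/${f}.conf"
	done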
	I0729 19:48:43.145196 1120970 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 19:48:43.365894 1120970 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
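The Service-Kubelet message is only a preflight warning: kubeadm proceeds anyway, and minikube manages the kubelet unit itself, it just is not enabled as a boot-time unit inside the VM. Checking and silencing it by hand would look like this (based on the command the warning names; not something the test run does):

	systemctl is-enabled kubelet.service    # likely reports "disabled" in this state
	sudo systemctl enable kubelet.service   # what the kubeadm warning suggests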
	I0729 19:48:45.695643 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:47.696055 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:48.051556 1120280 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.418935975s)
	I0729 19:48:48.051634 1120280 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 19:48:48.066832 1120280 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 19:48:48.076768 1120280 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 19:48:48.086203 1120280 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 19:48:48.086224 1120280 kubeadm.go:157] found existing configuration files:
	
	I0729 19:48:48.086269 1120280 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 19:48:48.095286 1120280 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 19:48:48.095344 1120280 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 19:48:48.104238 1120280 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 19:48:48.113232 1120280 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 19:48:48.113287 1120280 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 19:48:48.122679 1120280 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 19:48:48.131511 1120280 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 19:48:48.131565 1120280 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 19:48:48.140110 1120280 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 19:48:48.148601 1120280 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 19:48:48.148650 1120280 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 19:48:48.157410 1120280 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 19:48:48.352715 1120280 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 19:48:50.195418 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:52.696285 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:56.332520 1120280 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0729 19:48:56.332571 1120280 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 19:48:56.332675 1120280 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 19:48:56.332770 1120280 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 19:48:56.332853 1120280 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0729 19:48:56.332967 1120280 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 19:48:56.334322 1120280 out.go:204]   - Generating certificates and keys ...
	I0729 19:48:56.334409 1120280 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 19:48:56.334490 1120280 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 19:48:56.334605 1120280 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0729 19:48:56.334688 1120280 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0729 19:48:56.334798 1120280 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0729 19:48:56.334897 1120280 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0729 19:48:56.334984 1120280 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0729 19:48:56.335060 1120280 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0729 19:48:56.335161 1120280 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0729 19:48:56.335270 1120280 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0729 19:48:56.335324 1120280 kubeadm.go:310] [certs] Using the existing "sa" key
	I0729 19:48:56.335374 1120280 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 19:48:56.335423 1120280 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 19:48:56.335473 1120280 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0729 19:48:56.335532 1120280 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 19:48:56.335614 1120280 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 19:48:56.335675 1120280 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 19:48:56.335785 1120280 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 19:48:56.335884 1120280 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 19:48:56.336979 1120280 out.go:204]   - Booting up control plane ...
	I0729 19:48:56.337065 1120280 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 19:48:56.337133 1120280 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 19:48:56.337201 1120280 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 19:48:56.337326 1120280 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 19:48:56.337427 1120280 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 19:48:56.337498 1120280 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 19:48:56.337647 1120280 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0729 19:48:56.337714 1120280 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0729 19:48:56.337762 1120280 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.952649ms
	I0729 19:48:56.337821 1120280 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0729 19:48:56.337868 1120280 kubeadm.go:310] [api-check] The API server is healthy after 5.002178003s
	I0729 19:48:56.337955 1120280 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0729 19:48:56.338084 1120280 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0729 19:48:56.338139 1120280 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0729 19:48:56.338289 1120280 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-358053 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0729 19:48:56.338342 1120280 kubeadm.go:310] [bootstrap-token] Using token: 4fomec.1511vtef88eg64ao
	I0729 19:48:56.339522 1120280 out.go:204]   - Configuring RBAC rules ...
	I0729 19:48:56.339612 1120280 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0729 19:48:56.339681 1120280 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0729 19:48:56.339857 1120280 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0729 19:48:56.339995 1120280 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0729 19:48:56.340156 1120280 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0729 19:48:56.340283 1120280 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0729 19:48:56.340438 1120280 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0729 19:48:56.340511 1120280 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0729 19:48:56.340575 1120280 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0729 19:48:56.340585 1120280 kubeadm.go:310] 
	I0729 19:48:56.340671 1120280 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0729 19:48:56.340681 1120280 kubeadm.go:310] 
	I0729 19:48:56.340762 1120280 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0729 19:48:56.340781 1120280 kubeadm.go:310] 
	I0729 19:48:56.340812 1120280 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0729 19:48:56.340861 1120280 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0729 19:48:56.340904 1120280 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0729 19:48:56.340907 1120280 kubeadm.go:310] 
	I0729 19:48:56.340972 1120280 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0729 19:48:56.340978 1120280 kubeadm.go:310] 
	I0729 19:48:56.341034 1120280 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0729 19:48:56.341038 1120280 kubeadm.go:310] 
	I0729 19:48:56.341083 1120280 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0729 19:48:56.341151 1120280 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0729 19:48:56.341209 1120280 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0729 19:48:56.341219 1120280 kubeadm.go:310] 
	I0729 19:48:56.341285 1120280 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0729 19:48:56.341369 1120280 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0729 19:48:56.341376 1120280 kubeadm.go:310] 
	I0729 19:48:56.341454 1120280 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 4fomec.1511vtef88eg64ao \
	I0729 19:48:56.341602 1120280 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:53e118302d1159bf0acd8cfe005edb508199276288ff17d6630ff7f7307f10b1 \
	I0729 19:48:56.341636 1120280 kubeadm.go:310] 	--control-plane 
	I0729 19:48:56.341642 1120280 kubeadm.go:310] 
	I0729 19:48:56.341752 1120280 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0729 19:48:56.341769 1120280 kubeadm.go:310] 
	I0729 19:48:56.341886 1120280 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 4fomec.1511vtef88eg64ao \
	I0729 19:48:56.342018 1120280 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:53e118302d1159bf0acd8cfe005edb508199276288ff17d6630ff7f7307f10b1 
	I0729 19:48:56.342034 1120280 cni.go:84] Creating CNI manager for ""
	I0729 19:48:56.342044 1120280 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 19:48:56.343241 1120280 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 19:48:55.195151 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:57.195200 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:56.344247 1120280 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 19:48:56.355941 1120280 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
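The two lines above are the whole bridge CNI setup: minikube creates /etc/cni/net.d and writes a small 1-k8s.conflist there. If the written config needs to be inspected, a minimal check, assuming the embed-certs-358053 profile is still running and reachable over minikube ssh, would be:

    minikube -p embed-certs-358053 ssh -- sudo cat /etc/cni/net.d/1-k8s.conflist
    minikube -p embed-certs-358053 ssh -- ls /opt/cni/bin   # bridge plugin binaries typically live here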
	I0729 19:48:56.377835 1120280 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0729 19:48:56.377932 1120280 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:48:56.377958 1120280 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-358053 minikube.k8s.io/updated_at=2024_07_29T19_48_56_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=ff26f50543a99741e8a988a34136ffea2b41dbf0 minikube.k8s.io/name=embed-certs-358053 minikube.k8s.io/primary=true
	I0729 19:48:56.394308 1120280 ops.go:34] apiserver oom_adj: -16
	I0729 19:48:56.575183 1120280 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:48:57.076094 1120280 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:48:57.575985 1120280 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:48:58.075805 1120280 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:48:58.576183 1120280 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:48:59.075390 1120280 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:48:59.576159 1120280 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:48:59.195343 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:49:01.696180 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:49:00.075628 1120280 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:00.575675 1120280 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:01.075529 1120280 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:01.576070 1120280 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:02.076065 1120280 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:02.575283 1120280 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:03.076139 1120280 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:03.575717 1120280 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:04.076142 1120280 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:04.575998 1120280 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:04.194697 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:49:06.195094 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:49:08.695788 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:49:05.075222 1120280 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:05.575723 1120280 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:06.075652 1120280 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:06.575680 1120280 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:07.075645 1120280 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:07.575900 1120280 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:08.075951 1120280 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:08.576178 1120280 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:09.076094 1120280 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:09.575480 1120280 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:10.075954 1120280 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:10.185328 1120280 kubeadm.go:1113] duration metric: took 13.807462033s to wait for elevateKubeSystemPrivileges
	I0729 19:49:10.185372 1120280 kubeadm.go:394] duration metric: took 5m12.173830361s to StartCluster
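The long run of identical "kubectl get sa default" commands above is the elevateKubeSystemPrivileges wait that the duration metric refers to: minikube appears to poll roughly every 500ms until the controller manager has created the default service account, since the minikube-rbac clusterrolebinding (granting cluster-admin to kube-system:default, issued a few lines earlier) is only useful once service accounts exist. A roughly equivalent hand-written wait, assuming kubectl already points at the new cluster, would be:

    # poll until the default service account appears, then continue
    until kubectl get sa default >/dev/null 2>&1; do sleep 0.5; done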
	I0729 19:49:10.185408 1120280 settings.go:142] acquiring lock: {Name:mk8657322241b3b1f65443d6cee1b2ccb99f315e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 19:49:10.185614 1120280 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19312-1055011/kubeconfig
	I0729 19:49:10.188419 1120280 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-1055011/kubeconfig: {Name:mkf834b33d9b214f3561db5b8f8958d26700afbb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 19:49:10.188761 1120280 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.201 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 19:49:10.188839 1120280 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0729 19:49:10.188929 1120280 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-358053"
	I0729 19:49:10.188939 1120280 config.go:182] Loaded profile config "embed-certs-358053": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 19:49:10.188968 1120280 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-358053"
	I0729 19:49:10.188957 1120280 addons.go:69] Setting default-storageclass=true in profile "embed-certs-358053"
	W0729 19:49:10.188978 1120280 addons.go:243] addon storage-provisioner should already be in state true
	I0729 19:49:10.188967 1120280 addons.go:69] Setting metrics-server=true in profile "embed-certs-358053"
	I0729 19:49:10.189017 1120280 addons.go:234] Setting addon metrics-server=true in "embed-certs-358053"
	I0729 19:49:10.189016 1120280 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-358053"
	I0729 19:49:10.189023 1120280 host.go:66] Checking if "embed-certs-358053" exists ...
	W0729 19:49:10.189026 1120280 addons.go:243] addon metrics-server should already be in state true
	I0729 19:49:10.189059 1120280 host.go:66] Checking if "embed-certs-358053" exists ...
	I0729 19:49:10.189460 1120280 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:49:10.189461 1120280 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:49:10.189493 1120280 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:49:10.189464 1120280 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:49:10.189513 1120280 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:49:10.189539 1120280 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:49:10.192359 1120280 out.go:177] * Verifying Kubernetes components...
	I0729 19:49:10.193480 1120280 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 19:49:10.210772 1120280 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43059
	I0729 19:49:10.210789 1120280 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37187
	I0729 19:49:10.210777 1120280 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43007
	I0729 19:49:10.211410 1120280 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:49:10.211444 1120280 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:49:10.211415 1120280 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:49:10.211943 1120280 main.go:141] libmachine: Using API Version  1
	I0729 19:49:10.211961 1120280 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:49:10.212067 1120280 main.go:141] libmachine: Using API Version  1
	I0729 19:49:10.212082 1120280 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:49:10.212104 1120280 main.go:141] libmachine: Using API Version  1
	I0729 19:49:10.212129 1120280 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:49:10.212485 1120280 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:49:10.212490 1120280 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:49:10.212517 1120280 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:49:10.213028 1120280 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:49:10.213061 1120280 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:49:10.213275 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetState
	I0729 19:49:10.213666 1120280 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:49:10.213693 1120280 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:49:10.217668 1120280 addons.go:234] Setting addon default-storageclass=true in "embed-certs-358053"
	W0729 19:49:10.217694 1120280 addons.go:243] addon default-storageclass should already be in state true
	I0729 19:49:10.217729 1120280 host.go:66] Checking if "embed-certs-358053" exists ...
	I0729 19:49:10.218106 1120280 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:49:10.218134 1120280 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:49:10.233308 1120280 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34717
	I0729 19:49:10.233515 1120280 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45983
	I0729 19:49:10.233923 1120280 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:49:10.234065 1120280 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:49:10.234486 1120280 main.go:141] libmachine: Using API Version  1
	I0729 19:49:10.234511 1120280 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:49:10.234622 1120280 main.go:141] libmachine: Using API Version  1
	I0729 19:49:10.234646 1120280 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:49:10.234881 1120280 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:49:10.235095 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetState
	I0729 19:49:10.235124 1120280 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:49:10.236407 1120280 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37239
	I0729 19:49:10.236417 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetState
	I0729 19:49:10.236976 1120280 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:49:10.237510 1120280 main.go:141] libmachine: Using API Version  1
	I0729 19:49:10.237529 1120280 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:49:10.237603 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .DriverName
	I0729 19:49:10.238068 1120280 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:49:10.238462 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .DriverName
	I0729 19:49:10.238685 1120280 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:49:10.238717 1120280 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:49:10.239583 1120280 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0729 19:49:10.240247 1120280 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 19:49:09.758990 1120587 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.486239671s)
	I0729 19:49:09.759083 1120587 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 19:49:09.774752 1120587 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 19:49:09.785968 1120587 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 19:49:09.796242 1120587 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 19:49:09.796267 1120587 kubeadm.go:157] found existing configuration files:
	
	I0729 19:49:09.796320 1120587 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0729 19:49:09.805373 1120587 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 19:49:09.805446 1120587 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 19:49:09.814418 1120587 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0729 19:49:09.822923 1120587 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 19:49:09.822977 1120587 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 19:49:09.831784 1120587 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0729 19:49:09.840631 1120587 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 19:49:09.840670 1120587 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 19:49:09.850149 1120587 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0729 19:49:09.858648 1120587 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 19:49:09.858685 1120587 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
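The grep/rm cycle above is minikube's stale-kubeconfig cleanup: each file under /etc/kubernetes is checked for the expected API server URL (https://control-plane.minikube.internal:8444 for this profile) and removed if the URL is not found, so that the following kubeadm init can regenerate it. Because the earlier kubeadm reset already deleted the files, every grep exits with status 2 and the rm calls are no-ops. A hand-run equivalent for a single file, assuming the same paths, would be:

    sudo grep -q 'https://control-plane.minikube.internal:8444' /etc/kubernetes/admin.conf \
      || sudo rm -f /etc/kubernetes/admin.conf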
	I0729 19:49:09.868191 1120587 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 19:49:09.918324 1120587 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0729 19:49:09.918439 1120587 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 19:49:10.082807 1120587 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 19:49:10.082977 1120587 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 19:49:10.083133 1120587 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0729 19:49:10.346327 1120587 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 19:49:10.347784 1120587 out.go:204]   - Generating certificates and keys ...
	I0729 19:49:10.347895 1120587 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 19:49:10.347974 1120587 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 19:49:10.348065 1120587 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0729 19:49:10.348152 1120587 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0729 19:49:10.348236 1120587 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0729 19:49:10.348312 1120587 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0729 19:49:10.348395 1120587 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0729 19:49:10.348479 1120587 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0729 19:49:10.348573 1120587 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0729 19:49:10.348672 1120587 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0729 19:49:10.348726 1120587 kubeadm.go:310] [certs] Using the existing "sa" key
	I0729 19:49:10.348806 1120587 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 19:49:10.558934 1120587 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 19:49:10.733434 1120587 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0729 19:49:11.026079 1120587 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 19:49:11.159826 1120587 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 19:49:11.277696 1120587 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 19:49:11.278383 1120587 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 19:49:11.281036 1120587 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 19:49:10.240921 1120280 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0729 19:49:10.240936 1120280 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0729 19:49:10.240952 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHHostname
	I0729 19:49:10.241651 1120280 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 19:49:10.241674 1120280 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0729 19:49:10.241693 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHHostname
	I0729 19:49:10.245407 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:49:10.245440 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:49:10.245923 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:9e:78", ip: ""} in network mk-embed-certs-358053: {Iface:virbr3 ExpiryTime:2024-07-29 20:43:42 +0000 UTC Type:0 Mac:52:54:00:b7:9e:78 Iaid: IPaddr:192.168.61.201 Prefix:24 Hostname:embed-certs-358053 Clientid:01:52:54:00:b7:9e:78}
	I0729 19:49:10.245922 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:9e:78", ip: ""} in network mk-embed-certs-358053: {Iface:virbr3 ExpiryTime:2024-07-29 20:43:42 +0000 UTC Type:0 Mac:52:54:00:b7:9e:78 Iaid: IPaddr:192.168.61.201 Prefix:24 Hostname:embed-certs-358053 Clientid:01:52:54:00:b7:9e:78}
	I0729 19:49:10.245947 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined IP address 192.168.61.201 and MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:49:10.245967 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined IP address 192.168.61.201 and MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:49:10.246145 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHPort
	I0729 19:49:10.246329 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHKeyPath
	I0729 19:49:10.246372 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHPort
	I0729 19:49:10.246511 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHUsername
	I0729 19:49:10.246672 1120280 sshutil.go:53] new ssh client: &{IP:192.168.61.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/embed-certs-358053/id_rsa Username:docker}
	I0729 19:49:10.246688 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHKeyPath
	I0729 19:49:10.246866 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHUsername
	I0729 19:49:10.246988 1120280 sshutil.go:53] new ssh client: &{IP:192.168.61.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/embed-certs-358053/id_rsa Username:docker}
	I0729 19:49:10.256682 1120280 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43353
	I0729 19:49:10.257146 1120280 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:49:10.257747 1120280 main.go:141] libmachine: Using API Version  1
	I0729 19:49:10.257760 1120280 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:49:10.258021 1120280 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:49:10.258264 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetState
	I0729 19:49:10.260096 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .DriverName
	I0729 19:49:10.260305 1120280 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0729 19:49:10.260322 1120280 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0729 19:49:10.260341 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHHostname
	I0729 19:49:10.263479 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:49:10.263914 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:9e:78", ip: ""} in network mk-embed-certs-358053: {Iface:virbr3 ExpiryTime:2024-07-29 20:43:42 +0000 UTC Type:0 Mac:52:54:00:b7:9e:78 Iaid: IPaddr:192.168.61.201 Prefix:24 Hostname:embed-certs-358053 Clientid:01:52:54:00:b7:9e:78}
	I0729 19:49:10.263942 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined IP address 192.168.61.201 and MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:49:10.264099 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHPort
	I0729 19:49:10.264270 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHKeyPath
	I0729 19:49:10.264457 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHUsername
	I0729 19:49:10.264566 1120280 sshutil.go:53] new ssh client: &{IP:192.168.61.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/embed-certs-358053/id_rsa Username:docker}
	I0729 19:49:10.461598 1120280 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 19:49:10.483007 1120280 node_ready.go:35] waiting up to 6m0s for node "embed-certs-358053" to be "Ready" ...
	I0729 19:49:10.492573 1120280 node_ready.go:49] node "embed-certs-358053" has status "Ready":"True"
	I0729 19:49:10.492601 1120280 node_ready.go:38] duration metric: took 9.562848ms for node "embed-certs-358053" to be "Ready" ...
	I0729 19:49:10.492611 1120280 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 19:49:10.498908 1120280 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-62wzl" in "kube-system" namespace to be "Ready" ...
	I0729 19:49:10.574473 1120280 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0729 19:49:10.574500 1120280 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0729 19:49:10.596936 1120280 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 19:49:10.598355 1120280 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0729 19:49:10.598373 1120280 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0729 19:49:10.618403 1120280 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 19:49:10.618430 1120280 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0729 19:49:10.642761 1120280 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 19:49:10.717699 1120280 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0729 19:49:11.218300 1120280 main.go:141] libmachine: Making call to close driver server
	I0729 19:49:11.218321 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .Close
	I0729 19:49:11.218615 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | Closing plugin on server side
	I0729 19:49:11.218664 1120280 main.go:141] libmachine: Successfully made call to close driver server
	I0729 19:49:11.218676 1120280 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 19:49:11.218687 1120280 main.go:141] libmachine: Making call to close driver server
	I0729 19:49:11.218695 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .Close
	I0729 19:49:11.219043 1120280 main.go:141] libmachine: Successfully made call to close driver server
	I0729 19:49:11.219060 1120280 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 19:49:11.758222 1120280 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.115410935s)
	I0729 19:49:11.758294 1120280 main.go:141] libmachine: Making call to close driver server
	I0729 19:49:11.758311 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .Close
	I0729 19:49:11.758416 1120280 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.040630579s)
	I0729 19:49:11.758489 1120280 main.go:141] libmachine: Making call to close driver server
	I0729 19:49:11.758534 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .Close
	I0729 19:49:11.758645 1120280 main.go:141] libmachine: Successfully made call to close driver server
	I0729 19:49:11.758666 1120280 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 19:49:11.758677 1120280 main.go:141] libmachine: Making call to close driver server
	I0729 19:49:11.758684 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .Close
	I0729 19:49:11.759085 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | Closing plugin on server side
	I0729 19:49:11.759123 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | Closing plugin on server side
	I0729 19:49:11.759133 1120280 main.go:141] libmachine: Successfully made call to close driver server
	I0729 19:49:11.759140 1120280 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 19:49:11.759151 1120280 addons.go:475] Verifying addon metrics-server=true in "embed-certs-358053"
	I0729 19:49:11.759242 1120280 main.go:141] libmachine: Successfully made call to close driver server
	I0729 19:49:11.759251 1120280 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 19:49:11.759265 1120280 main.go:141] libmachine: Making call to close driver server
	I0729 19:49:11.759273 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .Close
	I0729 19:49:11.759556 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | Closing plugin on server side
	I0729 19:49:11.759551 1120280 main.go:141] libmachine: Successfully made call to close driver server
	I0729 19:49:11.759576 1120280 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 19:49:11.821869 1120280 main.go:141] libmachine: Making call to close driver server
	I0729 19:49:11.821904 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .Close
	I0729 19:49:11.822218 1120280 main.go:141] libmachine: Successfully made call to close driver server
	I0729 19:49:11.822239 1120280 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 19:49:11.822278 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | Closing plugin on server side
	I0729 19:49:11.825097 1120280 out.go:177] * Enabled addons: storage-provisioner, metrics-server, default-storageclass
	I0729 19:49:10.696468 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:49:12.696754 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:49:11.826501 1120280 addons.go:510] duration metric: took 1.63766283s for enable addons: enabled=[storage-provisioner metrics-server default-storageclass]
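The interleaved pod_ready lines from process 1119948 ("metrics-server-78fcd8795b-pcx9w ... Ready":"False") come from another cluster in this test group polling a metrics-server pod that never becomes ready. These TestStartStop runs appear to point the metrics-server addon at a placeholder image (see the "Using image fake.domain/registry.k8s.io/echoserver:1.4" line earlier), so the container can never be pulled and the pod stays unready for the whole run. When reproducing locally, the state can be inspected with standard commands, assuming the addon's usual k8s-app=metrics-server label:

    kubectl -n kube-system get pods -l k8s-app=metrics-server
    kubectl -n kube-system describe pod -l k8s-app=metrics-server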
	I0729 19:49:12.505464 1120280 pod_ready.go:102] pod "coredns-7db6d8ff4d-62wzl" in "kube-system" namespace has status "Ready":"False"
	I0729 19:49:13.005934 1120280 pod_ready.go:92] pod "coredns-7db6d8ff4d-62wzl" in "kube-system" namespace has status "Ready":"True"
	I0729 19:49:13.005962 1120280 pod_ready.go:81] duration metric: took 2.507029118s for pod "coredns-7db6d8ff4d-62wzl" in "kube-system" namespace to be "Ready" ...
	I0729 19:49:13.005972 1120280 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-rnpqh" in "kube-system" namespace to be "Ready" ...
	I0729 19:49:13.010162 1120280 pod_ready.go:92] pod "coredns-7db6d8ff4d-rnpqh" in "kube-system" namespace has status "Ready":"True"
	I0729 19:49:13.010183 1120280 pod_ready.go:81] duration metric: took 4.204506ms for pod "coredns-7db6d8ff4d-rnpqh" in "kube-system" namespace to be "Ready" ...
	I0729 19:49:13.010191 1120280 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-358053" in "kube-system" namespace to be "Ready" ...
	I0729 19:49:13.013871 1120280 pod_ready.go:92] pod "etcd-embed-certs-358053" in "kube-system" namespace has status "Ready":"True"
	I0729 19:49:13.013888 1120280 pod_ready.go:81] duration metric: took 3.691352ms for pod "etcd-embed-certs-358053" in "kube-system" namespace to be "Ready" ...
	I0729 19:49:13.013895 1120280 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-358053" in "kube-system" namespace to be "Ready" ...
	I0729 19:49:13.017787 1120280 pod_ready.go:92] pod "kube-apiserver-embed-certs-358053" in "kube-system" namespace has status "Ready":"True"
	I0729 19:49:13.017804 1120280 pod_ready.go:81] duration metric: took 3.903153ms for pod "kube-apiserver-embed-certs-358053" in "kube-system" namespace to be "Ready" ...
	I0729 19:49:13.017812 1120280 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-358053" in "kube-system" namespace to be "Ready" ...
	I0729 19:49:13.021807 1120280 pod_ready.go:92] pod "kube-controller-manager-embed-certs-358053" in "kube-system" namespace has status "Ready":"True"
	I0729 19:49:13.021826 1120280 pod_ready.go:81] duration metric: took 4.00839ms for pod "kube-controller-manager-embed-certs-358053" in "kube-system" namespace to be "Ready" ...
	I0729 19:49:13.021834 1120280 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-phmxr" in "kube-system" namespace to be "Ready" ...
	I0729 19:49:13.404663 1120280 pod_ready.go:92] pod "kube-proxy-phmxr" in "kube-system" namespace has status "Ready":"True"
	I0729 19:49:13.404691 1120280 pod_ready.go:81] duration metric: took 382.850052ms for pod "kube-proxy-phmxr" in "kube-system" namespace to be "Ready" ...
	I0729 19:49:13.404703 1120280 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-358053" in "kube-system" namespace to be "Ready" ...
	I0729 19:49:13.803883 1120280 pod_ready.go:92] pod "kube-scheduler-embed-certs-358053" in "kube-system" namespace has status "Ready":"True"
	I0729 19:49:13.803913 1120280 pod_ready.go:81] duration metric: took 399.201369ms for pod "kube-scheduler-embed-certs-358053" in "kube-system" namespace to be "Ready" ...
	I0729 19:49:13.803924 1120280 pod_ready.go:38] duration metric: took 3.31130157s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 19:49:13.803944 1120280 api_server.go:52] waiting for apiserver process to appear ...
	I0729 19:49:13.804012 1120280 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:49:13.819097 1120280 api_server.go:72] duration metric: took 3.63029481s to wait for apiserver process to appear ...
	I0729 19:49:13.819127 1120280 api_server.go:88] waiting for apiserver healthz status ...
	I0729 19:49:13.819158 1120280 api_server.go:253] Checking apiserver healthz at https://192.168.61.201:8443/healthz ...
	I0729 19:49:13.825125 1120280 api_server.go:279] https://192.168.61.201:8443/healthz returned 200:
	ok
	I0729 19:49:13.826172 1120280 api_server.go:141] control plane version: v1.30.3
	I0729 19:49:13.826197 1120280 api_server.go:131] duration metric: took 7.062144ms to wait for apiserver health ...
	I0729 19:49:13.826206 1120280 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 19:49:14.006726 1120280 system_pods.go:59] 9 kube-system pods found
	I0729 19:49:14.006762 1120280 system_pods.go:61] "coredns-7db6d8ff4d-62wzl" [c0cf63a3-98a8-4107-8b51-3b9a39695a6c] Running
	I0729 19:49:14.006769 1120280 system_pods.go:61] "coredns-7db6d8ff4d-rnpqh" [fd0f6d7f-a55a-4556-b5e3-8ed4e555aaea] Running
	I0729 19:49:14.006774 1120280 system_pods.go:61] "etcd-embed-certs-358053" [b4e6558f-195a-449e-83fb-3ad49f1f80b0] Running
	I0729 19:49:14.006780 1120280 system_pods.go:61] "kube-apiserver-embed-certs-358053" [8ce54a21-879a-44f6-9209-699b22fe60a3] Running
	I0729 19:49:14.006786 1120280 system_pods.go:61] "kube-controller-manager-embed-certs-358053" [658a8652-2864-4825-8239-cfbe96e604ab] Running
	I0729 19:49:14.006790 1120280 system_pods.go:61] "kube-proxy-phmxr" [73020161-bb80-445c-ae4f-d1486e18a32e] Running
	I0729 19:49:14.006795 1120280 system_pods.go:61] "kube-scheduler-embed-certs-358053" [f7734e37-b41d-495a-8098-c721b9d56d7c] Running
	I0729 19:49:14.006805 1120280 system_pods.go:61] "metrics-server-569cc877fc-gpz72" [cb992ca6-11f3-4826-b701-6789d3e3e9c0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 19:49:14.006810 1120280 system_pods.go:61] "storage-provisioner" [7c484501-fa8b-4d2d-b7c7-faea3b6b0891] Running
	I0729 19:49:14.006823 1120280 system_pods.go:74] duration metric: took 180.607932ms to wait for pod list to return data ...
	I0729 19:49:14.006836 1120280 default_sa.go:34] waiting for default service account to be created ...
	I0729 19:49:14.203009 1120280 default_sa.go:45] found service account: "default"
	I0729 19:49:14.203034 1120280 default_sa.go:55] duration metric: took 196.19138ms for default service account to be created ...
	I0729 19:49:14.203043 1120280 system_pods.go:116] waiting for k8s-apps to be running ...
	I0729 19:49:14.407217 1120280 system_pods.go:86] 9 kube-system pods found
	I0729 19:49:14.407253 1120280 system_pods.go:89] "coredns-7db6d8ff4d-62wzl" [c0cf63a3-98a8-4107-8b51-3b9a39695a6c] Running
	I0729 19:49:14.407261 1120280 system_pods.go:89] "coredns-7db6d8ff4d-rnpqh" [fd0f6d7f-a55a-4556-b5e3-8ed4e555aaea] Running
	I0729 19:49:14.407267 1120280 system_pods.go:89] "etcd-embed-certs-358053" [b4e6558f-195a-449e-83fb-3ad49f1f80b0] Running
	I0729 19:49:14.407273 1120280 system_pods.go:89] "kube-apiserver-embed-certs-358053" [8ce54a21-879a-44f6-9209-699b22fe60a3] Running
	I0729 19:49:14.407279 1120280 system_pods.go:89] "kube-controller-manager-embed-certs-358053" [658a8652-2864-4825-8239-cfbe96e604ab] Running
	I0729 19:49:14.407285 1120280 system_pods.go:89] "kube-proxy-phmxr" [73020161-bb80-445c-ae4f-d1486e18a32e] Running
	I0729 19:49:14.407291 1120280 system_pods.go:89] "kube-scheduler-embed-certs-358053" [f7734e37-b41d-495a-8098-c721b9d56d7c] Running
	I0729 19:49:14.407305 1120280 system_pods.go:89] "metrics-server-569cc877fc-gpz72" [cb992ca6-11f3-4826-b701-6789d3e3e9c0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 19:49:14.407316 1120280 system_pods.go:89] "storage-provisioner" [7c484501-fa8b-4d2d-b7c7-faea3b6b0891] Running
	I0729 19:49:14.407327 1120280 system_pods.go:126] duration metric: took 204.276761ms to wait for k8s-apps to be running ...
	I0729 19:49:14.407338 1120280 system_svc.go:44] waiting for kubelet service to be running ....
	I0729 19:49:14.407396 1120280 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 19:49:14.422219 1120280 system_svc.go:56] duration metric: took 14.869175ms WaitForService to wait for kubelet
	I0729 19:49:14.422258 1120280 kubeadm.go:582] duration metric: took 4.233462765s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 19:49:14.422285 1120280 node_conditions.go:102] verifying NodePressure condition ...
	I0729 19:49:14.603042 1120280 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 19:49:14.603067 1120280 node_conditions.go:123] node cpu capacity is 2
	I0729 19:49:14.603079 1120280 node_conditions.go:105] duration metric: took 180.789494ms to run NodePressure ...
	I0729 19:49:14.603091 1120280 start.go:241] waiting for startup goroutines ...
	I0729 19:49:14.603098 1120280 start.go:246] waiting for cluster config update ...
	I0729 19:49:14.603108 1120280 start.go:255] writing updated cluster config ...
	I0729 19:49:14.603448 1120280 ssh_runner.go:195] Run: rm -f paused
	I0729 19:49:14.669359 1120280 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0729 19:49:14.671285 1120280 out.go:177] * Done! kubectl is now configured to use "embed-certs-358053" cluster and "default" namespace by default
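At this point embed-certs-358053 is the active kubectl context with the default namespace selected. A quick sanity check, assuming the kubectl 1.30.x reported above is on PATH, would be:

    kubectl config current-context        # expect: embed-certs-358053
    kubectl get nodes -o wide
    kubectl -n kube-system get pods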
	I0729 19:49:11.282743 1120587 out.go:204]   - Booting up control plane ...
	I0729 19:49:11.282887 1120587 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 19:49:11.283393 1120587 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 19:49:11.285899 1120587 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 19:49:11.306343 1120587 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 19:49:11.308692 1120587 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 19:49:11.308776 1120587 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 19:49:11.454703 1120587 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0729 19:49:11.454809 1120587 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0729 19:49:11.957070 1120587 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.339287ms
	I0729 19:49:11.957173 1120587 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0729 19:49:16.958829 1120587 kubeadm.go:310] [api-check] The API server is healthy after 5.001114911s
	I0729 19:49:16.975545 1120587 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0729 19:49:16.992433 1120587 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0729 19:49:17.029655 1120587 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0729 19:49:17.029911 1120587 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-024652 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0729 19:49:17.039761 1120587 kubeadm.go:310] [bootstrap-token] Using token: wivqw5.o681p65fyob7uctp
	I0729 19:49:17.040967 1120587 out.go:204]   - Configuring RBAC rules ...
	I0729 19:49:17.041098 1120587 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0729 19:49:17.047095 1120587 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0729 19:49:17.054741 1120587 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0729 19:49:17.057791 1120587 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0729 19:49:17.064906 1120587 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0729 19:49:17.068354 1120587 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0729 19:49:17.365660 1120587 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0729 19:49:17.803646 1120587 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0729 19:49:18.365942 1120587 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0729 19:49:18.367149 1120587 kubeadm.go:310] 
	I0729 19:49:18.367230 1120587 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0729 19:49:18.367239 1120587 kubeadm.go:310] 
	I0729 19:49:18.367301 1120587 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0729 19:49:18.367308 1120587 kubeadm.go:310] 
	I0729 19:49:18.367356 1120587 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0729 19:49:18.367435 1120587 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0729 19:49:18.367484 1120587 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0729 19:49:18.367490 1120587 kubeadm.go:310] 
	I0729 19:49:18.367564 1120587 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0729 19:49:18.367580 1120587 kubeadm.go:310] 
	I0729 19:49:18.367670 1120587 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0729 19:49:18.367689 1120587 kubeadm.go:310] 
	I0729 19:49:18.367767 1120587 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0729 19:49:18.367886 1120587 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0729 19:49:18.367990 1120587 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0729 19:49:18.368004 1120587 kubeadm.go:310] 
	I0729 19:49:18.368134 1120587 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0729 19:49:18.368245 1120587 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0729 19:49:18.368255 1120587 kubeadm.go:310] 
	I0729 19:49:18.368374 1120587 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token wivqw5.o681p65fyob7uctp \
	I0729 19:49:18.368509 1120587 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:53e118302d1159bf0acd8cfe005edb508199276288ff17d6630ff7f7307f10b1 \
	I0729 19:49:18.368547 1120587 kubeadm.go:310] 	--control-plane 
	I0729 19:49:18.368555 1120587 kubeadm.go:310] 
	I0729 19:49:18.368665 1120587 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0729 19:49:18.368675 1120587 kubeadm.go:310] 
	I0729 19:49:18.368786 1120587 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token wivqw5.o681p65fyob7uctp \
	I0729 19:49:18.368926 1120587 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:53e118302d1159bf0acd8cfe005edb508199276288ff17d6630ff7f7307f10b1 
	I0729 19:49:18.369333 1120587 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
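Two details stand out in this second init: the join commands advertise port 8444 rather than 8443, which is what the default-k8s-diff-port profile name suggests it is exercising, and kubeadm warns that the kubelet service is not enabled. The warning's own suggested fix, run inside the node, is simply:

    sudo systemctl enable kubelet.service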
	I0729 19:49:18.369382 1120587 cni.go:84] Creating CNI manager for ""
	I0729 19:49:18.369398 1120587 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 19:49:18.371718 1120587 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 19:49:15.194685 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:49:17.195094 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:49:18.372851 1120587 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 19:49:18.385204 1120587 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0729 19:49:18.404504 1120587 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0729 19:49:18.404610 1120587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:18.404616 1120587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-024652 minikube.k8s.io/updated_at=2024_07_29T19_49_18_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=ff26f50543a99741e8a988a34136ffea2b41dbf0 minikube.k8s.io/name=default-k8s-diff-port-024652 minikube.k8s.io/primary=true
	I0729 19:49:18.442539 1120587 ops.go:34] apiserver oom_adj: -16
	I0729 19:49:18.580986 1120587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:19.081106 1120587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:19.581681 1120587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:20.081254 1120587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:20.581320 1120587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:21.081977 1120587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:19.195234 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:49:21.694987 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:49:23.695591 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:49:21.581543 1120587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:22.081511 1120587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:22.581732 1120587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:23.081975 1120587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:23.581374 1120587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:24.081970 1120587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:24.581928 1120587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:25.081446 1120587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:25.581218 1120587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:26.081680 1120587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:25.695771 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:49:27.698874 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:49:26.581008 1120587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:27.081974 1120587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:27.581500 1120587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:28.082002 1120587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:28.581979 1120587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:29.081223 1120587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:29.581078 1120587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:30.081834 1120587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:30.581191 1120587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:31.081737 1120587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:31.581832 1120587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:31.661893 1120587 kubeadm.go:1113] duration metric: took 13.257342088s to wait for elevateKubeSystemPrivileges
	I0729 19:49:31.661933 1120587 kubeadm.go:394] duration metric: took 5m14.024337116s to StartCluster
	I0729 19:49:31.661952 1120587 settings.go:142] acquiring lock: {Name:mk8657322241b3b1f65443d6cee1b2ccb99f315e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 19:49:31.662031 1120587 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19312-1055011/kubeconfig
	I0729 19:49:31.663828 1120587 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-1055011/kubeconfig: {Name:mkf834b33d9b214f3561db5b8f8958d26700afbb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 19:49:31.664068 1120587 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.100 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 19:49:31.664116 1120587 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0729 19:49:31.664229 1120587 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-024652"
	I0729 19:49:31.664249 1120587 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-024652"
	I0729 19:49:31.664265 1120587 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-024652"
	W0729 19:49:31.664274 1120587 addons.go:243] addon storage-provisioner should already be in state true
	I0729 19:49:31.664265 1120587 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-024652"
	I0729 19:49:31.664286 1120587 config.go:182] Loaded profile config "default-k8s-diff-port-024652": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 19:49:31.664293 1120587 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-024652"
	I0729 19:49:31.664313 1120587 host.go:66] Checking if "default-k8s-diff-port-024652" exists ...
	I0729 19:49:31.664318 1120587 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-024652"
	W0729 19:49:31.664330 1120587 addons.go:243] addon metrics-server should already be in state true
	I0729 19:49:31.664370 1120587 host.go:66] Checking if "default-k8s-diff-port-024652" exists ...
	I0729 19:49:31.664689 1120587 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:49:31.664724 1120587 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:49:31.664775 1120587 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:49:31.664778 1120587 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:49:31.664817 1120587 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:49:31.664827 1120587 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:49:31.665472 1120587 out.go:177] * Verifying Kubernetes components...
	I0729 19:49:31.666773 1120587 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 19:49:31.684886 1120587 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36885
	I0729 19:49:31.684948 1120587 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40365
	I0729 19:49:31.685049 1120587 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46525
	I0729 19:49:31.685394 1120587 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:49:31.685443 1120587 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:49:31.685506 1120587 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:49:31.685916 1120587 main.go:141] libmachine: Using API Version  1
	I0729 19:49:31.685936 1120587 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:49:31.685961 1120587 main.go:141] libmachine: Using API Version  1
	I0729 19:49:31.685982 1120587 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:49:31.686343 1120587 main.go:141] libmachine: Using API Version  1
	I0729 19:49:31.686363 1120587 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:49:31.686378 1120587 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:49:31.686367 1120587 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:49:31.686564 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetState
	I0729 19:49:31.686713 1120587 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:49:31.687028 1120587 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:49:31.687071 1120587 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:49:31.687291 1120587 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:49:31.687340 1120587 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:49:31.690159 1120587 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-024652"
	W0729 19:49:31.690177 1120587 addons.go:243] addon default-storageclass should already be in state true
	I0729 19:49:31.690208 1120587 host.go:66] Checking if "default-k8s-diff-port-024652" exists ...
	I0729 19:49:31.690543 1120587 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:49:31.690586 1120587 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:49:31.705387 1120587 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41375
	I0729 19:49:31.705778 1120587 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34099
	I0729 19:49:31.706027 1120587 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:49:31.706144 1120587 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:49:31.706207 1120587 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33381
	I0729 19:49:31.706633 1120587 main.go:141] libmachine: Using API Version  1
	I0729 19:49:31.706652 1120587 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:49:31.706730 1120587 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:49:31.706990 1120587 main.go:141] libmachine: Using API Version  1
	I0729 19:49:31.707009 1120587 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:49:31.707198 1120587 main.go:141] libmachine: Using API Version  1
	I0729 19:49:31.707218 1120587 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:49:31.707376 1120587 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:49:31.707429 1120587 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:49:31.707627 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetState
	I0729 19:49:31.707689 1120587 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:49:31.707861 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetState
	I0729 19:49:31.708016 1120587 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:49:31.708065 1120587 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:49:31.710254 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .DriverName
	I0729 19:49:31.710315 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .DriverName
	I0729 19:49:31.711981 1120587 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 19:49:31.711996 1120587 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0729 19:49:31.713155 1120587 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0729 19:49:31.713179 1120587 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0729 19:49:31.713201 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHHostname
	I0729 19:49:31.713255 1120587 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 19:49:31.713270 1120587 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0729 19:49:31.713289 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHHostname
	I0729 19:49:31.717458 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:49:31.718017 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:73:cb", ip: ""} in network mk-default-k8s-diff-port-024652: {Iface:virbr4 ExpiryTime:2024-07-29 20:44:03 +0000 UTC Type:0 Mac:52:54:00:4c:73:cb Iaid: IPaddr:192.168.72.100 Prefix:24 Hostname:default-k8s-diff-port-024652 Clientid:01:52:54:00:4c:73:cb}
	I0729 19:49:31.718042 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined IP address 192.168.72.100 and MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:49:31.718355 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHPort
	I0729 19:49:31.718503 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHKeyPath
	I0729 19:49:31.718555 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:49:31.718750 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHUsername
	I0729 19:49:31.718888 1120587 sshutil.go:53] new ssh client: &{IP:192.168.72.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/default-k8s-diff-port-024652/id_rsa Username:docker}
	I0729 19:49:31.719190 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHPort
	I0729 19:49:31.719242 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:73:cb", ip: ""} in network mk-default-k8s-diff-port-024652: {Iface:virbr4 ExpiryTime:2024-07-29 20:44:03 +0000 UTC Type:0 Mac:52:54:00:4c:73:cb Iaid: IPaddr:192.168.72.100 Prefix:24 Hostname:default-k8s-diff-port-024652 Clientid:01:52:54:00:4c:73:cb}
	I0729 19:49:31.719255 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined IP address 192.168.72.100 and MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:49:31.719400 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHKeyPath
	I0729 19:49:31.719536 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHUsername
	I0729 19:49:31.719630 1120587 sshutil.go:53] new ssh client: &{IP:192.168.72.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/default-k8s-diff-port-024652/id_rsa Username:docker}
	I0729 19:49:31.726052 1120587 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42897
	I0729 19:49:31.726530 1120587 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:49:31.727089 1120587 main.go:141] libmachine: Using API Version  1
	I0729 19:49:31.727106 1120587 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:49:31.727404 1120587 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:49:31.727585 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetState
	I0729 19:49:31.729111 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .DriverName
	I0729 19:49:31.729730 1120587 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0729 19:49:31.729832 1120587 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0729 19:49:31.729853 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHHostname
	I0729 19:49:31.733855 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:49:31.734290 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:73:cb", ip: ""} in network mk-default-k8s-diff-port-024652: {Iface:virbr4 ExpiryTime:2024-07-29 20:44:03 +0000 UTC Type:0 Mac:52:54:00:4c:73:cb Iaid: IPaddr:192.168.72.100 Prefix:24 Hostname:default-k8s-diff-port-024652 Clientid:01:52:54:00:4c:73:cb}
	I0729 19:49:31.734307 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined IP address 192.168.72.100 and MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:49:31.734528 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHPort
	I0729 19:49:31.734735 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHKeyPath
	I0729 19:49:31.734923 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHUsername
	I0729 19:49:31.735104 1120587 sshutil.go:53] new ssh client: &{IP:192.168.72.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/default-k8s-diff-port-024652/id_rsa Username:docker}
	I0729 19:49:31.896299 1120587 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 19:49:31.916363 1120587 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-024652" to be "Ready" ...
	I0729 19:49:31.946258 1120587 node_ready.go:49] node "default-k8s-diff-port-024652" has status "Ready":"True"
	I0729 19:49:31.946286 1120587 node_ready.go:38] duration metric: took 29.887552ms for node "default-k8s-diff-port-024652" to be "Ready" ...
	I0729 19:49:31.946297 1120587 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 19:49:31.986320 1120587 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0729 19:49:31.986901 1120587 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-wqbpm" in "kube-system" namespace to be "Ready" ...
	I0729 19:49:32.008401 1120587 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0729 19:49:32.008420 1120587 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0729 19:49:32.033950 1120587 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 19:49:32.060771 1120587 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0729 19:49:32.060808 1120587 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0729 19:49:32.108557 1120587 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 19:49:32.108587 1120587 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0729 19:49:32.153081 1120587 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 19:49:32.234814 1120587 main.go:141] libmachine: Making call to close driver server
	I0729 19:49:32.234854 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .Close
	I0729 19:49:32.235187 1120587 main.go:141] libmachine: Successfully made call to close driver server
	I0729 19:49:32.235247 1120587 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 19:49:32.235260 1120587 main.go:141] libmachine: Making call to close driver server
	I0729 19:49:32.235259 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | Closing plugin on server side
	I0729 19:49:32.235270 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .Close
	I0729 19:49:32.235530 1120587 main.go:141] libmachine: Successfully made call to close driver server
	I0729 19:49:32.235546 1120587 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 19:49:32.240556 1120587 main.go:141] libmachine: Making call to close driver server
	I0729 19:49:32.240572 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .Close
	I0729 19:49:32.240859 1120587 main.go:141] libmachine: Successfully made call to close driver server
	I0729 19:49:32.240880 1120587 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 19:49:32.240887 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | Closing plugin on server side
	I0729 19:49:32.510172 1120587 main.go:141] libmachine: Making call to close driver server
	I0729 19:49:32.510201 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .Close
	I0729 19:49:32.510518 1120587 main.go:141] libmachine: Successfully made call to close driver server
	I0729 19:49:32.510535 1120587 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 19:49:32.510558 1120587 main.go:141] libmachine: Making call to close driver server
	I0729 19:49:32.510566 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .Close
	I0729 19:49:32.511002 1120587 main.go:141] libmachine: Successfully made call to close driver server
	I0729 19:49:32.511031 1120587 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 19:49:32.511053 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | Closing plugin on server side
	I0729 19:49:32.755803 1120587 main.go:141] libmachine: Making call to close driver server
	I0729 19:49:32.755828 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .Close
	I0729 19:49:32.756119 1120587 main.go:141] libmachine: Successfully made call to close driver server
	I0729 19:49:32.756135 1120587 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 19:49:32.756144 1120587 main.go:141] libmachine: Making call to close driver server
	I0729 19:49:32.756151 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .Close
	I0729 19:49:32.756432 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | Closing plugin on server side
	I0729 19:49:32.756476 1120587 main.go:141] libmachine: Successfully made call to close driver server
	I0729 19:49:32.756488 1120587 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 19:49:32.756502 1120587 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-024652"
	I0729 19:49:32.758693 1120587 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0729 19:49:29.689616 1119948 pod_ready.go:81] duration metric: took 4m0.001003902s for pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace to be "Ready" ...
	E0729 19:49:29.689644 1119948 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace to be "Ready" (will not retry!)
	I0729 19:49:29.689670 1119948 pod_ready.go:38] duration metric: took 4m12.210774413s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 19:49:29.689724 1119948 kubeadm.go:597] duration metric: took 4m20.557808792s to restartPrimaryControlPlane
	W0729 19:49:29.689815 1119948 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0729 19:49:29.689855 1119948 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0729 19:49:32.759744 1120587 addons.go:510] duration metric: took 1.095628452s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0729 19:49:33.998542 1120587 pod_ready.go:102] pod "coredns-7db6d8ff4d-wqbpm" in "kube-system" namespace has status "Ready":"False"
	I0729 19:49:34.993504 1120587 pod_ready.go:92] pod "coredns-7db6d8ff4d-wqbpm" in "kube-system" namespace has status "Ready":"True"
	I0729 19:49:34.993529 1120587 pod_ready.go:81] duration metric: took 3.006601304s for pod "coredns-7db6d8ff4d-wqbpm" in "kube-system" namespace to be "Ready" ...
	I0729 19:49:34.993538 1120587 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-z8mxw" in "kube-system" namespace to be "Ready" ...
	I0729 19:49:34.999514 1120587 pod_ready.go:92] pod "coredns-7db6d8ff4d-z8mxw" in "kube-system" namespace has status "Ready":"True"
	I0729 19:49:34.999543 1120587 pod_ready.go:81] duration metric: took 5.998397ms for pod "coredns-7db6d8ff4d-z8mxw" in "kube-system" namespace to be "Ready" ...
	I0729 19:49:34.999556 1120587 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-024652" in "kube-system" namespace to be "Ready" ...
	I0729 19:49:35.004591 1120587 pod_ready.go:92] pod "etcd-default-k8s-diff-port-024652" in "kube-system" namespace has status "Ready":"True"
	I0729 19:49:35.004615 1120587 pod_ready.go:81] duration metric: took 5.050736ms for pod "etcd-default-k8s-diff-port-024652" in "kube-system" namespace to be "Ready" ...
	I0729 19:49:35.004626 1120587 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-024652" in "kube-system" namespace to be "Ready" ...
	I0729 19:49:35.009617 1120587 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-024652" in "kube-system" namespace has status "Ready":"True"
	I0729 19:49:35.009639 1120587 pod_ready.go:81] duration metric: took 5.004922ms for pod "kube-apiserver-default-k8s-diff-port-024652" in "kube-system" namespace to be "Ready" ...
	I0729 19:49:35.009649 1120587 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-024652" in "kube-system" namespace to be "Ready" ...
	I0729 19:49:35.015860 1120587 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-024652" in "kube-system" namespace has status "Ready":"True"
	I0729 19:49:35.015879 1120587 pod_ready.go:81] duration metric: took 6.221932ms for pod "kube-controller-manager-default-k8s-diff-port-024652" in "kube-system" namespace to be "Ready" ...
	I0729 19:49:35.015887 1120587 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-wfr8f" in "kube-system" namespace to be "Ready" ...
	I0729 19:49:35.392558 1120587 pod_ready.go:92] pod "kube-proxy-wfr8f" in "kube-system" namespace has status "Ready":"True"
	I0729 19:49:35.392595 1120587 pod_ready.go:81] duration metric: took 376.701757ms for pod "kube-proxy-wfr8f" in "kube-system" namespace to be "Ready" ...
	I0729 19:49:35.392604 1120587 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-024652" in "kube-system" namespace to be "Ready" ...
	I0729 19:49:35.791324 1120587 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-024652" in "kube-system" namespace has status "Ready":"True"
	I0729 19:49:35.791357 1120587 pod_ready.go:81] duration metric: took 398.744718ms for pod "kube-scheduler-default-k8s-diff-port-024652" in "kube-system" namespace to be "Ready" ...
	I0729 19:49:35.791368 1120587 pod_ready.go:38] duration metric: took 3.84505744s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 19:49:35.791389 1120587 api_server.go:52] waiting for apiserver process to appear ...
	I0729 19:49:35.791451 1120587 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:49:35.808765 1120587 api_server.go:72] duration metric: took 4.144664884s to wait for apiserver process to appear ...
	I0729 19:49:35.808795 1120587 api_server.go:88] waiting for apiserver healthz status ...
	I0729 19:49:35.808816 1120587 api_server.go:253] Checking apiserver healthz at https://192.168.72.100:8444/healthz ...
	I0729 19:49:35.813053 1120587 api_server.go:279] https://192.168.72.100:8444/healthz returned 200:
	ok
	I0729 19:49:35.814108 1120587 api_server.go:141] control plane version: v1.30.3
	I0729 19:49:35.814129 1120587 api_server.go:131] duration metric: took 5.326691ms to wait for apiserver health ...
	I0729 19:49:35.814135 1120587 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 19:49:35.994230 1120587 system_pods.go:59] 9 kube-system pods found
	I0729 19:49:35.994267 1120587 system_pods.go:61] "coredns-7db6d8ff4d-wqbpm" [96db74e9-67ca-4065-8758-a27a14b6d3d5] Running
	I0729 19:49:35.994274 1120587 system_pods.go:61] "coredns-7db6d8ff4d-z8mxw" [12aa4a13-f4af-4cda-b099-5e0e44836300] Running
	I0729 19:49:35.994280 1120587 system_pods.go:61] "etcd-default-k8s-diff-port-024652" [6c733608-bc36-40a8-a6d1-2fa10ee45ef7] Running
	I0729 19:49:35.994285 1120587 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-024652" [755ccaaa-70fc-4d21-bf24-55638ea6778a] Running
	I0729 19:49:35.994293 1120587 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-024652" [1ed4cda3-7de9-4562-be52-b2a5f3490979] Running
	I0729 19:49:35.994300 1120587 system_pods.go:61] "kube-proxy-wfr8f" [86699d3a-0843-4b82-b772-23c8f5b7c88a] Running
	I0729 19:49:35.994305 1120587 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-024652" [d51619f9-c388-4ca5-a3e7-2028f0f76d9a] Running
	I0729 19:49:35.994314 1120587 system_pods.go:61] "metrics-server-569cc877fc-rp2fk" [826ffadd-1c1c-4666-8c09-f43a82262912] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 19:49:35.994318 1120587 system_pods.go:61] "storage-provisioner" [ce612854-895f-44d4-8c33-30c3a7eff802] Running
	I0729 19:49:35.994329 1120587 system_pods.go:74] duration metric: took 180.186983ms to wait for pod list to return data ...
	I0729 19:49:35.994339 1120587 default_sa.go:34] waiting for default service account to be created ...
	I0729 19:49:36.191025 1120587 default_sa.go:45] found service account: "default"
	I0729 19:49:36.191057 1120587 default_sa.go:55] duration metric: took 196.710231ms for default service account to be created ...
	I0729 19:49:36.191066 1120587 system_pods.go:116] waiting for k8s-apps to be running ...
	I0729 19:49:36.395188 1120587 system_pods.go:86] 9 kube-system pods found
	I0729 19:49:36.395218 1120587 system_pods.go:89] "coredns-7db6d8ff4d-wqbpm" [96db74e9-67ca-4065-8758-a27a14b6d3d5] Running
	I0729 19:49:36.395224 1120587 system_pods.go:89] "coredns-7db6d8ff4d-z8mxw" [12aa4a13-f4af-4cda-b099-5e0e44836300] Running
	I0729 19:49:36.395229 1120587 system_pods.go:89] "etcd-default-k8s-diff-port-024652" [6c733608-bc36-40a8-a6d1-2fa10ee45ef7] Running
	I0729 19:49:36.395233 1120587 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-024652" [755ccaaa-70fc-4d21-bf24-55638ea6778a] Running
	I0729 19:49:36.395237 1120587 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-024652" [1ed4cda3-7de9-4562-be52-b2a5f3490979] Running
	I0729 19:49:36.395241 1120587 system_pods.go:89] "kube-proxy-wfr8f" [86699d3a-0843-4b82-b772-23c8f5b7c88a] Running
	I0729 19:49:36.395245 1120587 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-024652" [d51619f9-c388-4ca5-a3e7-2028f0f76d9a] Running
	I0729 19:49:36.395257 1120587 system_pods.go:89] "metrics-server-569cc877fc-rp2fk" [826ffadd-1c1c-4666-8c09-f43a82262912] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 19:49:36.395262 1120587 system_pods.go:89] "storage-provisioner" [ce612854-895f-44d4-8c33-30c3a7eff802] Running
	I0729 19:49:36.395272 1120587 system_pods.go:126] duration metric: took 204.199685ms to wait for k8s-apps to be running ...
	I0729 19:49:36.395280 1120587 system_svc.go:44] waiting for kubelet service to be running ....
	I0729 19:49:36.395327 1120587 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 19:49:36.414410 1120587 system_svc.go:56] duration metric: took 19.116999ms WaitForService to wait for kubelet
	I0729 19:49:36.414442 1120587 kubeadm.go:582] duration metric: took 4.750347675s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 19:49:36.414470 1120587 node_conditions.go:102] verifying NodePressure condition ...
	I0729 19:49:36.591019 1120587 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 19:49:36.591045 1120587 node_conditions.go:123] node cpu capacity is 2
	I0729 19:49:36.591058 1120587 node_conditions.go:105] duration metric: took 176.580075ms to run NodePressure ...
	I0729 19:49:36.591069 1120587 start.go:241] waiting for startup goroutines ...
	I0729 19:49:36.591076 1120587 start.go:246] waiting for cluster config update ...
	I0729 19:49:36.591086 1120587 start.go:255] writing updated cluster config ...
	I0729 19:49:36.591330 1120587 ssh_runner.go:195] Run: rm -f paused
	I0729 19:49:36.641571 1120587 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0729 19:49:36.643324 1120587 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-024652" cluster and "default" namespace by default
	I0729 19:49:55.819640 1119948 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.129754186s)
	I0729 19:49:55.819736 1119948 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 19:49:55.857245 1119948 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 19:49:55.874823 1119948 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 19:49:55.887767 1119948 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 19:49:55.887786 1119948 kubeadm.go:157] found existing configuration files:
	
	I0729 19:49:55.887826 1119948 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 19:49:55.898598 1119948 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 19:49:55.898659 1119948 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 19:49:55.919811 1119948 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 19:49:55.929490 1119948 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 19:49:55.929557 1119948 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 19:49:55.938832 1119948 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 19:49:55.952638 1119948 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 19:49:55.952698 1119948 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 19:49:55.965512 1119948 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 19:49:55.975116 1119948 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 19:49:55.975180 1119948 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 19:49:55.984448 1119948 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 19:49:56.040488 1119948 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0-beta.0
	I0729 19:49:56.040619 1119948 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 19:49:56.161648 1119948 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 19:49:56.161792 1119948 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 19:49:56.161913 1119948 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0729 19:49:56.171626 1119948 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 19:49:56.173709 1119948 out.go:204]   - Generating certificates and keys ...
	I0729 19:49:56.173830 1119948 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 19:49:56.173928 1119948 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 19:49:56.174047 1119948 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0729 19:49:56.174143 1119948 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0729 19:49:56.174232 1119948 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0729 19:49:56.174302 1119948 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0729 19:49:56.174382 1119948 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0729 19:49:56.174453 1119948 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0729 19:49:56.174572 1119948 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0729 19:49:56.174694 1119948 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0729 19:49:56.174750 1119948 kubeadm.go:310] [certs] Using the existing "sa" key
	I0729 19:49:56.174830 1119948 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 19:49:56.246122 1119948 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 19:49:56.355960 1119948 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0729 19:49:56.420777 1119948 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 19:49:56.496969 1119948 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 19:49:56.583932 1119948 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 19:49:56.584470 1119948 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 19:49:56.587115 1119948 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 19:49:56.588779 1119948 out.go:204]   - Booting up control plane ...
	I0729 19:49:56.588912 1119948 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 19:49:56.588986 1119948 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 19:49:56.589041 1119948 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 19:49:56.608126 1119948 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 19:49:56.614632 1119948 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 19:49:56.614696 1119948 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 19:49:56.754879 1119948 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0729 19:49:56.754999 1119948 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0729 19:49:57.257324 1119948 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.327954ms
	I0729 19:49:57.257465 1119948 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0729 19:50:02.762738 1119948 kubeadm.go:310] [api-check] The API server is healthy after 5.503528666s
	I0729 19:50:02.774459 1119948 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0729 19:50:02.788865 1119948 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0729 19:50:02.826192 1119948 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0729 19:50:02.826457 1119948 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-843792 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0729 19:50:02.839359 1119948 kubeadm.go:310] [bootstrap-token] Using token: yaj2k6.6nijnxczu3nl8yfv
	I0729 19:50:02.840952 1119948 out.go:204]   - Configuring RBAC rules ...
	I0729 19:50:02.841087 1119948 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0729 19:50:02.846969 1119948 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0729 19:50:02.861696 1119948 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0729 19:50:02.866680 1119948 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0729 19:50:02.871113 1119948 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0729 19:50:02.875148 1119948 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0729 19:50:03.170084 1119948 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0729 19:50:03.622188 1119948 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0729 19:50:04.170979 1119948 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0729 19:50:04.171916 1119948 kubeadm.go:310] 
	I0729 19:50:04.172017 1119948 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0729 19:50:04.172027 1119948 kubeadm.go:310] 
	I0729 19:50:04.172139 1119948 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0729 19:50:04.172149 1119948 kubeadm.go:310] 
	I0729 19:50:04.172183 1119948 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0729 19:50:04.172258 1119948 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0729 19:50:04.172337 1119948 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0729 19:50:04.172356 1119948 kubeadm.go:310] 
	I0729 19:50:04.172451 1119948 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0729 19:50:04.172480 1119948 kubeadm.go:310] 
	I0729 19:50:04.172570 1119948 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0729 19:50:04.172581 1119948 kubeadm.go:310] 
	I0729 19:50:04.172652 1119948 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0729 19:50:04.172755 1119948 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0729 19:50:04.172861 1119948 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0729 19:50:04.172876 1119948 kubeadm.go:310] 
	I0729 19:50:04.172944 1119948 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0729 19:50:04.173046 1119948 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0729 19:50:04.173056 1119948 kubeadm.go:310] 
	I0729 19:50:04.173171 1119948 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token yaj2k6.6nijnxczu3nl8yfv \
	I0729 19:50:04.173307 1119948 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:53e118302d1159bf0acd8cfe005edb508199276288ff17d6630ff7f7307f10b1 \
	I0729 19:50:04.173330 1119948 kubeadm.go:310] 	--control-plane 
	I0729 19:50:04.173334 1119948 kubeadm.go:310] 
	I0729 19:50:04.173405 1119948 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0729 19:50:04.173411 1119948 kubeadm.go:310] 
	I0729 19:50:04.173493 1119948 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token yaj2k6.6nijnxczu3nl8yfv \
	I0729 19:50:04.173666 1119948 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:53e118302d1159bf0acd8cfe005edb508199276288ff17d6630ff7f7307f10b1 
	I0729 19:50:04.175016 1119948 kubeadm.go:310] W0729 19:49:56.020841    2986 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0729 19:50:04.175395 1119948 kubeadm.go:310] W0729 19:49:56.021779    2986 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0729 19:50:04.175537 1119948 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 19:50:04.175567 1119948 cni.go:84] Creating CNI manager for ""
	I0729 19:50:04.175577 1119948 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 19:50:04.177050 1119948 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 19:50:04.178074 1119948 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 19:50:04.189753 1119948 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0729 19:50:04.212891 1119948 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0729 19:50:04.213003 1119948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:50:04.213014 1119948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-843792 minikube.k8s.io/updated_at=2024_07_29T19_50_04_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=ff26f50543a99741e8a988a34136ffea2b41dbf0 minikube.k8s.io/name=no-preload-843792 minikube.k8s.io/primary=true
	I0729 19:50:04.241948 1119948 ops.go:34] apiserver oom_adj: -16
	I0729 19:50:04.470011 1119948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:50:04.970139 1119948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:50:05.470618 1119948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:50:05.970968 1119948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:50:06.471036 1119948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:50:06.970260 1119948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:50:07.470060 1119948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:50:07.970455 1119948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:50:08.091380 1119948 kubeadm.go:1113] duration metric: took 3.878454801s to wait for elevateKubeSystemPrivileges
	I0729 19:50:08.091420 1119948 kubeadm.go:394] duration metric: took 4m59.009669918s to StartCluster
	I0729 19:50:08.091442 1119948 settings.go:142] acquiring lock: {Name:mk8657322241b3b1f65443d6cee1b2ccb99f315e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 19:50:08.091531 1119948 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19312-1055011/kubeconfig
	I0729 19:50:08.093926 1119948 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-1055011/kubeconfig: {Name:mkf834b33d9b214f3561db5b8f8958d26700afbb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 19:50:08.094254 1119948 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.248 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 19:50:08.094349 1119948 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0729 19:50:08.094445 1119948 addons.go:69] Setting storage-provisioner=true in profile "no-preload-843792"
	I0729 19:50:08.094490 1119948 addons.go:234] Setting addon storage-provisioner=true in "no-preload-843792"
	I0729 19:50:08.094489 1119948 config.go:182] Loaded profile config "no-preload-843792": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	W0729 19:50:08.094502 1119948 addons.go:243] addon storage-provisioner should already be in state true
	I0729 19:50:08.094506 1119948 addons.go:69] Setting default-storageclass=true in profile "no-preload-843792"
	I0729 19:50:08.094537 1119948 host.go:66] Checking if "no-preload-843792" exists ...
	I0729 19:50:08.094545 1119948 addons.go:69] Setting metrics-server=true in profile "no-preload-843792"
	I0729 19:50:08.094555 1119948 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-843792"
	I0729 19:50:08.094567 1119948 addons.go:234] Setting addon metrics-server=true in "no-preload-843792"
	W0729 19:50:08.094576 1119948 addons.go:243] addon metrics-server should already be in state true
	I0729 19:50:08.094606 1119948 host.go:66] Checking if "no-preload-843792" exists ...
	I0729 19:50:08.094992 1119948 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:50:08.095014 1119948 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:50:08.094991 1119948 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:50:08.095032 1119948 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:50:08.095032 1119948 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:50:08.095053 1119948 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:50:08.095990 1119948 out.go:177] * Verifying Kubernetes components...
	I0729 19:50:08.097297 1119948 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 19:50:08.111086 1119948 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39031
	I0729 19:50:08.111172 1119948 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35951
	I0729 19:50:08.111530 1119948 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:50:08.111611 1119948 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:50:08.112076 1119948 main.go:141] libmachine: Using API Version  1
	I0729 19:50:08.112096 1119948 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:50:08.112212 1119948 main.go:141] libmachine: Using API Version  1
	I0729 19:50:08.112236 1119948 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:50:08.112601 1119948 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:50:08.112598 1119948 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:50:08.113192 1119948 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:50:08.113222 1119948 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:50:08.113195 1119948 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:50:08.113331 1119948 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:50:08.113688 1119948 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43039
	I0729 19:50:08.114065 1119948 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:50:08.114550 1119948 main.go:141] libmachine: Using API Version  1
	I0729 19:50:08.114573 1119948 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:50:08.115130 1119948 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:50:08.115340 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetState
	I0729 19:50:08.118967 1119948 addons.go:234] Setting addon default-storageclass=true in "no-preload-843792"
	W0729 19:50:08.118988 1119948 addons.go:243] addon default-storageclass should already be in state true
	I0729 19:50:08.119018 1119948 host.go:66] Checking if "no-preload-843792" exists ...
	I0729 19:50:08.119367 1119948 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:50:08.119391 1119948 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:50:08.131330 1119948 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34509
	I0729 19:50:08.131868 1119948 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:50:08.132155 1119948 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44961
	I0729 19:50:08.132404 1119948 main.go:141] libmachine: Using API Version  1
	I0729 19:50:08.132427 1119948 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:50:08.132485 1119948 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:50:08.132795 1119948 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:50:08.133148 1119948 main.go:141] libmachine: Using API Version  1
	I0729 19:50:08.133167 1119948 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:50:08.133169 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetState
	I0729 19:50:08.133541 1119948 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:50:08.133802 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetState
	I0729 19:50:08.135456 1119948 main.go:141] libmachine: (no-preload-843792) Calling .DriverName
	I0729 19:50:08.135939 1119948 main.go:141] libmachine: (no-preload-843792) Calling .DriverName
	I0729 19:50:08.137341 1119948 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 19:50:08.137345 1119948 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0729 19:50:08.139247 1119948 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0729 19:50:08.139281 1119948 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0729 19:50:08.139303 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHHostname
	I0729 19:50:08.139373 1119948 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 19:50:08.139393 1119948 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0729 19:50:08.139411 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHHostname
	I0729 19:50:08.143427 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:50:08.143462 1119948 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40183
	I0729 19:50:08.143636 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:50:08.143916 1119948 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:50:08.143982 1119948 main.go:141] libmachine: (no-preload-843792) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:0e:8c", ip: ""} in network mk-no-preload-843792: {Iface:virbr2 ExpiryTime:2024-07-29 20:44:43 +0000 UTC Type:0 Mac:52:54:00:ae:0e:8c Iaid: IPaddr:192.168.50.248 Prefix:24 Hostname:no-preload-843792 Clientid:01:52:54:00:ae:0e:8c}
	I0729 19:50:08.143994 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined IP address 192.168.50.248 and MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:50:08.144028 1119948 main.go:141] libmachine: (no-preload-843792) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:0e:8c", ip: ""} in network mk-no-preload-843792: {Iface:virbr2 ExpiryTime:2024-07-29 20:44:43 +0000 UTC Type:0 Mac:52:54:00:ae:0e:8c Iaid: IPaddr:192.168.50.248 Prefix:24 Hostname:no-preload-843792 Clientid:01:52:54:00:ae:0e:8c}
	I0729 19:50:08.144061 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined IP address 192.168.50.248 and MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:50:08.144375 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHPort
	I0729 19:50:08.144420 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHPort
	I0729 19:50:08.144425 1119948 main.go:141] libmachine: Using API Version  1
	I0729 19:50:08.144437 1119948 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:50:08.144564 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHKeyPath
	I0729 19:50:08.144608 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHKeyPath
	I0729 19:50:08.144771 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHUsername
	I0729 19:50:08.144802 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHUsername
	I0729 19:50:08.144836 1119948 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:50:08.144947 1119948 sshutil.go:53] new ssh client: &{IP:192.168.50.248 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/no-preload-843792/id_rsa Username:docker}
	I0729 19:50:08.144951 1119948 sshutil.go:53] new ssh client: &{IP:192.168.50.248 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/no-preload-843792/id_rsa Username:docker}
	I0729 19:50:08.145438 1119948 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:50:08.145468 1119948 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:50:08.162100 1119948 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46055
	I0729 19:50:08.162705 1119948 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:50:08.163290 1119948 main.go:141] libmachine: Using API Version  1
	I0729 19:50:08.163312 1119948 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:50:08.163700 1119948 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:50:08.163887 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetState
	I0729 19:50:08.165757 1119948 main.go:141] libmachine: (no-preload-843792) Calling .DriverName
	I0729 19:50:08.165967 1119948 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0729 19:50:08.165983 1119948 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0729 19:50:08.166000 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHHostname
	I0729 19:50:08.169065 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:50:08.169515 1119948 main.go:141] libmachine: (no-preload-843792) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:0e:8c", ip: ""} in network mk-no-preload-843792: {Iface:virbr2 ExpiryTime:2024-07-29 20:44:43 +0000 UTC Type:0 Mac:52:54:00:ae:0e:8c Iaid: IPaddr:192.168.50.248 Prefix:24 Hostname:no-preload-843792 Clientid:01:52:54:00:ae:0e:8c}
	I0729 19:50:08.169535 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined IP address 192.168.50.248 and MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:50:08.169694 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHPort
	I0729 19:50:08.169850 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHKeyPath
	I0729 19:50:08.170030 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHUsername
	I0729 19:50:08.170144 1119948 sshutil.go:53] new ssh client: &{IP:192.168.50.248 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/no-preload-843792/id_rsa Username:docker}
	I0729 19:50:08.279563 1119948 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 19:50:08.297004 1119948 node_ready.go:35] waiting up to 6m0s for node "no-preload-843792" to be "Ready" ...
	I0729 19:50:08.308403 1119948 node_ready.go:49] node "no-preload-843792" has status "Ready":"True"
	I0729 19:50:08.308428 1119948 node_ready.go:38] duration metric: took 11.381814ms for node "no-preload-843792" to be "Ready" ...
	I0729 19:50:08.308437 1119948 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 19:50:08.326920 1119948 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5cfdc65f69-ck5zf" in "kube-system" namespace to be "Ready" ...
	I0729 19:50:08.394482 1119948 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0729 19:50:08.394511 1119948 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0729 19:50:08.431819 1119948 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0729 19:50:08.431850 1119948 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0729 19:50:08.432280 1119948 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 19:50:08.452951 1119948 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0729 19:50:08.512078 1119948 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 19:50:08.512110 1119948 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0729 19:50:08.636490 1119948 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 19:50:09.357187 1119948 main.go:141] libmachine: Making call to close driver server
	I0729 19:50:09.357212 1119948 main.go:141] libmachine: (no-preload-843792) Calling .Close
	I0729 19:50:09.357248 1119948 main.go:141] libmachine: Making call to close driver server
	I0729 19:50:09.357274 1119948 main.go:141] libmachine: (no-preload-843792) Calling .Close
	I0729 19:50:09.357564 1119948 main.go:141] libmachine: (no-preload-843792) DBG | Closing plugin on server side
	I0729 19:50:09.357633 1119948 main.go:141] libmachine: (no-preload-843792) DBG | Closing plugin on server side
	I0729 19:50:09.357646 1119948 main.go:141] libmachine: Successfully made call to close driver server
	I0729 19:50:09.357646 1119948 main.go:141] libmachine: Successfully made call to close driver server
	I0729 19:50:09.357659 1119948 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 19:50:09.357662 1119948 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 19:50:09.357671 1119948 main.go:141] libmachine: Making call to close driver server
	I0729 19:50:09.357679 1119948 main.go:141] libmachine: (no-preload-843792) Calling .Close
	I0729 19:50:09.357682 1119948 main.go:141] libmachine: Making call to close driver server
	I0729 19:50:09.357690 1119948 main.go:141] libmachine: (no-preload-843792) Calling .Close
	I0729 19:50:09.358945 1119948 main.go:141] libmachine: (no-preload-843792) DBG | Closing plugin on server side
	I0729 19:50:09.358969 1119948 main.go:141] libmachine: Successfully made call to close driver server
	I0729 19:50:09.359019 1119948 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 19:50:09.359042 1119948 main.go:141] libmachine: (no-preload-843792) DBG | Closing plugin on server side
	I0729 19:50:09.358989 1119948 main.go:141] libmachine: Successfully made call to close driver server
	I0729 19:50:09.359074 1119948 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 19:50:09.419421 1119948 main.go:141] libmachine: Making call to close driver server
	I0729 19:50:09.419445 1119948 main.go:141] libmachine: (no-preload-843792) Calling .Close
	I0729 19:50:09.419864 1119948 main.go:141] libmachine: (no-preload-843792) DBG | Closing plugin on server side
	I0729 19:50:09.419868 1119948 main.go:141] libmachine: Successfully made call to close driver server
	I0729 19:50:09.419905 1119948 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 19:50:09.938758 1119948 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.302197805s)
	I0729 19:50:09.938827 1119948 main.go:141] libmachine: Making call to close driver server
	I0729 19:50:09.938854 1119948 main.go:141] libmachine: (no-preload-843792) Calling .Close
	I0729 19:50:09.939241 1119948 main.go:141] libmachine: Successfully made call to close driver server
	I0729 19:50:09.939260 1119948 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 19:50:09.939270 1119948 main.go:141] libmachine: (no-preload-843792) DBG | Closing plugin on server side
	I0729 19:50:09.939273 1119948 main.go:141] libmachine: Making call to close driver server
	I0729 19:50:09.939284 1119948 main.go:141] libmachine: (no-preload-843792) Calling .Close
	I0729 19:50:09.939509 1119948 main.go:141] libmachine: Successfully made call to close driver server
	I0729 19:50:09.939526 1119948 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 19:50:09.939540 1119948 addons.go:475] Verifying addon metrics-server=true in "no-preload-843792"
	I0729 19:50:09.939558 1119948 main.go:141] libmachine: (no-preload-843792) DBG | Closing plugin on server side
	I0729 19:50:09.941050 1119948 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0729 19:50:09.942006 1119948 addons.go:510] duration metric: took 1.847661826s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
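For reference, the addon state recorded above can be checked by hand once the profile is up. A minimal sketch, assuming the no-preload-843792 profile and the context/deployment names minikube normally creates:

	# list addon status for the profile; storage-provisioner, default-storageclass and metrics-server should show enabled
	minikube -p no-preload-843792 addons list
	# confirm the Deployment created by the metrics-server manifests applied above
	kubectl --context no-preload-843792 -n kube-system get deployment metrics-server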
	I0729 19:50:10.334878 1119948 pod_ready.go:102] pod "coredns-5cfdc65f69-ck5zf" in "kube-system" namespace has status "Ready":"False"
	I0729 19:50:12.834554 1119948 pod_ready.go:102] pod "coredns-5cfdc65f69-ck5zf" in "kube-system" namespace has status "Ready":"False"
	I0729 19:50:15.334388 1119948 pod_ready.go:102] pod "coredns-5cfdc65f69-ck5zf" in "kube-system" namespace has status "Ready":"False"
	I0729 19:50:16.843448 1119948 pod_ready.go:92] pod "coredns-5cfdc65f69-ck5zf" in "kube-system" namespace has status "Ready":"True"
	I0729 19:50:16.843480 1119948 pod_ready.go:81] duration metric: took 8.516527239s for pod "coredns-5cfdc65f69-ck5zf" in "kube-system" namespace to be "Ready" ...
	I0729 19:50:16.843494 1119948 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-843792" in "kube-system" namespace to be "Ready" ...
	I0729 19:50:16.847567 1119948 pod_ready.go:92] pod "etcd-no-preload-843792" in "kube-system" namespace has status "Ready":"True"
	I0729 19:50:16.847588 1119948 pod_ready.go:81] duration metric: took 4.086961ms for pod "etcd-no-preload-843792" in "kube-system" namespace to be "Ready" ...
	I0729 19:50:16.847597 1119948 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-843792" in "kube-system" namespace to be "Ready" ...
	I0729 19:50:16.857374 1119948 pod_ready.go:92] pod "kube-apiserver-no-preload-843792" in "kube-system" namespace has status "Ready":"True"
	I0729 19:50:16.857395 1119948 pod_ready.go:81] duration metric: took 9.790628ms for pod "kube-apiserver-no-preload-843792" in "kube-system" namespace to be "Ready" ...
	I0729 19:50:16.857403 1119948 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-843792" in "kube-system" namespace to be "Ready" ...
	I0729 19:50:16.861971 1119948 pod_ready.go:92] pod "kube-controller-manager-no-preload-843792" in "kube-system" namespace has status "Ready":"True"
	I0729 19:50:16.861990 1119948 pod_ready.go:81] duration metric: took 4.580287ms for pod "kube-controller-manager-no-preload-843792" in "kube-system" namespace to be "Ready" ...
	I0729 19:50:16.861998 1119948 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-843792" in "kube-system" namespace to be "Ready" ...
	I0729 19:50:16.865992 1119948 pod_ready.go:92] pod "kube-scheduler-no-preload-843792" in "kube-system" namespace has status "Ready":"True"
	I0729 19:50:16.866006 1119948 pod_ready.go:81] duration metric: took 4.002585ms for pod "kube-scheduler-no-preload-843792" in "kube-system" namespace to be "Ready" ...
	I0729 19:50:16.866012 1119948 pod_ready.go:38] duration metric: took 8.557565808s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
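The per-pod readiness polling above can be approximated from the host with kubectl's built-in wait. A rough equivalent, assuming the same context name:

	# block until the CoreDNS pods report Ready, mirroring the pod_ready loop in the log
	kubectl --context no-preload-843792 -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=6m
	# the control-plane static pods can be checked the same way via their component labels
	kubectl --context no-preload-843792 -n kube-system wait --for=condition=Ready pod -l component=kube-apiserver --timeout=6m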
	I0729 19:50:16.866026 1119948 api_server.go:52] waiting for apiserver process to appear ...
	I0729 19:50:16.866069 1119948 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:50:16.881797 1119948 api_server.go:72] duration metric: took 8.787509233s to wait for apiserver process to appear ...
	I0729 19:50:16.881817 1119948 api_server.go:88] waiting for apiserver healthz status ...
	I0729 19:50:16.881835 1119948 api_server.go:253] Checking apiserver healthz at https://192.168.50.248:8443/healthz ...
	I0729 19:50:16.886007 1119948 api_server.go:279] https://192.168.50.248:8443/healthz returned 200:
	ok
	I0729 19:50:16.886862 1119948 api_server.go:141] control plane version: v1.31.0-beta.0
	I0729 19:50:16.886882 1119948 api_server.go:131] duration metric: took 5.057536ms to wait for apiserver health ...
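The healthz probe logged here is a plain GET against the apiserver on the VM IP and can be reproduced directly from the host. A sketch, assuming the default RBAC still exposes /healthz to unauthenticated clients:

	# -k skips TLS verification; no client certificate should be needed for /healthz under the default RBAC
	curl -sk https://192.168.50.248:8443/healthz
	# expected response body: ok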
	I0729 19:50:16.886891 1119948 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 19:50:17.034651 1119948 system_pods.go:59] 9 kube-system pods found
	I0729 19:50:17.034684 1119948 system_pods.go:61] "coredns-5cfdc65f69-bk2nx" [662b0879-7c15-4ec3-a6b6-e49fd9597dcf] Running
	I0729 19:50:17.034689 1119948 system_pods.go:61] "coredns-5cfdc65f69-ck5zf" [ad6c9c9b-740c-464d-85c2-a9ae44663f63] Running
	I0729 19:50:17.034693 1119948 system_pods.go:61] "etcd-no-preload-843792" [e4cba264-21e2-499e-9768-417b316f6a04] Running
	I0729 19:50:17.034696 1119948 system_pods.go:61] "kube-apiserver-no-preload-843792" [24c2bd0e-2029-4985-836a-599ad2a2a7ab] Running
	I0729 19:50:17.034700 1119948 system_pods.go:61] "kube-controller-manager-no-preload-843792" [fb7ec8d7-5d48-428a-af99-f031d747fe2b] Running
	I0729 19:50:17.034704 1119948 system_pods.go:61] "kube-proxy-8hbrf" [3b64c7b2-cbed-4c0e-bc1b-2cef107b115c] Running
	I0729 19:50:17.034706 1119948 system_pods.go:61] "kube-scheduler-no-preload-843792" [fc166fdd-59e8-41f0-909c-71044da69f34] Running
	I0729 19:50:17.034712 1119948 system_pods.go:61] "metrics-server-78fcd8795b-fzt2k" [180acfb0-ec43-4f2e-b04a-048253d4b79e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 19:50:17.034716 1119948 system_pods.go:61] "storage-provisioner" [ee09516d-7ef7-4d66-9acf-7fd4cde3c673] Running
	I0729 19:50:17.034723 1119948 system_pods.go:74] duration metric: took 147.826766ms to wait for pod list to return data ...
	I0729 19:50:17.034731 1119948 default_sa.go:34] waiting for default service account to be created ...
	I0729 19:50:17.231811 1119948 default_sa.go:45] found service account: "default"
	I0729 19:50:17.231841 1119948 default_sa.go:55] duration metric: took 197.103306ms for default service account to be created ...
	I0729 19:50:17.231852 1119948 system_pods.go:116] waiting for k8s-apps to be running ...
	I0729 19:50:17.435766 1119948 system_pods.go:86] 9 kube-system pods found
	I0729 19:50:17.435801 1119948 system_pods.go:89] "coredns-5cfdc65f69-bk2nx" [662b0879-7c15-4ec3-a6b6-e49fd9597dcf] Running
	I0729 19:50:17.435809 1119948 system_pods.go:89] "coredns-5cfdc65f69-ck5zf" [ad6c9c9b-740c-464d-85c2-a9ae44663f63] Running
	I0729 19:50:17.435816 1119948 system_pods.go:89] "etcd-no-preload-843792" [e4cba264-21e2-499e-9768-417b316f6a04] Running
	I0729 19:50:17.435822 1119948 system_pods.go:89] "kube-apiserver-no-preload-843792" [24c2bd0e-2029-4985-836a-599ad2a2a7ab] Running
	I0729 19:50:17.435828 1119948 system_pods.go:89] "kube-controller-manager-no-preload-843792" [fb7ec8d7-5d48-428a-af99-f031d747fe2b] Running
	I0729 19:50:17.435835 1119948 system_pods.go:89] "kube-proxy-8hbrf" [3b64c7b2-cbed-4c0e-bc1b-2cef107b115c] Running
	I0729 19:50:17.435841 1119948 system_pods.go:89] "kube-scheduler-no-preload-843792" [fc166fdd-59e8-41f0-909c-71044da69f34] Running
	I0729 19:50:17.435849 1119948 system_pods.go:89] "metrics-server-78fcd8795b-fzt2k" [180acfb0-ec43-4f2e-b04a-048253d4b79e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 19:50:17.435856 1119948 system_pods.go:89] "storage-provisioner" [ee09516d-7ef7-4d66-9acf-7fd4cde3c673] Running
	I0729 19:50:17.435867 1119948 system_pods.go:126] duration metric: took 204.008054ms to wait for k8s-apps to be running ...
	I0729 19:50:17.435875 1119948 system_svc.go:44] waiting for kubelet service to be running ....
	I0729 19:50:17.435926 1119948 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 19:50:17.451816 1119948 system_svc.go:56] duration metric: took 15.929502ms WaitForService to wait for kubelet
	I0729 19:50:17.451848 1119948 kubeadm.go:582] duration metric: took 9.357563402s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 19:50:17.451872 1119948 node_conditions.go:102] verifying NodePressure condition ...
	I0729 19:50:17.632427 1119948 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 19:50:17.632465 1119948 node_conditions.go:123] node cpu capacity is 2
	I0729 19:50:17.632481 1119948 node_conditions.go:105] duration metric: took 180.602976ms to run NodePressure ...
	I0729 19:50:17.632497 1119948 start.go:241] waiting for startup goroutines ...
	I0729 19:50:17.632506 1119948 start.go:246] waiting for cluster config update ...
	I0729 19:50:17.632525 1119948 start.go:255] writing updated cluster config ...
	I0729 19:50:17.632908 1119948 ssh_runner.go:195] Run: rm -f paused
	I0729 19:50:17.687540 1119948 start.go:600] kubectl: 1.30.3, cluster: 1.31.0-beta.0 (minor skew: 1)
	I0729 19:50:17.689409 1119948 out.go:177] * Done! kubectl is now configured to use "no-preload-843792" cluster and "default" namespace by default
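Once the profile reports Done, the cluster can be sanity-checked from the host with the kubeconfig context minikube just wrote. A minimal check, assuming nothing has switched contexts since:

	kubectl get nodes -o wide              # current context is already no-preload-843792
	kubectl get pods -A                    # the nine kube-system pods listed above should appear
	minikube -p no-preload-843792 status   # host, kubelet, apiserver should be Running; kubeconfig Configured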
	I0729 19:50:40.036000 1120970 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0729 19:50:40.036324 1120970 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0729 19:50:40.038447 1120970 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0729 19:50:40.038603 1120970 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 19:50:40.038790 1120970 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 19:50:40.039225 1120970 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 19:50:40.039617 1120970 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0729 19:50:40.039731 1120970 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 19:50:40.041420 1120970 out.go:204]   - Generating certificates and keys ...
	I0729 19:50:40.041522 1120970 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 19:50:40.041589 1120970 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 19:50:40.041712 1120970 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0729 19:50:40.041810 1120970 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0729 19:50:40.041935 1120970 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0729 19:50:40.042019 1120970 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0729 19:50:40.042111 1120970 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0729 19:50:40.042190 1120970 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0729 19:50:40.042285 1120970 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0729 19:50:40.042401 1120970 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0729 19:50:40.042465 1120970 kubeadm.go:310] [certs] Using the existing "sa" key
	I0729 19:50:40.042535 1120970 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 19:50:40.042581 1120970 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 19:50:40.042628 1120970 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 19:50:40.042698 1120970 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 19:50:40.042781 1120970 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 19:50:40.042934 1120970 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 19:50:40.043061 1120970 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 19:50:40.043128 1120970 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 19:50:40.043208 1120970 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 19:50:40.044637 1120970 out.go:204]   - Booting up control plane ...
	I0729 19:50:40.044750 1120970 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 19:50:40.044847 1120970 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 19:50:40.044908 1120970 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 19:50:40.044976 1120970 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 19:50:40.045145 1120970 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0729 19:50:40.045212 1120970 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0729 19:50:40.045276 1120970 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 19:50:40.045442 1120970 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 19:50:40.045511 1120970 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 19:50:40.045697 1120970 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 19:50:40.045797 1120970 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 19:50:40.046043 1120970 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 19:50:40.046153 1120970 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 19:50:40.046441 1120970 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 19:50:40.046567 1120970 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 19:50:40.046878 1120970 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 19:50:40.046894 1120970 kubeadm.go:310] 
	I0729 19:50:40.046945 1120970 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0729 19:50:40.047019 1120970 kubeadm.go:310] 		timed out waiting for the condition
	I0729 19:50:40.047039 1120970 kubeadm.go:310] 
	I0729 19:50:40.047104 1120970 kubeadm.go:310] 	This error is likely caused by:
	I0729 19:50:40.047158 1120970 kubeadm.go:310] 		- The kubelet is not running
	I0729 19:50:40.047301 1120970 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0729 19:50:40.047312 1120970 kubeadm.go:310] 
	I0729 19:50:40.047465 1120970 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0729 19:50:40.047513 1120970 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0729 19:50:40.047558 1120970 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0729 19:50:40.047567 1120970 kubeadm.go:310] 
	I0729 19:50:40.047728 1120970 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0729 19:50:40.047859 1120970 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0729 19:50:40.047870 1120970 kubeadm.go:310] 
	I0729 19:50:40.048028 1120970 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0729 19:50:40.048161 1120970 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0729 19:50:40.048274 1120970 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0729 19:50:40.048387 1120970 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0729 19:50:40.048422 1120970 kubeadm.go:310] 
	W0729 19:50:40.048546 1120970 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0729 19:50:40.048632 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0729 19:50:40.512123 1120970 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 19:50:40.526973 1120970 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 19:50:40.540285 1120970 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 19:50:40.540322 1120970 kubeadm.go:157] found existing configuration files:
	
	I0729 19:50:40.540390 1120970 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 19:50:40.550130 1120970 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 19:50:40.550188 1120970 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 19:50:40.560312 1120970 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 19:50:40.570460 1120970 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 19:50:40.570513 1120970 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 19:50:40.579979 1120970 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 19:50:40.589806 1120970 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 19:50:40.589848 1120970 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 19:50:40.599351 1120970 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 19:50:40.609134 1120970 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 19:50:40.609190 1120970 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
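The four grep/rm pairs above implement a simple stale-config check: any existing kubeconfig under /etc/kubernetes that does not point at control-plane.minikube.internal:8443 is removed before kubeadm init is retried. A hedged shell sketch of the equivalent loop (not minikube's own code):

	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  # keep the file only if it already targets the expected control-plane endpoint
	  if ! sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f"; then
	    sudo rm -f "/etc/kubernetes/$f"
	  fi
	done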
	I0729 19:50:40.618767 1120970 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 19:50:40.686644 1120970 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0729 19:50:40.686775 1120970 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 19:50:40.844131 1120970 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 19:50:40.844252 1120970 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 19:50:40.844357 1120970 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0729 19:50:41.018497 1120970 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 19:50:41.020295 1120970 out.go:204]   - Generating certificates and keys ...
	I0729 19:50:41.020404 1120970 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 19:50:41.020471 1120970 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 19:50:41.020559 1120970 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0729 19:50:41.020614 1120970 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0729 19:50:41.020675 1120970 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0729 19:50:41.020720 1120970 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0729 19:50:41.021041 1120970 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0729 19:50:41.021463 1120970 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0729 19:50:41.021868 1120970 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0729 19:50:41.022329 1120970 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0729 19:50:41.022411 1120970 kubeadm.go:310] [certs] Using the existing "sa" key
	I0729 19:50:41.022503 1120970 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 19:50:41.204952 1120970 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 19:50:41.438572 1120970 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 19:50:41.878587 1120970 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 19:50:42.428806 1120970 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 19:50:42.447931 1120970 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 19:50:42.448990 1120970 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 19:50:42.449131 1120970 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 19:50:42.580942 1120970 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 19:50:42.582493 1120970 out.go:204]   - Booting up control plane ...
	I0729 19:50:42.582600 1120970 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 19:50:42.589862 1120970 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 19:50:42.590833 1120970 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 19:50:42.591685 1120970 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 19:50:42.594079 1120970 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0729 19:51:22.596326 1120970 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0729 19:51:22.596639 1120970 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 19:51:22.596846 1120970 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 19:51:27.597439 1120970 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 19:51:27.597671 1120970 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 19:51:37.598638 1120970 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 19:51:37.598811 1120970 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 19:51:57.599401 1120970 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 19:51:57.599704 1120970 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 19:52:37.597710 1120970 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 19:52:37.597992 1120970 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 19:52:37.598034 1120970 kubeadm.go:310] 
	I0729 19:52:37.598090 1120970 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0729 19:52:37.598166 1120970 kubeadm.go:310] 		timed out waiting for the condition
	I0729 19:52:37.598179 1120970 kubeadm.go:310] 
	I0729 19:52:37.598228 1120970 kubeadm.go:310] 	This error is likely caused by:
	I0729 19:52:37.598326 1120970 kubeadm.go:310] 		- The kubelet is not running
	I0729 19:52:37.598515 1120970 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0729 19:52:37.598528 1120970 kubeadm.go:310] 
	I0729 19:52:37.598671 1120970 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0729 19:52:37.598715 1120970 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0729 19:52:37.598777 1120970 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0729 19:52:37.598806 1120970 kubeadm.go:310] 
	I0729 19:52:37.598984 1120970 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0729 19:52:37.599100 1120970 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0729 19:52:37.599114 1120970 kubeadm.go:310] 
	I0729 19:52:37.599266 1120970 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0729 19:52:37.599393 1120970 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0729 19:52:37.599499 1120970 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0729 19:52:37.599617 1120970 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0729 19:52:37.599637 1120970 kubeadm.go:310] 
	I0729 19:52:37.600349 1120970 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 19:52:37.600471 1120970 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0729 19:52:37.600641 1120970 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0729 19:52:37.600707 1120970 kubeadm.go:394] duration metric: took 7m57.951835284s to StartCluster
	I0729 19:52:37.600799 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:52:37.600929 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:52:37.643870 1120970 cri.go:89] found id: ""
	I0729 19:52:37.643913 1120970 logs.go:276] 0 containers: []
	W0729 19:52:37.643921 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:52:37.643928 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:52:37.643993 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:52:37.679484 1120970 cri.go:89] found id: ""
	I0729 19:52:37.679519 1120970 logs.go:276] 0 containers: []
	W0729 19:52:37.679529 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:52:37.679535 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:52:37.679602 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:52:37.716326 1120970 cri.go:89] found id: ""
	I0729 19:52:37.716358 1120970 logs.go:276] 0 containers: []
	W0729 19:52:37.716366 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:52:37.716372 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:52:37.716427 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:52:37.751441 1120970 cri.go:89] found id: ""
	I0729 19:52:37.751468 1120970 logs.go:276] 0 containers: []
	W0729 19:52:37.751477 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:52:37.751483 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:52:37.751555 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:52:37.791309 1120970 cri.go:89] found id: ""
	I0729 19:52:37.791334 1120970 logs.go:276] 0 containers: []
	W0729 19:52:37.791343 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:52:37.791354 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:52:37.791409 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:52:37.824637 1120970 cri.go:89] found id: ""
	I0729 19:52:37.824664 1120970 logs.go:276] 0 containers: []
	W0729 19:52:37.824674 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:52:37.824682 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:52:37.824749 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:52:37.863031 1120970 cri.go:89] found id: ""
	I0729 19:52:37.863060 1120970 logs.go:276] 0 containers: []
	W0729 19:52:37.863068 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:52:37.863074 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:52:37.863134 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:52:37.905864 1120970 cri.go:89] found id: ""
	I0729 19:52:37.905918 1120970 logs.go:276] 0 containers: []
	W0729 19:52:37.905931 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:52:37.905945 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:52:37.905965 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:52:37.958561 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:52:37.958601 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:52:37.983602 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:52:37.983635 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:52:38.080775 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:52:38.080810 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:52:38.080827 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:52:38.185475 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:52:38.185512 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
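The log-gathering commands above (kubelet journal, dmesg, describe nodes, CRI-O journal, container status) can be replayed by hand inside the VM via minikube ssh. A sketch, with the profile name left as a placeholder because this run's profile is not shown in this excerpt:

	PROFILE=<profile-name>                      # placeholder; substitute the failing profile
	minikube -p "$PROFILE" ssh -- sudo journalctl -u kubelet -n 400
	minikube -p "$PROFILE" ssh -- sudo journalctl -u crio -n 400
	minikube -p "$PROFILE" ssh -- sudo crictl ps -a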
	W0729 19:52:38.227581 1120970 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0729 19:52:38.227653 1120970 out.go:239] * 
	W0729 19:52:38.227722 1120970 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout/stderr: identical to the kubeadm init output shown above
	W0729 19:52:38.227748 1120970 out.go:239] * 
	W0729 19:52:38.228777 1120970 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 19:52:38.231684 1120970 out.go:177] 
	W0729 19:52:38.232618 1120970 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout/stderr: identical to the kubeadm init output shown above
	W0729 19:52:38.232683 1120970 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0729 19:52:38.232710 1120970 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0729 19:52:38.234472 1120970 out.go:177] 
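The start aborts with K8S_KUBELET_NOT_RUNNING, and the log's own suggestion is to inspect the kubelet and retry with an explicit cgroup driver. A hedged sketch of that follow-up, using only the commands named in the output above (the profile name is a placeholder, not taken from the log):

  # is the kubelet running, and why did it exit?
  systemctl status kubelet
  journalctl -xeu kubelet
  # the health endpoint kubeadm polls during wait-control-plane
  curl -sSL http://localhost:10248/healthz
  # retry with the suggested kubelet override (profile name is hypothetical)
  minikube start -p <profile> --extra-config=kubelet.cgroup-driver=systemd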
	
	
	==> CRI-O <==
	Jul 29 19:58:38 default-k8s-diff-port-024652 crio[729]: time="2024-07-29 19:58:38.702335946Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722283118702315301,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e2ce9629-434a-498a-b050-b364eec3bd28 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 19:58:38 default-k8s-diff-port-024652 crio[729]: time="2024-07-29 19:58:38.703176895Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9a0789ea-acde-46ac-a778-8a2ee912a8a7 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 19:58:38 default-k8s-diff-port-024652 crio[729]: time="2024-07-29 19:58:38.703225265Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9a0789ea-acde-46ac-a778-8a2ee912a8a7 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 19:58:38 default-k8s-diff-port-024652 crio[729]: time="2024-07-29 19:58:38.703402833Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d85df72861021ccd67cf5c078798f7bd9719ff9156206d1d144e9f1541652238,PodSandboxId:f3e2a2df8526b9d80ad150567b79950e22445d9ff5137a03270d8ff19b9c5ff7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722282573643456575,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-wqbpm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96db74e9-67ca-4065-8758-a27a14b6d3d5,},Annotations:map[string]string{io.kubernetes.container.hash: 51562b2b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\
":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:544de27dfe841475c85d73e7db83bb7871287b4e97412e6d52b54dffedecc566,PodSandboxId:4f7010b89f1f04ac7a1339b62625e3f27946da764fdd8ca27d13564cb9f27892,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722282573419225128,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namesp
ace: kube-system,io.kubernetes.pod.uid: ce612854-895f-44d4-8c33-30c3a7eff802,},Annotations:map[string]string{io.kubernetes.container.hash: 4ddbaec2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:587b5ee91e4d9de1795d7ffd93ef21fd9a8b3196be1f8eb526ada5a7c8083cac,PodSandboxId:02a5300c32d545139c049749fc818e0561e1d3aa4e281c932e3130f18aebb1a4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722282573445829530,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wfr8f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: 86699d3a-0843-4b82-b772-23c8f5b7c88a,},Annotations:map[string]string{io.kubernetes.container.hash: 901a1108,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43f80c510edb53dc4f840ab840eaabd2c0173459ddc9df972df6f5dd4a75b7b0,PodSandboxId:2a584dad4937efae9054139deaa68198031d68a1a93f33b4e9527d5adab2a3da,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722282573111829626,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-z8mxw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12aa4a13-f4af-4cda-b099-
5e0e44836300,},Annotations:map[string]string{io.kubernetes.container.hash: 10509c03,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dcc3f9ab02e735408a8de4f6ba0fce3870dcba7510b0b9f8463dea41e2016481,PodSandboxId:ae4afd327a6da9207d8698af48ac71ef46074ef7016f8ec0c1754ac2aad6d86b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:172228255
2461264017,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-024652,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23dfbba9e22325c54719eaf295544c1b,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87388e1df32b7bf04a13c912a3c2e7b8c7c944032ed1f8de11c7b26132aaa015,PodSandboxId:213c981cebf8d8b0fec2e4e10323a7dd280c287fd24cc90e02852c03ca7b4d04,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,Cre
atedAt:1722282552489156803,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-024652,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 654e6e14d2769f400fa96eb4f3a95c0e,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ec7ffdb7235b9394901555c7b4a01d557093decd5c6f5ce7e70834a366d9f1e,PodSandboxId:95163f9834b92a4cf84ff19d6714b6887be87f48e325ee9f4178cccc94e09353,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,Creat
edAt:1722282552389322270,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-024652,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 806e72ccd89ceb8c7d450a80d54242a2,},Annotations:map[string]string{io.kubernetes.container.hash: f47a04ed,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b8f3542dce58de6f654b924f281bc6381f1113086254ff9d9c8c90c9d084a0f,PodSandboxId:0eed853799d51994d9b49218d4ea1221f08e7cbacd8adb00facd25ad675af939,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722282
552365255032,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-024652,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 34c7c20f10d633ac02b96a5da6dddf85,},Annotations:map[string]string{io.kubernetes.container.hash: 93e15cdd,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9a0789ea-acde-46ac-a778-8a2ee912a8a7 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 19:58:38 default-k8s-diff-port-024652 crio[729]: time="2024-07-29 19:58:38.741170887Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=bfd79882-18c5-4bf3-abea-38921950610f name=/runtime.v1.RuntimeService/Version
	Jul 29 19:58:38 default-k8s-diff-port-024652 crio[729]: time="2024-07-29 19:58:38.741243814Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=bfd79882-18c5-4bf3-abea-38921950610f name=/runtime.v1.RuntimeService/Version
	Jul 29 19:58:38 default-k8s-diff-port-024652 crio[729]: time="2024-07-29 19:58:38.742656965Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f4051c38-16fb-423c-9a51-aaf20166c05b name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 19:58:38 default-k8s-diff-port-024652 crio[729]: time="2024-07-29 19:58:38.743092530Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722283118743065037,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f4051c38-16fb-423c-9a51-aaf20166c05b name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 19:58:38 default-k8s-diff-port-024652 crio[729]: time="2024-07-29 19:58:38.743861272Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=cb395d75-c9f1-4f8f-9eef-71b04d1237e9 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 19:58:38 default-k8s-diff-port-024652 crio[729]: time="2024-07-29 19:58:38.743917056Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=cb395d75-c9f1-4f8f-9eef-71b04d1237e9 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 19:58:38 default-k8s-diff-port-024652 crio[729]: time="2024-07-29 19:58:38.744124211Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d85df72861021ccd67cf5c078798f7bd9719ff9156206d1d144e9f1541652238,PodSandboxId:f3e2a2df8526b9d80ad150567b79950e22445d9ff5137a03270d8ff19b9c5ff7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722282573643456575,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-wqbpm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96db74e9-67ca-4065-8758-a27a14b6d3d5,},Annotations:map[string]string{io.kubernetes.container.hash: 51562b2b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\
":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:544de27dfe841475c85d73e7db83bb7871287b4e97412e6d52b54dffedecc566,PodSandboxId:4f7010b89f1f04ac7a1339b62625e3f27946da764fdd8ca27d13564cb9f27892,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722282573419225128,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namesp
ace: kube-system,io.kubernetes.pod.uid: ce612854-895f-44d4-8c33-30c3a7eff802,},Annotations:map[string]string{io.kubernetes.container.hash: 4ddbaec2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:587b5ee91e4d9de1795d7ffd93ef21fd9a8b3196be1f8eb526ada5a7c8083cac,PodSandboxId:02a5300c32d545139c049749fc818e0561e1d3aa4e281c932e3130f18aebb1a4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722282573445829530,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wfr8f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: 86699d3a-0843-4b82-b772-23c8f5b7c88a,},Annotations:map[string]string{io.kubernetes.container.hash: 901a1108,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43f80c510edb53dc4f840ab840eaabd2c0173459ddc9df972df6f5dd4a75b7b0,PodSandboxId:2a584dad4937efae9054139deaa68198031d68a1a93f33b4e9527d5adab2a3da,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722282573111829626,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-z8mxw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12aa4a13-f4af-4cda-b099-
5e0e44836300,},Annotations:map[string]string{io.kubernetes.container.hash: 10509c03,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dcc3f9ab02e735408a8de4f6ba0fce3870dcba7510b0b9f8463dea41e2016481,PodSandboxId:ae4afd327a6da9207d8698af48ac71ef46074ef7016f8ec0c1754ac2aad6d86b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:172228255
2461264017,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-024652,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23dfbba9e22325c54719eaf295544c1b,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87388e1df32b7bf04a13c912a3c2e7b8c7c944032ed1f8de11c7b26132aaa015,PodSandboxId:213c981cebf8d8b0fec2e4e10323a7dd280c287fd24cc90e02852c03ca7b4d04,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,Cre
atedAt:1722282552489156803,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-024652,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 654e6e14d2769f400fa96eb4f3a95c0e,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ec7ffdb7235b9394901555c7b4a01d557093decd5c6f5ce7e70834a366d9f1e,PodSandboxId:95163f9834b92a4cf84ff19d6714b6887be87f48e325ee9f4178cccc94e09353,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,Creat
edAt:1722282552389322270,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-024652,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 806e72ccd89ceb8c7d450a80d54242a2,},Annotations:map[string]string{io.kubernetes.container.hash: f47a04ed,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b8f3542dce58de6f654b924f281bc6381f1113086254ff9d9c8c90c9d084a0f,PodSandboxId:0eed853799d51994d9b49218d4ea1221f08e7cbacd8adb00facd25ad675af939,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722282
552365255032,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-024652,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 34c7c20f10d633ac02b96a5da6dddf85,},Annotations:map[string]string{io.kubernetes.container.hash: 93e15cdd,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=cb395d75-c9f1-4f8f-9eef-71b04d1237e9 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 19:58:38 default-k8s-diff-port-024652 crio[729]: time="2024-07-29 19:58:38.779934734Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a3ac6ddb-e391-4ef4-95ff-b069aef1d07c name=/runtime.v1.RuntimeService/Version
	Jul 29 19:58:38 default-k8s-diff-port-024652 crio[729]: time="2024-07-29 19:58:38.780025023Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a3ac6ddb-e391-4ef4-95ff-b069aef1d07c name=/runtime.v1.RuntimeService/Version
	Jul 29 19:58:38 default-k8s-diff-port-024652 crio[729]: time="2024-07-29 19:58:38.781034370Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=53e80812-9b38-40ca-a9d2-b9972a97aa68 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 19:58:38 default-k8s-diff-port-024652 crio[729]: time="2024-07-29 19:58:38.781423420Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722283118781402592,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=53e80812-9b38-40ca-a9d2-b9972a97aa68 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 19:58:38 default-k8s-diff-port-024652 crio[729]: time="2024-07-29 19:58:38.782094438Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9beccd55-6538-4e93-8c5a-98530a744a76 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 19:58:38 default-k8s-diff-port-024652 crio[729]: time="2024-07-29 19:58:38.782167748Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9beccd55-6538-4e93-8c5a-98530a744a76 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 19:58:38 default-k8s-diff-port-024652 crio[729]: time="2024-07-29 19:58:38.782349290Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d85df72861021ccd67cf5c078798f7bd9719ff9156206d1d144e9f1541652238,PodSandboxId:f3e2a2df8526b9d80ad150567b79950e22445d9ff5137a03270d8ff19b9c5ff7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722282573643456575,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-wqbpm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96db74e9-67ca-4065-8758-a27a14b6d3d5,},Annotations:map[string]string{io.kubernetes.container.hash: 51562b2b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\
":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:544de27dfe841475c85d73e7db83bb7871287b4e97412e6d52b54dffedecc566,PodSandboxId:4f7010b89f1f04ac7a1339b62625e3f27946da764fdd8ca27d13564cb9f27892,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722282573419225128,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namesp
ace: kube-system,io.kubernetes.pod.uid: ce612854-895f-44d4-8c33-30c3a7eff802,},Annotations:map[string]string{io.kubernetes.container.hash: 4ddbaec2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:587b5ee91e4d9de1795d7ffd93ef21fd9a8b3196be1f8eb526ada5a7c8083cac,PodSandboxId:02a5300c32d545139c049749fc818e0561e1d3aa4e281c932e3130f18aebb1a4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722282573445829530,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wfr8f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: 86699d3a-0843-4b82-b772-23c8f5b7c88a,},Annotations:map[string]string{io.kubernetes.container.hash: 901a1108,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43f80c510edb53dc4f840ab840eaabd2c0173459ddc9df972df6f5dd4a75b7b0,PodSandboxId:2a584dad4937efae9054139deaa68198031d68a1a93f33b4e9527d5adab2a3da,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722282573111829626,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-z8mxw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12aa4a13-f4af-4cda-b099-
5e0e44836300,},Annotations:map[string]string{io.kubernetes.container.hash: 10509c03,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dcc3f9ab02e735408a8de4f6ba0fce3870dcba7510b0b9f8463dea41e2016481,PodSandboxId:ae4afd327a6da9207d8698af48ac71ef46074ef7016f8ec0c1754ac2aad6d86b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:172228255
2461264017,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-024652,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23dfbba9e22325c54719eaf295544c1b,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87388e1df32b7bf04a13c912a3c2e7b8c7c944032ed1f8de11c7b26132aaa015,PodSandboxId:213c981cebf8d8b0fec2e4e10323a7dd280c287fd24cc90e02852c03ca7b4d04,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,Cre
atedAt:1722282552489156803,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-024652,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 654e6e14d2769f400fa96eb4f3a95c0e,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ec7ffdb7235b9394901555c7b4a01d557093decd5c6f5ce7e70834a366d9f1e,PodSandboxId:95163f9834b92a4cf84ff19d6714b6887be87f48e325ee9f4178cccc94e09353,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,Creat
edAt:1722282552389322270,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-024652,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 806e72ccd89ceb8c7d450a80d54242a2,},Annotations:map[string]string{io.kubernetes.container.hash: f47a04ed,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b8f3542dce58de6f654b924f281bc6381f1113086254ff9d9c8c90c9d084a0f,PodSandboxId:0eed853799d51994d9b49218d4ea1221f08e7cbacd8adb00facd25ad675af939,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722282
552365255032,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-024652,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 34c7c20f10d633ac02b96a5da6dddf85,},Annotations:map[string]string{io.kubernetes.container.hash: 93e15cdd,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9beccd55-6538-4e93-8c5a-98530a744a76 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 19:58:38 default-k8s-diff-port-024652 crio[729]: time="2024-07-29 19:58:38.815276036Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1cb01dc2-2e30-4d5c-9ee1-28e26309fb24 name=/runtime.v1.RuntimeService/Version
	Jul 29 19:58:38 default-k8s-diff-port-024652 crio[729]: time="2024-07-29 19:58:38.815373851Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1cb01dc2-2e30-4d5c-9ee1-28e26309fb24 name=/runtime.v1.RuntimeService/Version
	Jul 29 19:58:38 default-k8s-diff-port-024652 crio[729]: time="2024-07-29 19:58:38.816432340Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=423a62f2-05e7-4a11-9f0f-e044e4584698 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 19:58:38 default-k8s-diff-port-024652 crio[729]: time="2024-07-29 19:58:38.816898913Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722283118816873280,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=423a62f2-05e7-4a11-9f0f-e044e4584698 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 19:58:38 default-k8s-diff-port-024652 crio[729]: time="2024-07-29 19:58:38.817422167Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=41136acb-da53-4c2c-9b56-c3c253376919 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 19:58:38 default-k8s-diff-port-024652 crio[729]: time="2024-07-29 19:58:38.817482651Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=41136acb-da53-4c2c-9b56-c3c253376919 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 19:58:38 default-k8s-diff-port-024652 crio[729]: time="2024-07-29 19:58:38.817953989Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d85df72861021ccd67cf5c078798f7bd9719ff9156206d1d144e9f1541652238,PodSandboxId:f3e2a2df8526b9d80ad150567b79950e22445d9ff5137a03270d8ff19b9c5ff7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722282573643456575,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-wqbpm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96db74e9-67ca-4065-8758-a27a14b6d3d5,},Annotations:map[string]string{io.kubernetes.container.hash: 51562b2b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\
":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:544de27dfe841475c85d73e7db83bb7871287b4e97412e6d52b54dffedecc566,PodSandboxId:4f7010b89f1f04ac7a1339b62625e3f27946da764fdd8ca27d13564cb9f27892,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722282573419225128,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namesp
ace: kube-system,io.kubernetes.pod.uid: ce612854-895f-44d4-8c33-30c3a7eff802,},Annotations:map[string]string{io.kubernetes.container.hash: 4ddbaec2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:587b5ee91e4d9de1795d7ffd93ef21fd9a8b3196be1f8eb526ada5a7c8083cac,PodSandboxId:02a5300c32d545139c049749fc818e0561e1d3aa4e281c932e3130f18aebb1a4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722282573445829530,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wfr8f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: 86699d3a-0843-4b82-b772-23c8f5b7c88a,},Annotations:map[string]string{io.kubernetes.container.hash: 901a1108,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43f80c510edb53dc4f840ab840eaabd2c0173459ddc9df972df6f5dd4a75b7b0,PodSandboxId:2a584dad4937efae9054139deaa68198031d68a1a93f33b4e9527d5adab2a3da,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722282573111829626,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-z8mxw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12aa4a13-f4af-4cda-b099-
5e0e44836300,},Annotations:map[string]string{io.kubernetes.container.hash: 10509c03,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dcc3f9ab02e735408a8de4f6ba0fce3870dcba7510b0b9f8463dea41e2016481,PodSandboxId:ae4afd327a6da9207d8698af48ac71ef46074ef7016f8ec0c1754ac2aad6d86b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:172228255
2461264017,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-024652,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23dfbba9e22325c54719eaf295544c1b,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87388e1df32b7bf04a13c912a3c2e7b8c7c944032ed1f8de11c7b26132aaa015,PodSandboxId:213c981cebf8d8b0fec2e4e10323a7dd280c287fd24cc90e02852c03ca7b4d04,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,Cre
atedAt:1722282552489156803,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-024652,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 654e6e14d2769f400fa96eb4f3a95c0e,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ec7ffdb7235b9394901555c7b4a01d557093decd5c6f5ce7e70834a366d9f1e,PodSandboxId:95163f9834b92a4cf84ff19d6714b6887be87f48e325ee9f4178cccc94e09353,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,Creat
edAt:1722282552389322270,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-024652,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 806e72ccd89ceb8c7d450a80d54242a2,},Annotations:map[string]string{io.kubernetes.container.hash: f47a04ed,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b8f3542dce58de6f654b924f281bc6381f1113086254ff9d9c8c90c9d084a0f,PodSandboxId:0eed853799d51994d9b49218d4ea1221f08e7cbacd8adb00facd25ad675af939,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722282
552365255032,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-024652,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 34c7c20f10d633ac02b96a5da6dddf85,},Annotations:map[string]string{io.kubernetes.container.hash: 93e15cdd,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=41136acb-da53-4c2c-9b56-c3c253376919 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	d85df72861021       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   f3e2a2df8526b       coredns-7db6d8ff4d-wqbpm
	587b5ee91e4d9       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1   9 minutes ago       Running             kube-proxy                0                   02a5300c32d54       kube-proxy-wfr8f
	544de27dfe841       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   4f7010b89f1f0       storage-provisioner
	43f80c510edb5       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   2a584dad4937e       coredns-7db6d8ff4d-z8mxw
	87388e1df32b7       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2   9 minutes ago       Running             kube-scheduler            2                   213c981cebf8d       kube-scheduler-default-k8s-diff-port-024652
	dcc3f9ab02e73       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e   9 minutes ago       Running             kube-controller-manager   2                   ae4afd327a6da       kube-controller-manager-default-k8s-diff-port-024652
	2ec7ffdb7235b       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d   9 minutes ago       Running             kube-apiserver            2                   95163f9834b92       kube-apiserver-default-k8s-diff-port-024652
	1b8f3542dce58       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   9 minutes ago       Running             etcd                      2                   0eed853799d51       etcd-default-k8s-diff-port-024652
	
	
	==> coredns [43f80c510edb53dc4f840ab840eaabd2c0173459ddc9df972df6f5dd4a75b7b0] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [d85df72861021ccd67cf5c078798f7bd9719ff9156206d1d144e9f1541652238] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-024652
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-024652
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ff26f50543a99741e8a988a34136ffea2b41dbf0
	                    minikube.k8s.io/name=default-k8s-diff-port-024652
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_29T19_49_18_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 19:49:15 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-024652
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 19:58:37 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 19:54:43 +0000   Mon, 29 Jul 2024 19:49:13 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 19:54:43 +0000   Mon, 29 Jul 2024 19:49:13 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 19:54:43 +0000   Mon, 29 Jul 2024 19:49:13 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 19:54:43 +0000   Mon, 29 Jul 2024 19:49:15 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.100
	  Hostname:    default-k8s-diff-port-024652
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 5ec965039dcb4ac6a46f5f8483481744
	  System UUID:                5ec96503-9dcb-4ac6-a46f-5f8483481744
	  Boot ID:                    a1fbd365-084b-4db4-88a6-674afca14f68
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-wqbpm                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m8s
	  kube-system                 coredns-7db6d8ff4d-z8mxw                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m8s
	  kube-system                 etcd-default-k8s-diff-port-024652                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m22s
	  kube-system                 kube-apiserver-default-k8s-diff-port-024652             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m22s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-024652    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m22s
	  kube-system                 kube-proxy-wfr8f                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m8s
	  kube-system                 kube-scheduler-default-k8s-diff-port-024652             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m22s
	  kube-system                 metrics-server-569cc877fc-rp2fk                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m7s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m7s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m5s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  9m28s (x8 over 9m28s)  kubelet          Node default-k8s-diff-port-024652 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m28s (x8 over 9m28s)  kubelet          Node default-k8s-diff-port-024652 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m28s (x7 over 9m28s)  kubelet          Node default-k8s-diff-port-024652 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m28s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 9m22s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  9m22s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m22s                  kubelet          Node default-k8s-diff-port-024652 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m22s                  kubelet          Node default-k8s-diff-port-024652 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m22s                  kubelet          Node default-k8s-diff-port-024652 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m9s                   node-controller  Node default-k8s-diff-port-024652 event: Registered Node default-k8s-diff-port-024652 in Controller
	
	
	==> dmesg <==
	[  +0.050310] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.039281] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[Jul29 19:44] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.500985] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	[  +1.589794] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.249980] systemd-fstab-generator[645]: Ignoring "noauto" option for root device
	[  +0.063818] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.057716] systemd-fstab-generator[657]: Ignoring "noauto" option for root device
	[  +0.214001] systemd-fstab-generator[671]: Ignoring "noauto" option for root device
	[  +0.130028] systemd-fstab-generator[683]: Ignoring "noauto" option for root device
	[  +0.281544] systemd-fstab-generator[712]: Ignoring "noauto" option for root device
	[  +4.437182] systemd-fstab-generator[812]: Ignoring "noauto" option for root device
	[  +0.058597] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.103684] systemd-fstab-generator[934]: Ignoring "noauto" option for root device
	[  +5.573390] kauditd_printk_skb: 97 callbacks suppressed
	[  +9.113437] kauditd_printk_skb: 84 callbacks suppressed
	[Jul29 19:49] kauditd_printk_skb: 7 callbacks suppressed
	[  +1.739011] systemd-fstab-generator[3582]: Ignoring "noauto" option for root device
	[  +4.455558] kauditd_printk_skb: 55 callbacks suppressed
	[  +1.601107] systemd-fstab-generator[3902]: Ignoring "noauto" option for root device
	[ +14.341730] systemd-fstab-generator[4105]: Ignoring "noauto" option for root device
	[  +0.119466] kauditd_printk_skb: 14 callbacks suppressed
	[Jul29 19:50] kauditd_printk_skb: 82 callbacks suppressed
	
	
	==> etcd [1b8f3542dce58de6f654b924f281bc6381f1113086254ff9d9c8c90c9d084a0f] <==
	{"level":"info","ts":"2024-07-29T19:49:12.692997Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-29T19:49:12.693252Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"cfb89c251def23d6","initial-advertise-peer-urls":["https://192.168.72.100:2380"],"listen-peer-urls":["https://192.168.72.100:2380"],"advertise-client-urls":["https://192.168.72.100:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.72.100:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-29T19:49:12.693304Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-29T19:49:12.693727Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.72.100:2380"}
	{"level":"info","ts":"2024-07-29T19:49:12.693791Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.72.100:2380"}
	{"level":"info","ts":"2024-07-29T19:49:12.695398Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"cfb89c251def23d6 switched to configuration voters=(14967885044795778006)"}
	{"level":"info","ts":"2024-07-29T19:49:12.695634Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"9f4804f49c08bcf7","local-member-id":"cfb89c251def23d6","added-peer-id":"cfb89c251def23d6","added-peer-peer-urls":["https://192.168.72.100:2380"]}
	{"level":"info","ts":"2024-07-29T19:49:13.667608Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"cfb89c251def23d6 is starting a new election at term 1"}
	{"level":"info","ts":"2024-07-29T19:49:13.667659Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"cfb89c251def23d6 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-07-29T19:49:13.667698Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"cfb89c251def23d6 received MsgPreVoteResp from cfb89c251def23d6 at term 1"}
	{"level":"info","ts":"2024-07-29T19:49:13.667714Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"cfb89c251def23d6 became candidate at term 2"}
	{"level":"info","ts":"2024-07-29T19:49:13.667722Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"cfb89c251def23d6 received MsgVoteResp from cfb89c251def23d6 at term 2"}
	{"level":"info","ts":"2024-07-29T19:49:13.667733Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"cfb89c251def23d6 became leader at term 2"}
	{"level":"info","ts":"2024-07-29T19:49:13.667743Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: cfb89c251def23d6 elected leader cfb89c251def23d6 at term 2"}
	{"level":"info","ts":"2024-07-29T19:49:13.671599Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"cfb89c251def23d6","local-member-attributes":"{Name:default-k8s-diff-port-024652 ClientURLs:[https://192.168.72.100:2379]}","request-path":"/0/members/cfb89c251def23d6/attributes","cluster-id":"9f4804f49c08bcf7","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-29T19:49:13.671744Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T19:49:13.671858Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T19:49:13.672169Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-29T19:49:13.672199Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-29T19:49:13.675932Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-29T19:49:13.694704Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T19:49:13.701286Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"9f4804f49c08bcf7","local-member-id":"cfb89c251def23d6","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T19:49:13.701378Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T19:49:13.701415Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T19:49:13.729447Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.72.100:2379"}
	
	
	==> kernel <==
	 19:58:39 up 14 min,  0 users,  load average: 0.39, 0.26, 0.15
	Linux default-k8s-diff-port-024652 5.10.207 #1 SMP Tue Jul 23 04:25:44 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [2ec7ffdb7235b9394901555c7b4a01d557093decd5c6f5ce7e70834a366d9f1e] <==
	I0729 19:52:33.541885       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0729 19:54:15.331034       1 handler_proxy.go:93] no RequestInfo found in the context
	E0729 19:54:15.331342       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0729 19:54:16.332423       1 handler_proxy.go:93] no RequestInfo found in the context
	E0729 19:54:16.332493       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0729 19:54:16.332502       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0729 19:54:16.332626       1 handler_proxy.go:93] no RequestInfo found in the context
	E0729 19:54:16.332675       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0729 19:54:16.333854       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0729 19:55:16.333387       1 handler_proxy.go:93] no RequestInfo found in the context
	E0729 19:55:16.333449       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0729 19:55:16.333459       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0729 19:55:16.334646       1 handler_proxy.go:93] no RequestInfo found in the context
	E0729 19:55:16.334727       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0729 19:55:16.334772       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0729 19:57:16.334401       1 handler_proxy.go:93] no RequestInfo found in the context
	E0729 19:57:16.334748       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0729 19:57:16.334780       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0729 19:57:16.334870       1 handler_proxy.go:93] no RequestInfo found in the context
	E0729 19:57:16.334956       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0729 19:57:16.336852       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [dcc3f9ab02e735408a8de4f6ba0fce3870dcba7510b0b9f8463dea41e2016481] <==
	I0729 19:53:02.682022       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="63.121µs"
	E0729 19:53:30.911487       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 19:53:31.464656       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 19:54:00.917587       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 19:54:01.473631       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 19:54:30.924595       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 19:54:31.481811       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 19:55:00.931310       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 19:55:01.491132       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0729 19:55:24.675939       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="238.944µs"
	E0729 19:55:30.936788       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 19:55:31.499183       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0729 19:55:37.676457       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="111.441µs"
	E0729 19:56:00.942807       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 19:56:01.508033       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 19:56:30.949209       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 19:56:31.518217       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 19:57:00.954258       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 19:57:01.525653       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 19:57:30.959044       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 19:57:31.533365       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 19:58:00.965108       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 19:58:01.541779       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 19:58:30.970192       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 19:58:31.549907       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [587b5ee91e4d9de1795d7ffd93ef21fd9a8b3196be1f8eb526ada5a7c8083cac] <==
	I0729 19:49:33.763181       1 server_linux.go:69] "Using iptables proxy"
	I0729 19:49:33.798031       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.72.100"]
	I0729 19:49:33.912640       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0729 19:49:33.913836       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0729 19:49:33.913897       1 server_linux.go:165] "Using iptables Proxier"
	I0729 19:49:33.928784       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0729 19:49:33.929329       1 server.go:872] "Version info" version="v1.30.3"
	I0729 19:49:33.929642       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 19:49:33.930843       1 config.go:192] "Starting service config controller"
	I0729 19:49:33.931032       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0729 19:49:33.931109       1 config.go:101] "Starting endpoint slice config controller"
	I0729 19:49:33.931565       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0729 19:49:33.944858       1 config.go:319] "Starting node config controller"
	I0729 19:49:33.944926       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0729 19:49:34.031992       1 shared_informer.go:320] Caches are synced for service config
	I0729 19:49:34.037343       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0729 19:49:34.045481       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [87388e1df32b7bf04a13c912a3c2e7b8c7c944032ed1f8de11c7b26132aaa015] <==
	W0729 19:49:15.345719       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0729 19:49:15.345792       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0729 19:49:16.187222       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0729 19:49:16.187333       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0729 19:49:16.224696       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0729 19:49:16.224743       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0729 19:49:16.286658       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0729 19:49:16.286884       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0729 19:49:16.364689       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0729 19:49:16.364790       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0729 19:49:16.373739       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0729 19:49:16.373972       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0729 19:49:16.417119       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0729 19:49:16.417234       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0729 19:49:16.422472       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0729 19:49:16.422553       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0729 19:49:16.498648       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0729 19:49:16.498697       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0729 19:49:16.521210       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0729 19:49:16.521260       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0729 19:49:16.536464       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0729 19:49:16.536508       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0729 19:49:16.614734       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0729 19:49:16.614808       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0729 19:49:18.937975       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 29 19:56:17 default-k8s-diff-port-024652 kubelet[3909]: E0729 19:56:17.676246    3909 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 19:56:17 default-k8s-diff-port-024652 kubelet[3909]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 19:56:17 default-k8s-diff-port-024652 kubelet[3909]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 19:56:17 default-k8s-diff-port-024652 kubelet[3909]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 19:56:17 default-k8s-diff-port-024652 kubelet[3909]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 19:56:27 default-k8s-diff-port-024652 kubelet[3909]: E0729 19:56:27.660798    3909 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-rp2fk" podUID="826ffadd-1c1c-4666-8c09-f43a82262912"
	Jul 29 19:56:40 default-k8s-diff-port-024652 kubelet[3909]: E0729 19:56:40.659272    3909 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-rp2fk" podUID="826ffadd-1c1c-4666-8c09-f43a82262912"
	Jul 29 19:56:53 default-k8s-diff-port-024652 kubelet[3909]: E0729 19:56:53.659607    3909 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-rp2fk" podUID="826ffadd-1c1c-4666-8c09-f43a82262912"
	Jul 29 19:57:04 default-k8s-diff-port-024652 kubelet[3909]: E0729 19:57:04.659300    3909 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-rp2fk" podUID="826ffadd-1c1c-4666-8c09-f43a82262912"
	Jul 29 19:57:17 default-k8s-diff-port-024652 kubelet[3909]: E0729 19:57:17.675162    3909 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 19:57:17 default-k8s-diff-port-024652 kubelet[3909]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 19:57:17 default-k8s-diff-port-024652 kubelet[3909]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 19:57:17 default-k8s-diff-port-024652 kubelet[3909]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 19:57:17 default-k8s-diff-port-024652 kubelet[3909]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 19:57:19 default-k8s-diff-port-024652 kubelet[3909]: E0729 19:57:19.660175    3909 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-rp2fk" podUID="826ffadd-1c1c-4666-8c09-f43a82262912"
	Jul 29 19:57:34 default-k8s-diff-port-024652 kubelet[3909]: E0729 19:57:34.660018    3909 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-rp2fk" podUID="826ffadd-1c1c-4666-8c09-f43a82262912"
	Jul 29 19:57:48 default-k8s-diff-port-024652 kubelet[3909]: E0729 19:57:48.659822    3909 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-rp2fk" podUID="826ffadd-1c1c-4666-8c09-f43a82262912"
	Jul 29 19:58:03 default-k8s-diff-port-024652 kubelet[3909]: E0729 19:58:03.660369    3909 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-rp2fk" podUID="826ffadd-1c1c-4666-8c09-f43a82262912"
	Jul 29 19:58:17 default-k8s-diff-port-024652 kubelet[3909]: E0729 19:58:17.664464    3909 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-rp2fk" podUID="826ffadd-1c1c-4666-8c09-f43a82262912"
	Jul 29 19:58:17 default-k8s-diff-port-024652 kubelet[3909]: E0729 19:58:17.680056    3909 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 19:58:17 default-k8s-diff-port-024652 kubelet[3909]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 19:58:17 default-k8s-diff-port-024652 kubelet[3909]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 19:58:17 default-k8s-diff-port-024652 kubelet[3909]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 19:58:17 default-k8s-diff-port-024652 kubelet[3909]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 19:58:32 default-k8s-diff-port-024652 kubelet[3909]: E0729 19:58:32.659842    3909 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-rp2fk" podUID="826ffadd-1c1c-4666-8c09-f43a82262912"
	
	
	==> storage-provisioner [544de27dfe841475c85d73e7db83bb7871287b4e97412e6d52b54dffedecc566] <==
	I0729 19:49:33.626112       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0729 19:49:33.647034       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0729 19:49:33.648086       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0729 19:49:33.664789       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0729 19:49:33.664940       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-024652_8bf95328-5337-4224-8df9-f8a43e81c1bb!
	I0729 19:49:33.679596       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"fc68dd89-aa3d-4569-94fa-81c1711986d7", APIVersion:"v1", ResourceVersion:"431", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-024652_8bf95328-5337-4224-8df9-f8a43e81c1bb became leader
	I0729 19:49:33.765700       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-024652_8bf95328-5337-4224-8df9-f8a43e81c1bb!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-024652 -n default-k8s-diff-port-024652
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-024652 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-569cc877fc-rp2fk
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-024652 describe pod metrics-server-569cc877fc-rp2fk
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-024652 describe pod metrics-server-569cc877fc-rp2fk: exit status 1 (62.103421ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-569cc877fc-rp2fk" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-024652 describe pod metrics-server-569cc877fc-rp2fk: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (544.18s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (544.17s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0729 19:50:34.134621 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/addons-685520/client.crt: no such file or directory
E0729 19:50:46.437768 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/bridge-184620/client.crt: no such file or directory
E0729 19:51:56.610548 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/auto-184620/client.crt: no such file or directory
E0729 19:52:35.028589 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/kindnet-184620/client.crt: no such file or directory
start_stop_delete_test.go:274: ***** TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-843792 -n no-preload-843792
start_stop_delete_test.go:274: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-07-29 19:59:18.26131092 +0000 UTC m=+6130.496154321
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-843792 -n no-preload-843792
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-843792 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-843792 logs -n 25: (2.096607984s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-184620 sudo cat                              | bridge-184620                | jenkins | v1.33.1 | 29 Jul 24 19:36 UTC | 29 Jul 24 19:36 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p bridge-184620 sudo                                  | bridge-184620                | jenkins | v1.33.1 | 29 Jul 24 19:36 UTC | 29 Jul 24 19:36 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p bridge-184620 sudo                                  | bridge-184620                | jenkins | v1.33.1 | 29 Jul 24 19:36 UTC | 29 Jul 24 19:36 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-184620 sudo                                  | bridge-184620                | jenkins | v1.33.1 | 29 Jul 24 19:36 UTC | 29 Jul 24 19:36 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-184620 sudo find                             | bridge-184620                | jenkins | v1.33.1 | 29 Jul 24 19:36 UTC | 29 Jul 24 19:36 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-184620 sudo crio                             | bridge-184620                | jenkins | v1.33.1 | 29 Jul 24 19:36 UTC | 29 Jul 24 19:36 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-184620                                       | bridge-184620                | jenkins | v1.33.1 | 29 Jul 24 19:36 UTC | 29 Jul 24 19:36 UTC |
	| delete  | -p                                                     | disable-driver-mounts-251895 | jenkins | v1.33.1 | 29 Jul 24 19:36 UTC | 29 Jul 24 19:36 UTC |
	|         | disable-driver-mounts-251895                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-024652 | jenkins | v1.33.1 | 29 Jul 24 19:36 UTC | 29 Jul 24 19:37 UTC |
	|         | default-k8s-diff-port-024652                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-843792             | no-preload-843792            | jenkins | v1.33.1 | 29 Jul 24 19:36 UTC | 29 Jul 24 19:36 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-843792                                   | no-preload-843792            | jenkins | v1.33.1 | 29 Jul 24 19:36 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-358053            | embed-certs-358053           | jenkins | v1.33.1 | 29 Jul 24 19:36 UTC | 29 Jul 24 19:36 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-358053                                  | embed-certs-358053           | jenkins | v1.33.1 | 29 Jul 24 19:36 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-024652  | default-k8s-diff-port-024652 | jenkins | v1.33.1 | 29 Jul 24 19:37 UTC | 29 Jul 24 19:37 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-024652 | jenkins | v1.33.1 | 29 Jul 24 19:37 UTC |                     |
	|         | default-k8s-diff-port-024652                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-843792                  | no-preload-843792            | jenkins | v1.33.1 | 29 Jul 24 19:38 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-843792 --memory=2200                     | no-preload-843792            | jenkins | v1.33.1 | 29 Jul 24 19:38 UTC | 29 Jul 24 19:50 UTC |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-021528        | old-k8s-version-021528       | jenkins | v1.33.1 | 29 Jul 24 19:39 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-358053                 | embed-certs-358053           | jenkins | v1.33.1 | 29 Jul 24 19:39 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-358053                                  | embed-certs-358053           | jenkins | v1.33.1 | 29 Jul 24 19:39 UTC | 29 Jul 24 19:49 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-024652       | default-k8s-diff-port-024652 | jenkins | v1.33.1 | 29 Jul 24 19:39 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-024652 | jenkins | v1.33.1 | 29 Jul 24 19:39 UTC | 29 Jul 24 19:49 UTC |
	|         | default-k8s-diff-port-024652                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-021528                              | old-k8s-version-021528       | jenkins | v1.33.1 | 29 Jul 24 19:40 UTC | 29 Jul 24 19:40 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-021528             | old-k8s-version-021528       | jenkins | v1.33.1 | 29 Jul 24 19:40 UTC | 29 Jul 24 19:40 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-021528                              | old-k8s-version-021528       | jenkins | v1.33.1 | 29 Jul 24 19:40 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 19:40:57
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 19:40:57.978681 1120970 out.go:291] Setting OutFile to fd 1 ...
	I0729 19:40:57.978791 1120970 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 19:40:57.978802 1120970 out.go:304] Setting ErrFile to fd 2...
	I0729 19:40:57.978806 1120970 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 19:40:57.979026 1120970 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19312-1055011/.minikube/bin
	I0729 19:40:57.979596 1120970 out.go:298] Setting JSON to false
	I0729 19:40:57.980589 1120970 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":12210,"bootTime":1722269848,"procs":198,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 19:40:57.980644 1120970 start.go:139] virtualization: kvm guest
	I0729 19:40:57.982865 1120970 out.go:177] * [old-k8s-version-021528] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 19:40:57.984265 1120970 out.go:177]   - MINIKUBE_LOCATION=19312
	I0729 19:40:57.984290 1120970 notify.go:220] Checking for updates...
	I0729 19:40:57.986747 1120970 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 19:40:57.987926 1120970 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19312-1055011/kubeconfig
	I0729 19:40:57.989034 1120970 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19312-1055011/.minikube
	I0729 19:40:57.990155 1120970 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 19:40:57.991151 1120970 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 19:40:57.992788 1120970 config.go:182] Loaded profile config "old-k8s-version-021528": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0729 19:40:57.993431 1120970 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:40:57.993513 1120970 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:40:58.008423 1120970 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35781
	I0729 19:40:58.008809 1120970 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:40:58.009278 1120970 main.go:141] libmachine: Using API Version  1
	I0729 19:40:58.009298 1120970 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:40:58.009623 1120970 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:40:58.009801 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .DriverName
	I0729 19:40:58.011523 1120970 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0729 19:40:58.012638 1120970 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 19:40:58.012915 1120970 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:40:58.012949 1120970 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:40:58.027302 1120970 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38245
	I0729 19:40:58.027641 1120970 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:40:58.028112 1120970 main.go:141] libmachine: Using API Version  1
	I0729 19:40:58.028144 1120970 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:40:58.028470 1120970 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:40:58.028677 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .DriverName
	I0729 19:40:58.062833 1120970 out.go:177] * Using the kvm2 driver based on existing profile
	I0729 19:40:58.064034 1120970 start.go:297] selected driver: kvm2
	I0729 19:40:58.064048 1120970 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-021528 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-021528 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.65 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 19:40:58.064180 1120970 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 19:40:58.065210 1120970 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 19:40:58.065308 1120970 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19312-1055011/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 19:40:58.079987 1120970 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0729 19:40:58.080369 1120970 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 19:40:58.080432 1120970 cni.go:84] Creating CNI manager for ""
	I0729 19:40:58.080446 1120970 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 19:40:58.080487 1120970 start.go:340] cluster config:
	{Name:old-k8s-version-021528 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-021528 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.65 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 19:40:58.080598 1120970 iso.go:125] acquiring lock: {Name:mk0af61c0fec1fd47930e548d03010a532c687b7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 19:40:58.082281 1120970 out.go:177] * Starting "old-k8s-version-021528" primary control-plane node in "old-k8s-version-021528" cluster
	I0729 19:40:58.083538 1120970 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0729 19:40:58.083567 1120970 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0729 19:40:58.083574 1120970 cache.go:56] Caching tarball of preloaded images
	I0729 19:40:58.083648 1120970 preload.go:172] Found /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 19:40:58.083657 1120970 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0729 19:40:58.083744 1120970 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/old-k8s-version-021528/config.json ...
	I0729 19:40:58.083909 1120970 start.go:360] acquireMachinesLock for old-k8s-version-021528: {Name:mk0d8d947666df844b5fc2c0e0eebbfed69b4140 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 19:40:58.743070 1119948 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.248:22: connect: no route to host
	I0729 19:41:01.815162 1119948 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.248:22: connect: no route to host
	I0729 19:41:07.895109 1119948 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.248:22: connect: no route to host
	I0729 19:41:10.967163 1119948 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.248:22: connect: no route to host
	I0729 19:41:17.047104 1119948 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.248:22: connect: no route to host
	I0729 19:41:20.119110 1119948 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.248:22: connect: no route to host
	I0729 19:41:26.199071 1119948 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.248:22: connect: no route to host
	I0729 19:41:29.271169 1119948 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.248:22: connect: no route to host
	I0729 19:41:35.351112 1119948 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.248:22: connect: no route to host
	I0729 19:41:38.423168 1119948 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.248:22: connect: no route to host
	I0729 19:41:44.503138 1119948 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.248:22: connect: no route to host
	I0729 19:41:47.575152 1119948 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.248:22: connect: no route to host
	I0729 19:41:53.655149 1119948 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.248:22: connect: no route to host
	I0729 19:41:56.727131 1119948 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.248:22: connect: no route to host
	I0729 19:42:02.807132 1119948 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.248:22: connect: no route to host
	I0729 19:42:05.879122 1119948 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.248:22: connect: no route to host
	I0729 19:42:11.959162 1119948 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.248:22: connect: no route to host
	I0729 19:42:15.031086 1119948 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.248:22: connect: no route to host
	I0729 19:42:21.111136 1119948 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.248:22: connect: no route to host
	I0729 19:42:24.183135 1119948 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.248:22: connect: no route to host
	I0729 19:42:30.263164 1119948 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.248:22: connect: no route to host
	I0729 19:42:33.335133 1119948 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.248:22: connect: no route to host
	I0729 19:42:39.415119 1119948 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.248:22: connect: no route to host
	I0729 19:42:42.487148 1119948 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.248:22: connect: no route to host
	I0729 19:42:48.567136 1119948 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.248:22: connect: no route to host
	I0729 19:42:51.639137 1119948 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.248:22: connect: no route to host
	I0729 19:42:57.719135 1119948 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.248:22: connect: no route to host
	I0729 19:43:00.791072 1119948 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.248:22: connect: no route to host
	I0729 19:43:06.871163 1119948 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.248:22: connect: no route to host
	I0729 19:43:09.943159 1119948 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.248:22: connect: no route to host
	I0729 19:43:16.023117 1119948 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.248:22: connect: no route to host
	I0729 19:43:19.095170 1119948 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.248:22: connect: no route to host
	I0729 19:43:25.175078 1119948 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.248:22: connect: no route to host
	I0729 19:43:28.247100 1119948 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.248:22: connect: no route to host
	I0729 19:43:31.250338 1120280 start.go:364] duration metric: took 4m11.087175718s to acquireMachinesLock for "embed-certs-358053"
	I0729 19:43:31.250404 1120280 start.go:96] Skipping create...Using existing machine configuration
	I0729 19:43:31.250411 1120280 fix.go:54] fixHost starting: 
	I0729 19:43:31.250743 1120280 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:43:31.250772 1120280 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:43:31.266386 1120280 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36427
	I0729 19:43:31.266811 1120280 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:43:31.267264 1120280 main.go:141] libmachine: Using API Version  1
	I0729 19:43:31.267290 1120280 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:43:31.267606 1120280 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:43:31.267776 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .DriverName
	I0729 19:43:31.267930 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetState
	I0729 19:43:31.269434 1120280 fix.go:112] recreateIfNeeded on embed-certs-358053: state=Stopped err=<nil>
	I0729 19:43:31.269469 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .DriverName
	W0729 19:43:31.269649 1120280 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 19:43:31.271498 1120280 out.go:177] * Restarting existing kvm2 VM for "embed-certs-358053" ...
	I0729 19:43:31.248030 1119948 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 19:43:31.248063 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetMachineName
	I0729 19:43:31.248357 1119948 buildroot.go:166] provisioning hostname "no-preload-843792"
	I0729 19:43:31.248385 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetMachineName
	I0729 19:43:31.248542 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHHostname
	I0729 19:43:31.250201 1119948 machine.go:97] duration metric: took 4m37.426219796s to provisionDockerMachine
	I0729 19:43:31.250243 1119948 fix.go:56] duration metric: took 4m37.44720731s for fixHost
	I0729 19:43:31.250251 1119948 start.go:83] releasing machines lock for "no-preload-843792", held for 4m37.4472306s
	W0729 19:43:31.250275 1119948 start.go:714] error starting host: provision: host is not running
	W0729 19:43:31.250399 1119948 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0729 19:43:31.250411 1119948 start.go:729] Will try again in 5 seconds ...
	I0729 19:43:31.272835 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .Start
	I0729 19:43:31.272957 1120280 main.go:141] libmachine: (embed-certs-358053) Ensuring networks are active...
	I0729 19:43:31.273784 1120280 main.go:141] libmachine: (embed-certs-358053) Ensuring network default is active
	I0729 19:43:31.274173 1120280 main.go:141] libmachine: (embed-certs-358053) Ensuring network mk-embed-certs-358053 is active
	I0729 19:43:31.274533 1120280 main.go:141] libmachine: (embed-certs-358053) Getting domain xml...
	I0729 19:43:31.275353 1120280 main.go:141] libmachine: (embed-certs-358053) Creating domain...
	I0729 19:43:32.452915 1120280 main.go:141] libmachine: (embed-certs-358053) Waiting to get IP...
	I0729 19:43:32.453981 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:32.454389 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | unable to find current IP address of domain embed-certs-358053 in network mk-embed-certs-358053
	I0729 19:43:32.454483 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | I0729 19:43:32.454365 1121493 retry.go:31] will retry after 241.453693ms: waiting for machine to come up
	I0729 19:43:32.697915 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:32.698300 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | unable to find current IP address of domain embed-certs-358053 in network mk-embed-certs-358053
	I0729 19:43:32.698331 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | I0729 19:43:32.698251 1121493 retry.go:31] will retry after 239.33532ms: waiting for machine to come up
	I0729 19:43:32.939708 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:32.940293 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | unable to find current IP address of domain embed-certs-358053 in network mk-embed-certs-358053
	I0729 19:43:32.940318 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | I0729 19:43:32.940236 1121493 retry.go:31] will retry after 446.993297ms: waiting for machine to come up
	I0729 19:43:33.388724 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:33.389127 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | unable to find current IP address of domain embed-certs-358053 in network mk-embed-certs-358053
	I0729 19:43:33.389158 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | I0729 19:43:33.389070 1121493 retry.go:31] will retry after 422.446887ms: waiting for machine to come up
	I0729 19:43:33.812596 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:33.813022 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | unable to find current IP address of domain embed-certs-358053 in network mk-embed-certs-358053
	I0729 19:43:33.813051 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | I0729 19:43:33.812969 1121493 retry.go:31] will retry after 539.971993ms: waiting for machine to come up
	I0729 19:43:34.354683 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:34.355036 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | unable to find current IP address of domain embed-certs-358053 in network mk-embed-certs-358053
	I0729 19:43:34.355070 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | I0729 19:43:34.354984 1121493 retry.go:31] will retry after 804.005911ms: waiting for machine to come up
	I0729 19:43:36.252290 1119948 start.go:360] acquireMachinesLock for no-preload-843792: {Name:mk0d8d947666df844b5fc2c0e0eebbfed69b4140 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 19:43:35.161115 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:35.161468 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | unable to find current IP address of domain embed-certs-358053 in network mk-embed-certs-358053
	I0729 19:43:35.161505 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | I0729 19:43:35.161430 1121493 retry.go:31] will retry after 1.057061094s: waiting for machine to come up
	I0729 19:43:36.220062 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:36.220425 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | unable to find current IP address of domain embed-certs-358053 in network mk-embed-certs-358053
	I0729 19:43:36.220450 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | I0729 19:43:36.220375 1121493 retry.go:31] will retry after 1.460606435s: waiting for machine to come up
	I0729 19:43:37.683178 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:37.683636 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | unable to find current IP address of domain embed-certs-358053 in network mk-embed-certs-358053
	I0729 19:43:37.683655 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | I0729 19:43:37.683597 1121493 retry.go:31] will retry after 1.732527981s: waiting for machine to come up
	I0729 19:43:39.418519 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:39.418954 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | unable to find current IP address of domain embed-certs-358053 in network mk-embed-certs-358053
	I0729 19:43:39.418977 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | I0729 19:43:39.418904 1121493 retry.go:31] will retry after 2.125686576s: waiting for machine to come up
	I0729 19:43:41.547132 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:41.547733 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | unable to find current IP address of domain embed-certs-358053 in network mk-embed-certs-358053
	I0729 19:43:41.547761 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | I0729 19:43:41.547675 1121493 retry.go:31] will retry after 2.335461887s: waiting for machine to come up
	I0729 19:43:43.884901 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:43.885306 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | unable to find current IP address of domain embed-certs-358053 in network mk-embed-certs-358053
	I0729 19:43:43.885329 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | I0729 19:43:43.885251 1121493 retry.go:31] will retry after 2.493920061s: waiting for machine to come up
	I0729 19:43:46.380895 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:46.381249 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | unable to find current IP address of domain embed-certs-358053 in network mk-embed-certs-358053
	I0729 19:43:46.381283 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | I0729 19:43:46.381209 1121493 retry.go:31] will retry after 4.001159351s: waiting for machine to come up
	I0729 19:43:51.915678 1120587 start.go:364] duration metric: took 3m55.652628622s to acquireMachinesLock for "default-k8s-diff-port-024652"
	I0729 19:43:51.915763 1120587 start.go:96] Skipping create...Using existing machine configuration
	I0729 19:43:51.915773 1120587 fix.go:54] fixHost starting: 
	I0729 19:43:51.916253 1120587 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:43:51.916303 1120587 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:43:51.933248 1120587 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36959
	I0729 19:43:51.933631 1120587 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:43:51.934146 1120587 main.go:141] libmachine: Using API Version  1
	I0729 19:43:51.934178 1120587 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:43:51.934512 1120587 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:43:51.934710 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .DriverName
	I0729 19:43:51.934882 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetState
	I0729 19:43:51.936266 1120587 fix.go:112] recreateIfNeeded on default-k8s-diff-port-024652: state=Stopped err=<nil>
	I0729 19:43:51.936294 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .DriverName
	W0729 19:43:51.936471 1120587 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 19:43:51.938542 1120587 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-024652" ...
	I0729 19:43:50.387313 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:50.387631 1120280 main.go:141] libmachine: (embed-certs-358053) Found IP for machine: 192.168.61.201
	I0729 19:43:50.387649 1120280 main.go:141] libmachine: (embed-certs-358053) Reserving static IP address...
	I0729 19:43:50.387673 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has current primary IP address 192.168.61.201 and MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:50.388059 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | found host DHCP lease matching {name: "embed-certs-358053", mac: "52:54:00:b7:9e:78", ip: "192.168.61.201"} in network mk-embed-certs-358053: {Iface:virbr3 ExpiryTime:2024-07-29 20:43:42 +0000 UTC Type:0 Mac:52:54:00:b7:9e:78 Iaid: IPaddr:192.168.61.201 Prefix:24 Hostname:embed-certs-358053 Clientid:01:52:54:00:b7:9e:78}
	I0729 19:43:50.388088 1120280 main.go:141] libmachine: (embed-certs-358053) Reserved static IP address: 192.168.61.201
	I0729 19:43:50.388122 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | skip adding static IP to network mk-embed-certs-358053 - found existing host DHCP lease matching {name: "embed-certs-358053", mac: "52:54:00:b7:9e:78", ip: "192.168.61.201"}
	I0729 19:43:50.388140 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | Getting to WaitForSSH function...
	I0729 19:43:50.388153 1120280 main.go:141] libmachine: (embed-certs-358053) Waiting for SSH to be available...
	I0729 19:43:50.389891 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:50.390221 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:9e:78", ip: ""} in network mk-embed-certs-358053: {Iface:virbr3 ExpiryTime:2024-07-29 20:43:42 +0000 UTC Type:0 Mac:52:54:00:b7:9e:78 Iaid: IPaddr:192.168.61.201 Prefix:24 Hostname:embed-certs-358053 Clientid:01:52:54:00:b7:9e:78}
	I0729 19:43:50.390251 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined IP address 192.168.61.201 and MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:50.390327 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | Using SSH client type: external
	I0729 19:43:50.390358 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | Using SSH private key: /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/embed-certs-358053/id_rsa (-rw-------)
	I0729 19:43:50.390384 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.201 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/embed-certs-358053/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 19:43:50.390394 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | About to run SSH command:
	I0729 19:43:50.390403 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | exit 0
	I0729 19:43:50.519000 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | SSH cmd err, output: <nil>: 
	I0729 19:43:50.519409 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetConfigRaw
	I0729 19:43:50.520046 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetIP
	I0729 19:43:50.522297 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:50.522663 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:9e:78", ip: ""} in network mk-embed-certs-358053: {Iface:virbr3 ExpiryTime:2024-07-29 20:43:42 +0000 UTC Type:0 Mac:52:54:00:b7:9e:78 Iaid: IPaddr:192.168.61.201 Prefix:24 Hostname:embed-certs-358053 Clientid:01:52:54:00:b7:9e:78}
	I0729 19:43:50.522692 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined IP address 192.168.61.201 and MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:50.522946 1120280 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/embed-certs-358053/config.json ...
	I0729 19:43:50.523145 1120280 machine.go:94] provisionDockerMachine start ...
	I0729 19:43:50.523164 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .DriverName
	I0729 19:43:50.523346 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHHostname
	I0729 19:43:50.525235 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:50.525608 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:9e:78", ip: ""} in network mk-embed-certs-358053: {Iface:virbr3 ExpiryTime:2024-07-29 20:43:42 +0000 UTC Type:0 Mac:52:54:00:b7:9e:78 Iaid: IPaddr:192.168.61.201 Prefix:24 Hostname:embed-certs-358053 Clientid:01:52:54:00:b7:9e:78}
	I0729 19:43:50.525625 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined IP address 192.168.61.201 and MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:50.525729 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHPort
	I0729 19:43:50.525897 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHKeyPath
	I0729 19:43:50.526188 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHKeyPath
	I0729 19:43:50.526332 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHUsername
	I0729 19:43:50.526523 1120280 main.go:141] libmachine: Using SSH client type: native
	I0729 19:43:50.526751 1120280 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.201 22 <nil> <nil>}
	I0729 19:43:50.526765 1120280 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 19:43:50.639176 1120280 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0729 19:43:50.639206 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetMachineName
	I0729 19:43:50.639463 1120280 buildroot.go:166] provisioning hostname "embed-certs-358053"
	I0729 19:43:50.639489 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetMachineName
	I0729 19:43:50.639652 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHHostname
	I0729 19:43:50.642218 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:50.642546 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:9e:78", ip: ""} in network mk-embed-certs-358053: {Iface:virbr3 ExpiryTime:2024-07-29 20:43:42 +0000 UTC Type:0 Mac:52:54:00:b7:9e:78 Iaid: IPaddr:192.168.61.201 Prefix:24 Hostname:embed-certs-358053 Clientid:01:52:54:00:b7:9e:78}
	I0729 19:43:50.642573 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined IP address 192.168.61.201 and MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:50.642704 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHPort
	I0729 19:43:50.642896 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHKeyPath
	I0729 19:43:50.643034 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHKeyPath
	I0729 19:43:50.643188 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHUsername
	I0729 19:43:50.643396 1120280 main.go:141] libmachine: Using SSH client type: native
	I0729 19:43:50.643599 1120280 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.201 22 <nil> <nil>}
	I0729 19:43:50.643615 1120280 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-358053 && echo "embed-certs-358053" | sudo tee /etc/hostname
	I0729 19:43:50.775163 1120280 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-358053
	
	I0729 19:43:50.775200 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHHostname
	I0729 19:43:50.777834 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:50.778140 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:9e:78", ip: ""} in network mk-embed-certs-358053: {Iface:virbr3 ExpiryTime:2024-07-29 20:43:42 +0000 UTC Type:0 Mac:52:54:00:b7:9e:78 Iaid: IPaddr:192.168.61.201 Prefix:24 Hostname:embed-certs-358053 Clientid:01:52:54:00:b7:9e:78}
	I0729 19:43:50.778166 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined IP address 192.168.61.201 and MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:50.778337 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHPort
	I0729 19:43:50.778536 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHKeyPath
	I0729 19:43:50.778687 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHKeyPath
	I0729 19:43:50.778818 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHUsername
	I0729 19:43:50.778984 1120280 main.go:141] libmachine: Using SSH client type: native
	I0729 19:43:50.779150 1120280 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.201 22 <nil> <nil>}
	I0729 19:43:50.779164 1120280 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-358053' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-358053/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-358053' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 19:43:50.899709 1120280 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 19:43:50.899756 1120280 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19312-1055011/.minikube CaCertPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19312-1055011/.minikube}
	I0729 19:43:50.899791 1120280 buildroot.go:174] setting up certificates
	I0729 19:43:50.899806 1120280 provision.go:84] configureAuth start
	I0729 19:43:50.899821 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetMachineName
	I0729 19:43:50.900090 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetIP
	I0729 19:43:50.902304 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:50.902663 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:9e:78", ip: ""} in network mk-embed-certs-358053: {Iface:virbr3 ExpiryTime:2024-07-29 20:43:42 +0000 UTC Type:0 Mac:52:54:00:b7:9e:78 Iaid: IPaddr:192.168.61.201 Prefix:24 Hostname:embed-certs-358053 Clientid:01:52:54:00:b7:9e:78}
	I0729 19:43:50.902695 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined IP address 192.168.61.201 and MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:50.902787 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHHostname
	I0729 19:43:50.904815 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:50.905150 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:9e:78", ip: ""} in network mk-embed-certs-358053: {Iface:virbr3 ExpiryTime:2024-07-29 20:43:42 +0000 UTC Type:0 Mac:52:54:00:b7:9e:78 Iaid: IPaddr:192.168.61.201 Prefix:24 Hostname:embed-certs-358053 Clientid:01:52:54:00:b7:9e:78}
	I0729 19:43:50.905170 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined IP address 192.168.61.201 and MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:50.905279 1120280 provision.go:143] copyHostCerts
	I0729 19:43:50.905350 1120280 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.pem, removing ...
	I0729 19:43:50.905366 1120280 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.pem
	I0729 19:43:50.905446 1120280 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.pem (1082 bytes)
	I0729 19:43:50.905561 1120280 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-1055011/.minikube/cert.pem, removing ...
	I0729 19:43:50.905573 1120280 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-1055011/.minikube/cert.pem
	I0729 19:43:50.905626 1120280 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19312-1055011/.minikube/cert.pem (1123 bytes)
	I0729 19:43:50.905704 1120280 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-1055011/.minikube/key.pem, removing ...
	I0729 19:43:50.905713 1120280 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-1055011/.minikube/key.pem
	I0729 19:43:50.905746 1120280 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19312-1055011/.minikube/key.pem (1679 bytes)
	I0729 19:43:50.905815 1120280 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca-key.pem org=jenkins.embed-certs-358053 san=[127.0.0.1 192.168.61.201 embed-certs-358053 localhost minikube]
	I0729 19:43:51.198616 1120280 provision.go:177] copyRemoteCerts
	I0729 19:43:51.198692 1120280 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 19:43:51.198734 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHHostname
	I0729 19:43:51.201272 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:51.201527 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:9e:78", ip: ""} in network mk-embed-certs-358053: {Iface:virbr3 ExpiryTime:2024-07-29 20:43:42 +0000 UTC Type:0 Mac:52:54:00:b7:9e:78 Iaid: IPaddr:192.168.61.201 Prefix:24 Hostname:embed-certs-358053 Clientid:01:52:54:00:b7:9e:78}
	I0729 19:43:51.201556 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined IP address 192.168.61.201 and MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:51.201681 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHPort
	I0729 19:43:51.201876 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHKeyPath
	I0729 19:43:51.202054 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHUsername
	I0729 19:43:51.202170 1120280 sshutil.go:53] new ssh client: &{IP:192.168.61.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/embed-certs-358053/id_rsa Username:docker}
	I0729 19:43:51.290007 1120280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0729 19:43:51.316649 1120280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0729 19:43:51.340617 1120280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0729 19:43:51.363465 1120280 provision.go:87] duration metric: took 463.642377ms to configureAuth
	I0729 19:43:51.363495 1120280 buildroot.go:189] setting minikube options for container-runtime
	I0729 19:43:51.363700 1120280 config.go:182] Loaded profile config "embed-certs-358053": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 19:43:51.363813 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHHostname
	I0729 19:43:51.366478 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:51.366931 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:9e:78", ip: ""} in network mk-embed-certs-358053: {Iface:virbr3 ExpiryTime:2024-07-29 20:43:42 +0000 UTC Type:0 Mac:52:54:00:b7:9e:78 Iaid: IPaddr:192.168.61.201 Prefix:24 Hostname:embed-certs-358053 Clientid:01:52:54:00:b7:9e:78}
	I0729 19:43:51.366973 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined IP address 192.168.61.201 and MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:51.367080 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHPort
	I0729 19:43:51.367280 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHKeyPath
	I0729 19:43:51.367445 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHKeyPath
	I0729 19:43:51.367619 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHUsername
	I0729 19:43:51.367818 1120280 main.go:141] libmachine: Using SSH client type: native
	I0729 19:43:51.368013 1120280 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.201 22 <nil> <nil>}
	I0729 19:43:51.368034 1120280 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 19:43:51.670667 1120280 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 19:43:51.670700 1120280 machine.go:97] duration metric: took 1.147540887s to provisionDockerMachine
	I0729 19:43:51.670716 1120280 start.go:293] postStartSetup for "embed-certs-358053" (driver="kvm2")
	I0729 19:43:51.670728 1120280 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 19:43:51.670746 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .DriverName
	I0729 19:43:51.671114 1120280 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 19:43:51.671146 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHHostname
	I0729 19:43:51.673820 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:51.674154 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:9e:78", ip: ""} in network mk-embed-certs-358053: {Iface:virbr3 ExpiryTime:2024-07-29 20:43:42 +0000 UTC Type:0 Mac:52:54:00:b7:9e:78 Iaid: IPaddr:192.168.61.201 Prefix:24 Hostname:embed-certs-358053 Clientid:01:52:54:00:b7:9e:78}
	I0729 19:43:51.674218 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined IP address 192.168.61.201 and MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:51.674406 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHPort
	I0729 19:43:51.674602 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHKeyPath
	I0729 19:43:51.674761 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHUsername
	I0729 19:43:51.674918 1120280 sshutil.go:53] new ssh client: &{IP:192.168.61.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/embed-certs-358053/id_rsa Username:docker}
	I0729 19:43:51.762013 1120280 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 19:43:51.766211 1120280 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 19:43:51.766238 1120280 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-1055011/.minikube/addons for local assets ...
	I0729 19:43:51.766308 1120280 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-1055011/.minikube/files for local assets ...
	I0729 19:43:51.766408 1120280 filesync.go:149] local asset: /home/jenkins/minikube-integration/19312-1055011/.minikube/files/etc/ssl/certs/10622722.pem -> 10622722.pem in /etc/ssl/certs
	I0729 19:43:51.766506 1120280 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 19:43:51.776086 1120280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/files/etc/ssl/certs/10622722.pem --> /etc/ssl/certs/10622722.pem (1708 bytes)
	I0729 19:43:51.800248 1120280 start.go:296] duration metric: took 129.516946ms for postStartSetup
	I0729 19:43:51.800288 1120280 fix.go:56] duration metric: took 20.54987709s for fixHost
	I0729 19:43:51.800332 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHHostname
	I0729 19:43:51.802828 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:51.803134 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:9e:78", ip: ""} in network mk-embed-certs-358053: {Iface:virbr3 ExpiryTime:2024-07-29 20:43:42 +0000 UTC Type:0 Mac:52:54:00:b7:9e:78 Iaid: IPaddr:192.168.61.201 Prefix:24 Hostname:embed-certs-358053 Clientid:01:52:54:00:b7:9e:78}
	I0729 19:43:51.803155 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined IP address 192.168.61.201 and MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:51.803324 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHPort
	I0729 19:43:51.803552 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHKeyPath
	I0729 19:43:51.803729 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHKeyPath
	I0729 19:43:51.803867 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHUsername
	I0729 19:43:51.804024 1120280 main.go:141] libmachine: Using SSH client type: native
	I0729 19:43:51.804205 1120280 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.201 22 <nil> <nil>}
	I0729 19:43:51.804216 1120280 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0729 19:43:51.915515 1120280 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722282231.873780587
	
	I0729 19:43:51.915538 1120280 fix.go:216] guest clock: 1722282231.873780587
	I0729 19:43:51.915546 1120280 fix.go:229] Guest: 2024-07-29 19:43:51.873780587 +0000 UTC Remote: 2024-07-29 19:43:51.800292219 +0000 UTC m=+271.768915474 (delta=73.488368ms)
	I0729 19:43:51.915567 1120280 fix.go:200] guest clock delta is within tolerance: 73.488368ms
	I0729 19:43:51.915573 1120280 start.go:83] releasing machines lock for "embed-certs-358053", held for 20.665188917s
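
The guest-clock check above runs "date +%s.%N" over SSH and compares the result with the local wall clock; the restart proceeds because the 73ms delta is inside tolerance. A minimal Go sketch of that comparison, with illustrative names and an assumed tolerance rather than minikube's actual fix.go code:

    package main

    import (
        "fmt"
        "strconv"
        "time"
    )

    // clockDelta parses the guest's `date +%s.%N` output and reports how far it
    // drifts from the given local time. Names and the tolerance used in main are
    // assumptions for illustration, not minikube's real implementation.
    func clockDelta(guestOut string, local time.Time) (time.Duration, error) {
        secs, err := strconv.ParseFloat(guestOut, 64)
        if err != nil {
            return 0, err
        }
        guest := time.Unix(0, int64(secs*float64(time.Second)))
        d := local.Sub(guest)
        if d < 0 {
            d = -d
        }
        return d, nil
    }

    func main() {
        d, _ := clockDelta("1722282231.873780587", time.Now())
        fmt.Printf("guest clock delta: %v (within 2s tolerance: %v)\n", d, d <= 2*time.Second)
    }
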
	I0729 19:43:51.915605 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .DriverName
	I0729 19:43:51.915924 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetIP
	I0729 19:43:51.918637 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:51.919022 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:9e:78", ip: ""} in network mk-embed-certs-358053: {Iface:virbr3 ExpiryTime:2024-07-29 20:43:42 +0000 UTC Type:0 Mac:52:54:00:b7:9e:78 Iaid: IPaddr:192.168.61.201 Prefix:24 Hostname:embed-certs-358053 Clientid:01:52:54:00:b7:9e:78}
	I0729 19:43:51.919050 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined IP address 192.168.61.201 and MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:51.919227 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .DriverName
	I0729 19:43:51.919791 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .DriverName
	I0729 19:43:51.920007 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .DriverName
	I0729 19:43:51.920098 1120280 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 19:43:51.920165 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHHostname
	I0729 19:43:51.920246 1120280 ssh_runner.go:195] Run: cat /version.json
	I0729 19:43:51.920267 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHHostname
	I0729 19:43:51.922800 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:51.923102 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:51.923134 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:9e:78", ip: ""} in network mk-embed-certs-358053: {Iface:virbr3 ExpiryTime:2024-07-29 20:43:42 +0000 UTC Type:0 Mac:52:54:00:b7:9e:78 Iaid: IPaddr:192.168.61.201 Prefix:24 Hostname:embed-certs-358053 Clientid:01:52:54:00:b7:9e:78}
	I0729 19:43:51.923173 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined IP address 192.168.61.201 and MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:51.923250 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHPort
	I0729 19:43:51.923437 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHKeyPath
	I0729 19:43:51.923595 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:9e:78", ip: ""} in network mk-embed-certs-358053: {Iface:virbr3 ExpiryTime:2024-07-29 20:43:42 +0000 UTC Type:0 Mac:52:54:00:b7:9e:78 Iaid: IPaddr:192.168.61.201 Prefix:24 Hostname:embed-certs-358053 Clientid:01:52:54:00:b7:9e:78}
	I0729 19:43:51.923615 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined IP address 192.168.61.201 and MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:51.923720 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHUsername
	I0729 19:43:51.923798 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHPort
	I0729 19:43:51.923873 1120280 sshutil.go:53] new ssh client: &{IP:192.168.61.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/embed-certs-358053/id_rsa Username:docker}
	I0729 19:43:51.923942 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHKeyPath
	I0729 19:43:51.924064 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHUsername
	I0729 19:43:51.924215 1120280 sshutil.go:53] new ssh client: &{IP:192.168.61.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/embed-certs-358053/id_rsa Username:docker}
	I0729 19:43:52.004661 1120280 ssh_runner.go:195] Run: systemctl --version
	I0729 19:43:52.032553 1120280 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 19:43:52.185919 1120280 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 19:43:52.191975 1120280 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 19:43:52.192059 1120280 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 19:43:52.210254 1120280 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 19:43:52.210276 1120280 start.go:495] detecting cgroup driver to use...
	I0729 19:43:52.210351 1120280 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 19:43:52.225580 1120280 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 19:43:52.238434 1120280 docker.go:217] disabling cri-docker service (if available) ...
	I0729 19:43:52.238501 1120280 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 19:43:52.252395 1120280 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 19:43:52.265503 1120280 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 19:43:52.376377 1120280 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 19:43:52.561796 1120280 docker.go:233] disabling docker service ...
	I0729 19:43:52.561859 1120280 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 19:43:52.579022 1120280 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 19:43:52.594679 1120280 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 19:43:52.734891 1120280 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 19:43:52.870161 1120280 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 19:43:52.884258 1120280 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 19:43:52.903923 1120280 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0729 19:43:52.903986 1120280 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:43:52.914530 1120280 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 19:43:52.914598 1120280 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:43:52.925740 1120280 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:43:52.936722 1120280 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:43:52.947290 1120280 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 19:43:52.959757 1120280 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:43:52.971452 1120280 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:43:52.990080 1120280 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
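
Taken together, the sed edits above leave the CRI-O drop-in /etc/crio/crio.conf.d/02-crio.conf with settings along these lines (an illustrative fragment reconstructed from the commands, not a capture of the file on the guest):

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.9"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]
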
	I0729 19:43:53.000701 1120280 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 19:43:53.010165 1120280 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 19:43:53.010271 1120280 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 19:43:53.023594 1120280 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 19:43:53.034500 1120280 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 19:43:53.173490 1120280 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 19:43:53.327789 1120280 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 19:43:53.327894 1120280 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 19:43:53.332682 1120280 start.go:563] Will wait 60s for crictl version
	I0729 19:43:53.332738 1120280 ssh_runner.go:195] Run: which crictl
	I0729 19:43:53.337397 1120280 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 19:43:53.387722 1120280 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 19:43:53.387824 1120280 ssh_runner.go:195] Run: crio --version
	I0729 19:43:53.416029 1120280 ssh_runner.go:195] Run: crio --version
	I0729 19:43:53.447686 1120280 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0729 19:43:53.448960 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetIP
	I0729 19:43:53.451993 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:53.452334 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:9e:78", ip: ""} in network mk-embed-certs-358053: {Iface:virbr3 ExpiryTime:2024-07-29 20:43:42 +0000 UTC Type:0 Mac:52:54:00:b7:9e:78 Iaid: IPaddr:192.168.61.201 Prefix:24 Hostname:embed-certs-358053 Clientid:01:52:54:00:b7:9e:78}
	I0729 19:43:53.452360 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined IP address 192.168.61.201 and MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:53.452626 1120280 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0729 19:43:53.456620 1120280 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 19:43:53.469521 1120280 kubeadm.go:883] updating cluster {Name:embed-certs-358053 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.3 ClusterName:embed-certs-358053 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.201 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:
false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 19:43:53.469668 1120280 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 19:43:53.469726 1120280 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 19:43:53.510724 1120280 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0729 19:43:53.510793 1120280 ssh_runner.go:195] Run: which lz4
	I0729 19:43:53.515039 1120280 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0729 19:43:53.519349 1120280 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0729 19:43:53.519386 1120280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0729 19:43:54.962294 1120280 crio.go:462] duration metric: took 1.447300807s to copy over tarball
	I0729 19:43:54.962368 1120280 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0729 19:43:51.939977 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .Start
	I0729 19:43:51.940180 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Ensuring networks are active...
	I0729 19:43:51.940939 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Ensuring network default is active
	I0729 19:43:51.941232 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Ensuring network mk-default-k8s-diff-port-024652 is active
	I0729 19:43:51.941605 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Getting domain xml...
	I0729 19:43:51.942289 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Creating domain...
	I0729 19:43:53.197317 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Waiting to get IP...
	I0729 19:43:53.198285 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:43:53.198646 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | unable to find current IP address of domain default-k8s-diff-port-024652 in network mk-default-k8s-diff-port-024652
	I0729 19:43:53.198704 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | I0729 19:43:53.198613 1121645 retry.go:31] will retry after 305.319923ms: waiting for machine to come up
	I0729 19:43:53.505183 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:43:53.505680 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | unable to find current IP address of domain default-k8s-diff-port-024652 in network mk-default-k8s-diff-port-024652
	I0729 19:43:53.505711 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | I0729 19:43:53.505645 1121645 retry.go:31] will retry after 271.282913ms: waiting for machine to come up
	I0729 19:43:53.778388 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:43:53.778870 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | unable to find current IP address of domain default-k8s-diff-port-024652 in network mk-default-k8s-diff-port-024652
	I0729 19:43:53.778902 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | I0729 19:43:53.778815 1121645 retry.go:31] will retry after 407.395474ms: waiting for machine to come up
	I0729 19:43:54.187668 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:43:54.188110 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | unable to find current IP address of domain default-k8s-diff-port-024652 in network mk-default-k8s-diff-port-024652
	I0729 19:43:54.188135 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | I0729 19:43:54.188063 1121645 retry.go:31] will retry after 515.272845ms: waiting for machine to come up
	I0729 19:43:54.704843 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:43:54.705358 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | unable to find current IP address of domain default-k8s-diff-port-024652 in network mk-default-k8s-diff-port-024652
	I0729 19:43:54.705386 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | I0729 19:43:54.705310 1121645 retry.go:31] will retry after 509.684919ms: waiting for machine to come up
	I0729 19:43:55.217156 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:43:55.217667 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | unable to find current IP address of domain default-k8s-diff-port-024652 in network mk-default-k8s-diff-port-024652
	I0729 19:43:55.217698 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | I0729 19:43:55.217604 1121645 retry.go:31] will retry after 728.323851ms: waiting for machine to come up
	I0729 19:43:55.947597 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:43:55.948121 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | unable to find current IP address of domain default-k8s-diff-port-024652 in network mk-default-k8s-diff-port-024652
	I0729 19:43:55.948155 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | I0729 19:43:55.948059 1121645 retry.go:31] will retry after 957.165998ms: waiting for machine to come up
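
While the embed-certs host is being prepared, the parallel default-k8s-diff-port-024652 start above keeps polling for a DHCP lease with a growing, jittered interval (305ms, 271ms, 407ms, ... up to several seconds). A minimal Go sketch of that wait-for-IP pattern; the names and backoff policy are illustrative assumptions, not minikube's retry.go:

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // waitForIP polls lookup until it returns an address or the deadline passes,
    // sleeping a growing, jittered interval between attempts.
    func waitForIP(lookup func() (string, error), deadline time.Time) (string, error) {
        wait := 300 * time.Millisecond
        for {
            ip, err := lookup()
            if err == nil {
                return ip, nil
            }
            if time.Now().After(deadline) {
                return "", fmt.Errorf("timed out waiting for machine IP: %w", err)
            }
            time.Sleep(wait + time.Duration(rand.Int63n(int64(wait/2))))
            if wait < 3*time.Second {
                wait += wait / 2
            }
        }
    }

    func main() {
        // A lookup that never finds an IP, to exercise the timeout path.
        _, err := waitForIP(func() (string, error) {
            return "", errors.New("no DHCP lease yet")
        }, time.Now().Add(2*time.Second))
        fmt.Println(err)
    }
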
	I0729 19:43:57.178620 1120280 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.216195072s)
	I0729 19:43:57.178653 1120280 crio.go:469] duration metric: took 2.216329763s to extract the tarball
	I0729 19:43:57.178660 1120280 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0729 19:43:57.216574 1120280 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 19:43:57.258341 1120280 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 19:43:57.258366 1120280 cache_images.go:84] Images are preloaded, skipping loading
	I0729 19:43:57.258376 1120280 kubeadm.go:934] updating node { 192.168.61.201 8443 v1.30.3 crio true true} ...
	I0729 19:43:57.258500 1120280 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-358053 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.201
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:embed-certs-358053 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 19:43:57.258563 1120280 ssh_runner.go:195] Run: crio config
	I0729 19:43:57.304755 1120280 cni.go:84] Creating CNI manager for ""
	I0729 19:43:57.304779 1120280 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 19:43:57.304793 1120280 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 19:43:57.304818 1120280 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.201 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-358053 NodeName:embed-certs-358053 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.201"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.201 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 19:43:57.304975 1120280 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.201
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-358053"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.201
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.201"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0729 19:43:57.305058 1120280 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0729 19:43:57.314803 1120280 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 19:43:57.314914 1120280 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 19:43:57.324133 1120280 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0729 19:43:57.339975 1120280 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 19:43:57.355571 1120280 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0729 19:43:57.371806 1120280 ssh_runner.go:195] Run: grep 192.168.61.201	control-plane.minikube.internal$ /etc/hosts
	I0729 19:43:57.375459 1120280 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.201	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
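
Together with the host.minikube.internal edit made a few seconds earlier, the guest's /etc/hosts now carries both minikube-internal names; reconstructed from the two commands, the added entries are:

    192.168.61.1	host.minikube.internal
    192.168.61.201	control-plane.minikube.internal
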
	I0729 19:43:57.386809 1120280 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 19:43:57.520182 1120280 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 19:43:57.536218 1120280 certs.go:68] Setting up /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/embed-certs-358053 for IP: 192.168.61.201
	I0729 19:43:57.536243 1120280 certs.go:194] generating shared ca certs ...
	I0729 19:43:57.536266 1120280 certs.go:226] acquiring lock for ca certs: {Name:mkd1f0b3d7e82ac23e713dd6b75409e103935b02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 19:43:57.536463 1120280 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.key
	I0729 19:43:57.536525 1120280 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/proxy-client-ca.key
	I0729 19:43:57.536539 1120280 certs.go:256] generating profile certs ...
	I0729 19:43:57.536702 1120280 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/embed-certs-358053/client.key
	I0729 19:43:57.536777 1120280 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/embed-certs-358053/apiserver.key.05ccddd9
	I0729 19:43:57.536836 1120280 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/embed-certs-358053/proxy-client.key
	I0729 19:43:57.537011 1120280 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/1062272.pem (1338 bytes)
	W0729 19:43:57.537060 1120280 certs.go:480] ignoring /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/1062272_empty.pem, impossibly tiny 0 bytes
	I0729 19:43:57.537074 1120280 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 19:43:57.537109 1120280 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem (1082 bytes)
	I0729 19:43:57.537147 1120280 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/cert.pem (1123 bytes)
	I0729 19:43:57.537184 1120280 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/key.pem (1679 bytes)
	I0729 19:43:57.537257 1120280 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/files/etc/ssl/certs/10622722.pem (1708 bytes)
	I0729 19:43:57.538120 1120280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 19:43:57.579679 1120280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0729 19:43:57.610390 1120280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 19:43:57.646234 1120280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0729 19:43:57.680120 1120280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/embed-certs-358053/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0729 19:43:57.709780 1120280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/embed-certs-358053/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0729 19:43:57.737251 1120280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/embed-certs-358053/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 19:43:57.760519 1120280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/embed-certs-358053/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0729 19:43:57.782760 1120280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/1062272.pem --> /usr/share/ca-certificates/1062272.pem (1338 bytes)
	I0729 19:43:57.806628 1120280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/files/etc/ssl/certs/10622722.pem --> /usr/share/ca-certificates/10622722.pem (1708 bytes)
	I0729 19:43:57.831360 1120280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 19:43:57.855485 1120280 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 19:43:57.873493 1120280 ssh_runner.go:195] Run: openssl version
	I0729 19:43:57.879376 1120280 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 19:43:57.891126 1120280 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 19:43:57.895458 1120280 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 18:17 /usr/share/ca-certificates/minikubeCA.pem
	I0729 19:43:57.895501 1120280 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 19:43:57.901015 1120280 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 19:43:57.911165 1120280 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1062272.pem && ln -fs /usr/share/ca-certificates/1062272.pem /etc/ssl/certs/1062272.pem"
	I0729 19:43:57.921336 1120280 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1062272.pem
	I0729 19:43:57.925539 1120280 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 18:30 /usr/share/ca-certificates/1062272.pem
	I0729 19:43:57.925601 1120280 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1062272.pem
	I0729 19:43:57.930932 1120280 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1062272.pem /etc/ssl/certs/51391683.0"
	I0729 19:43:57.941138 1120280 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10622722.pem && ln -fs /usr/share/ca-certificates/10622722.pem /etc/ssl/certs/10622722.pem"
	I0729 19:43:57.951312 1120280 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10622722.pem
	I0729 19:43:57.955655 1120280 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 18:30 /usr/share/ca-certificates/10622722.pem
	I0729 19:43:57.955699 1120280 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10622722.pem
	I0729 19:43:57.961057 1120280 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10622722.pem /etc/ssl/certs/3ec20f2e.0"
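
Each pair of commands above installs a CA under the OpenSSL hash name that TLS libraries look up: "openssl x509 -hash -noout -in <cert>" prints the subject hash, and a symlink named <hash>.0 is pointed at the PEM. From the three links created here:

    /etc/ssl/certs/b5213941.0 -> /etc/ssl/certs/minikubeCA.pem
    /etc/ssl/certs/51391683.0 -> /etc/ssl/certs/1062272.pem
    /etc/ssl/certs/3ec20f2e.0 -> /etc/ssl/certs/10622722.pem
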
	I0729 19:43:57.972742 1120280 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 19:43:57.977115 1120280 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 19:43:57.982921 1120280 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 19:43:57.988708 1120280 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 19:43:57.994618 1120280 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 19:43:58.000330 1120280 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 19:43:58.006024 1120280 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
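
The six probes above use "-checkend 86400", which makes OpenSSL exit non-zero if the certificate expires within the next 86400 seconds (24 hours); all of them pass silently here, so the existing control-plane certs remain valid for at least another day. Run by hand, a passing check looks like:

    $ openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400
    Certificate will not expire
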
	I0729 19:43:58.011547 1120280 kubeadm.go:392] StartCluster: {Name:embed-certs-358053 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30
.3 ClusterName:embed-certs-358053 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.201 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fal
se MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 19:43:58.011676 1120280 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 19:43:58.011740 1120280 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 19:43:58.053520 1120280 cri.go:89] found id: ""
	I0729 19:43:58.053606 1120280 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 19:43:58.063799 1120280 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0729 19:43:58.063820 1120280 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0729 19:43:58.063881 1120280 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0729 19:43:58.073374 1120280 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0729 19:43:58.074705 1120280 kubeconfig.go:125] found "embed-certs-358053" server: "https://192.168.61.201:8443"
	I0729 19:43:58.077590 1120280 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0729 19:43:58.086714 1120280 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.201
	I0729 19:43:58.086751 1120280 kubeadm.go:1160] stopping kube-system containers ...
	I0729 19:43:58.086761 1120280 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0729 19:43:58.086809 1120280 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 19:43:58.119740 1120280 cri.go:89] found id: ""
	I0729 19:43:58.119800 1120280 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0729 19:43:58.136919 1120280 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 19:43:58.146634 1120280 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 19:43:58.146655 1120280 kubeadm.go:157] found existing configuration files:
	
	I0729 19:43:58.146732 1120280 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 19:43:58.155526 1120280 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 19:43:58.155590 1120280 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 19:43:58.165016 1120280 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 19:43:58.173988 1120280 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 19:43:58.174042 1120280 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 19:43:58.183138 1120280 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 19:43:58.191680 1120280 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 19:43:58.191733 1120280 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 19:43:58.200557 1120280 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 19:43:58.209338 1120280 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 19:43:58.209390 1120280 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 19:43:58.218439 1120280 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 19:43:58.227653 1120280 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 19:43:58.340033 1120280 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 19:43:59.181947 1120280 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0729 19:43:59.381372 1120280 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 19:43:59.452293 1120280 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0729 19:43:59.570731 1120280 api_server.go:52] waiting for apiserver process to appear ...
	I0729 19:43:59.570823 1120280 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:43:56.907408 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:43:56.907923 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | unable to find current IP address of domain default-k8s-diff-port-024652 in network mk-default-k8s-diff-port-024652
	I0729 19:43:56.907953 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | I0729 19:43:56.907850 1121645 retry.go:31] will retry after 1.254959813s: waiting for machine to come up
	I0729 19:43:58.163969 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:43:58.164402 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | unable to find current IP address of domain default-k8s-diff-port-024652 in network mk-default-k8s-diff-port-024652
	I0729 19:43:58.164435 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | I0729 19:43:58.164335 1121645 retry.go:31] will retry after 1.194411522s: waiting for machine to come up
	I0729 19:43:59.360034 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:43:59.360409 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | unable to find current IP address of domain default-k8s-diff-port-024652 in network mk-default-k8s-diff-port-024652
	I0729 19:43:59.360444 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | I0729 19:43:59.360350 1121645 retry.go:31] will retry after 1.691293374s: waiting for machine to come up
	I0729 19:44:01.054480 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:44:01.054922 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | unable to find current IP address of domain default-k8s-diff-port-024652 in network mk-default-k8s-diff-port-024652
	I0729 19:44:01.054993 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | I0729 19:44:01.054899 1121645 retry.go:31] will retry after 2.655959151s: waiting for machine to come up
	I0729 19:44:00.071291 1120280 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:00.571192 1120280 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:01.071004 1120280 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:01.086646 1120280 api_server.go:72] duration metric: took 1.515912855s to wait for apiserver process to appear ...
	I0729 19:44:01.086683 1120280 api_server.go:88] waiting for apiserver healthz status ...
	I0729 19:44:01.086713 1120280 api_server.go:253] Checking apiserver healthz at https://192.168.61.201:8443/healthz ...
	I0729 19:44:01.087274 1120280 api_server.go:269] stopped: https://192.168.61.201:8443/healthz: Get "https://192.168.61.201:8443/healthz": dial tcp 192.168.61.201:8443: connect: connection refused
	I0729 19:44:01.587598 1120280 api_server.go:253] Checking apiserver healthz at https://192.168.61.201:8443/healthz ...
	I0729 19:44:03.986744 1120280 api_server.go:279] https://192.168.61.201:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 19:44:03.986799 1120280 api_server.go:103] status: https://192.168.61.201:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 19:44:03.986814 1120280 api_server.go:253] Checking apiserver healthz at https://192.168.61.201:8443/healthz ...
	I0729 19:44:04.029552 1120280 api_server.go:279] https://192.168.61.201:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 19:44:04.029601 1120280 api_server.go:103] status: https://192.168.61.201:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 19:44:04.087847 1120280 api_server.go:253] Checking apiserver healthz at https://192.168.61.201:8443/healthz ...
	I0729 19:44:04.093457 1120280 api_server.go:279] https://192.168.61.201:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 19:44:04.093489 1120280 api_server.go:103] status: https://192.168.61.201:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 19:44:04.586941 1120280 api_server.go:253] Checking apiserver healthz at https://192.168.61.201:8443/healthz ...
	I0729 19:44:04.609655 1120280 api_server.go:279] https://192.168.61.201:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 19:44:04.609700 1120280 api_server.go:103] status: https://192.168.61.201:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 19:44:05.087081 1120280 api_server.go:253] Checking apiserver healthz at https://192.168.61.201:8443/healthz ...
	I0729 19:44:05.095282 1120280 api_server.go:279] https://192.168.61.201:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 19:44:05.095311 1120280 api_server.go:103] status: https://192.168.61.201:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 19:44:05.587782 1120280 api_server.go:253] Checking apiserver healthz at https://192.168.61.201:8443/healthz ...
	I0729 19:44:05.593073 1120280 api_server.go:279] https://192.168.61.201:8443/healthz returned 200:
	ok
	I0729 19:44:05.599042 1120280 api_server.go:141] control plane version: v1.30.3
	I0729 19:44:05.599067 1120280 api_server.go:131] duration metric: took 4.512376511s to wait for apiserver health ...
	I0729 19:44:05.599076 1120280 cni.go:84] Creating CNI manager for ""
	I0729 19:44:05.599082 1120280 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 19:44:05.600932 1120280 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
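	The healthz probes above are a plain poll loop: hit https://<apiserver>:8443/healthz, treat a 500 whose body lists failing poststart hooks (rbac/bootstrap-roles, scheduling/bootstrap-system-priority-classes) as "not ready yet", and stop as soon as the endpoint returns 200 "ok". A minimal Go sketch of that pattern follows; the pollHealthz name, timeout and sleep interval are illustrative stand-ins, not minikube's actual api_server.go code.

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// pollHealthz polls the apiserver /healthz endpoint until it returns 200
	// or the deadline expires. Hypothetical helper mirroring the probe/retry
	// behaviour visible in the log above.
	func pollHealthz(base string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// The apiserver cert is not trusted by the probing host; skip
			// verification purely for this health probe, as a local tool can.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(base + "/healthz")
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // endpoint answered "ok"
				}
				// 500 while poststart hooks are still completing: log and retry.
				fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver did not become healthy within %s", timeout)
	}

	func main() {
		if err := pollHealthz("https://192.168.61.201:8443", 4*time.Minute); err != nil {
			fmt.Println(err)
		}
	}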
	I0729 19:44:03.713856 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:44:03.714306 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | unable to find current IP address of domain default-k8s-diff-port-024652 in network mk-default-k8s-diff-port-024652
	I0729 19:44:03.714363 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | I0729 19:44:03.714249 1121645 retry.go:31] will retry after 2.793831058s: waiting for machine to come up
	I0729 19:44:05.602066 1120280 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 19:44:05.612274 1120280 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0729 19:44:05.633293 1120280 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 19:44:05.646103 1120280 system_pods.go:59] 8 kube-system pods found
	I0729 19:44:05.646143 1120280 system_pods.go:61] "coredns-7db6d8ff4d-q6jm9" [a0770baf-766d-4903-a21f-6a4c1b74fb9e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0729 19:44:05.646153 1120280 system_pods.go:61] "etcd-embed-certs-358053" [cc03bfb3-c1d6-480a-b169-599b7599a5d1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0729 19:44:05.646163 1120280 system_pods.go:61] "kube-apiserver-embed-certs-358053" [8c45c66a-c954-4a84-9639-68210ad51a53] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0729 19:44:05.646174 1120280 system_pods.go:61] "kube-controller-manager-embed-certs-358053" [70266c42-fa7c-4936-b256-1eea65c57669] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0729 19:44:05.646181 1120280 system_pods.go:61] "kube-proxy-lb7hb" [e542b623-3db2-4be0-adf1-669932e6ac3d] Running
	I0729 19:44:05.646193 1120280 system_pods.go:61] "kube-scheduler-embed-certs-358053" [be79c03d-1e5a-46f5-a43a-671c37dea7d0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0729 19:44:05.646201 1120280 system_pods.go:61] "metrics-server-569cc877fc-jsvnd" [0494cc85-12fa-4afa-ab39-5c1fafcc45f8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 19:44:05.646209 1120280 system_pods.go:61] "storage-provisioner" [493de5d9-e761-49cb-b5f0-17d116b1a985] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0729 19:44:05.646221 1120280 system_pods.go:74] duration metric: took 12.906683ms to wait for pod list to return data ...
	I0729 19:44:05.646231 1120280 node_conditions.go:102] verifying NodePressure condition ...
	I0729 19:44:05.653103 1120280 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 19:44:05.653131 1120280 node_conditions.go:123] node cpu capacity is 2
	I0729 19:44:05.653161 1120280 node_conditions.go:105] duration metric: took 6.923325ms to run NodePressure ...
	I0729 19:44:05.653187 1120280 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 19:44:05.916138 1120280 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0729 19:44:05.920383 1120280 kubeadm.go:739] kubelet initialised
	I0729 19:44:05.920402 1120280 kubeadm.go:740] duration metric: took 4.239377ms waiting for restarted kubelet to initialise ...
	I0729 19:44:05.920410 1120280 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 19:44:05.925752 1120280 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-q6jm9" in "kube-system" namespace to be "Ready" ...
	I0729 19:44:07.932667 1120280 pod_ready.go:102] pod "coredns-7db6d8ff4d-q6jm9" in "kube-system" namespace has status "Ready":"False"
	I0729 19:44:06.511186 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:44:06.511552 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | unable to find current IP address of domain default-k8s-diff-port-024652 in network mk-default-k8s-diff-port-024652
	I0729 19:44:06.511583 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | I0729 19:44:06.511497 1121645 retry.go:31] will retry after 3.610819354s: waiting for machine to come up
	I0729 19:44:10.126488 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:44:10.126889 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Found IP for machine: 192.168.72.100
	I0729 19:44:10.126914 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Reserving static IP address...
	I0729 19:44:10.126927 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has current primary IP address 192.168.72.100 and MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:44:10.127289 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Reserved static IP address: 192.168.72.100
	I0729 19:44:10.127313 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Waiting for SSH to be available...
	I0729 19:44:10.127342 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-024652", mac: "52:54:00:4c:73:cb", ip: "192.168.72.100"} in network mk-default-k8s-diff-port-024652: {Iface:virbr4 ExpiryTime:2024-07-29 20:44:03 +0000 UTC Type:0 Mac:52:54:00:4c:73:cb Iaid: IPaddr:192.168.72.100 Prefix:24 Hostname:default-k8s-diff-port-024652 Clientid:01:52:54:00:4c:73:cb}
	I0729 19:44:10.127390 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | skip adding static IP to network mk-default-k8s-diff-port-024652 - found existing host DHCP lease matching {name: "default-k8s-diff-port-024652", mac: "52:54:00:4c:73:cb", ip: "192.168.72.100"}
	I0729 19:44:10.127406 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | Getting to WaitForSSH function...
	I0729 19:44:10.129180 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:44:10.129499 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:73:cb", ip: ""} in network mk-default-k8s-diff-port-024652: {Iface:virbr4 ExpiryTime:2024-07-29 20:44:03 +0000 UTC Type:0 Mac:52:54:00:4c:73:cb Iaid: IPaddr:192.168.72.100 Prefix:24 Hostname:default-k8s-diff-port-024652 Clientid:01:52:54:00:4c:73:cb}
	I0729 19:44:10.129528 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined IP address 192.168.72.100 and MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:44:10.129613 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | Using SSH client type: external
	I0729 19:44:10.129633 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | Using SSH private key: /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/default-k8s-diff-port-024652/id_rsa (-rw-------)
	I0729 19:44:10.129676 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.100 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/default-k8s-diff-port-024652/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 19:44:10.129688 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | About to run SSH command:
	I0729 19:44:10.129700 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | exit 0
	I0729 19:44:10.254662 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | SSH cmd err, output: <nil>: 
	I0729 19:44:10.255021 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetConfigRaw
	I0729 19:44:10.255656 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetIP
	I0729 19:44:10.257855 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:44:10.258219 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:73:cb", ip: ""} in network mk-default-k8s-diff-port-024652: {Iface:virbr4 ExpiryTime:2024-07-29 20:44:03 +0000 UTC Type:0 Mac:52:54:00:4c:73:cb Iaid: IPaddr:192.168.72.100 Prefix:24 Hostname:default-k8s-diff-port-024652 Clientid:01:52:54:00:4c:73:cb}
	I0729 19:44:10.258251 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined IP address 192.168.72.100 and MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:44:10.258526 1120587 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/default-k8s-diff-port-024652/config.json ...
	I0729 19:44:10.258713 1120587 machine.go:94] provisionDockerMachine start ...
	I0729 19:44:10.258733 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .DriverName
	I0729 19:44:10.258968 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHHostname
	I0729 19:44:10.260864 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:44:10.261120 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:73:cb", ip: ""} in network mk-default-k8s-diff-port-024652: {Iface:virbr4 ExpiryTime:2024-07-29 20:44:03 +0000 UTC Type:0 Mac:52:54:00:4c:73:cb Iaid: IPaddr:192.168.72.100 Prefix:24 Hostname:default-k8s-diff-port-024652 Clientid:01:52:54:00:4c:73:cb}
	I0729 19:44:10.261149 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined IP address 192.168.72.100 and MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:44:10.261275 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHPort
	I0729 19:44:10.261460 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHKeyPath
	I0729 19:44:10.261635 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHKeyPath
	I0729 19:44:10.261778 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHUsername
	I0729 19:44:10.261932 1120587 main.go:141] libmachine: Using SSH client type: native
	I0729 19:44:10.262111 1120587 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.100 22 <nil> <nil>}
	I0729 19:44:10.262121 1120587 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 19:44:10.371225 1120587 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0729 19:44:10.371261 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetMachineName
	I0729 19:44:10.371516 1120587 buildroot.go:166] provisioning hostname "default-k8s-diff-port-024652"
	I0729 19:44:10.371545 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetMachineName
	I0729 19:44:10.371756 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHHostname
	I0729 19:44:10.374071 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:44:10.374356 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:73:cb", ip: ""} in network mk-default-k8s-diff-port-024652: {Iface:virbr4 ExpiryTime:2024-07-29 20:44:03 +0000 UTC Type:0 Mac:52:54:00:4c:73:cb Iaid: IPaddr:192.168.72.100 Prefix:24 Hostname:default-k8s-diff-port-024652 Clientid:01:52:54:00:4c:73:cb}
	I0729 19:44:10.374391 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined IP address 192.168.72.100 and MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:44:10.374479 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHPort
	I0729 19:44:10.374654 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHKeyPath
	I0729 19:44:10.374808 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHKeyPath
	I0729 19:44:10.374933 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHUsername
	I0729 19:44:10.375126 1120587 main.go:141] libmachine: Using SSH client type: native
	I0729 19:44:10.375324 1120587 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.100 22 <nil> <nil>}
	I0729 19:44:10.375338 1120587 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-024652 && echo "default-k8s-diff-port-024652" | sudo tee /etc/hostname
	I0729 19:44:10.499041 1120587 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-024652
	
	I0729 19:44:10.499075 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHHostname
	I0729 19:44:10.501635 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:44:10.501943 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:73:cb", ip: ""} in network mk-default-k8s-diff-port-024652: {Iface:virbr4 ExpiryTime:2024-07-29 20:44:03 +0000 UTC Type:0 Mac:52:54:00:4c:73:cb Iaid: IPaddr:192.168.72.100 Prefix:24 Hostname:default-k8s-diff-port-024652 Clientid:01:52:54:00:4c:73:cb}
	I0729 19:44:10.501973 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined IP address 192.168.72.100 and MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:44:10.502136 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHPort
	I0729 19:44:10.502318 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHKeyPath
	I0729 19:44:10.502494 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHKeyPath
	I0729 19:44:10.502669 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHUsername
	I0729 19:44:10.502826 1120587 main.go:141] libmachine: Using SSH client type: native
	I0729 19:44:10.503019 1120587 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.100 22 <nil> <nil>}
	I0729 19:44:10.503042 1120587 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-024652' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-024652/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-024652' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 19:44:10.619637 1120587 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 19:44:10.619673 1120587 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19312-1055011/.minikube CaCertPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19312-1055011/.minikube}
	I0729 19:44:10.619708 1120587 buildroot.go:174] setting up certificates
	I0729 19:44:10.619719 1120587 provision.go:84] configureAuth start
	I0729 19:44:10.619728 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetMachineName
	I0729 19:44:10.620036 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetIP
	I0729 19:44:10.622502 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:44:10.622810 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:73:cb", ip: ""} in network mk-default-k8s-diff-port-024652: {Iface:virbr4 ExpiryTime:2024-07-29 20:44:03 +0000 UTC Type:0 Mac:52:54:00:4c:73:cb Iaid: IPaddr:192.168.72.100 Prefix:24 Hostname:default-k8s-diff-port-024652 Clientid:01:52:54:00:4c:73:cb}
	I0729 19:44:10.622841 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined IP address 192.168.72.100 and MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:44:10.622932 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHHostname
	I0729 19:44:10.625181 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:44:10.625508 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:73:cb", ip: ""} in network mk-default-k8s-diff-port-024652: {Iface:virbr4 ExpiryTime:2024-07-29 20:44:03 +0000 UTC Type:0 Mac:52:54:00:4c:73:cb Iaid: IPaddr:192.168.72.100 Prefix:24 Hostname:default-k8s-diff-port-024652 Clientid:01:52:54:00:4c:73:cb}
	I0729 19:44:10.625531 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined IP address 192.168.72.100 and MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:44:10.625681 1120587 provision.go:143] copyHostCerts
	I0729 19:44:10.625743 1120587 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.pem, removing ...
	I0729 19:44:10.625755 1120587 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.pem
	I0729 19:44:10.625825 1120587 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.pem (1082 bytes)
	I0729 19:44:10.625929 1120587 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-1055011/.minikube/cert.pem, removing ...
	I0729 19:44:10.625937 1120587 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-1055011/.minikube/cert.pem
	I0729 19:44:10.625960 1120587 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19312-1055011/.minikube/cert.pem (1123 bytes)
	I0729 19:44:10.626015 1120587 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-1055011/.minikube/key.pem, removing ...
	I0729 19:44:10.626021 1120587 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-1055011/.minikube/key.pem
	I0729 19:44:10.626042 1120587 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19312-1055011/.minikube/key.pem (1679 bytes)
	I0729 19:44:10.626089 1120587 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-024652 san=[127.0.0.1 192.168.72.100 default-k8s-diff-port-024652 localhost minikube]
	I0729 19:44:10.750576 1120587 provision.go:177] copyRemoteCerts
	I0729 19:44:10.750651 1120587 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 19:44:10.750713 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHHostname
	I0729 19:44:10.753390 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:44:10.753745 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:73:cb", ip: ""} in network mk-default-k8s-diff-port-024652: {Iface:virbr4 ExpiryTime:2024-07-29 20:44:03 +0000 UTC Type:0 Mac:52:54:00:4c:73:cb Iaid: IPaddr:192.168.72.100 Prefix:24 Hostname:default-k8s-diff-port-024652 Clientid:01:52:54:00:4c:73:cb}
	I0729 19:44:10.753791 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined IP address 192.168.72.100 and MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:44:10.753942 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHPort
	I0729 19:44:10.754149 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHKeyPath
	I0729 19:44:10.754330 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHUsername
	I0729 19:44:10.754514 1120587 sshutil.go:53] new ssh client: &{IP:192.168.72.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/default-k8s-diff-port-024652/id_rsa Username:docker}
	I0729 19:44:10.836524 1120587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0729 19:44:10.861913 1120587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0729 19:44:10.885539 1120587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0729 19:44:10.909851 1120587 provision.go:87] duration metric: took 290.118473ms to configureAuth
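	The configureAuth step above issues a server certificate whose SANs are [127.0.0.1 192.168.72.100 default-k8s-diff-port-024652 localhost minikube], signed by the profile's CA (ca.pem/ca-key.pem). A self-contained Go sketch of building a certificate with that SAN set is shown below; it self-signs for brevity and every identifier in it is illustrative rather than minikube's provisioning code.

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// Key for the server certificate.
		key, _ := rsa.GenerateKey(rand.Reader, 2048)

		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-024652"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour), // matches the CertExpiration in the cluster config
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// SANs corresponding to the san=[...] list logged above.
			DNSNames:    []string{"default-k8s-diff-port-024652", "localhost", "minikube"},
			IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.100")},
		}

		// Self-signed here; the real flow signs with the CA key instead of `tmpl`/`key`.
		der, _ := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}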
	I0729 19:44:10.909880 1120587 buildroot.go:189] setting minikube options for container-runtime
	I0729 19:44:10.910051 1120587 config.go:182] Loaded profile config "default-k8s-diff-port-024652": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 19:44:10.910127 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHHostname
	I0729 19:44:10.912662 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:44:10.912962 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:73:cb", ip: ""} in network mk-default-k8s-diff-port-024652: {Iface:virbr4 ExpiryTime:2024-07-29 20:44:03 +0000 UTC Type:0 Mac:52:54:00:4c:73:cb Iaid: IPaddr:192.168.72.100 Prefix:24 Hostname:default-k8s-diff-port-024652 Clientid:01:52:54:00:4c:73:cb}
	I0729 19:44:10.912993 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined IP address 192.168.72.100 and MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:44:10.913224 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHPort
	I0729 19:44:10.913429 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHKeyPath
	I0729 19:44:10.913601 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHKeyPath
	I0729 19:44:10.913744 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHUsername
	I0729 19:44:10.913882 1120587 main.go:141] libmachine: Using SSH client type: native
	I0729 19:44:10.914096 1120587 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.100 22 <nil> <nil>}
	I0729 19:44:10.914112 1120587 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 19:44:11.419483 1120970 start.go:364] duration metric: took 3m13.335541366s to acquireMachinesLock for "old-k8s-version-021528"
	I0729 19:44:11.419549 1120970 start.go:96] Skipping create...Using existing machine configuration
	I0729 19:44:11.419560 1120970 fix.go:54] fixHost starting: 
	I0729 19:44:11.419981 1120970 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:44:11.420020 1120970 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:44:11.437552 1120970 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44419
	I0729 19:44:11.437927 1120970 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:44:11.438424 1120970 main.go:141] libmachine: Using API Version  1
	I0729 19:44:11.438449 1120970 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:44:11.438787 1120970 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:44:11.438995 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .DriverName
	I0729 19:44:11.439201 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetState
	I0729 19:44:11.440476 1120970 fix.go:112] recreateIfNeeded on old-k8s-version-021528: state=Stopped err=<nil>
	I0729 19:44:11.440514 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .DriverName
	W0729 19:44:11.440692 1120970 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 19:44:11.442528 1120970 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-021528" ...
	I0729 19:44:11.181850 1120587 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 19:44:11.181877 1120587 machine.go:97] duration metric: took 923.15162ms to provisionDockerMachine
	I0729 19:44:11.181889 1120587 start.go:293] postStartSetup for "default-k8s-diff-port-024652" (driver="kvm2")
	I0729 19:44:11.181899 1120587 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 19:44:11.181914 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .DriverName
	I0729 19:44:11.182289 1120587 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 19:44:11.182322 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHHostname
	I0729 19:44:11.185275 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:44:11.185761 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:73:cb", ip: ""} in network mk-default-k8s-diff-port-024652: {Iface:virbr4 ExpiryTime:2024-07-29 20:44:03 +0000 UTC Type:0 Mac:52:54:00:4c:73:cb Iaid: IPaddr:192.168.72.100 Prefix:24 Hostname:default-k8s-diff-port-024652 Clientid:01:52:54:00:4c:73:cb}
	I0729 19:44:11.185791 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined IP address 192.168.72.100 and MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:44:11.186002 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHPort
	I0729 19:44:11.186282 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHKeyPath
	I0729 19:44:11.186467 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHUsername
	I0729 19:44:11.186620 1120587 sshutil.go:53] new ssh client: &{IP:192.168.72.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/default-k8s-diff-port-024652/id_rsa Username:docker}
	I0729 19:44:11.268993 1120587 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 19:44:11.273072 1120587 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 19:44:11.273093 1120587 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-1055011/.minikube/addons for local assets ...
	I0729 19:44:11.273161 1120587 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-1055011/.minikube/files for local assets ...
	I0729 19:44:11.273244 1120587 filesync.go:149] local asset: /home/jenkins/minikube-integration/19312-1055011/.minikube/files/etc/ssl/certs/10622722.pem -> 10622722.pem in /etc/ssl/certs
	I0729 19:44:11.273353 1120587 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 19:44:11.282258 1120587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/files/etc/ssl/certs/10622722.pem --> /etc/ssl/certs/10622722.pem (1708 bytes)
	I0729 19:44:11.305957 1120587 start.go:296] duration metric: took 124.053991ms for postStartSetup
	I0729 19:44:11.305998 1120587 fix.go:56] duration metric: took 19.39022657s for fixHost
	I0729 19:44:11.306024 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHHostname
	I0729 19:44:11.308452 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:44:11.308881 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:73:cb", ip: ""} in network mk-default-k8s-diff-port-024652: {Iface:virbr4 ExpiryTime:2024-07-29 20:44:03 +0000 UTC Type:0 Mac:52:54:00:4c:73:cb Iaid: IPaddr:192.168.72.100 Prefix:24 Hostname:default-k8s-diff-port-024652 Clientid:01:52:54:00:4c:73:cb}
	I0729 19:44:11.308902 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined IP address 192.168.72.100 and MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:44:11.309099 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHPort
	I0729 19:44:11.309321 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHKeyPath
	I0729 19:44:11.309507 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHKeyPath
	I0729 19:44:11.309646 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHUsername
	I0729 19:44:11.309836 1120587 main.go:141] libmachine: Using SSH client type: native
	I0729 19:44:11.310009 1120587 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.100 22 <nil> <nil>}
	I0729 19:44:11.310021 1120587 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0729 19:44:11.419338 1120587 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722282251.371238734
	
	I0729 19:44:11.419359 1120587 fix.go:216] guest clock: 1722282251.371238734
	I0729 19:44:11.419366 1120587 fix.go:229] Guest: 2024-07-29 19:44:11.371238734 +0000 UTC Remote: 2024-07-29 19:44:11.306004097 +0000 UTC m=+255.178971379 (delta=65.234637ms)
	I0729 19:44:11.419386 1120587 fix.go:200] guest clock delta is within tolerance: 65.234637ms
	I0729 19:44:11.419394 1120587 start.go:83] releasing machines lock for "default-k8s-diff-port-024652", held for 19.503660828s
	I0729 19:44:11.419418 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .DriverName
	I0729 19:44:11.419749 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetIP
	I0729 19:44:11.422054 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:44:11.422377 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:73:cb", ip: ""} in network mk-default-k8s-diff-port-024652: {Iface:virbr4 ExpiryTime:2024-07-29 20:44:03 +0000 UTC Type:0 Mac:52:54:00:4c:73:cb Iaid: IPaddr:192.168.72.100 Prefix:24 Hostname:default-k8s-diff-port-024652 Clientid:01:52:54:00:4c:73:cb}
	I0729 19:44:11.422421 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined IP address 192.168.72.100 and MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:44:11.422552 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .DriverName
	I0729 19:44:11.423087 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .DriverName
	I0729 19:44:11.423284 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .DriverName
	I0729 19:44:11.423358 1120587 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 19:44:11.423410 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHHostname
	I0729 19:44:11.423511 1120587 ssh_runner.go:195] Run: cat /version.json
	I0729 19:44:11.423540 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHHostname
	I0729 19:44:11.426070 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:44:11.426323 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:44:11.426440 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:73:cb", ip: ""} in network mk-default-k8s-diff-port-024652: {Iface:virbr4 ExpiryTime:2024-07-29 20:44:03 +0000 UTC Type:0 Mac:52:54:00:4c:73:cb Iaid: IPaddr:192.168.72.100 Prefix:24 Hostname:default-k8s-diff-port-024652 Clientid:01:52:54:00:4c:73:cb}
	I0729 19:44:11.426471 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined IP address 192.168.72.100 and MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:44:11.426579 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHPort
	I0729 19:44:11.426774 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHKeyPath
	I0729 19:44:11.426918 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHUsername
	I0729 19:44:11.426957 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:73:cb", ip: ""} in network mk-default-k8s-diff-port-024652: {Iface:virbr4 ExpiryTime:2024-07-29 20:44:03 +0000 UTC Type:0 Mac:52:54:00:4c:73:cb Iaid: IPaddr:192.168.72.100 Prefix:24 Hostname:default-k8s-diff-port-024652 Clientid:01:52:54:00:4c:73:cb}
	I0729 19:44:11.426981 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined IP address 192.168.72.100 and MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:44:11.427069 1120587 sshutil.go:53] new ssh client: &{IP:192.168.72.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/default-k8s-diff-port-024652/id_rsa Username:docker}
	I0729 19:44:11.427176 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHPort
	I0729 19:44:11.427343 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHKeyPath
	I0729 19:44:11.427534 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHUsername
	I0729 19:44:11.427700 1120587 sshutil.go:53] new ssh client: &{IP:192.168.72.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/default-k8s-diff-port-024652/id_rsa Username:docker}
	I0729 19:44:11.536440 1120587 ssh_runner.go:195] Run: systemctl --version
	I0729 19:44:11.542493 1120587 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 19:44:11.688795 1120587 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 19:44:11.696783 1120587 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 19:44:11.696855 1120587 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 19:44:11.717067 1120587 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 19:44:11.717091 1120587 start.go:495] detecting cgroup driver to use...
	I0729 19:44:11.717157 1120587 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 19:44:11.735056 1120587 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 19:44:11.748999 1120587 docker.go:217] disabling cri-docker service (if available) ...
	I0729 19:44:11.749061 1120587 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 19:44:11.764244 1120587 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 19:44:11.778072 1120587 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 19:44:11.893008 1120587 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 19:44:12.053939 1120587 docker.go:233] disabling docker service ...
	I0729 19:44:12.054035 1120587 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 19:44:12.068666 1120587 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 19:44:12.085766 1120587 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 19:44:12.232278 1120587 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 19:44:12.356403 1120587 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 19:44:12.370085 1120587 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 19:44:12.388817 1120587 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0729 19:44:12.388879 1120587 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:44:12.399945 1120587 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 19:44:12.400017 1120587 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:44:12.410117 1120587 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:44:12.422162 1120587 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:44:12.433170 1120587 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 19:44:12.444386 1120587 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:44:12.455009 1120587 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:44:12.472279 1120587 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:44:12.482431 1120587 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 19:44:12.492028 1120587 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 19:44:12.492097 1120587 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 19:44:12.505966 1120587 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 19:44:12.515505 1120587 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 19:44:12.639691 1120587 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 19:44:12.781358 1120587 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 19:44:12.781427 1120587 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 19:44:12.786218 1120587 start.go:563] Will wait 60s for crictl version
	I0729 19:44:12.786312 1120587 ssh_runner.go:195] Run: which crictl
	I0729 19:44:12.790056 1120587 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 19:44:12.830355 1120587 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 19:44:12.830451 1120587 ssh_runner.go:195] Run: crio --version
	I0729 19:44:12.859119 1120587 ssh_runner.go:195] Run: crio --version
	I0729 19:44:12.892473 1120587 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0729 19:44:11.443772 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .Start
	I0729 19:44:11.443926 1120970 main.go:141] libmachine: (old-k8s-version-021528) Ensuring networks are active...
	I0729 19:44:11.444570 1120970 main.go:141] libmachine: (old-k8s-version-021528) Ensuring network default is active
	I0729 19:44:11.444890 1120970 main.go:141] libmachine: (old-k8s-version-021528) Ensuring network mk-old-k8s-version-021528 is active
	I0729 19:44:11.445234 1120970 main.go:141] libmachine: (old-k8s-version-021528) Getting domain xml...
	I0729 19:44:11.445994 1120970 main.go:141] libmachine: (old-k8s-version-021528) Creating domain...
	I0729 19:44:12.696734 1120970 main.go:141] libmachine: (old-k8s-version-021528) Waiting to get IP...
	I0729 19:44:12.697599 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:12.697967 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | unable to find current IP address of domain old-k8s-version-021528 in network mk-old-k8s-version-021528
	I0729 19:44:12.698075 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | I0729 19:44:12.697953 1121841 retry.go:31] will retry after 228.228482ms: waiting for machine to come up
	I0729 19:44:12.927713 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:12.928250 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | unable to find current IP address of domain old-k8s-version-021528 in network mk-old-k8s-version-021528
	I0729 19:44:12.928280 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | I0729 19:44:12.928204 1121841 retry.go:31] will retry after 241.659418ms: waiting for machine to come up
	I0729 19:44:10.432255 1120280 pod_ready.go:102] pod "coredns-7db6d8ff4d-q6jm9" in "kube-system" namespace has status "Ready":"False"
	I0729 19:44:12.932761 1120280 pod_ready.go:102] pod "coredns-7db6d8ff4d-q6jm9" in "kube-system" namespace has status "Ready":"False"
	I0729 19:44:14.934282 1120280 pod_ready.go:102] pod "coredns-7db6d8ff4d-q6jm9" in "kube-system" namespace has status "Ready":"False"
	I0729 19:44:12.893725 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetIP
	I0729 19:44:12.897014 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:44:12.897401 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:73:cb", ip: ""} in network mk-default-k8s-diff-port-024652: {Iface:virbr4 ExpiryTime:2024-07-29 20:44:03 +0000 UTC Type:0 Mac:52:54:00:4c:73:cb Iaid: IPaddr:192.168.72.100 Prefix:24 Hostname:default-k8s-diff-port-024652 Clientid:01:52:54:00:4c:73:cb}
	I0729 19:44:12.897431 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined IP address 192.168.72.100 and MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:44:12.897621 1120587 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0729 19:44:12.902155 1120587 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 19:44:12.915460 1120587 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-024652 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-024652 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.100 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 19:44:12.915581 1120587 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 19:44:12.915631 1120587 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 19:44:12.956377 1120587 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0729 19:44:12.956444 1120587 ssh_runner.go:195] Run: which lz4
	I0729 19:44:12.960415 1120587 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0729 19:44:12.964785 1120587 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0729 19:44:12.964819 1120587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0729 19:44:14.422427 1120587 crio.go:462] duration metric: took 1.462052598s to copy over tarball
	I0729 19:44:14.422514 1120587 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0729 19:44:13.171713 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:13.172206 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | unable to find current IP address of domain old-k8s-version-021528 in network mk-old-k8s-version-021528
	I0729 19:44:13.172234 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | I0729 19:44:13.172165 1121841 retry.go:31] will retry after 475.69466ms: waiting for machine to come up
	I0729 19:44:13.649741 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:13.650180 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | unable to find current IP address of domain old-k8s-version-021528 in network mk-old-k8s-version-021528
	I0729 19:44:13.650210 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | I0729 19:44:13.650126 1121841 retry.go:31] will retry after 556.03832ms: waiting for machine to come up
	I0729 19:44:14.207549 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:14.208045 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | unable to find current IP address of domain old-k8s-version-021528 in network mk-old-k8s-version-021528
	I0729 19:44:14.208080 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | I0729 19:44:14.207996 1121841 retry.go:31] will retry after 699.802636ms: waiting for machine to come up
	I0729 19:44:14.909153 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:14.909708 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | unable to find current IP address of domain old-k8s-version-021528 in network mk-old-k8s-version-021528
	I0729 19:44:14.909736 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | I0729 19:44:14.909677 1121841 retry.go:31] will retry after 756.053302ms: waiting for machine to come up
	I0729 19:44:15.667015 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:15.667487 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | unable to find current IP address of domain old-k8s-version-021528 in network mk-old-k8s-version-021528
	I0729 19:44:15.667518 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | I0729 19:44:15.667434 1121841 retry.go:31] will retry after 729.442111ms: waiting for machine to come up
	I0729 19:44:16.398540 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:16.399139 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | unable to find current IP address of domain old-k8s-version-021528 in network mk-old-k8s-version-021528
	I0729 19:44:16.399191 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | I0729 19:44:16.399060 1121841 retry.go:31] will retry after 1.131574034s: waiting for machine to come up
	I0729 19:44:17.531966 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:17.532448 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | unable to find current IP address of domain old-k8s-version-021528 in network mk-old-k8s-version-021528
	I0729 19:44:17.532480 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | I0729 19:44:17.532380 1121841 retry.go:31] will retry after 1.546547994s: waiting for machine to come up
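The "will retry after ..." lines above come from minikube's generic retry helper, which keeps polling libvirt for the domain's IP with a growing, slightly randomized delay. Below is a minimal sketch of that pattern in Go; the names (waitForIP, lookupIP) and backoff constants are illustrative, not minikube's actual API.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitForIP calls lookupIP until it succeeds or maxAttempts is exhausted,
// sleeping a growing, jittered interval between attempts (compare the
// varying "will retry after ..." durations in the log above).
func waitForIP(lookupIP func() (string, error), maxAttempts int) (string, error) {
	delay := 500 * time.Millisecond
	for attempt := 1; attempt <= maxAttempts; attempt++ {
		ip, err := lookupIP()
		if err == nil {
			return ip, nil
		}
		sleep := delay + time.Duration(rand.Int63n(int64(delay/2)))
		fmt.Printf("attempt %d failed (%v); retrying after %s\n", attempt, err, sleep)
		time.Sleep(sleep)
		delay = delay * 3 / 2
	}
	return "", errors.New("machine never reported an IP")
}

func main() {
	attempts := 0
	// Simulated lookup: fails a few times, then returns an address.
	ip, err := waitForIP(func() (string, error) {
		attempts++
		if attempts < 4 {
			return "", errors.New("machine has no DHCP lease yet")
		}
		return "192.168.39.65", nil
	}, 10)
	fmt.Println(ip, err)
}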
	I0729 19:44:15.433310 1120280 pod_ready.go:92] pod "coredns-7db6d8ff4d-q6jm9" in "kube-system" namespace has status "Ready":"True"
	I0729 19:44:15.433336 1120280 pod_ready.go:81] duration metric: took 9.507558167s for pod "coredns-7db6d8ff4d-q6jm9" in "kube-system" namespace to be "Ready" ...
	I0729 19:44:15.433353 1120280 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-358053" in "kube-system" namespace to be "Ready" ...
	I0729 19:44:15.438725 1120280 pod_ready.go:92] pod "etcd-embed-certs-358053" in "kube-system" namespace has status "Ready":"True"
	I0729 19:44:15.438747 1120280 pod_ready.go:81] duration metric: took 5.385786ms for pod "etcd-embed-certs-358053" in "kube-system" namespace to be "Ready" ...
	I0729 19:44:15.438758 1120280 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-358053" in "kube-system" namespace to be "Ready" ...
	I0729 19:44:15.444196 1120280 pod_ready.go:92] pod "kube-apiserver-embed-certs-358053" in "kube-system" namespace has status "Ready":"True"
	I0729 19:44:15.444214 1120280 pod_ready.go:81] duration metric: took 5.447798ms for pod "kube-apiserver-embed-certs-358053" in "kube-system" namespace to be "Ready" ...
	I0729 19:44:15.444228 1120280 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-358053" in "kube-system" namespace to be "Ready" ...
	I0729 19:44:16.452748 1120280 pod_ready.go:92] pod "kube-controller-manager-embed-certs-358053" in "kube-system" namespace has status "Ready":"True"
	I0729 19:44:16.452772 1120280 pod_ready.go:81] duration metric: took 1.00853566s for pod "kube-controller-manager-embed-certs-358053" in "kube-system" namespace to be "Ready" ...
	I0729 19:44:16.452784 1120280 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-lb7hb" in "kube-system" namespace to be "Ready" ...
	I0729 19:44:16.458635 1120280 pod_ready.go:92] pod "kube-proxy-lb7hb" in "kube-system" namespace has status "Ready":"True"
	I0729 19:44:16.458653 1120280 pod_ready.go:81] duration metric: took 5.862242ms for pod "kube-proxy-lb7hb" in "kube-system" namespace to be "Ready" ...
	I0729 19:44:16.458662 1120280 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-358053" in "kube-system" namespace to be "Ready" ...
	I0729 19:44:16.631200 1120280 pod_ready.go:92] pod "kube-scheduler-embed-certs-358053" in "kube-system" namespace has status "Ready":"True"
	I0729 19:44:16.631229 1120280 pod_ready.go:81] duration metric: took 172.559322ms for pod "kube-scheduler-embed-certs-358053" in "kube-system" namespace to be "Ready" ...
	I0729 19:44:16.631242 1120280 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace to be "Ready" ...
	I0729 19:44:18.638680 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
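The pod_ready.go lines above poll each control-plane pod until its Ready condition is true, with a 4m0s cap per pod. Outside the test harness the same check can be approximated with kubectl wait; the sketch below is a small Go wrapper around that command, with the context, namespace, and pod names taken from the log and the helper itself purely illustrative (the kubectl context is assumed to match the minikube profile name).

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitPodReady shells out to `kubectl wait --for=condition=Ready` for a
// single pod, mirroring the readiness checks logged above.
func waitPodReady(kubeContext, namespace, pod string, timeout time.Duration) error {
	cmd := exec.Command("kubectl", "--context", kubeContext,
		"-n", namespace, "wait", "--for=condition=Ready",
		"pod/"+pod, fmt.Sprintf("--timeout=%s", timeout))
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	return err
}

func main() {
	if err := waitPodReady("embed-certs-358053", "kube-system",
		"etcd-embed-certs-358053", 4*time.Minute); err != nil {
		fmt.Println("pod did not become Ready:", err)
	}
}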
	I0729 19:44:16.739626 1120587 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.317075688s)
	I0729 19:44:16.739689 1120587 crio.go:469] duration metric: took 2.317215237s to extract the tarball
	I0729 19:44:16.739702 1120587 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0729 19:44:16.777698 1120587 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 19:44:16.825740 1120587 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 19:44:16.825768 1120587 cache_images.go:84] Images are preloaded, skipping loading
	I0729 19:44:16.825777 1120587 kubeadm.go:934] updating node { 192.168.72.100 8444 v1.30.3 crio true true} ...
	I0729 19:44:16.825933 1120587 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-024652 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.100
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-024652 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 19:44:16.826030 1120587 ssh_runner.go:195] Run: crio config
	I0729 19:44:16.873727 1120587 cni.go:84] Creating CNI manager for ""
	I0729 19:44:16.873752 1120587 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 19:44:16.873764 1120587 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 19:44:16.873791 1120587 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.100 APIServerPort:8444 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-024652 NodeName:default-k8s-diff-port-024652 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.100"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.100 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 19:44:16.873929 1120587 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.100
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-024652"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.100
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.100"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
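The kubeadm.go:187 dump above is the YAML that minikube generates from its cluster config before writing /var/tmp/minikube/kubeadm.yaml.new. The sketch below shows the general idea of rendering such a document with text/template; the struct and template are hypothetical stand-ins, not minikube's actual types or full config.

package main

import (
	"os"
	"text/template"
)

// clusterParams is an illustrative subset of the values that appear in the
// generated kubeadm config above.
type clusterParams struct {
	AdvertiseAddress  string
	BindPort          int
	PodSubnet         string
	KubernetesVersion string
}

const initConfig = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: {{.KubernetesVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(initConfig))
	// Values mirror the generated config shown in the log.
	_ = t.Execute(os.Stdout, clusterParams{
		AdvertiseAddress:  "192.168.72.100",
		BindPort:          8444,
		PodSubnet:         "10.244.0.0/16",
		KubernetesVersion: "v1.30.3",
	})
}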
	
	I0729 19:44:16.873990 1120587 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0729 19:44:16.884036 1120587 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 19:44:16.884126 1120587 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 19:44:16.893332 1120587 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0729 19:44:16.911950 1120587 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 19:44:16.930305 1120587 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0729 19:44:16.948353 1120587 ssh_runner.go:195] Run: grep 192.168.72.100	control-plane.minikube.internal$ /etc/hosts
	I0729 19:44:16.952431 1120587 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.100	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 19:44:16.964743 1120587 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 19:44:17.072244 1120587 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 19:44:17.088224 1120587 certs.go:68] Setting up /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/default-k8s-diff-port-024652 for IP: 192.168.72.100
	I0729 19:44:17.088256 1120587 certs.go:194] generating shared ca certs ...
	I0729 19:44:17.088280 1120587 certs.go:226] acquiring lock for ca certs: {Name:mkd1f0b3d7e82ac23e713dd6b75409e103935b02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 19:44:17.088482 1120587 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.key
	I0729 19:44:17.088563 1120587 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/proxy-client-ca.key
	I0729 19:44:17.088579 1120587 certs.go:256] generating profile certs ...
	I0729 19:44:17.088738 1120587 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/default-k8s-diff-port-024652/client.key
	I0729 19:44:17.088823 1120587 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/default-k8s-diff-port-024652/apiserver.key.4c9c937f
	I0729 19:44:17.088876 1120587 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/default-k8s-diff-port-024652/proxy-client.key
	I0729 19:44:17.089049 1120587 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/1062272.pem (1338 bytes)
	W0729 19:44:17.089093 1120587 certs.go:480] ignoring /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/1062272_empty.pem, impossibly tiny 0 bytes
	I0729 19:44:17.089109 1120587 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 19:44:17.089135 1120587 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem (1082 bytes)
	I0729 19:44:17.089156 1120587 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/cert.pem (1123 bytes)
	I0729 19:44:17.089180 1120587 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/key.pem (1679 bytes)
	I0729 19:44:17.089218 1120587 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/files/etc/ssl/certs/10622722.pem (1708 bytes)
	I0729 19:44:17.089954 1120587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 19:44:17.144094 1120587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0729 19:44:17.191515 1120587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 19:44:17.220210 1120587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0729 19:44:17.252381 1120587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/default-k8s-diff-port-024652/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0729 19:44:17.291881 1120587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/default-k8s-diff-port-024652/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0729 19:44:17.334114 1120587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/default-k8s-diff-port-024652/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 19:44:17.363726 1120587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/default-k8s-diff-port-024652/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0729 19:44:17.389190 1120587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 19:44:17.413683 1120587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/1062272.pem --> /usr/share/ca-certificates/1062272.pem (1338 bytes)
	I0729 19:44:17.441739 1120587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/files/etc/ssl/certs/10622722.pem --> /usr/share/ca-certificates/10622722.pem (1708 bytes)
	I0729 19:44:17.472609 1120587 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 19:44:17.489059 1120587 ssh_runner.go:195] Run: openssl version
	I0729 19:44:17.495020 1120587 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 19:44:17.507133 1120587 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 19:44:17.511759 1120587 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 18:17 /usr/share/ca-certificates/minikubeCA.pem
	I0729 19:44:17.511850 1120587 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 19:44:17.518120 1120587 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 19:44:17.528867 1120587 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1062272.pem && ln -fs /usr/share/ca-certificates/1062272.pem /etc/ssl/certs/1062272.pem"
	I0729 19:44:17.539695 1120587 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1062272.pem
	I0729 19:44:17.544063 1120587 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 18:30 /usr/share/ca-certificates/1062272.pem
	I0729 19:44:17.544113 1120587 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1062272.pem
	I0729 19:44:17.549785 1120587 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1062272.pem /etc/ssl/certs/51391683.0"
	I0729 19:44:17.560562 1120587 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10622722.pem && ln -fs /usr/share/ca-certificates/10622722.pem /etc/ssl/certs/10622722.pem"
	I0729 19:44:17.573597 1120587 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10622722.pem
	I0729 19:44:17.578089 1120587 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 18:30 /usr/share/ca-certificates/10622722.pem
	I0729 19:44:17.578137 1120587 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10622722.pem
	I0729 19:44:17.583614 1120587 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10622722.pem /etc/ssl/certs/3ec20f2e.0"
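The openssl/ln pairs above install each CA certificate under /etc/ssl/certs by its OpenSSL subject hash, so TLS clients can locate it via the <hash>.0 symlink. The following is a sketch of those same two steps from Go via os/exec; the paths and error handling are illustrative only.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCA reproduces the logged sequence:
//   openssl x509 -hash -noout -in <cert>
//   ln -fs <cert> <certsDir>/<hash>.0
func installCA(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out))

	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // -f: replace an existing link if present
	return os.Symlink(certPath, link)
}

func main() {
	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}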
	I0729 19:44:17.594903 1120587 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 19:44:17.599449 1120587 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 19:44:17.605325 1120587 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 19:44:17.611495 1120587 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 19:44:17.617663 1120587 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 19:44:17.623715 1120587 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 19:44:17.629845 1120587 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
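Each "openssl x509 ... -checkend 86400" run above asks whether a certificate expires within the next 24 hours (86400 seconds). The equivalent check in pure Go with crypto/x509 looks roughly like the sketch below; the file path is illustrative.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at certPath expires
// within the given window, mirroring `openssl x509 -checkend`.
func expiresWithin(certPath string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(certPath)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", certPath)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}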
	I0729 19:44:17.637607 1120587 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-024652 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-024652 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.100 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 19:44:17.637725 1120587 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 19:44:17.637778 1120587 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 19:44:17.685777 1120587 cri.go:89] found id: ""
	I0729 19:44:17.685877 1120587 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 19:44:17.703296 1120587 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0729 19:44:17.703320 1120587 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0729 19:44:17.703387 1120587 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0729 19:44:17.715928 1120587 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0729 19:44:17.717371 1120587 kubeconfig.go:125] found "default-k8s-diff-port-024652" server: "https://192.168.72.100:8444"
	I0729 19:44:17.720536 1120587 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0729 19:44:17.732125 1120587 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.100
	I0729 19:44:17.732165 1120587 kubeadm.go:1160] stopping kube-system containers ...
	I0729 19:44:17.732207 1120587 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0729 19:44:17.732284 1120587 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 19:44:17.786419 1120587 cri.go:89] found id: ""
	I0729 19:44:17.786502 1120587 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0729 19:44:17.804866 1120587 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 19:44:17.815092 1120587 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 19:44:17.815113 1120587 kubeadm.go:157] found existing configuration files:
	
	I0729 19:44:17.815189 1120587 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0729 19:44:17.824963 1120587 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 19:44:17.825020 1120587 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 19:44:17.835349 1120587 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0729 19:44:17.846227 1120587 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 19:44:17.846290 1120587 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 19:44:17.859231 1120587 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0729 19:44:17.870794 1120587 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 19:44:17.870883 1120587 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 19:44:17.882317 1120587 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0729 19:44:17.891702 1120587 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 19:44:17.891757 1120587 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 19:44:17.901153 1120587 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 19:44:17.911253 1120587 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 19:44:18.040695 1120587 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 19:44:19.054689 1120587 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.013955991s)
	I0729 19:44:19.054724 1120587 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0729 19:44:19.255112 1120587 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 19:44:19.346186 1120587 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0729 19:44:19.462795 1120587 api_server.go:52] waiting for apiserver process to appear ...
	I0729 19:44:19.462938 1120587 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:19.963927 1120587 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:20.463691 1120587 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:20.504478 1120587 api_server.go:72] duration metric: took 1.041683096s to wait for apiserver process to appear ...
	I0729 19:44:20.504523 1120587 api_server.go:88] waiting for apiserver healthz status ...
	I0729 19:44:20.504552 1120587 api_server.go:253] Checking apiserver healthz at https://192.168.72.100:8444/healthz ...
	I0729 19:44:20.505202 1120587 api_server.go:269] stopped: https://192.168.72.100:8444/healthz: Get "https://192.168.72.100:8444/healthz": dial tcp 192.168.72.100:8444: connect: connection refused
	I0729 19:44:21.004771 1120587 api_server.go:253] Checking apiserver healthz at https://192.168.72.100:8444/healthz ...
	I0729 19:44:19.081196 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:19.081719 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | unable to find current IP address of domain old-k8s-version-021528 in network mk-old-k8s-version-021528
	I0729 19:44:19.081749 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | I0729 19:44:19.081668 1121841 retry.go:31] will retry after 2.079913941s: waiting for machine to come up
	I0729 19:44:21.163461 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:21.163980 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | unable to find current IP address of domain old-k8s-version-021528 in network mk-old-k8s-version-021528
	I0729 19:44:21.164066 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | I0729 19:44:21.163965 1121841 retry.go:31] will retry after 2.355802923s: waiting for machine to come up
	I0729 19:44:20.638745 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:44:22.638835 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:44:23.789983 1120587 api_server.go:279] https://192.168.72.100:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 19:44:23.790018 1120587 api_server.go:103] status: https://192.168.72.100:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 19:44:23.790033 1120587 api_server.go:253] Checking apiserver healthz at https://192.168.72.100:8444/healthz ...
	I0729 19:44:23.843047 1120587 api_server.go:279] https://192.168.72.100:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 19:44:23.843090 1120587 api_server.go:103] status: https://192.168.72.100:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 19:44:24.005370 1120587 api_server.go:253] Checking apiserver healthz at https://192.168.72.100:8444/healthz ...
	I0729 19:44:24.009941 1120587 api_server.go:279] https://192.168.72.100:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 19:44:24.009973 1120587 api_server.go:103] status: https://192.168.72.100:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 19:44:24.505118 1120587 api_server.go:253] Checking apiserver healthz at https://192.168.72.100:8444/healthz ...
	I0729 19:44:24.512838 1120587 api_server.go:279] https://192.168.72.100:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 19:44:24.512874 1120587 api_server.go:103] status: https://192.168.72.100:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 19:44:25.005014 1120587 api_server.go:253] Checking apiserver healthz at https://192.168.72.100:8444/healthz ...
	I0729 19:44:25.023222 1120587 api_server.go:279] https://192.168.72.100:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 19:44:25.023264 1120587 api_server.go:103] status: https://192.168.72.100:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 19:44:25.504748 1120587 api_server.go:253] Checking apiserver healthz at https://192.168.72.100:8444/healthz ...
	I0729 19:44:25.511449 1120587 api_server.go:279] https://192.168.72.100:8444/healthz returned 200:
	ok
	I0729 19:44:25.521987 1120587 api_server.go:141] control plane version: v1.30.3
	I0729 19:44:25.522018 1120587 api_server.go:131] duration metric: took 5.017487159s to wait for apiserver health ...
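The api_server.go lines above poll https://192.168.72.100:8444/healthz until it returns 200, tolerating the 403 and 500 responses seen while the apiserver's post-start hooks finish. The sketch below shows a minimal version of such a loop; the TLS-skip and interval choices are illustrative, not minikube's exact settings.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls the given URL until it returns HTTP 200 or the
// timeout elapses, logging the intermediate failures much like the
// "healthz returned error ..." lines above.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The apiserver's serving cert is not in the host trust store here.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		} else {
			fmt.Printf("healthz not reachable yet: %v\n", err)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.72.100:8444/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}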
	I0729 19:44:25.522029 1120587 cni.go:84] Creating CNI manager for ""
	I0729 19:44:25.522038 1120587 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 19:44:25.523778 1120587 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 19:44:25.524925 1120587 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 19:44:25.541108 1120587 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0729 19:44:25.564225 1120587 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 19:44:25.574600 1120587 system_pods.go:59] 8 kube-system pods found
	I0729 19:44:25.574643 1120587 system_pods.go:61] "coredns-7db6d8ff4d-8mccr" [ce2eb102-1016-4a2d-8dee-561920c01b5a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0729 19:44:25.574664 1120587 system_pods.go:61] "etcd-default-k8s-diff-port-024652" [f3c68e2f-7cef-4afc-bd26-3705afd16f01] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0729 19:44:25.574676 1120587 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-024652" [656786e6-4ca6-45dc-9274-89ca8540c707] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0729 19:44:25.574697 1120587 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-024652" [10b805dd-238a-49a8-8c3f-1c31004d56dd] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0729 19:44:25.574710 1120587 system_pods.go:61] "kube-proxy-l4g78" [c24c5bc0-131b-4d02-a0f1-d398723292eb] Running
	I0729 19:44:25.574717 1120587 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-024652" [5bb2daf3-9a22-4f80-95b6-ded3c31e872e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0729 19:44:25.574725 1120587 system_pods.go:61] "metrics-server-569cc877fc-bvkv6" [247c5a96-5bb3-4174-9219-a96591f53cbb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 19:44:25.574734 1120587 system_pods.go:61] "storage-provisioner" [a4f216b0-055a-4305-a93f-910a9a10e725] Running
	I0729 19:44:25.574744 1120587 system_pods.go:74] duration metric: took 10.494475ms to wait for pod list to return data ...
	I0729 19:44:25.574757 1120587 node_conditions.go:102] verifying NodePressure condition ...
	I0729 19:44:25.577735 1120587 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 19:44:25.577757 1120587 node_conditions.go:123] node cpu capacity is 2
	I0729 19:44:25.577778 1120587 node_conditions.go:105] duration metric: took 3.012688ms to run NodePressure ...
	I0729 19:44:25.577795 1120587 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 19:44:25.851094 1120587 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0729 19:44:25.860023 1120587 kubeadm.go:739] kubelet initialised
	I0729 19:44:25.860050 1120587 kubeadm.go:740] duration metric: took 8.921765ms waiting for restarted kubelet to initialise ...
	I0729 19:44:25.860062 1120587 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 19:44:25.867130 1120587 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-8mccr" in "kube-system" namespace to be "Ready" ...
	I0729 19:44:23.523186 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:23.523741 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | unable to find current IP address of domain old-k8s-version-021528 in network mk-old-k8s-version-021528
	I0729 19:44:23.523783 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | I0729 19:44:23.523684 1121841 retry.go:31] will retry after 2.899059572s: waiting for machine to come up
	I0729 19:44:26.426805 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:26.427211 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | unable to find current IP address of domain old-k8s-version-021528 in network mk-old-k8s-version-021528
	I0729 19:44:26.427267 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | I0729 19:44:26.427152 1121841 retry.go:31] will retry after 3.723478189s: waiting for machine to come up
	I0729 19:44:25.138056 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:44:27.139419 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:44:29.638107 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:44:27.872221 1120587 pod_ready.go:102] pod "coredns-7db6d8ff4d-8mccr" in "kube-system" namespace has status "Ready":"False"
	I0729 19:44:29.873611 1120587 pod_ready.go:102] pod "coredns-7db6d8ff4d-8mccr" in "kube-system" namespace has status "Ready":"False"
	I0729 19:44:31.571895 1119948 start.go:364] duration metric: took 55.319517148s to acquireMachinesLock for "no-preload-843792"
	I0729 19:44:31.571969 1119948 start.go:96] Skipping create...Using existing machine configuration
	I0729 19:44:31.571988 1119948 fix.go:54] fixHost starting: 
	I0729 19:44:31.572421 1119948 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:44:31.572460 1119948 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:44:31.589868 1119948 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43017
	I0729 19:44:31.590253 1119948 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:44:31.590725 1119948 main.go:141] libmachine: Using API Version  1
	I0729 19:44:31.590752 1119948 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:44:31.591088 1119948 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:44:31.591274 1119948 main.go:141] libmachine: (no-preload-843792) Calling .DriverName
	I0729 19:44:31.591398 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetState
	I0729 19:44:31.592878 1119948 fix.go:112] recreateIfNeeded on no-preload-843792: state=Stopped err=<nil>
	I0729 19:44:31.592905 1119948 main.go:141] libmachine: (no-preload-843792) Calling .DriverName
	W0729 19:44:31.593054 1119948 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 19:44:31.594713 1119948 out.go:177] * Restarting existing kvm2 VM for "no-preload-843792" ...
	I0729 19:44:30.152545 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:30.153061 1120970 main.go:141] libmachine: (old-k8s-version-021528) Found IP for machine: 192.168.39.65
	I0729 19:44:30.153088 1120970 main.go:141] libmachine: (old-k8s-version-021528) Reserving static IP address...
	I0729 19:44:30.153101 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has current primary IP address 192.168.39.65 and MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:30.153518 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | found host DHCP lease matching {name: "old-k8s-version-021528", mac: "52:54:00:12:c7:d2", ip: "192.168.39.65"} in network mk-old-k8s-version-021528: {Iface:virbr1 ExpiryTime:2024-07-29 20:44:23 +0000 UTC Type:0 Mac:52:54:00:12:c7:d2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:old-k8s-version-021528 Clientid:01:52:54:00:12:c7:d2}
	I0729 19:44:30.153547 1120970 main.go:141] libmachine: (old-k8s-version-021528) Reserved static IP address: 192.168.39.65
	I0729 19:44:30.153567 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | skip adding static IP to network mk-old-k8s-version-021528 - found existing host DHCP lease matching {name: "old-k8s-version-021528", mac: "52:54:00:12:c7:d2", ip: "192.168.39.65"}
	I0729 19:44:30.153606 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | Getting to WaitForSSH function...
	I0729 19:44:30.153646 1120970 main.go:141] libmachine: (old-k8s-version-021528) Waiting for SSH to be available...
	I0729 19:44:30.155687 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:30.155938 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:c7:d2", ip: ""} in network mk-old-k8s-version-021528: {Iface:virbr1 ExpiryTime:2024-07-29 20:44:23 +0000 UTC Type:0 Mac:52:54:00:12:c7:d2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:old-k8s-version-021528 Clientid:01:52:54:00:12:c7:d2}
	I0729 19:44:30.155968 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined IP address 192.168.39.65 and MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:30.156104 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | Using SSH client type: external
	I0729 19:44:30.156126 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | Using SSH private key: /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/old-k8s-version-021528/id_rsa (-rw-------)
	I0729 19:44:30.156157 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.65 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/old-k8s-version-021528/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 19:44:30.156170 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | About to run SSH command:
	I0729 19:44:30.156179 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | exit 0
	I0729 19:44:30.286787 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | SSH cmd err, output: <nil>: 
	I0729 19:44:30.287161 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetConfigRaw
	I0729 19:44:30.287816 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetIP
	I0729 19:44:30.290268 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:30.290614 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:c7:d2", ip: ""} in network mk-old-k8s-version-021528: {Iface:virbr1 ExpiryTime:2024-07-29 20:44:23 +0000 UTC Type:0 Mac:52:54:00:12:c7:d2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:old-k8s-version-021528 Clientid:01:52:54:00:12:c7:d2}
	I0729 19:44:30.290645 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined IP address 192.168.39.65 and MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:30.290866 1120970 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/old-k8s-version-021528/config.json ...
	I0729 19:44:30.291054 1120970 machine.go:94] provisionDockerMachine start ...
	I0729 19:44:30.291074 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .DriverName
	I0729 19:44:30.291307 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHHostname
	I0729 19:44:30.293399 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:30.293729 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:c7:d2", ip: ""} in network mk-old-k8s-version-021528: {Iface:virbr1 ExpiryTime:2024-07-29 20:44:23 +0000 UTC Type:0 Mac:52:54:00:12:c7:d2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:old-k8s-version-021528 Clientid:01:52:54:00:12:c7:d2}
	I0729 19:44:30.293759 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined IP address 192.168.39.65 and MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:30.293872 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHPort
	I0729 19:44:30.294064 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHKeyPath
	I0729 19:44:30.294228 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHKeyPath
	I0729 19:44:30.294362 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHUsername
	I0729 19:44:30.294510 1120970 main.go:141] libmachine: Using SSH client type: native
	I0729 19:44:30.294729 1120970 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.65 22 <nil> <nil>}
	I0729 19:44:30.294741 1120970 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 19:44:30.406918 1120970 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0729 19:44:30.406947 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetMachineName
	I0729 19:44:30.407214 1120970 buildroot.go:166] provisioning hostname "old-k8s-version-021528"
	I0729 19:44:30.407256 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetMachineName
	I0729 19:44:30.407478 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHHostname
	I0729 19:44:30.410077 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:30.410396 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:c7:d2", ip: ""} in network mk-old-k8s-version-021528: {Iface:virbr1 ExpiryTime:2024-07-29 20:44:23 +0000 UTC Type:0 Mac:52:54:00:12:c7:d2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:old-k8s-version-021528 Clientid:01:52:54:00:12:c7:d2}
	I0729 19:44:30.410421 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined IP address 192.168.39.65 and MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:30.410586 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHPort
	I0729 19:44:30.410766 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHKeyPath
	I0729 19:44:30.410932 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHKeyPath
	I0729 19:44:30.411068 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHUsername
	I0729 19:44:30.411245 1120970 main.go:141] libmachine: Using SSH client type: native
	I0729 19:44:30.411488 1120970 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.65 22 <nil> <nil>}
	I0729 19:44:30.411503 1120970 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-021528 && echo "old-k8s-version-021528" | sudo tee /etc/hostname
	I0729 19:44:30.541004 1120970 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-021528
	
	I0729 19:44:30.541037 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHHostname
	I0729 19:44:30.543946 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:30.544343 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:c7:d2", ip: ""} in network mk-old-k8s-version-021528: {Iface:virbr1 ExpiryTime:2024-07-29 20:44:23 +0000 UTC Type:0 Mac:52:54:00:12:c7:d2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:old-k8s-version-021528 Clientid:01:52:54:00:12:c7:d2}
	I0729 19:44:30.544372 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined IP address 192.168.39.65 and MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:30.544503 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHPort
	I0729 19:44:30.544694 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHKeyPath
	I0729 19:44:30.544856 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHKeyPath
	I0729 19:44:30.545032 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHUsername
	I0729 19:44:30.545233 1120970 main.go:141] libmachine: Using SSH client type: native
	I0729 19:44:30.545409 1120970 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.65 22 <nil> <nil>}
	I0729 19:44:30.545424 1120970 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-021528' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-021528/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-021528' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 19:44:30.665246 1120970 main.go:141] libmachine: SSH cmd err, output: <nil>: 
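The guarded /etc/hosts rewrite above goes out as a single shell command over the provisioning SSH session. A minimal Go sketch of running that same command with key-based auth, assuming golang.org/x/crypto/ssh and the key path/host values taken from this log, not minikube's own libmachine plumbing:

// hostsfix.go - sketch only: run the /etc/hosts guard over SSH.
package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Key path and address as reported in the log; adjust for your environment.
	key, err := os.ReadFile("/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/old-k8s-version-021528/id_rsa")
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for throwaway test VMs only
	}
	client, err := ssh.Dial("tcp", "192.168.39.65:22", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	sess, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer sess.Close()

	name := "old-k8s-version-021528"
	// Same guard as the log: only touch /etc/hosts if the hostname is not already present.
	cmd := fmt.Sprintf(`if ! grep -xq '.*\s%[1]s' /etc/hosts; then `+
		`if grep -xq '127.0.1.1\s.*' /etc/hosts; then `+
		`sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts; `+
		`else echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts; fi; fi`, name)
	out, err := sess.CombinedOutput(cmd)
	fmt.Printf("SSH cmd err, output: %v: %s\n", err, out)
}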
	I0729 19:44:30.665281 1120970 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19312-1055011/.minikube CaCertPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19312-1055011/.minikube}
	I0729 19:44:30.665317 1120970 buildroot.go:174] setting up certificates
	I0729 19:44:30.665328 1120970 provision.go:84] configureAuth start
	I0729 19:44:30.665339 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetMachineName
	I0729 19:44:30.665621 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetIP
	I0729 19:44:30.668162 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:30.668540 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:c7:d2", ip: ""} in network mk-old-k8s-version-021528: {Iface:virbr1 ExpiryTime:2024-07-29 20:44:23 +0000 UTC Type:0 Mac:52:54:00:12:c7:d2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:old-k8s-version-021528 Clientid:01:52:54:00:12:c7:d2}
	I0729 19:44:30.668568 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined IP address 192.168.39.65 and MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:30.668743 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHHostname
	I0729 19:44:30.670898 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:30.671447 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:c7:d2", ip: ""} in network mk-old-k8s-version-021528: {Iface:virbr1 ExpiryTime:2024-07-29 20:44:23 +0000 UTC Type:0 Mac:52:54:00:12:c7:d2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:old-k8s-version-021528 Clientid:01:52:54:00:12:c7:d2}
	I0729 19:44:30.671471 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined IP address 192.168.39.65 and MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:30.671618 1120970 provision.go:143] copyHostCerts
	I0729 19:44:30.671691 1120970 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-1055011/.minikube/cert.pem, removing ...
	I0729 19:44:30.671710 1120970 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-1055011/.minikube/cert.pem
	I0729 19:44:30.671790 1120970 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19312-1055011/.minikube/cert.pem (1123 bytes)
	I0729 19:44:30.671907 1120970 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-1055011/.minikube/key.pem, removing ...
	I0729 19:44:30.671917 1120970 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-1055011/.minikube/key.pem
	I0729 19:44:30.671953 1120970 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19312-1055011/.minikube/key.pem (1679 bytes)
	I0729 19:44:30.672043 1120970 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.pem, removing ...
	I0729 19:44:30.672052 1120970 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.pem
	I0729 19:44:30.672085 1120970 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.pem (1082 bytes)
	I0729 19:44:30.672166 1120970 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-021528 san=[127.0.0.1 192.168.39.65 localhost minikube old-k8s-version-021528]
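The server cert step above signs a certificate whose SANs cover every name and IP the machine can be reached by (127.0.0.1, 192.168.39.65, localhost, minikube, old-k8s-version-021528). A minimal crypto/x509 sketch of issuing such a cert, with a freshly generated stand-in CA instead of the profile's ca.pem/ca-key.pem pair; error handling is elided for brevity:

// servercert.go - sketch only: issue a server cert with the SANs listed in the log.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Stand-in CA; the real flow would load ca.pem / ca-key.pem from the profile.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{Organization: []string{"jenkins.old-k8s-version-021528"}},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "old-k8s-version-021528"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs from the provision log: IPs and DNS names the server cert must cover.
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.65")},
		DNSNames:    []string{"localhost", "minikube", "old-k8s-version-021528"},
	}
	der, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}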
	I0729 19:44:30.888016 1120970 provision.go:177] copyRemoteCerts
	I0729 19:44:30.888072 1120970 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 19:44:30.888115 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHHostname
	I0729 19:44:30.890739 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:30.891115 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:c7:d2", ip: ""} in network mk-old-k8s-version-021528: {Iface:virbr1 ExpiryTime:2024-07-29 20:44:23 +0000 UTC Type:0 Mac:52:54:00:12:c7:d2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:old-k8s-version-021528 Clientid:01:52:54:00:12:c7:d2}
	I0729 19:44:30.891148 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined IP address 192.168.39.65 and MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:30.891288 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHPort
	I0729 19:44:30.891499 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHKeyPath
	I0729 19:44:30.891689 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHUsername
	I0729 19:44:30.891862 1120970 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/old-k8s-version-021528/id_rsa Username:docker}
	I0729 19:44:30.976898 1120970 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0729 19:44:31.000793 1120970 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0729 19:44:31.024837 1120970 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0729 19:44:31.048325 1120970 provision.go:87] duration metric: took 382.981006ms to configureAuth
	I0729 19:44:31.048358 1120970 buildroot.go:189] setting minikube options for container-runtime
	I0729 19:44:31.048560 1120970 config.go:182] Loaded profile config "old-k8s-version-021528": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0729 19:44:31.048640 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHHostname
	I0729 19:44:31.051230 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:31.051576 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:c7:d2", ip: ""} in network mk-old-k8s-version-021528: {Iface:virbr1 ExpiryTime:2024-07-29 20:44:23 +0000 UTC Type:0 Mac:52:54:00:12:c7:d2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:old-k8s-version-021528 Clientid:01:52:54:00:12:c7:d2}
	I0729 19:44:31.051605 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined IP address 192.168.39.65 and MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:31.051754 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHPort
	I0729 19:44:31.051994 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHKeyPath
	I0729 19:44:31.052191 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHKeyPath
	I0729 19:44:31.052368 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHUsername
	I0729 19:44:31.052568 1120970 main.go:141] libmachine: Using SSH client type: native
	I0729 19:44:31.052828 1120970 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.65 22 <nil> <nil>}
	I0729 19:44:31.052853 1120970 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 19:44:31.320227 1120970 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 19:44:31.320259 1120970 machine.go:97] duration metric: took 1.0291903s to provisionDockerMachine
	I0729 19:44:31.320276 1120970 start.go:293] postStartSetup for "old-k8s-version-021528" (driver="kvm2")
	I0729 19:44:31.320297 1120970 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 19:44:31.320335 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .DriverName
	I0729 19:44:31.320669 1120970 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 19:44:31.320702 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHHostname
	I0729 19:44:31.323379 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:31.323774 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:c7:d2", ip: ""} in network mk-old-k8s-version-021528: {Iface:virbr1 ExpiryTime:2024-07-29 20:44:23 +0000 UTC Type:0 Mac:52:54:00:12:c7:d2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:old-k8s-version-021528 Clientid:01:52:54:00:12:c7:d2}
	I0729 19:44:31.323807 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined IP address 192.168.39.65 and MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:31.323903 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHPort
	I0729 19:44:31.324112 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHKeyPath
	I0729 19:44:31.324291 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHUsername
	I0729 19:44:31.324431 1120970 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/old-k8s-version-021528/id_rsa Username:docker}
	I0729 19:44:31.415208 1120970 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 19:44:31.419884 1120970 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 19:44:31.419911 1120970 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-1055011/.minikube/addons for local assets ...
	I0729 19:44:31.419981 1120970 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-1055011/.minikube/files for local assets ...
	I0729 19:44:31.420093 1120970 filesync.go:149] local asset: /home/jenkins/minikube-integration/19312-1055011/.minikube/files/etc/ssl/certs/10622722.pem -> 10622722.pem in /etc/ssl/certs
	I0729 19:44:31.420214 1120970 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 19:44:31.431055 1120970 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/files/etc/ssl/certs/10622722.pem --> /etc/ssl/certs/10622722.pem (1708 bytes)
	I0729 19:44:31.454082 1120970 start.go:296] duration metric: took 133.793908ms for postStartSetup
	I0729 19:44:31.454120 1120970 fix.go:56] duration metric: took 20.034560069s for fixHost
	I0729 19:44:31.454147 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHHostname
	I0729 19:44:31.456757 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:31.457097 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:c7:d2", ip: ""} in network mk-old-k8s-version-021528: {Iface:virbr1 ExpiryTime:2024-07-29 20:44:23 +0000 UTC Type:0 Mac:52:54:00:12:c7:d2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:old-k8s-version-021528 Clientid:01:52:54:00:12:c7:d2}
	I0729 19:44:31.457130 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined IP address 192.168.39.65 and MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:31.457284 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHPort
	I0729 19:44:31.457528 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHKeyPath
	I0729 19:44:31.457737 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHKeyPath
	I0729 19:44:31.457853 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHUsername
	I0729 19:44:31.458027 1120970 main.go:141] libmachine: Using SSH client type: native
	I0729 19:44:31.458189 1120970 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.65 22 <nil> <nil>}
	I0729 19:44:31.458199 1120970 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0729 19:44:31.571713 1120970 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722282271.544930204
	
	I0729 19:44:31.571744 1120970 fix.go:216] guest clock: 1722282271.544930204
	I0729 19:44:31.571758 1120970 fix.go:229] Guest: 2024-07-29 19:44:31.544930204 +0000 UTC Remote: 2024-07-29 19:44:31.454125155 +0000 UTC m=+213.509073295 (delta=90.805049ms)
	I0729 19:44:31.571785 1120970 fix.go:200] guest clock delta is within tolerance: 90.805049ms
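The fix.go lines above run `date +%s.%N` on the guest and compare the result against the host clock, accepting small skew. A minimal sketch of that comparison, with a hypothetical one-second tolerance:

// clockdelta.go - sketch only: parse guest `date +%s.%N` output and check the skew.
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// guestTime converts "1722282271.544930204" into a time.Time.
func guestTime(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		nsec, err = strconv.ParseInt(parts[1], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	const tolerance = time.Second // hypothetical threshold, not minikube's exact value
	guest, err := guestTime("1722282271.544930204\n")
	if err != nil {
		panic(err)
	}
	delta := time.Since(guest)
	if delta < 0 {
		delta = -delta
	}
	fmt.Printf("guest clock delta: %v (within tolerance: %v)\n", delta, delta <= tolerance)
}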
	I0729 19:44:31.571791 1120970 start.go:83] releasing machines lock for "old-k8s-version-021528", held for 20.152267504s
	I0729 19:44:31.571817 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .DriverName
	I0729 19:44:31.572102 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetIP
	I0729 19:44:31.575385 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:31.575790 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:c7:d2", ip: ""} in network mk-old-k8s-version-021528: {Iface:virbr1 ExpiryTime:2024-07-29 20:44:23 +0000 UTC Type:0 Mac:52:54:00:12:c7:d2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:old-k8s-version-021528 Clientid:01:52:54:00:12:c7:d2}
	I0729 19:44:31.575815 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined IP address 192.168.39.65 and MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:31.576012 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .DriverName
	I0729 19:44:31.576508 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .DriverName
	I0729 19:44:31.576692 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .DriverName
	I0729 19:44:31.576786 1120970 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 19:44:31.576828 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHHostname
	I0729 19:44:31.576918 1120970 ssh_runner.go:195] Run: cat /version.json
	I0729 19:44:31.576940 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHHostname
	I0729 19:44:31.579737 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:31.579994 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:31.580091 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:c7:d2", ip: ""} in network mk-old-k8s-version-021528: {Iface:virbr1 ExpiryTime:2024-07-29 20:44:23 +0000 UTC Type:0 Mac:52:54:00:12:c7:d2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:old-k8s-version-021528 Clientid:01:52:54:00:12:c7:d2}
	I0729 19:44:31.580130 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined IP address 192.168.39.65 and MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:31.580379 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:c7:d2", ip: ""} in network mk-old-k8s-version-021528: {Iface:virbr1 ExpiryTime:2024-07-29 20:44:23 +0000 UTC Type:0 Mac:52:54:00:12:c7:d2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:old-k8s-version-021528 Clientid:01:52:54:00:12:c7:d2}
	I0729 19:44:31.580409 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined IP address 192.168.39.65 and MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:31.580418 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHPort
	I0729 19:44:31.580577 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHPort
	I0729 19:44:31.580661 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHKeyPath
	I0729 19:44:31.580838 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHKeyPath
	I0729 19:44:31.580879 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHUsername
	I0729 19:44:31.581025 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHUsername
	I0729 19:44:31.581021 1120970 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/old-k8s-version-021528/id_rsa Username:docker}
	I0729 19:44:31.581164 1120970 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/old-k8s-version-021528/id_rsa Username:docker}
	I0729 19:44:31.682902 1120970 ssh_runner.go:195] Run: systemctl --version
	I0729 19:44:31.688675 1120970 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 19:44:31.836374 1120970 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 19:44:31.844215 1120970 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 19:44:31.844275 1120970 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 19:44:31.864647 1120970 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 19:44:31.864671 1120970 start.go:495] detecting cgroup driver to use...
	I0729 19:44:31.864744 1120970 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 19:44:31.881197 1120970 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 19:44:31.895022 1120970 docker.go:217] disabling cri-docker service (if available) ...
	I0729 19:44:31.895085 1120970 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 19:44:31.908584 1120970 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 19:44:31.922321 1120970 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 19:44:32.039427 1120970 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 19:44:32.203236 1120970 docker.go:233] disabling docker service ...
	I0729 19:44:32.203335 1120970 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 19:44:32.217523 1120970 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 19:44:32.236065 1120970 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 19:44:32.355769 1120970 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 19:44:32.473160 1120970 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 19:44:32.486314 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 19:44:32.504270 1120970 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0729 19:44:32.504359 1120970 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:44:32.514928 1120970 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 19:44:32.514995 1120970 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:44:32.528822 1120970 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:44:32.543599 1120970 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:44:32.555853 1120970 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
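The sed invocations above pin CRI-O's pause image and cgroup driver by rewriting /etc/crio/crio.conf.d/02-crio.conf in place. A minimal Go sketch of the same rewrite using regexp replacement instead of sed:

// crioconf.go - sketch only: point CRI-O at the pause image and cgroupfs driver.
package main

import (
	"fmt"
	"os"
	"regexp"
)

func main() {
	const path = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(path)
	if err != nil {
		fmt.Println("read:", err)
		return
	}
	// Replace whole lines, mirroring sed's `s|^.*pause_image = .*$|...|`.
	out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.2"`))
	out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(out, []byte(`cgroup_manager = "cgroupfs"`))
	if err := os.WriteFile(path, out, 0644); err != nil {
		fmt.Println("write:", err)
	}
}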
	I0729 19:44:32.568184 1120970 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 19:44:32.577443 1120970 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 19:44:32.577580 1120970 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 19:44:32.590636 1120970 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
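When the bridge-nf-call-iptables sysctl is missing, as in the "couldn't verify netfilter" lines above, the remediation is to load br_netfilter and enable IPv4 forwarding. A minimal sketch of that check-then-fallback using the same paths; it has to run as root:

// brnetfilter.go - sketch only: ensure bridge netfilter and IPv4 forwarding are available.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	const sysctlPath = "/proc/sys/net/bridge/bridge-nf-call-iptables"
	if _, err := os.Stat(sysctlPath); err != nil {
		// Sysctl not present yet: load the kernel module, which creates it.
		fmt.Println("bridge netfilter sysctl not present, loading br_netfilter:", err)
		if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
			fmt.Printf("modprobe failed: %v: %s\n", err, out)
			return
		}
	}
	// Equivalent of: echo 1 > /proc/sys/net/ipv4/ip_forward
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0644); err != nil {
		fmt.Println("enabling ip_forward:", err)
	}
}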
	I0729 19:44:32.600995 1120970 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 19:44:32.739544 1120970 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 19:44:32.886433 1120970 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 19:44:32.886507 1120970 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 19:44:32.892072 1120970 start.go:563] Will wait 60s for crictl version
	I0729 19:44:32.892137 1120970 ssh_runner.go:195] Run: which crictl
	I0729 19:44:32.896003 1120970 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 19:44:32.939843 1120970 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 19:44:32.939934 1120970 ssh_runner.go:195] Run: crio --version
	I0729 19:44:32.968301 1120970 ssh_runner.go:195] Run: crio --version
	I0729 19:44:32.995612 1120970 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0729 19:44:31.595855 1119948 main.go:141] libmachine: (no-preload-843792) Calling .Start
	I0729 19:44:31.596024 1119948 main.go:141] libmachine: (no-preload-843792) Ensuring networks are active...
	I0729 19:44:31.596802 1119948 main.go:141] libmachine: (no-preload-843792) Ensuring network default is active
	I0729 19:44:31.597159 1119948 main.go:141] libmachine: (no-preload-843792) Ensuring network mk-no-preload-843792 is active
	I0729 19:44:31.597570 1119948 main.go:141] libmachine: (no-preload-843792) Getting domain xml...
	I0729 19:44:31.598244 1119948 main.go:141] libmachine: (no-preload-843792) Creating domain...
	I0729 19:44:32.903649 1119948 main.go:141] libmachine: (no-preload-843792) Waiting to get IP...
	I0729 19:44:32.904535 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:32.905024 1119948 main.go:141] libmachine: (no-preload-843792) DBG | unable to find current IP address of domain no-preload-843792 in network mk-no-preload-843792
	I0729 19:44:32.905113 1119948 main.go:141] libmachine: (no-preload-843792) DBG | I0729 19:44:32.904992 1122027 retry.go:31] will retry after 213.578895ms: waiting for machine to come up
	I0729 19:44:33.120474 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:33.120922 1119948 main.go:141] libmachine: (no-preload-843792) DBG | unable to find current IP address of domain no-preload-843792 in network mk-no-preload-843792
	I0729 19:44:33.121007 1119948 main.go:141] libmachine: (no-preload-843792) DBG | I0729 19:44:33.120907 1122027 retry.go:31] will retry after 265.999253ms: waiting for machine to come up
	I0729 19:44:33.388577 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:33.389007 1119948 main.go:141] libmachine: (no-preload-843792) DBG | unable to find current IP address of domain no-preload-843792 in network mk-no-preload-843792
	I0729 19:44:33.389026 1119948 main.go:141] libmachine: (no-preload-843792) DBG | I0729 19:44:33.388967 1122027 retry.go:31] will retry after 393.491378ms: waiting for machine to come up
	I0729 19:44:31.639857 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:44:34.139327 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:44:31.874661 1120587 pod_ready.go:102] pod "coredns-7db6d8ff4d-8mccr" in "kube-system" namespace has status "Ready":"False"
	I0729 19:44:33.875758 1120587 pod_ready.go:102] pod "coredns-7db6d8ff4d-8mccr" in "kube-system" namespace has status "Ready":"False"
	I0729 19:44:35.875952 1120587 pod_ready.go:102] pod "coredns-7db6d8ff4d-8mccr" in "kube-system" namespace has status "Ready":"False"
	I0729 19:44:32.996971 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetIP
	I0729 19:44:33.000232 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:33.000668 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:c7:d2", ip: ""} in network mk-old-k8s-version-021528: {Iface:virbr1 ExpiryTime:2024-07-29 20:44:23 +0000 UTC Type:0 Mac:52:54:00:12:c7:d2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:old-k8s-version-021528 Clientid:01:52:54:00:12:c7:d2}
	I0729 19:44:33.000694 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined IP address 192.168.39.65 and MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:33.000856 1120970 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0729 19:44:33.005258 1120970 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 19:44:33.018698 1120970 kubeadm.go:883] updating cluster {Name:old-k8s-version-021528 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.20.0 ClusterName:old-k8s-version-021528 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.65 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fal
se MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 19:44:33.018840 1120970 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0729 19:44:33.018934 1120970 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 19:44:33.089122 1120970 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0729 19:44:33.089197 1120970 ssh_runner.go:195] Run: which lz4
	I0729 19:44:33.093346 1120970 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0729 19:44:33.097766 1120970 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0729 19:44:33.097802 1120970 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0729 19:44:34.739542 1120970 crio.go:462] duration metric: took 1.646235601s to copy over tarball
	I0729 19:44:34.739647 1120970 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0729 19:44:37.734665 1120970 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.994963407s)
	I0729 19:44:37.734702 1120970 crio.go:469] duration metric: took 2.995126134s to extract the tarball
	I0729 19:44:37.734712 1120970 ssh_runner.go:146] rm: /preloaded.tar.lz4
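The preload path above is: copy preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 to the guest, unpack it into /var with tar piped through lz4, then delete the tarball. A minimal sketch of the extract-and-clean-up step, shelling out to the same tar command recorded in the log:

// preload.go - sketch only: extract the preload tarball and report the duration.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

func main() {
	start := time.Now()
	cmd := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		fmt.Println("extract failed:", err)
		return
	}
	fmt.Printf("duration metric: took %s to extract the tarball\n", time.Since(start))
	_ = os.Remove("/preloaded.tar.lz4") // rm: /preloaded.tar.lz4
}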
	I0729 19:44:37.781443 1120970 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 19:44:37.820392 1120970 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0729 19:44:37.820426 1120970 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0729 19:44:37.820531 1120970 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 19:44:37.820610 1120970 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0729 19:44:37.820708 1120970 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0729 19:44:37.820536 1120970 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 19:44:37.820560 1120970 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0729 19:44:37.820541 1120970 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0729 19:44:37.820573 1120970 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0729 19:44:37.820587 1120970 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0729 19:44:37.822301 1120970 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 19:44:37.822309 1120970 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0729 19:44:37.822313 1120970 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0729 19:44:37.822326 1120970 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0729 19:44:37.822397 1120970 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0729 19:44:37.822432 1120970 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0729 19:44:37.822438 1120970 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0729 19:44:37.822301 1120970 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 19:44:33.785078 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:33.785626 1119948 main.go:141] libmachine: (no-preload-843792) DBG | unable to find current IP address of domain no-preload-843792 in network mk-no-preload-843792
	I0729 19:44:33.785654 1119948 main.go:141] libmachine: (no-preload-843792) DBG | I0729 19:44:33.785530 1122027 retry.go:31] will retry after 411.274676ms: waiting for machine to come up
	I0729 19:44:34.198884 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:34.199471 1119948 main.go:141] libmachine: (no-preload-843792) DBG | unable to find current IP address of domain no-preload-843792 in network mk-no-preload-843792
	I0729 19:44:34.199512 1119948 main.go:141] libmachine: (no-preload-843792) DBG | I0729 19:44:34.199421 1122027 retry.go:31] will retry after 600.076128ms: waiting for machine to come up
	I0729 19:44:34.801378 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:34.801839 1119948 main.go:141] libmachine: (no-preload-843792) DBG | unable to find current IP address of domain no-preload-843792 in network mk-no-preload-843792
	I0729 19:44:34.801869 1119948 main.go:141] libmachine: (no-preload-843792) DBG | I0729 19:44:34.801792 1122027 retry.go:31] will retry after 948.350912ms: waiting for machine to come up
	I0729 19:44:35.751533 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:35.752085 1119948 main.go:141] libmachine: (no-preload-843792) DBG | unable to find current IP address of domain no-preload-843792 in network mk-no-preload-843792
	I0729 19:44:35.752110 1119948 main.go:141] libmachine: (no-preload-843792) DBG | I0729 19:44:35.752021 1122027 retry.go:31] will retry after 1.166250352s: waiting for machine to come up
	I0729 19:44:36.919771 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:36.920240 1119948 main.go:141] libmachine: (no-preload-843792) DBG | unable to find current IP address of domain no-preload-843792 in network mk-no-preload-843792
	I0729 19:44:36.920271 1119948 main.go:141] libmachine: (no-preload-843792) DBG | I0729 19:44:36.920184 1122027 retry.go:31] will retry after 1.061620812s: waiting for machine to come up
	I0729 19:44:37.983051 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:37.983501 1119948 main.go:141] libmachine: (no-preload-843792) DBG | unable to find current IP address of domain no-preload-843792 in network mk-no-preload-843792
	I0729 19:44:37.983528 1119948 main.go:141] libmachine: (no-preload-843792) DBG | I0729 19:44:37.983453 1122027 retry.go:31] will retry after 1.814167152s: waiting for machine to come up
	I0729 19:44:36.140059 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:44:38.642436 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:44:37.873768 1120587 pod_ready.go:92] pod "coredns-7db6d8ff4d-8mccr" in "kube-system" namespace has status "Ready":"True"
	I0729 19:44:37.873792 1120587 pod_ready.go:81] duration metric: took 12.006637701s for pod "coredns-7db6d8ff4d-8mccr" in "kube-system" namespace to be "Ready" ...
	I0729 19:44:37.873804 1120587 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-024652" in "kube-system" namespace to be "Ready" ...
	I0729 19:44:37.879758 1120587 pod_ready.go:92] pod "etcd-default-k8s-diff-port-024652" in "kube-system" namespace has status "Ready":"True"
	I0729 19:44:37.879787 1120587 pod_ready.go:81] duration metric: took 5.974837ms for pod "etcd-default-k8s-diff-port-024652" in "kube-system" namespace to be "Ready" ...
	I0729 19:44:37.879799 1120587 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-024652" in "kube-system" namespace to be "Ready" ...
	I0729 19:44:37.885027 1120587 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-024652" in "kube-system" namespace has status "Ready":"True"
	I0729 19:44:37.885051 1120587 pod_ready.go:81] duration metric: took 5.244169ms for pod "kube-apiserver-default-k8s-diff-port-024652" in "kube-system" namespace to be "Ready" ...
	I0729 19:44:37.885064 1120587 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-024652" in "kube-system" namespace to be "Ready" ...
	I0729 19:44:37.890208 1120587 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-024652" in "kube-system" namespace has status "Ready":"True"
	I0729 19:44:37.890224 1120587 pod_ready.go:81] duration metric: took 5.152571ms for pod "kube-controller-manager-default-k8s-diff-port-024652" in "kube-system" namespace to be "Ready" ...
	I0729 19:44:37.890232 1120587 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-l4g78" in "kube-system" namespace to be "Ready" ...
	I0729 19:44:37.894663 1120587 pod_ready.go:92] pod "kube-proxy-l4g78" in "kube-system" namespace has status "Ready":"True"
	I0729 19:44:37.894682 1120587 pod_ready.go:81] duration metric: took 4.444758ms for pod "kube-proxy-l4g78" in "kube-system" namespace to be "Ready" ...
	I0729 19:44:37.894691 1120587 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-024652" in "kube-system" namespace to be "Ready" ...
	I0729 19:44:38.272098 1120587 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-024652" in "kube-system" namespace has status "Ready":"True"
	I0729 19:44:38.272127 1120587 pod_ready.go:81] duration metric: took 377.428879ms for pod "kube-scheduler-default-k8s-diff-port-024652" in "kube-system" namespace to be "Ready" ...
	I0729 19:44:38.272141 1120587 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace to be "Ready" ...
	I0729 19:44:40.279623 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:44:37.982782 1120970 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 19:44:37.994565 1120970 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0729 19:44:37.997227 1120970 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0729 19:44:37.997536 1120970 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0729 19:44:38.011221 1120970 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0729 19:44:38.028869 1120970 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0729 19:44:38.031221 1120970 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0729 19:44:38.054537 1120970 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0729 19:44:38.054599 1120970 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 19:44:38.054660 1120970 ssh_runner.go:195] Run: which crictl
	I0729 19:44:38.104843 1120970 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 19:44:38.182008 1120970 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0729 19:44:38.182064 1120970 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0729 19:44:38.182063 1120970 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0729 19:44:38.182113 1120970 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0729 19:44:38.182118 1120970 ssh_runner.go:195] Run: which crictl
	I0729 19:44:38.182161 1120970 ssh_runner.go:195] Run: which crictl
	I0729 19:44:38.190604 1120970 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0729 19:44:38.190629 1120970 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0729 19:44:38.190652 1120970 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0729 19:44:38.190663 1120970 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0729 19:44:38.190703 1120970 ssh_runner.go:195] Run: which crictl
	I0729 19:44:38.190710 1120970 ssh_runner.go:195] Run: which crictl
	I0729 19:44:38.197293 1120970 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0729 19:44:38.197328 1120970 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0729 19:44:38.197364 1120970 ssh_runner.go:195] Run: which crictl
	I0729 19:44:38.226035 1120970 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 19:44:38.228343 1120970 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0729 19:44:38.228420 1120970 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0729 19:44:38.228467 1120970 ssh_runner.go:195] Run: which crictl
	I0729 19:44:38.335524 1120970 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0729 19:44:38.335607 1120970 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0729 19:44:38.335627 1120970 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0729 19:44:38.335696 1120970 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0729 19:44:38.335705 1120970 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0729 19:44:38.335790 1120970 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 19:44:38.335866 1120970 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0729 19:44:38.483885 1120970 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0729 19:44:38.483976 1120970 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0729 19:44:38.483926 1120970 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0729 19:44:38.484028 1120970 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0729 19:44:38.487155 1120970 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0729 19:44:38.487223 1120970 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 19:44:38.487241 1120970 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0729 19:44:38.635433 1120970 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0729 19:44:38.649661 1120970 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0729 19:44:38.649751 1120970 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0729 19:44:38.649769 1120970 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0729 19:44:38.649831 1120970 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0729 19:44:38.649921 1120970 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0729 19:44:38.649958 1120970 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0729 19:44:38.783607 1120970 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0729 19:44:38.783694 1120970 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0729 19:44:38.783605 1120970 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0729 19:44:38.791756 1120970 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0729 19:44:38.791863 1120970 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0729 19:44:38.791892 1120970 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0729 19:44:38.791939 1120970 cache_images.go:92] duration metric: took 971.499203ms to LoadCachedImages
	W0729 19:44:38.792037 1120970 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
	I0729 19:44:38.792054 1120970 kubeadm.go:934] updating node { 192.168.39.65 8443 v1.20.0 crio true true} ...
	I0729 19:44:38.792200 1120970 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-021528 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.65
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-021528 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 19:44:38.792313 1120970 ssh_runner.go:195] Run: crio config
	I0729 19:44:38.841459 1120970 cni.go:84] Creating CNI manager for ""
	I0729 19:44:38.841484 1120970 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 19:44:38.841496 1120970 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 19:44:38.841515 1120970 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.65 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-021528 NodeName:old-k8s-version-021528 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.65"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.65 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt Stati
cPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0729 19:44:38.841678 1120970 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.65
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-021528"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.65
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.65"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0729 19:44:38.841743 1120970 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0729 19:44:38.852338 1120970 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 19:44:38.852412 1120970 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 19:44:38.862150 1120970 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0729 19:44:38.881108 1120970 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 19:44:38.899034 1120970 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0729 19:44:38.917965 1120970 ssh_runner.go:195] Run: grep 192.168.39.65	control-plane.minikube.internal$ /etc/hosts
	I0729 19:44:38.922064 1120970 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.65	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 19:44:38.935009 1120970 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 19:44:39.058886 1120970 ssh_runner.go:195] Run: sudo systemctl start kubelet
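
Note: the hosts-file step above is an idempotent replace: strip any existing control-plane.minikube.internal line, then append the fresh IP mapping, and finally restart the kubelet. A minimal Go sketch of the same idea follows; it is illustrative only (the helper name and the scratch path are made up, the IP and hostname are taken from the log), not minikube's actual implementation.

package main

import (
	"fmt"
	"os"
	"strings"
)

// upsertHostsEntry rewrites hostsPath so that exactly one line maps ip to host,
// mirroring the grep -v / echo / cp pipeline shown in the log above.
func upsertHostsEntry(hostsPath, ip, host string) error {
	data, err := os.ReadFile(hostsPath)
	if err != nil && !os.IsNotExist(err) {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// Drop any previous mapping for this hostname.
		if strings.HasSuffix(strings.TrimRight(line, " \t"), "\t"+host) {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, host))
	return os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	// The log targets /etc/hosts on the guest; a scratch file keeps the sketch harmless.
	if err := upsertHostsEntry("/tmp/hosts.test", "192.168.39.65", "control-plane.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
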
	I0729 19:44:39.078830 1120970 certs.go:68] Setting up /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/old-k8s-version-021528 for IP: 192.168.39.65
	I0729 19:44:39.078902 1120970 certs.go:194] generating shared ca certs ...
	I0729 19:44:39.078943 1120970 certs.go:226] acquiring lock for ca certs: {Name:mkd1f0b3d7e82ac23e713dd6b75409e103935b02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 19:44:39.079139 1120970 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.key
	I0729 19:44:39.079228 1120970 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/proxy-client-ca.key
	I0729 19:44:39.079243 1120970 certs.go:256] generating profile certs ...
	I0729 19:44:39.079418 1120970 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/old-k8s-version-021528/client.key
	I0729 19:44:39.079517 1120970 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/old-k8s-version-021528/apiserver.key.1bfec4c5
	I0729 19:44:39.079603 1120970 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/old-k8s-version-021528/proxy-client.key
	I0729 19:44:39.079814 1120970 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/1062272.pem (1338 bytes)
	W0729 19:44:39.079899 1120970 certs.go:480] ignoring /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/1062272_empty.pem, impossibly tiny 0 bytes
	I0729 19:44:39.079924 1120970 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 19:44:39.079974 1120970 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem (1082 bytes)
	I0729 19:44:39.080079 1120970 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/cert.pem (1123 bytes)
	I0729 19:44:39.080137 1120970 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/key.pem (1679 bytes)
	I0729 19:44:39.080230 1120970 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/files/etc/ssl/certs/10622722.pem (1708 bytes)
	I0729 19:44:39.081417 1120970 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 19:44:39.117623 1120970 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0729 19:44:39.163823 1120970 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 19:44:39.198978 1120970 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0729 19:44:39.229583 1120970 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/old-k8s-version-021528/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0729 19:44:39.270285 1120970 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/old-k8s-version-021528/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0729 19:44:39.320906 1120970 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/old-k8s-version-021528/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 19:44:39.358597 1120970 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/old-k8s-version-021528/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0729 19:44:39.384152 1120970 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/1062272.pem --> /usr/share/ca-certificates/1062272.pem (1338 bytes)
	I0729 19:44:39.409176 1120970 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/files/etc/ssl/certs/10622722.pem --> /usr/share/ca-certificates/10622722.pem (1708 bytes)
	I0729 19:44:39.434095 1120970 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 19:44:39.473901 1120970 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 19:44:39.493117 1120970 ssh_runner.go:195] Run: openssl version
	I0729 19:44:39.499390 1120970 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1062272.pem && ln -fs /usr/share/ca-certificates/1062272.pem /etc/ssl/certs/1062272.pem"
	I0729 19:44:39.513884 1120970 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1062272.pem
	I0729 19:44:39.519775 1120970 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 18:30 /usr/share/ca-certificates/1062272.pem
	I0729 19:44:39.519841 1120970 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1062272.pem
	I0729 19:44:39.526146 1120970 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1062272.pem /etc/ssl/certs/51391683.0"
	I0729 19:44:39.538303 1120970 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10622722.pem && ln -fs /usr/share/ca-certificates/10622722.pem /etc/ssl/certs/10622722.pem"
	I0729 19:44:39.549569 1120970 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10622722.pem
	I0729 19:44:39.554063 1120970 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 18:30 /usr/share/ca-certificates/10622722.pem
	I0729 19:44:39.554125 1120970 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10622722.pem
	I0729 19:44:39.560167 1120970 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10622722.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 19:44:39.572332 1120970 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 19:44:39.583635 1120970 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 19:44:39.588045 1120970 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 18:17 /usr/share/ca-certificates/minikubeCA.pem
	I0729 19:44:39.588126 1120970 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 19:44:39.594105 1120970 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
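
Note: each certificate above is installed twice: copied under /usr/share/ca-certificates and then symlinked into /etc/ssl/certs under its OpenSSL subject hash (the `openssl x509 -hash -noout` plus `ln -fs` pair). A small sketch of that pattern, assuming openssl is on PATH; the function name is hypothetical.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// linkCACert installs certPath into /etc/ssl/certs under its OpenSSL subject
// hash, which is what the hash + symlink steps in the log accomplish.
func linkCACert(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	os.Remove(link) // ln -fs semantics: replace any existing link
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
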
	I0729 19:44:39.605557 1120970 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 19:44:39.610321 1120970 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 19:44:39.616786 1120970 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 19:44:39.622941 1120970 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 19:44:39.629109 1120970 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 19:44:39.636558 1120970 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 19:44:39.643073 1120970 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
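
Note: each `openssl x509 -checkend 86400` run above asks whether the certificate will still be valid 24 hours from now. The equivalent check in Go, as a standalone sketch (the path is one from the log; the function name is made up):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// validFor reports whether the PEM certificate at path is still valid after
// the given duration, i.e. the Go counterpart of `openssl x509 -checkend`.
func validFor(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).Before(cert.NotAfter), nil
}

func main() {
	ok, err := validFor("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	fmt.Println(ok, err)
}
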
	I0729 19:44:39.648878 1120970 kubeadm.go:392] StartCluster: {Name:old-k8s-version-021528 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-021528 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.65 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 19:44:39.648982 1120970 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 19:44:39.649027 1120970 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 19:44:39.690983 1120970 cri.go:89] found id: ""
	I0729 19:44:39.691075 1120970 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 19:44:39.701985 1120970 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0729 19:44:39.702004 1120970 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0729 19:44:39.702052 1120970 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0729 19:44:39.712284 1120970 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0729 19:44:39.713416 1120970 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-021528" does not appear in /home/jenkins/minikube-integration/19312-1055011/kubeconfig
	I0729 19:44:39.714247 1120970 kubeconfig.go:62] /home/jenkins/minikube-integration/19312-1055011/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-021528" cluster setting kubeconfig missing "old-k8s-version-021528" context setting]
	I0729 19:44:39.715298 1120970 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-1055011/kubeconfig: {Name:mkf834b33d9b214f3561db5b8f8958d26700afbb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 19:44:39.762122 1120970 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0729 19:44:39.773851 1120970 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.65
	I0729 19:44:39.773894 1120970 kubeadm.go:1160] stopping kube-system containers ...
	I0729 19:44:39.773910 1120970 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0729 19:44:39.773968 1120970 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 19:44:39.820190 1120970 cri.go:89] found id: ""
	I0729 19:44:39.820273 1120970 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0729 19:44:39.838497 1120970 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 19:44:39.849060 1120970 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 19:44:39.849087 1120970 kubeadm.go:157] found existing configuration files:
	
	I0729 19:44:39.849142 1120970 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 19:44:39.858834 1120970 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 19:44:39.858920 1120970 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 19:44:39.869962 1120970 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 19:44:39.879690 1120970 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 19:44:39.879754 1120970 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 19:44:39.889334 1120970 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 19:44:39.900671 1120970 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 19:44:39.900789 1120970 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 19:44:39.910365 1120970 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 19:44:39.920056 1120970 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 19:44:39.920119 1120970 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
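
Note: the sequence above removes each kubeconfig that does not reference the expected control-plane endpoint, so kubeadm can regenerate it. A compact sketch of that check-and-remove pass; the file list and endpoint string are copied from the log, the loop itself is an illustration rather than minikube's code.

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	endpoint := "https://control-plane.minikube.internal:8443"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		// Missing file or wrong endpoint: remove it so kubeadm regenerates it.
		if err != nil || !strings.Contains(string(data), endpoint) {
			os.Remove(f)
			fmt.Println("removed stale config:", f)
		}
	}
}
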
	I0729 19:44:39.929792 1120970 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 19:44:39.939719 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 19:44:40.078003 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 19:44:40.827477 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0729 19:44:41.064614 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 19:44:41.168296 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0729 19:44:41.280875 1120970 api_server.go:52] waiting for apiserver process to appear ...
	I0729 19:44:41.280964 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:41.781878 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:42.281683 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:42.781105 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:39.799833 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:39.800226 1119948 main.go:141] libmachine: (no-preload-843792) DBG | unable to find current IP address of domain no-preload-843792 in network mk-no-preload-843792
	I0729 19:44:39.800256 1119948 main.go:141] libmachine: (no-preload-843792) DBG | I0729 19:44:39.800187 1122027 retry.go:31] will retry after 1.661406441s: waiting for machine to come up
	I0729 19:44:41.464164 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:41.464664 1119948 main.go:141] libmachine: (no-preload-843792) DBG | unable to find current IP address of domain no-preload-843792 in network mk-no-preload-843792
	I0729 19:44:41.464704 1119948 main.go:141] libmachine: (no-preload-843792) DBG | I0729 19:44:41.464586 1122027 retry.go:31] will retry after 2.292148862s: waiting for machine to come up
	I0729 19:44:41.139627 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:44:43.640525 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:44:42.780035 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:44:45.278957 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:44:43.281753 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:43.781580 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:44.281856 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:44.781202 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:45.281035 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:45.781637 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:46.281414 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:46.781327 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:47.281665 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:47.782033 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:43.759566 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:43.760021 1119948 main.go:141] libmachine: (no-preload-843792) DBG | unable to find current IP address of domain no-preload-843792 in network mk-no-preload-843792
	I0729 19:44:43.760080 1119948 main.go:141] libmachine: (no-preload-843792) DBG | I0729 19:44:43.759994 1122027 retry.go:31] will retry after 3.005985721s: waiting for machine to come up
	I0729 19:44:46.767337 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:46.767822 1119948 main.go:141] libmachine: (no-preload-843792) DBG | unable to find current IP address of domain no-preload-843792 in network mk-no-preload-843792
	I0729 19:44:46.767852 1119948 main.go:141] libmachine: (no-preload-843792) DBG | I0729 19:44:46.767767 1122027 retry.go:31] will retry after 3.516453969s: waiting for machine to come up
	I0729 19:44:46.138988 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:44:48.637828 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:44:47.778809 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:44:50.278817 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:44:48.281371 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:48.781991 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:49.281260 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:49.782025 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:50.281498 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:50.781863 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:51.281653 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:51.781015 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:52.281638 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:52.782023 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
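
Note: the repeated `sudo pgrep -xnf kube-apiserver.*minikube.*` runs above are a poll loop: roughly every 500ms minikube checks whether the apiserver process has appeared yet. A minimal sketch of the same polling pattern; the 2-minute timeout below is an assumption for illustration, not the value minikube uses.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForProcess polls `pgrep -xnf pattern` until it succeeds or timeout expires.
func waitForProcess(pattern string, interval, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// pgrep exits 0 when at least one process matches the pattern.
		if err := exec.Command("pgrep", "-xnf", pattern).Run(); err == nil {
			return nil
		}
		time.Sleep(interval)
	}
	return fmt.Errorf("no process matching %q after %s", pattern, timeout)
}

func main() {
	err := waitForProcess("kube-apiserver.*minikube.*", 500*time.Millisecond, 2*time.Minute)
	fmt.Println(err)
}
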
	I0729 19:44:50.287884 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:50.288381 1119948 main.go:141] libmachine: (no-preload-843792) Found IP for machine: 192.168.50.248
	I0729 19:44:50.288402 1119948 main.go:141] libmachine: (no-preload-843792) Reserving static IP address...
	I0729 19:44:50.288417 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has current primary IP address 192.168.50.248 and MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:50.288858 1119948 main.go:141] libmachine: (no-preload-843792) DBG | found host DHCP lease matching {name: "no-preload-843792", mac: "52:54:00:ae:0e:8c", ip: "192.168.50.248"} in network mk-no-preload-843792: {Iface:virbr2 ExpiryTime:2024-07-29 20:44:43 +0000 UTC Type:0 Mac:52:54:00:ae:0e:8c Iaid: IPaddr:192.168.50.248 Prefix:24 Hostname:no-preload-843792 Clientid:01:52:54:00:ae:0e:8c}
	I0729 19:44:50.288891 1119948 main.go:141] libmachine: (no-preload-843792) DBG | skip adding static IP to network mk-no-preload-843792 - found existing host DHCP lease matching {name: "no-preload-843792", mac: "52:54:00:ae:0e:8c", ip: "192.168.50.248"}
	I0729 19:44:50.288905 1119948 main.go:141] libmachine: (no-preload-843792) Reserved static IP address: 192.168.50.248
	I0729 19:44:50.288921 1119948 main.go:141] libmachine: (no-preload-843792) Waiting for SSH to be available...
	I0729 19:44:50.288937 1119948 main.go:141] libmachine: (no-preload-843792) DBG | Getting to WaitForSSH function...
	I0729 19:44:50.291447 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:50.291802 1119948 main.go:141] libmachine: (no-preload-843792) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:0e:8c", ip: ""} in network mk-no-preload-843792: {Iface:virbr2 ExpiryTime:2024-07-29 20:44:43 +0000 UTC Type:0 Mac:52:54:00:ae:0e:8c Iaid: IPaddr:192.168.50.248 Prefix:24 Hostname:no-preload-843792 Clientid:01:52:54:00:ae:0e:8c}
	I0729 19:44:50.291831 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined IP address 192.168.50.248 and MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:50.291992 1119948 main.go:141] libmachine: (no-preload-843792) DBG | Using SSH client type: external
	I0729 19:44:50.292026 1119948 main.go:141] libmachine: (no-preload-843792) DBG | Using SSH private key: /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/no-preload-843792/id_rsa (-rw-------)
	I0729 19:44:50.292056 1119948 main.go:141] libmachine: (no-preload-843792) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.248 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/no-preload-843792/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 19:44:50.292075 1119948 main.go:141] libmachine: (no-preload-843792) DBG | About to run SSH command:
	I0729 19:44:50.292089 1119948 main.go:141] libmachine: (no-preload-843792) DBG | exit 0
	I0729 19:44:50.419030 1119948 main.go:141] libmachine: (no-preload-843792) DBG | SSH cmd err, output: <nil>: 
	I0729 19:44:50.419420 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetConfigRaw
	I0729 19:44:50.420149 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetIP
	I0729 19:44:50.422461 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:50.422860 1119948 main.go:141] libmachine: (no-preload-843792) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:0e:8c", ip: ""} in network mk-no-preload-843792: {Iface:virbr2 ExpiryTime:2024-07-29 20:44:43 +0000 UTC Type:0 Mac:52:54:00:ae:0e:8c Iaid: IPaddr:192.168.50.248 Prefix:24 Hostname:no-preload-843792 Clientid:01:52:54:00:ae:0e:8c}
	I0729 19:44:50.422897 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined IP address 192.168.50.248 and MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:50.423068 1119948 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/no-preload-843792/config.json ...
	I0729 19:44:50.423254 1119948 machine.go:94] provisionDockerMachine start ...
	I0729 19:44:50.423273 1119948 main.go:141] libmachine: (no-preload-843792) Calling .DriverName
	I0729 19:44:50.423513 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHHostname
	I0729 19:44:50.425759 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:50.425996 1119948 main.go:141] libmachine: (no-preload-843792) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:0e:8c", ip: ""} in network mk-no-preload-843792: {Iface:virbr2 ExpiryTime:2024-07-29 20:44:43 +0000 UTC Type:0 Mac:52:54:00:ae:0e:8c Iaid: IPaddr:192.168.50.248 Prefix:24 Hostname:no-preload-843792 Clientid:01:52:54:00:ae:0e:8c}
	I0729 19:44:50.426033 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined IP address 192.168.50.248 and MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:50.426136 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHPort
	I0729 19:44:50.426323 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHKeyPath
	I0729 19:44:50.426493 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHKeyPath
	I0729 19:44:50.426682 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHUsername
	I0729 19:44:50.426889 1119948 main.go:141] libmachine: Using SSH client type: native
	I0729 19:44:50.427107 1119948 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.248 22 <nil> <nil>}
	I0729 19:44:50.427119 1119948 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 19:44:50.539215 1119948 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0729 19:44:50.539250 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetMachineName
	I0729 19:44:50.539523 1119948 buildroot.go:166] provisioning hostname "no-preload-843792"
	I0729 19:44:50.539553 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetMachineName
	I0729 19:44:50.539755 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHHostname
	I0729 19:44:50.542621 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:50.543007 1119948 main.go:141] libmachine: (no-preload-843792) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:0e:8c", ip: ""} in network mk-no-preload-843792: {Iface:virbr2 ExpiryTime:2024-07-29 20:44:43 +0000 UTC Type:0 Mac:52:54:00:ae:0e:8c Iaid: IPaddr:192.168.50.248 Prefix:24 Hostname:no-preload-843792 Clientid:01:52:54:00:ae:0e:8c}
	I0729 19:44:50.543036 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined IP address 192.168.50.248 and MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:50.543188 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHPort
	I0729 19:44:50.543365 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHKeyPath
	I0729 19:44:50.543574 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHKeyPath
	I0729 19:44:50.543751 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHUsername
	I0729 19:44:50.543900 1119948 main.go:141] libmachine: Using SSH client type: native
	I0729 19:44:50.544060 1119948 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.248 22 <nil> <nil>}
	I0729 19:44:50.544072 1119948 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-843792 && echo "no-preload-843792" | sudo tee /etc/hostname
	I0729 19:44:50.669012 1119948 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-843792
	
	I0729 19:44:50.669054 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHHostname
	I0729 19:44:50.671768 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:50.672075 1119948 main.go:141] libmachine: (no-preload-843792) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:0e:8c", ip: ""} in network mk-no-preload-843792: {Iface:virbr2 ExpiryTime:2024-07-29 20:44:43 +0000 UTC Type:0 Mac:52:54:00:ae:0e:8c Iaid: IPaddr:192.168.50.248 Prefix:24 Hostname:no-preload-843792 Clientid:01:52:54:00:ae:0e:8c}
	I0729 19:44:50.672105 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined IP address 192.168.50.248 and MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:50.672278 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHPort
	I0729 19:44:50.672481 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHKeyPath
	I0729 19:44:50.672647 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHKeyPath
	I0729 19:44:50.672734 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHUsername
	I0729 19:44:50.672904 1119948 main.go:141] libmachine: Using SSH client type: native
	I0729 19:44:50.673077 1119948 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.248 22 <nil> <nil>}
	I0729 19:44:50.673091 1119948 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-843792' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-843792/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-843792' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 19:44:50.796568 1119948 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 19:44:50.796605 1119948 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19312-1055011/.minikube CaCertPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19312-1055011/.minikube}
	I0729 19:44:50.796625 1119948 buildroot.go:174] setting up certificates
	I0729 19:44:50.796639 1119948 provision.go:84] configureAuth start
	I0729 19:44:50.796648 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetMachineName
	I0729 19:44:50.796934 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetIP
	I0729 19:44:50.799731 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:50.800044 1119948 main.go:141] libmachine: (no-preload-843792) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:0e:8c", ip: ""} in network mk-no-preload-843792: {Iface:virbr2 ExpiryTime:2024-07-29 20:44:43 +0000 UTC Type:0 Mac:52:54:00:ae:0e:8c Iaid: IPaddr:192.168.50.248 Prefix:24 Hostname:no-preload-843792 Clientid:01:52:54:00:ae:0e:8c}
	I0729 19:44:50.800071 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined IP address 192.168.50.248 and MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:50.800263 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHHostname
	I0729 19:44:50.802572 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:50.802922 1119948 main.go:141] libmachine: (no-preload-843792) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:0e:8c", ip: ""} in network mk-no-preload-843792: {Iface:virbr2 ExpiryTime:2024-07-29 20:44:43 +0000 UTC Type:0 Mac:52:54:00:ae:0e:8c Iaid: IPaddr:192.168.50.248 Prefix:24 Hostname:no-preload-843792 Clientid:01:52:54:00:ae:0e:8c}
	I0729 19:44:50.802955 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined IP address 192.168.50.248 and MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:50.803085 1119948 provision.go:143] copyHostCerts
	I0729 19:44:50.803156 1119948 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.pem, removing ...
	I0729 19:44:50.803170 1119948 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.pem
	I0729 19:44:50.803225 1119948 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.pem (1082 bytes)
	I0729 19:44:50.803347 1119948 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-1055011/.minikube/cert.pem, removing ...
	I0729 19:44:50.803355 1119948 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-1055011/.minikube/cert.pem
	I0729 19:44:50.803379 1119948 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19312-1055011/.minikube/cert.pem (1123 bytes)
	I0729 19:44:50.803438 1119948 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-1055011/.minikube/key.pem, removing ...
	I0729 19:44:50.803445 1119948 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-1055011/.minikube/key.pem
	I0729 19:44:50.803461 1119948 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19312-1055011/.minikube/key.pem (1679 bytes)
	I0729 19:44:50.803524 1119948 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca-key.pem org=jenkins.no-preload-843792 san=[127.0.0.1 192.168.50.248 localhost minikube no-preload-843792]
	I0729 19:44:51.214202 1119948 provision.go:177] copyRemoteCerts
	I0729 19:44:51.214287 1119948 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 19:44:51.214320 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHHostname
	I0729 19:44:51.216944 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:51.217214 1119948 main.go:141] libmachine: (no-preload-843792) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:0e:8c", ip: ""} in network mk-no-preload-843792: {Iface:virbr2 ExpiryTime:2024-07-29 20:44:43 +0000 UTC Type:0 Mac:52:54:00:ae:0e:8c Iaid: IPaddr:192.168.50.248 Prefix:24 Hostname:no-preload-843792 Clientid:01:52:54:00:ae:0e:8c}
	I0729 19:44:51.217237 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined IP address 192.168.50.248 and MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:51.217360 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHPort
	I0729 19:44:51.217563 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHKeyPath
	I0729 19:44:51.217732 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHUsername
	I0729 19:44:51.217891 1119948 sshutil.go:53] new ssh client: &{IP:192.168.50.248 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/no-preload-843792/id_rsa Username:docker}
	I0729 19:44:51.301968 1119948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0729 19:44:51.328160 1119948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0729 19:44:51.353256 1119948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0729 19:44:51.378426 1119948 provision.go:87] duration metric: took 581.77356ms to configureAuth
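
Note: provision.go:117 above generates a machine server certificate whose SAN list covers the loopback address, the machine IP, and the machine's hostnames. A self-contained crypto/x509 sketch of building such a SAN list follows; it is self-signed purely for brevity (the real flow signs with ca.pem/ca-key.pem), and the names, IPs, and expiry value are copied from the log.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-843792"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the cluster config above
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs mirroring the san=[...] list in the provisioning log.
		DNSNames:    []string{"localhost", "minikube", "no-preload-843792"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.248")},
	}
	// Self-signed for the sketch; minikube signs server.pem with its CA instead.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
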
	I0729 19:44:51.378457 1119948 buildroot.go:189] setting minikube options for container-runtime
	I0729 19:44:51.378660 1119948 config.go:182] Loaded profile config "no-preload-843792": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0729 19:44:51.378746 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHHostname
	I0729 19:44:51.381760 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:51.382286 1119948 main.go:141] libmachine: (no-preload-843792) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:0e:8c", ip: ""} in network mk-no-preload-843792: {Iface:virbr2 ExpiryTime:2024-07-29 20:44:43 +0000 UTC Type:0 Mac:52:54:00:ae:0e:8c Iaid: IPaddr:192.168.50.248 Prefix:24 Hostname:no-preload-843792 Clientid:01:52:54:00:ae:0e:8c}
	I0729 19:44:51.382308 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined IP address 192.168.50.248 and MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:51.382555 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHPort
	I0729 19:44:51.382787 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHKeyPath
	I0729 19:44:51.383071 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHKeyPath
	I0729 19:44:51.383230 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHUsername
	I0729 19:44:51.383438 1119948 main.go:141] libmachine: Using SSH client type: native
	I0729 19:44:51.383649 1119948 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.248 22 <nil> <nil>}
	I0729 19:44:51.383673 1119948 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 19:44:51.650635 1119948 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 19:44:51.650669 1119948 machine.go:97] duration metric: took 1.227400866s to provisionDockerMachine
	I0729 19:44:51.650686 1119948 start.go:293] postStartSetup for "no-preload-843792" (driver="kvm2")
	I0729 19:44:51.650704 1119948 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 19:44:51.650733 1119948 main.go:141] libmachine: (no-preload-843792) Calling .DriverName
	I0729 19:44:51.651068 1119948 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 19:44:51.651098 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHHostname
	I0729 19:44:51.653656 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:51.654044 1119948 main.go:141] libmachine: (no-preload-843792) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:0e:8c", ip: ""} in network mk-no-preload-843792: {Iface:virbr2 ExpiryTime:2024-07-29 20:44:43 +0000 UTC Type:0 Mac:52:54:00:ae:0e:8c Iaid: IPaddr:192.168.50.248 Prefix:24 Hostname:no-preload-843792 Clientid:01:52:54:00:ae:0e:8c}
	I0729 19:44:51.654075 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined IP address 192.168.50.248 and MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:51.654215 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHPort
	I0729 19:44:51.654414 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHKeyPath
	I0729 19:44:51.654603 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHUsername
	I0729 19:44:51.654783 1119948 sshutil.go:53] new ssh client: &{IP:192.168.50.248 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/no-preload-843792/id_rsa Username:docker}
	I0729 19:44:51.738250 1119948 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 19:44:51.742463 1119948 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 19:44:51.742494 1119948 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-1055011/.minikube/addons for local assets ...
	I0729 19:44:51.742575 1119948 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-1055011/.minikube/files for local assets ...
	I0729 19:44:51.742670 1119948 filesync.go:149] local asset: /home/jenkins/minikube-integration/19312-1055011/.minikube/files/etc/ssl/certs/10622722.pem -> 10622722.pem in /etc/ssl/certs
	I0729 19:44:51.742762 1119948 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 19:44:51.752428 1119948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/files/etc/ssl/certs/10622722.pem --> /etc/ssl/certs/10622722.pem (1708 bytes)
	I0729 19:44:51.778026 1119948 start.go:296] duration metric: took 127.323599ms for postStartSetup
	I0729 19:44:51.778070 1119948 fix.go:56] duration metric: took 20.206081869s for fixHost
	I0729 19:44:51.778101 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHHostname
	I0729 19:44:51.780831 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:51.781222 1119948 main.go:141] libmachine: (no-preload-843792) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:0e:8c", ip: ""} in network mk-no-preload-843792: {Iface:virbr2 ExpiryTime:2024-07-29 20:44:43 +0000 UTC Type:0 Mac:52:54:00:ae:0e:8c Iaid: IPaddr:192.168.50.248 Prefix:24 Hostname:no-preload-843792 Clientid:01:52:54:00:ae:0e:8c}
	I0729 19:44:51.781264 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined IP address 192.168.50.248 and MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:51.781433 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHPort
	I0729 19:44:51.781634 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHKeyPath
	I0729 19:44:51.781807 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHKeyPath
	I0729 19:44:51.781978 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHUsername
	I0729 19:44:51.782165 1119948 main.go:141] libmachine: Using SSH client type: native
	I0729 19:44:51.782343 1119948 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.248 22 <nil> <nil>}
	I0729 19:44:51.782354 1119948 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0729 19:44:51.891547 1119948 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722282291.842464810
	
	I0729 19:44:51.891577 1119948 fix.go:216] guest clock: 1722282291.842464810
	I0729 19:44:51.891585 1119948 fix.go:229] Guest: 2024-07-29 19:44:51.84246481 +0000 UTC Remote: 2024-07-29 19:44:51.778076789 +0000 UTC m=+358.114888914 (delta=64.388021ms)
	I0729 19:44:51.891637 1119948 fix.go:200] guest clock delta is within tolerance: 64.388021ms
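
Note: the `date +%s.%N` output above is the guest's wall-clock time; fix.go compares it against the host clock and accepts the drift if it is inside a tolerance. A tiny sketch of that comparison; the 1-second tolerance is an assumption for illustration only.

package main

import (
	"fmt"
	"time"
)

// clockDeltaOK reports the absolute guest/host clock difference and whether it is within tol.
func clockDeltaOK(guest, host time.Time, tol time.Duration) (time.Duration, bool) {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tol
}

func main() {
	guest := time.Unix(1722282291, 842464810) // parsed from "1722282291.842464810" in the log
	host := time.Now()
	delta, ok := clockDeltaOK(guest, host, time.Second)
	fmt.Println(delta, ok)
}
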
	I0729 19:44:51.891648 1119948 start.go:83] releasing machines lock for "no-preload-843792", held for 20.319710656s
	I0729 19:44:51.891677 1119948 main.go:141] libmachine: (no-preload-843792) Calling .DriverName
	I0729 19:44:51.891952 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetIP
	I0729 19:44:51.894800 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:51.895181 1119948 main.go:141] libmachine: (no-preload-843792) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:0e:8c", ip: ""} in network mk-no-preload-843792: {Iface:virbr2 ExpiryTime:2024-07-29 20:44:43 +0000 UTC Type:0 Mac:52:54:00:ae:0e:8c Iaid: IPaddr:192.168.50.248 Prefix:24 Hostname:no-preload-843792 Clientid:01:52:54:00:ae:0e:8c}
	I0729 19:44:51.895216 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined IP address 192.168.50.248 and MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:51.895390 1119948 main.go:141] libmachine: (no-preload-843792) Calling .DriverName
	I0729 19:44:51.895840 1119948 main.go:141] libmachine: (no-preload-843792) Calling .DriverName
	I0729 19:44:51.896042 1119948 main.go:141] libmachine: (no-preload-843792) Calling .DriverName
	I0729 19:44:51.896139 1119948 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 19:44:51.896192 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHHostname
	I0729 19:44:51.896258 1119948 ssh_runner.go:195] Run: cat /version.json
	I0729 19:44:51.896287 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHHostname
	I0729 19:44:51.898856 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:51.899180 1119948 main.go:141] libmachine: (no-preload-843792) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:0e:8c", ip: ""} in network mk-no-preload-843792: {Iface:virbr2 ExpiryTime:2024-07-29 20:44:43 +0000 UTC Type:0 Mac:52:54:00:ae:0e:8c Iaid: IPaddr:192.168.50.248 Prefix:24 Hostname:no-preload-843792 Clientid:01:52:54:00:ae:0e:8c}
	I0729 19:44:51.899208 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined IP address 192.168.50.248 and MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:51.899261 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:51.899313 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHPort
	I0729 19:44:51.899474 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHKeyPath
	I0729 19:44:51.899638 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHUsername
	I0729 19:44:51.899716 1119948 main.go:141] libmachine: (no-preload-843792) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:0e:8c", ip: ""} in network mk-no-preload-843792: {Iface:virbr2 ExpiryTime:2024-07-29 20:44:43 +0000 UTC Type:0 Mac:52:54:00:ae:0e:8c Iaid: IPaddr:192.168.50.248 Prefix:24 Hostname:no-preload-843792 Clientid:01:52:54:00:ae:0e:8c}
	I0729 19:44:51.899742 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined IP address 192.168.50.248 and MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:51.899815 1119948 sshutil.go:53] new ssh client: &{IP:192.168.50.248 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/no-preload-843792/id_rsa Username:docker}
	I0729 19:44:51.899865 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHPort
	I0729 19:44:51.900009 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHKeyPath
	I0729 19:44:51.900149 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHUsername
	I0729 19:44:51.900317 1119948 sshutil.go:53] new ssh client: &{IP:192.168.50.248 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/no-preload-843792/id_rsa Username:docker}
	I0729 19:44:51.979915 1119948 ssh_runner.go:195] Run: systemctl --version
	I0729 19:44:52.002705 1119948 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 19:44:52.146695 1119948 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 19:44:52.152507 1119948 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 19:44:52.152566 1119948 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 19:44:52.169058 1119948 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 19:44:52.169085 1119948 start.go:495] detecting cgroup driver to use...
	I0729 19:44:52.169148 1119948 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 19:44:52.185675 1119948 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 19:44:52.204654 1119948 docker.go:217] disabling cri-docker service (if available) ...
	I0729 19:44:52.204719 1119948 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 19:44:52.221485 1119948 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 19:44:52.235452 1119948 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 19:44:52.353806 1119948 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 19:44:52.504237 1119948 docker.go:233] disabling docker service ...
	I0729 19:44:52.504314 1119948 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 19:44:52.520145 1119948 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 19:44:52.533007 1119948 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 19:44:52.662886 1119948 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 19:44:52.795773 1119948 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 19:44:52.810135 1119948 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 19:44:52.829290 1119948 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0729 19:44:52.829356 1119948 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:44:52.840657 1119948 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 19:44:52.840718 1119948 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:44:52.851174 1119948 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:44:52.861565 1119948 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:44:52.871901 1119948 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 19:44:52.882929 1119948 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:44:52.893517 1119948 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:44:52.910321 1119948 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
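The sed edits above rewrite /etc/crio/crio.conf.d/02-crio.conf so cri-o picks up the registry.k8s.io/pause:3.10 pause image, the cgroupfs cgroup manager, a pod-scoped conmon cgroup, and unprivileged low ports. A quick sanity check on the node, as a sketch only (the key names come from the commands above; the rest of the drop-in is whatever minikube ships by default):

    # Show the settings the sed edits above are expected to leave in place.
    grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
        /etc/crio/crio.conf.d/02-crio.conf
    # expected, roughly:
    #   pause_image = "registry.k8s.io/pause:3.10"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"
    #     "net.ipv4.ip_unprivileged_port_start=0",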
	I0729 19:44:52.920773 1119948 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 19:44:52.930425 1119948 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 19:44:52.930467 1119948 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 19:44:52.943382 1119948 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 19:44:52.953528 1119948 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 19:44:53.086573 1119948 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 19:44:53.222264 1119948 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 19:44:53.222358 1119948 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 19:44:53.227019 1119948 start.go:563] Will wait 60s for crictl version
	I0729 19:44:53.227079 1119948 ssh_runner.go:195] Run: which crictl
	I0729 19:44:53.230920 1119948 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 19:44:53.271242 1119948 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
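After restarting cri-o, minikube waits up to 60s for the socket and for a responsive crictl (start.go:542 and start.go:563 above). A bare shell equivalent of that wait, a sketch rather than minikube's actual implementation (paths and timeout taken from the log):

    # Poll for the CRI socket and a working crictl, giving up after 60s.
    deadline=$((SECONDS + 60))
    until stat /var/run/crio/crio.sock >/dev/null 2>&1 && sudo /usr/bin/crictl version >/dev/null 2>&1; do
        if (( SECONDS >= deadline )); then
            echo "crio did not become ready within 60s" >&2
            exit 1
        fi
        sleep 1
    done
    echo "crio socket and crictl are ready"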
	I0729 19:44:53.271338 1119948 ssh_runner.go:195] Run: crio --version
	I0729 19:44:53.301110 1119948 ssh_runner.go:195] Run: crio --version
	I0729 19:44:53.333725 1119948 out.go:177] * Preparing Kubernetes v1.31.0-beta.0 on CRI-O 1.29.1 ...
	I0729 19:44:53.334659 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetIP
	I0729 19:44:53.337115 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:53.337559 1119948 main.go:141] libmachine: (no-preload-843792) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:0e:8c", ip: ""} in network mk-no-preload-843792: {Iface:virbr2 ExpiryTime:2024-07-29 20:44:43 +0000 UTC Type:0 Mac:52:54:00:ae:0e:8c Iaid: IPaddr:192.168.50.248 Prefix:24 Hostname:no-preload-843792 Clientid:01:52:54:00:ae:0e:8c}
	I0729 19:44:53.337593 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined IP address 192.168.50.248 and MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:53.337844 1119948 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0729 19:44:53.341989 1119948 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 19:44:53.355060 1119948 kubeadm.go:883] updating cluster {Name:no-preload-843792 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.0-beta.0 ClusterName:no-preload-843792 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.248 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2628
0h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 19:44:53.355229 1119948 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0729 19:44:53.355288 1119948 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 19:44:53.388980 1119948 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0-beta.0". assuming images are not preloaded.
	I0729 19:44:53.389006 1119948 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0-beta.0 registry.k8s.io/kube-controller-manager:v1.31.0-beta.0 registry.k8s.io/kube-scheduler:v1.31.0-beta.0 registry.k8s.io/kube-proxy:v1.31.0-beta.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.14-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0729 19:44:53.389048 1119948 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 19:44:53.389101 1119948 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0729 19:44:53.389112 1119948 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0729 19:44:53.389137 1119948 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.14-0
	I0729 19:44:53.389119 1119948 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0729 19:44:53.389271 1119948 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0729 19:44:53.389350 1119948 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0729 19:44:53.389605 1119948 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0729 19:44:53.390514 1119948 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0729 19:44:53.390570 1119948 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0729 19:44:53.390602 1119948 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0729 19:44:53.390527 1119948 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 19:44:53.390706 1119948 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0729 19:44:53.390732 1119948 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0729 19:44:53.390767 1119948 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.14-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.14-0
	I0729 19:44:53.391084 1119948 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0729 19:44:53.549235 1119948 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0729 19:44:53.572353 1119948 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0729 19:44:53.579226 1119948 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0729 19:44:53.596966 1119948 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0729 19:44:53.609083 1119948 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0729 19:44:53.616167 1119948 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.14-0
	I0729 19:44:53.618946 1119948 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" does not exist at hash "63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5" in container runtime
	I0729 19:44:53.618985 1119948 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0729 19:44:53.619029 1119948 ssh_runner.go:195] Run: which crictl
	I0729 19:44:53.635187 1119948 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0729 19:44:53.670750 1119948 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0729 19:44:53.670796 1119948 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0729 19:44:53.670859 1119948 ssh_runner.go:195] Run: which crictl
	I0729 19:44:53.672585 1119948 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" does not exist at hash "d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b" in container runtime
	I0729 19:44:53.672626 1119948 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0729 19:44:53.672669 1119948 ssh_runner.go:195] Run: which crictl
	I0729 19:44:53.695596 1119948 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" does not exist at hash "f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938" in container runtime
	I0729 19:44:53.695640 1119948 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0729 19:44:53.695685 1119948 ssh_runner.go:195] Run: which crictl
	I0729 19:44:51.138015 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:44:53.638298 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:44:52.279881 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:44:54.778657 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
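The pod_ready lines above (and the many that follow) are a readiness poll: every couple of seconds the pod's Ready condition is re-checked until it flips to True or the test times out. The same wait can be expressed directly with kubectl; the namespace comes from the log, while the context placeholder and the k8s-app=metrics-server label are illustrative assumptions:

    # Block until the metrics-server pod reports Ready (or 5 minutes pass).
    # <profile> stands for whichever minikube profile/context is under test.
    kubectl --context <profile> -n kube-system \
        wait --for=condition=ready pod -l k8s-app=metrics-server --timeout=5m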
	I0729 19:44:53.281345 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:53.781221 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:54.281939 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:54.781091 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:55.281282 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:55.781375 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:56.282072 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:56.781207 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:57.281436 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:57.781372 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:53.720675 1119948 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 19:44:53.840593 1119948 cache_images.go:116] "registry.k8s.io/etcd:3.5.14-0" needs transfer: "registry.k8s.io/etcd:3.5.14-0" does not exist at hash "cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa" in container runtime
	I0729 19:44:53.840643 1119948 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.14-0
	I0729 19:44:53.840672 1119948 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0729 19:44:53.840687 1119948 ssh_runner.go:195] Run: which crictl
	I0729 19:44:53.840775 1119948 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0-beta.0" does not exist at hash "c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899" in container runtime
	I0729 19:44:53.840812 1119948 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0729 19:44:53.840821 1119948 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0729 19:44:53.840857 1119948 ssh_runner.go:195] Run: which crictl
	I0729 19:44:53.840879 1119948 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0729 19:44:53.840923 1119948 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0729 19:44:53.840940 1119948 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 19:44:53.840957 1119948 ssh_runner.go:195] Run: which crictl
	I0729 19:44:53.840924 1119948 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0729 19:44:53.918733 1119948 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0729 19:44:53.918808 1119948 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.14-0
	I0729 19:44:53.918822 1119948 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0729 19:44:53.918738 1119948 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0729 19:44:53.918756 1119948 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 19:44:53.934123 1119948 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0729 19:44:53.934149 1119948 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0729 19:44:54.071240 1119948 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.14-0
	I0729 19:44:54.071240 1119948 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0729 19:44:54.071338 1119948 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0729 19:44:54.071326 1119948 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0729 19:44:54.071427 1119948 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 19:44:54.093839 1119948 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0729 19:44:54.093863 1119948 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0729 19:44:54.210655 1119948 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0
	I0729 19:44:54.210775 1119948 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0729 19:44:54.212134 1119948 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.14-0
	I0729 19:44:54.217809 1119948 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0
	I0729 19:44:54.217912 1119948 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 19:44:54.217935 1119948 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0729 19:44:54.218206 1119948 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0
	I0729 19:44:54.218301 1119948 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0729 19:44:54.260623 1119948 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0 (exists)
	I0729 19:44:54.260652 1119948 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0729 19:44:54.260652 1119948 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0729 19:44:54.260686 1119948 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0729 19:44:54.260778 1119948 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0729 19:44:54.260865 1119948 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0729 19:44:54.306379 1119948 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0729 19:44:54.306385 1119948 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0 (exists)
	I0729 19:44:54.306392 1119948 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0 (exists)
	I0729 19:44:54.306493 1119948 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0729 19:44:54.306689 1119948 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0
	I0729 19:44:54.306778 1119948 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.14-0
	I0729 19:44:56.574611 1119948 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0: (2.313899996s)
	I0729 19:44:56.574645 1119948 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 from cache
	I0729 19:44:56.574650 1119948 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1: (2.313771552s)
	I0729 19:44:56.574670 1119948 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0729 19:44:56.574611 1119948 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0-beta.0: (2.313935705s)
	I0729 19:44:56.574683 1119948 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0729 19:44:56.574705 1119948 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (2.268197753s)
	I0729 19:44:56.574716 1119948 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0
	I0729 19:44:56.574719 1119948 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0729 19:44:56.574722 1119948 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0729 19:44:56.574739 1119948 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.14-0: (2.267948475s)
	I0729 19:44:56.574750 1119948 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.14-0 (exists)
	I0729 19:44:56.574796 1119948 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0729 19:44:58.641782 1119948 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0: (2.067036887s)
	I0729 19:44:58.641818 1119948 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 from cache
	I0729 19:44:58.641845 1119948 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0729 19:44:58.641846 1119948 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0: (2.0670173s)
	I0729 19:44:58.641878 1119948 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0 (exists)
	I0729 19:44:58.641896 1119948 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0729 19:44:56.140488 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:44:58.637284 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:44:57.279852 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:44:59.777891 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:44:58.281852 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:58.781637 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:59.281892 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:59.781645 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:00.281405 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:00.782060 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:01.281396 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:01.781327 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:02.281709 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:02.781786 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:00.096431 1119948 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0: (1.454505335s)
	I0729 19:45:00.096482 1119948 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 from cache
	I0729 19:45:00.096522 1119948 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0729 19:45:00.096568 1119948 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0729 19:45:01.962972 1119948 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.866379068s)
	I0729 19:45:01.963000 1119948 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0729 19:45:01.963026 1119948 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0729 19:45:01.963078 1119948 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0729 19:45:02.916627 1119948 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0729 19:45:02.916678 1119948 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.14-0
	I0729 19:45:02.916735 1119948 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0
	I0729 19:45:00.638676 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:03.137885 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:01.779615 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:04.279431 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:03.281567 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:03.781335 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:04.281681 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:04.781803 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:05.281115 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:05.781161 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:06.281699 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:06.781869 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:07.281182 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:07.781016 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:06.397189 1119948 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0: (3.480421154s)
	I0729 19:45:06.397236 1119948 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0 from cache
	I0729 19:45:06.397280 1119948 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0729 19:45:06.397357 1119948 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0729 19:45:08.272053 1119948 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0: (1.874662469s)
	I0729 19:45:08.272086 1119948 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 from cache
	I0729 19:45:08.272116 1119948 cache_images.go:123] Successfully loaded all cached images
	I0729 19:45:08.272123 1119948 cache_images.go:92] duration metric: took 14.883104578s to LoadCachedImages
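With no preload tarball available for v1.31.0-beta.0, each required image was handled individually: inspect the runtime store, remove any stale reference, then load the cached tarball from /var/lib/minikube/images (the sequence logged above, totalling ~14.9s). Condensed to a single image as a sketch (the commands mirror the log; the conditional wrapper is illustrative, not minikube's code):

    img=registry.k8s.io/kube-proxy:v1.31.0-beta.0
    tar=/var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
    # Only transfer/load if the image is not already in the local store.
    if ! sudo podman image inspect --format '{{.Id}}' "$img" >/dev/null 2>&1; then
        sudo /usr/bin/crictl rmi "$img" 2>/dev/null || true   # clear any stale reference
        sudo podman load -i "$tar"                            # load the cached tarball
    fi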
	I0729 19:45:08.272135 1119948 kubeadm.go:934] updating node { 192.168.50.248 8443 v1.31.0-beta.0 crio true true} ...
	I0729 19:45:08.272293 1119948 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-843792 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.248
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-843792 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 19:45:08.272378 1119948 ssh_runner.go:195] Run: crio config
	I0729 19:45:08.340838 1119948 cni.go:84] Creating CNI manager for ""
	I0729 19:45:08.340864 1119948 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 19:45:08.340876 1119948 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 19:45:08.340905 1119948 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.248 APIServerPort:8443 KubernetesVersion:v1.31.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-843792 NodeName:no-preload-843792 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.248"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.248 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt Stati
cPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 19:45:08.341094 1119948 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.248
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-843792"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.248
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.248"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0729 19:45:08.341175 1119948 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0-beta.0
	I0729 19:45:08.353738 1119948 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 19:45:08.353819 1119948 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 19:45:08.365340 1119948 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (324 bytes)
	I0729 19:45:08.383516 1119948 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I0729 19:45:08.401060 1119948 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2168 bytes)
	I0729 19:45:08.419420 1119948 ssh_runner.go:195] Run: grep 192.168.50.248	control-plane.minikube.internal$ /etc/hosts
	I0729 19:45:08.423355 1119948 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.248	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 19:45:08.437286 1119948 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 19:45:08.569176 1119948 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 19:45:08.586925 1119948 certs.go:68] Setting up /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/no-preload-843792 for IP: 192.168.50.248
	I0729 19:45:08.586949 1119948 certs.go:194] generating shared ca certs ...
	I0729 19:45:08.586969 1119948 certs.go:226] acquiring lock for ca certs: {Name:mkd1f0b3d7e82ac23e713dd6b75409e103935b02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 19:45:08.587196 1119948 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.key
	I0729 19:45:08.587277 1119948 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/proxy-client-ca.key
	I0729 19:45:08.587294 1119948 certs.go:256] generating profile certs ...
	I0729 19:45:08.587388 1119948 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/no-preload-843792/client.key
	I0729 19:45:08.587476 1119948 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/no-preload-843792/apiserver.key.f52ec7e5
	I0729 19:45:08.587520 1119948 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/no-preload-843792/proxy-client.key
	I0729 19:45:08.587686 1119948 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/1062272.pem (1338 bytes)
	W0729 19:45:08.587731 1119948 certs.go:480] ignoring /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/1062272_empty.pem, impossibly tiny 0 bytes
	I0729 19:45:08.587741 1119948 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 19:45:08.587764 1119948 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem (1082 bytes)
	I0729 19:45:08.587788 1119948 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/cert.pem (1123 bytes)
	I0729 19:45:08.587807 1119948 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/key.pem (1679 bytes)
	I0729 19:45:08.587842 1119948 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/files/etc/ssl/certs/10622722.pem (1708 bytes)
	I0729 19:45:08.588560 1119948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 19:45:08.618457 1119948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0729 19:45:08.664632 1119948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 19:45:08.696094 1119948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0729 19:45:05.639914 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:08.138498 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:06.779766 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:08.781373 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:10.782303 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:08.281476 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:08.781100 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:09.281248 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:09.781661 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:10.281141 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:10.781357 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:11.281922 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:11.781751 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:12.281024 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:12.781942 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:08.732476 1119948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/no-preload-843792/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0729 19:45:08.761190 1119948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/no-preload-843792/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0729 19:45:08.792866 1119948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/no-preload-843792/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 19:45:08.819753 1119948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/no-preload-843792/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0729 19:45:08.844891 1119948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/1062272.pem --> /usr/share/ca-certificates/1062272.pem (1338 bytes)
	I0729 19:45:08.868688 1119948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/files/etc/ssl/certs/10622722.pem --> /usr/share/ca-certificates/10622722.pem (1708 bytes)
	I0729 19:45:08.893523 1119948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 19:45:08.917663 1119948 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 19:45:08.935488 1119948 ssh_runner.go:195] Run: openssl version
	I0729 19:45:08.941415 1119948 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1062272.pem && ln -fs /usr/share/ca-certificates/1062272.pem /etc/ssl/certs/1062272.pem"
	I0729 19:45:08.952713 1119948 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1062272.pem
	I0729 19:45:08.957226 1119948 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 18:30 /usr/share/ca-certificates/1062272.pem
	I0729 19:45:08.957288 1119948 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1062272.pem
	I0729 19:45:08.963014 1119948 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1062272.pem /etc/ssl/certs/51391683.0"
	I0729 19:45:08.974542 1119948 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10622722.pem && ln -fs /usr/share/ca-certificates/10622722.pem /etc/ssl/certs/10622722.pem"
	I0729 19:45:08.985605 1119948 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10622722.pem
	I0729 19:45:08.990121 1119948 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 18:30 /usr/share/ca-certificates/10622722.pem
	I0729 19:45:08.990170 1119948 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10622722.pem
	I0729 19:45:08.995715 1119948 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10622722.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 19:45:09.006949 1119948 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 19:45:09.018222 1119948 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 19:45:09.023160 1119948 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 18:17 /usr/share/ca-certificates/minikubeCA.pem
	I0729 19:45:09.023225 1119948 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 19:45:09.028770 1119948 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 19:45:09.039653 1119948 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 19:45:09.044577 1119948 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 19:45:09.050692 1119948 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 19:45:09.057177 1119948 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 19:45:09.063464 1119948 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 19:45:09.069732 1119948 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 19:45:09.075998 1119948 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
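Each of the openssl -checkend 86400 probes above asks whether the certificate will still be valid 86400 seconds (24 hours) from now; exit status 0 means it will, 1 means it expires inside that window and would need regeneration. The same check in isolation, using one of the certificate paths copied earlier:

    # Exit 0 if the certificate is valid for at least another 24h, 1 otherwise.
    if openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400; then
        echo "apiserver.crt good for at least 24h"
    else
        echo "apiserver.crt expires within 24h (or could not be read)" >&2
    fi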
	I0729 19:45:09.081759 1119948 kubeadm.go:392] StartCluster: {Name:no-preload-843792 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.0-beta.0 ClusterName:no-preload-843792 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.248 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0
m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 19:45:09.081855 1119948 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 19:45:09.081922 1119948 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 19:45:09.121153 1119948 cri.go:89] found id: ""
	I0729 19:45:09.121242 1119948 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 19:45:09.131866 1119948 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0729 19:45:09.131892 1119948 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0729 19:45:09.131951 1119948 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0729 19:45:09.142306 1119948 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0729 19:45:09.143769 1119948 kubeconfig.go:125] found "no-preload-843792" server: "https://192.168.50.248:8443"
	I0729 19:45:09.146733 1119948 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0729 19:45:09.156058 1119948 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.248
	I0729 19:45:09.156096 1119948 kubeadm.go:1160] stopping kube-system containers ...
	I0729 19:45:09.156113 1119948 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0729 19:45:09.156171 1119948 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 19:45:09.204791 1119948 cri.go:89] found id: ""
	I0729 19:45:09.204881 1119948 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0729 19:45:09.222988 1119948 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 19:45:09.234800 1119948 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 19:45:09.234825 1119948 kubeadm.go:157] found existing configuration files:
	
	I0729 19:45:09.234898 1119948 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 19:45:09.244868 1119948 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 19:45:09.244931 1119948 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 19:45:09.255368 1119948 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 19:45:09.265442 1119948 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 19:45:09.265515 1119948 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 19:45:09.276827 1119948 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 19:45:09.287989 1119948 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 19:45:09.288057 1119948 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 19:45:09.297736 1119948 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 19:45:09.307856 1119948 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 19:45:09.307923 1119948 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
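The grep/rm pairs above apply one rule to each leftover kubeconfig under /etc/kubernetes: keep it only if it already points at https://control-plane.minikube.internal:8443, otherwise delete it so the kubeadm phases below can regenerate it (here all four files were missing, so every grep exited 2 and the rm calls were no-ops). The same pattern as a compact sketch:

    endpoint='https://control-plane.minikube.internal:8443'
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
        # Drop the file unless it already targets the expected control-plane endpoint.
        sudo grep -q "$endpoint" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
    done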
	I0729 19:45:09.318101 1119948 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 19:45:09.328189 1119948 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 19:45:09.441974 1119948 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 19:45:10.593961 1119948 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.151939649s)
	I0729 19:45:10.594045 1119948 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0729 19:45:10.807397 1119948 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 19:45:10.880145 1119948 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0729 19:45:10.962104 1119948 api_server.go:52] waiting for apiserver process to appear ...
	I0729 19:45:10.962209 1119948 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:11.462937 1119948 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:11.962909 1119948 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:12.006882 1119948 api_server.go:72] duration metric: took 1.044780287s to wait for apiserver process to appear ...
	I0729 19:45:12.006918 1119948 api_server.go:88] waiting for apiserver healthz status ...
	I0729 19:45:12.006945 1119948 api_server.go:253] Checking apiserver healthz at https://192.168.50.248:8443/healthz ...
	I0729 19:45:12.007577 1119948 api_server.go:269] stopped: https://192.168.50.248:8443/healthz: Get "https://192.168.50.248:8443/healthz": dial tcp 192.168.50.248:8443: connect: connection refused
	I0729 19:45:12.507374 1119948 api_server.go:253] Checking apiserver healthz at https://192.168.50.248:8443/healthz ...
	I0729 19:45:10.637684 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:12.638011 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:14.638569 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:13.278494 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:15.778675 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:15.042675 1119948 api_server.go:279] https://192.168.50.248:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 19:45:15.042710 1119948 api_server.go:103] status: https://192.168.50.248:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 19:45:15.042731 1119948 api_server.go:253] Checking apiserver healthz at https://192.168.50.248:8443/healthz ...
	I0729 19:45:15.090118 1119948 api_server.go:279] https://192.168.50.248:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 19:45:15.090151 1119948 api_server.go:103] status: https://192.168.50.248:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 19:45:15.507702 1119948 api_server.go:253] Checking apiserver healthz at https://192.168.50.248:8443/healthz ...
	I0729 19:45:15.512794 1119948 api_server.go:279] https://192.168.50.248:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 19:45:15.512822 1119948 api_server.go:103] status: https://192.168.50.248:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 19:45:16.008064 1119948 api_server.go:253] Checking apiserver healthz at https://192.168.50.248:8443/healthz ...
	I0729 19:45:16.018543 1119948 api_server.go:279] https://192.168.50.248:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 19:45:16.018578 1119948 api_server.go:103] status: https://192.168.50.248:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 19:45:16.508055 1119948 api_server.go:253] Checking apiserver healthz at https://192.168.50.248:8443/healthz ...
	I0729 19:45:16.519925 1119948 api_server.go:279] https://192.168.50.248:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 19:45:16.519954 1119948 api_server.go:103] status: https://192.168.50.248:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 19:45:17.007959 1119948 api_server.go:253] Checking apiserver healthz at https://192.168.50.248:8443/healthz ...
	I0729 19:45:17.013159 1119948 api_server.go:279] https://192.168.50.248:8443/healthz returned 200:
	ok
	I0729 19:45:17.022691 1119948 api_server.go:141] control plane version: v1.31.0-beta.0
	I0729 19:45:17.022726 1119948 api_server.go:131] duration metric: took 5.015799715s to wait for apiserver health ...
	I0729 19:45:17.022737 1119948 cni.go:84] Creating CNI manager for ""
	I0729 19:45:17.022746 1119948 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 19:45:17.024618 1119948 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
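
The lines above record the apiserver health wait: connection refused, 403 (anonymous access before the RBAC bootstrap hook completes), and 500 (post-start hooks still failing) are all treated as "not ready yet", and the wait ends once /healthz returns 200. A minimal, hypothetical Go sketch of that polling pattern follows; the URL, the 30s budget, and the skipped TLS verification are illustrative assumptions, not minikube's actual implementation.

// healthzwait.go: illustrative sketch of polling an apiserver /healthz
// endpoint until it reports "ok" (HTTP 200). The endpoint URL and 30s
// budget are placeholders; TLS verification is skipped only because a
// test apiserver typically presents a self-signed certificate.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err != nil {
			// connection refused: apiserver not listening yet
			time.Sleep(500 * time.Millisecond)
			continue
		}
		body, _ := io.ReadAll(resp.Body)
		resp.Body.Close()
		if resp.StatusCode == http.StatusOK {
			return nil // healthz reported "ok"
		}
		// 403/500 mean the server is up but not fully initialised yet
		fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.50.248:8443/healthz", 30*time.Second); err != nil {
		fmt.Println(err)
	}
}
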
	I0729 19:45:13.281834 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:13.781128 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:14.281372 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:14.781037 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:15.281715 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:15.781353 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:16.281845 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:16.781224 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:17.281710 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:17.781353 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:17.025951 1119948 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 19:45:17.037020 1119948 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0729 19:45:17.075438 1119948 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 19:45:17.098501 1119948 system_pods.go:59] 8 kube-system pods found
	I0729 19:45:17.098541 1119948 system_pods.go:61] "coredns-5cfdc65f69-j6m2k" [1fb28c80-116d-46b7-a939-6ff4ffa80883] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0729 19:45:17.098549 1119948 system_pods.go:61] "etcd-no-preload-843792" [68470ab3-9513-4504-9d1e-dbb896b8ae6b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0729 19:45:17.098557 1119948 system_pods.go:61] "kube-apiserver-no-preload-843792" [6cc37d70-bc14-4a06-987d-320a2a11b533] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0729 19:45:17.098563 1119948 system_pods.go:61] "kube-controller-manager-no-preload-843792" [5c115624-c9e9-4019-9783-35cc825fb1df] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0729 19:45:17.098570 1119948 system_pods.go:61] "kube-proxy-6kzvz" [4f0006c3-1172-48b6-8631-643090032c58] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0729 19:45:17.098579 1119948 system_pods.go:61] "kube-scheduler-no-preload-843792" [5c2a4c59-a525-4246-9d11-50fddef53815] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0729 19:45:17.098584 1119948 system_pods.go:61] "metrics-server-78fcd8795b-pcx9w" [7d138038-71ad-4279-9562-f3864d5a0024] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 19:45:17.098591 1119948 system_pods.go:61] "storage-provisioner" [289822fa-8ed4-4abe-970e-8b6d9a9fa51e] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0729 19:45:17.098598 1119948 system_pods.go:74] duration metric: took 23.126612ms to wait for pod list to return data ...
	I0729 19:45:17.098610 1119948 node_conditions.go:102] verifying NodePressure condition ...
	I0729 19:45:17.125364 1119948 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 19:45:17.125395 1119948 node_conditions.go:123] node cpu capacity is 2
	I0729 19:45:17.125405 1119948 node_conditions.go:105] duration metric: took 26.790642ms to run NodePressure ...
	I0729 19:45:17.125425 1119948 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 19:45:17.467261 1119948 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0729 19:45:17.478831 1119948 kubeadm.go:739] kubelet initialised
	I0729 19:45:17.478871 1119948 kubeadm.go:740] duration metric: took 11.576985ms waiting for restarted kubelet to initialise ...
	I0729 19:45:17.478883 1119948 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 19:45:17.483948 1119948 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5cfdc65f69-j6m2k" in "kube-system" namespace to be "Ready" ...
	I0729 19:45:16.639536 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:18.641996 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:18.279857 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:20.779054 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:18.281504 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:18.781826 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:19.281901 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:19.782011 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:20.281384 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:20.781352 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:21.281834 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:21.781603 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:22.281152 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:22.781351 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:19.493011 1119948 pod_ready.go:102] pod "coredns-5cfdc65f69-j6m2k" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:21.992979 1119948 pod_ready.go:102] pod "coredns-5cfdc65f69-j6m2k" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:21.139438 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:23.636771 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:22.779640 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:24.780814 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:23.281111 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:23.781931 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:24.281455 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:24.781346 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:25.281633 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:25.781092 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:26.281145 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:26.781235 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:27.281327 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:27.781099 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:24.491231 1119948 pod_ready.go:102] pod "coredns-5cfdc65f69-j6m2k" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:26.991237 1119948 pod_ready.go:102] pod "coredns-5cfdc65f69-j6m2k" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:28.490384 1119948 pod_ready.go:92] pod "coredns-5cfdc65f69-j6m2k" in "kube-system" namespace has status "Ready":"True"
	I0729 19:45:28.490413 1119948 pod_ready.go:81] duration metric: took 11.006435855s for pod "coredns-5cfdc65f69-j6m2k" in "kube-system" namespace to be "Ready" ...
	I0729 19:45:28.490425 1119948 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-843792" in "kube-system" namespace to be "Ready" ...
	I0729 19:45:28.495144 1119948 pod_ready.go:92] pod "etcd-no-preload-843792" in "kube-system" namespace has status "Ready":"True"
	I0729 19:45:28.495168 1119948 pod_ready.go:81] duration metric: took 4.736893ms for pod "etcd-no-preload-843792" in "kube-system" namespace to be "Ready" ...
	I0729 19:45:28.495177 1119948 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-843792" in "kube-system" namespace to be "Ready" ...
	I0729 19:45:28.499249 1119948 pod_ready.go:92] pod "kube-apiserver-no-preload-843792" in "kube-system" namespace has status "Ready":"True"
	I0729 19:45:28.499272 1119948 pod_ready.go:81] duration metric: took 4.089379ms for pod "kube-apiserver-no-preload-843792" in "kube-system" namespace to be "Ready" ...
	I0729 19:45:28.499280 1119948 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-843792" in "kube-system" namespace to be "Ready" ...
	I0729 19:45:25.637886 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:28.138043 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:27.279850 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:29.778397 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:28.281600 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:28.781033 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:29.281086 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:29.781358 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:30.281478 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:30.781094 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:31.281816 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:31.781092 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:32.281012 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:32.781266 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:29.505726 1119948 pod_ready.go:92] pod "kube-controller-manager-no-preload-843792" in "kube-system" namespace has status "Ready":"True"
	I0729 19:45:29.505752 1119948 pod_ready.go:81] duration metric: took 1.0064644s for pod "kube-controller-manager-no-preload-843792" in "kube-system" namespace to be "Ready" ...
	I0729 19:45:29.505764 1119948 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-6kzvz" in "kube-system" namespace to be "Ready" ...
	I0729 19:45:29.510705 1119948 pod_ready.go:92] pod "kube-proxy-6kzvz" in "kube-system" namespace has status "Ready":"True"
	I0729 19:45:29.510725 1119948 pod_ready.go:81] duration metric: took 4.953497ms for pod "kube-proxy-6kzvz" in "kube-system" namespace to be "Ready" ...
	I0729 19:45:29.510735 1119948 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-843792" in "kube-system" namespace to be "Ready" ...
	I0729 19:45:29.688555 1119948 pod_ready.go:92] pod "kube-scheduler-no-preload-843792" in "kube-system" namespace has status "Ready":"True"
	I0729 19:45:29.688579 1119948 pod_ready.go:81] duration metric: took 177.837031ms for pod "kube-scheduler-no-preload-843792" in "kube-system" namespace to be "Ready" ...
	I0729 19:45:29.688593 1119948 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace to be "Ready" ...
	I0729 19:45:31.695505 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
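
The pod_ready entries above poll each system pod until its Ready condition turns True, within a 4m0s budget. Below is a minimal, hypothetical client-go sketch of the same check; the kubeconfig path is the on-node path quoted in the log and the pod name is one that appears above, but both are placeholders here, and this is not minikube's own code.

// podready.go: illustrative sketch of waiting for a pod's PodReady
// condition to become True. Kubeconfig path and pod name are placeholders.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(4 * time.Minute) // mirrors the 4m0s budget in the log
	for time.Now().Before(deadline) {
		pod, err := clientset.CoreV1().Pods("kube-system").Get(context.TODO(),
			"coredns-5cfdc65f69-j6m2k", metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second) // log shows roughly 2s between status polls
	}
	fmt.Println("timed out waiting for pod to become Ready")
}
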
	I0729 19:45:30.637213 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:32.638747 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:31.778641 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:34.277964 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:33.281410 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:33.781923 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:34.281471 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:34.781303 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:35.281404 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:35.781727 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:36.281960 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:36.781632 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:37.281624 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:37.781232 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:34.196033 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:36.697003 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:35.137135 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:37.137857 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:39.138563 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:36.278607 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:38.278960 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:40.280428 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:38.281103 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:38.781134 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:39.281907 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:39.781863 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:40.281104 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:40.781928 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:41.281757 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:45:41.281864 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:45:41.322903 1120970 cri.go:89] found id: ""
	I0729 19:45:41.322929 1120970 logs.go:276] 0 containers: []
	W0729 19:45:41.322938 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:45:41.322945 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:45:41.323016 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:45:41.359651 1120970 cri.go:89] found id: ""
	I0729 19:45:41.359679 1120970 logs.go:276] 0 containers: []
	W0729 19:45:41.359687 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:45:41.359692 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:45:41.359744 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:45:41.402317 1120970 cri.go:89] found id: ""
	I0729 19:45:41.402358 1120970 logs.go:276] 0 containers: []
	W0729 19:45:41.402370 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:45:41.402380 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:45:41.402454 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:45:41.438796 1120970 cri.go:89] found id: ""
	I0729 19:45:41.438823 1120970 logs.go:276] 0 containers: []
	W0729 19:45:41.438833 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:45:41.438839 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:45:41.438931 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:45:41.477648 1120970 cri.go:89] found id: ""
	I0729 19:45:41.477677 1120970 logs.go:276] 0 containers: []
	W0729 19:45:41.477685 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:45:41.477692 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:45:41.477761 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:45:41.517603 1120970 cri.go:89] found id: ""
	I0729 19:45:41.517635 1120970 logs.go:276] 0 containers: []
	W0729 19:45:41.517646 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:45:41.517654 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:45:41.517727 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:45:41.553106 1120970 cri.go:89] found id: ""
	I0729 19:45:41.553140 1120970 logs.go:276] 0 containers: []
	W0729 19:45:41.553151 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:45:41.553158 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:45:41.553226 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:45:41.595007 1120970 cri.go:89] found id: ""
	I0729 19:45:41.595035 1120970 logs.go:276] 0 containers: []
	W0729 19:45:41.595044 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:45:41.595054 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:45:41.595069 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:45:41.634927 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:45:41.634966 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:45:41.685871 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:45:41.685906 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:45:41.700701 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:45:41.700735 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:45:41.816575 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:45:41.816598 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:45:41.816611 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:45:39.199863 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:41.200303 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:43.695592 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:41.637651 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:44.138141 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:42.778550 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:44.779186 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:44.396592 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:44.410567 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:45:44.410644 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:45:44.447450 1120970 cri.go:89] found id: ""
	I0729 19:45:44.447487 1120970 logs.go:276] 0 containers: []
	W0729 19:45:44.447499 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:45:44.447507 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:45:44.447579 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:45:44.487679 1120970 cri.go:89] found id: ""
	I0729 19:45:44.487714 1120970 logs.go:276] 0 containers: []
	W0729 19:45:44.487725 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:45:44.487732 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:45:44.487806 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:45:44.527170 1120970 cri.go:89] found id: ""
	I0729 19:45:44.527211 1120970 logs.go:276] 0 containers: []
	W0729 19:45:44.527219 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:45:44.527226 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:45:44.527282 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:45:44.567585 1120970 cri.go:89] found id: ""
	I0729 19:45:44.567613 1120970 logs.go:276] 0 containers: []
	W0729 19:45:44.567622 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:45:44.567629 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:45:44.567680 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:45:44.605003 1120970 cri.go:89] found id: ""
	I0729 19:45:44.605031 1120970 logs.go:276] 0 containers: []
	W0729 19:45:44.605041 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:45:44.605049 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:45:44.605121 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:45:44.643862 1120970 cri.go:89] found id: ""
	I0729 19:45:44.643887 1120970 logs.go:276] 0 containers: []
	W0729 19:45:44.643894 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:45:44.643901 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:45:44.643950 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:45:44.679814 1120970 cri.go:89] found id: ""
	I0729 19:45:44.679845 1120970 logs.go:276] 0 containers: []
	W0729 19:45:44.679855 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:45:44.679862 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:45:44.679926 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:45:44.714679 1120970 cri.go:89] found id: ""
	I0729 19:45:44.714709 1120970 logs.go:276] 0 containers: []
	W0729 19:45:44.714719 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:45:44.714729 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:45:44.714747 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:45:44.766381 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:45:44.766424 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:45:44.782337 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:45:44.782369 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:45:44.854487 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:45:44.854509 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:45:44.854522 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:45:44.935043 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:45:44.935082 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:45:47.481158 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:47.496559 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:45:47.496649 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:45:47.531949 1120970 cri.go:89] found id: ""
	I0729 19:45:47.531981 1120970 logs.go:276] 0 containers: []
	W0729 19:45:47.531990 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:45:47.531996 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:45:47.532050 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:45:47.571424 1120970 cri.go:89] found id: ""
	I0729 19:45:47.571451 1120970 logs.go:276] 0 containers: []
	W0729 19:45:47.571459 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:45:47.571465 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:45:47.571517 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:45:47.610439 1120970 cri.go:89] found id: ""
	I0729 19:45:47.610474 1120970 logs.go:276] 0 containers: []
	W0729 19:45:47.610485 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:45:47.610494 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:45:47.610561 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:45:47.648351 1120970 cri.go:89] found id: ""
	I0729 19:45:47.648380 1120970 logs.go:276] 0 containers: []
	W0729 19:45:47.648388 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:45:47.648395 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:45:47.648458 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:45:47.686610 1120970 cri.go:89] found id: ""
	I0729 19:45:47.686646 1120970 logs.go:276] 0 containers: []
	W0729 19:45:47.686658 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:45:47.686667 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:45:47.686739 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:45:47.722870 1120970 cri.go:89] found id: ""
	I0729 19:45:47.722901 1120970 logs.go:276] 0 containers: []
	W0729 19:45:47.722909 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:45:47.722916 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:45:47.722978 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:45:47.757651 1120970 cri.go:89] found id: ""
	I0729 19:45:47.757690 1120970 logs.go:276] 0 containers: []
	W0729 19:45:47.757700 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:45:47.757709 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:45:47.757787 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:45:47.792737 1120970 cri.go:89] found id: ""
	I0729 19:45:47.792767 1120970 logs.go:276] 0 containers: []
	W0729 19:45:47.792776 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:45:47.792786 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:45:47.792799 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:45:47.867707 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:45:47.867734 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:45:47.867751 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:45:47.949876 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:45:47.949918 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
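
Each cycle above lists CRI containers per control-plane component with crictl and reports "No container was found matching ..." when the quiet listing comes back empty. The sketch below reproduces that check locally as a hypothetical illustration; minikube actually runs the command over SSH inside the VM, and the component list here is just the subset named in the log.

// crilist.go: illustrative sketch of the container listing repeated in the
// log: run crictl to list containers whose name matches a component and
// report when none are found. Requires crictl and sudo on the host.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func listContainers(name string) ([]string, error) {
	// same invocation the log shows: sudo crictl ps -a --quiet --name=<component>
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, component := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
		ids, err := listContainers(component)
		if err != nil {
			fmt.Printf("crictl failed for %q: %v\n", component, err)
			continue
		}
		if len(ids) == 0 {
			fmt.Printf("no container found matching %q\n", component)
			continue
		}
		fmt.Printf("%q containers: %v\n", component, ids)
	}
}
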
	I0729 19:45:45.696302 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:48.194324 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:46.637438 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:48.637749 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:47.279986 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:49.778293 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:47.991014 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:45:47.991053 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:45:48.041713 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:45:48.041752 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:45:50.557028 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:50.571918 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:45:50.572012 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:45:50.608752 1120970 cri.go:89] found id: ""
	I0729 19:45:50.608783 1120970 logs.go:276] 0 containers: []
	W0729 19:45:50.608791 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:45:50.608798 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:45:50.608851 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:45:50.644225 1120970 cri.go:89] found id: ""
	I0729 19:45:50.644251 1120970 logs.go:276] 0 containers: []
	W0729 19:45:50.644261 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:45:50.644269 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:45:50.644357 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:45:50.680364 1120970 cri.go:89] found id: ""
	I0729 19:45:50.680400 1120970 logs.go:276] 0 containers: []
	W0729 19:45:50.680412 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:45:50.680420 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:45:50.680487 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:45:50.724418 1120970 cri.go:89] found id: ""
	I0729 19:45:50.724443 1120970 logs.go:276] 0 containers: []
	W0729 19:45:50.724451 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:45:50.724457 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:45:50.724513 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:45:50.768891 1120970 cri.go:89] found id: ""
	I0729 19:45:50.768924 1120970 logs.go:276] 0 containers: []
	W0729 19:45:50.768935 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:45:50.768943 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:45:50.769011 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:45:50.815814 1120970 cri.go:89] found id: ""
	I0729 19:45:50.815847 1120970 logs.go:276] 0 containers: []
	W0729 19:45:50.815858 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:45:50.815866 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:45:50.815935 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:45:50.856823 1120970 cri.go:89] found id: ""
	I0729 19:45:50.856856 1120970 logs.go:276] 0 containers: []
	W0729 19:45:50.856865 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:45:50.856871 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:45:50.856935 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:45:50.890567 1120970 cri.go:89] found id: ""
	I0729 19:45:50.890618 1120970 logs.go:276] 0 containers: []
	W0729 19:45:50.890631 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:45:50.890646 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:45:50.890662 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:45:50.944060 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:45:50.944095 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:45:50.957881 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:45:50.957912 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:45:51.036005 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:45:51.036033 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:45:51.036051 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:45:51.117269 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:45:51.117311 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:45:50.195926 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:52.197099 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:50.639185 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:53.138398 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:52.278704 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:54.279094 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:53.657518 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:53.671405 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:45:53.671499 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:45:53.713703 1120970 cri.go:89] found id: ""
	I0729 19:45:53.713734 1120970 logs.go:276] 0 containers: []
	W0729 19:45:53.713747 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:45:53.713755 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:45:53.713820 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:45:53.752821 1120970 cri.go:89] found id: ""
	I0729 19:45:53.752856 1120970 logs.go:276] 0 containers: []
	W0729 19:45:53.752867 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:45:53.752875 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:45:53.752930 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:45:53.792144 1120970 cri.go:89] found id: ""
	I0729 19:45:53.792172 1120970 logs.go:276] 0 containers: []
	W0729 19:45:53.792198 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:45:53.792204 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:45:53.792264 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:45:53.831123 1120970 cri.go:89] found id: ""
	I0729 19:45:53.831151 1120970 logs.go:276] 0 containers: []
	W0729 19:45:53.831161 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:45:53.831168 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:45:53.831223 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:45:53.870716 1120970 cri.go:89] found id: ""
	I0729 19:45:53.870747 1120970 logs.go:276] 0 containers: []
	W0729 19:45:53.870758 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:45:53.870766 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:45:53.870831 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:45:53.909567 1120970 cri.go:89] found id: ""
	I0729 19:45:53.909602 1120970 logs.go:276] 0 containers: []
	W0729 19:45:53.909611 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:45:53.909619 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:45:53.909679 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:45:53.944134 1120970 cri.go:89] found id: ""
	I0729 19:45:53.944167 1120970 logs.go:276] 0 containers: []
	W0729 19:45:53.944179 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:45:53.944188 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:45:53.944249 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:45:53.979274 1120970 cri.go:89] found id: ""
	I0729 19:45:53.979307 1120970 logs.go:276] 0 containers: []
	W0729 19:45:53.979319 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:45:53.979330 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:45:53.979347 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:45:54.027783 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:45:54.027822 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:45:54.079319 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:45:54.079368 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:45:54.094387 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:45:54.094420 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:45:54.170700 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:45:54.170723 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:45:54.170737 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:45:56.756947 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:56.775456 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:45:56.775539 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:45:56.830999 1120970 cri.go:89] found id: ""
	I0729 19:45:56.831035 1120970 logs.go:276] 0 containers: []
	W0729 19:45:56.831046 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:45:56.831054 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:45:56.831144 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:45:56.868006 1120970 cri.go:89] found id: ""
	I0729 19:45:56.868039 1120970 logs.go:276] 0 containers: []
	W0729 19:45:56.868057 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:45:56.868065 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:45:56.868145 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:45:56.905275 1120970 cri.go:89] found id: ""
	I0729 19:45:56.905311 1120970 logs.go:276] 0 containers: []
	W0729 19:45:56.905322 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:45:56.905330 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:45:56.905401 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:45:56.938507 1120970 cri.go:89] found id: ""
	I0729 19:45:56.938537 1120970 logs.go:276] 0 containers: []
	W0729 19:45:56.938546 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:45:56.938553 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:45:56.938607 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:45:56.974424 1120970 cri.go:89] found id: ""
	I0729 19:45:56.974456 1120970 logs.go:276] 0 containers: []
	W0729 19:45:56.974468 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:45:56.974476 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:45:56.974543 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:45:57.008152 1120970 cri.go:89] found id: ""
	I0729 19:45:57.008191 1120970 logs.go:276] 0 containers: []
	W0729 19:45:57.008203 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:45:57.008211 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:45:57.008281 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:45:57.043904 1120970 cri.go:89] found id: ""
	I0729 19:45:57.043942 1120970 logs.go:276] 0 containers: []
	W0729 19:45:57.043953 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:45:57.043961 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:45:57.044038 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:45:57.078239 1120970 cri.go:89] found id: ""
	I0729 19:45:57.078268 1120970 logs.go:276] 0 containers: []
	W0729 19:45:57.078277 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:45:57.078286 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:45:57.078299 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:45:57.125135 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:45:57.125170 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:45:57.177926 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:45:57.177968 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:45:57.192316 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:45:57.192354 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:45:57.267034 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:45:57.267059 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:45:57.267078 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:45:54.213977 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:56.695532 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:55.637424 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:58.137534 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:56.780087 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:59.278164 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:59.849254 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:59.863328 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:45:59.863437 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:45:59.900024 1120970 cri.go:89] found id: ""
	I0729 19:45:59.900051 1120970 logs.go:276] 0 containers: []
	W0729 19:45:59.900060 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:45:59.900067 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:45:59.900128 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:45:59.935272 1120970 cri.go:89] found id: ""
	I0729 19:45:59.935308 1120970 logs.go:276] 0 containers: []
	W0729 19:45:59.935319 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:45:59.935328 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:45:59.935404 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:45:59.967684 1120970 cri.go:89] found id: ""
	I0729 19:45:59.967712 1120970 logs.go:276] 0 containers: []
	W0729 19:45:59.967725 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:45:59.967733 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:45:59.967791 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:46:00.003354 1120970 cri.go:89] found id: ""
	I0729 19:46:00.003386 1120970 logs.go:276] 0 containers: []
	W0729 19:46:00.003397 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:46:00.003404 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:46:00.003479 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:46:00.042266 1120970 cri.go:89] found id: ""
	I0729 19:46:00.042311 1120970 logs.go:276] 0 containers: []
	W0729 19:46:00.042330 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:46:00.042344 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:46:00.042419 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:46:00.081056 1120970 cri.go:89] found id: ""
	I0729 19:46:00.081085 1120970 logs.go:276] 0 containers: []
	W0729 19:46:00.081095 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:46:00.081102 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:46:00.081179 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:46:00.114102 1120970 cri.go:89] found id: ""
	I0729 19:46:00.114138 1120970 logs.go:276] 0 containers: []
	W0729 19:46:00.114153 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:46:00.114161 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:46:00.114229 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:46:00.152891 1120970 cri.go:89] found id: ""
	I0729 19:46:00.152919 1120970 logs.go:276] 0 containers: []
	W0729 19:46:00.152930 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:46:00.152942 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:46:00.152961 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:46:00.225895 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:46:00.225926 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:46:00.225944 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:46:00.306359 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:46:00.306397 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:46:00.348266 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:46:00.348305 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:46:00.401402 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:46:00.401452 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:46:02.917392 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:46:02.931221 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:46:02.931308 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:46:02.965808 1120970 cri.go:89] found id: ""
	I0729 19:46:02.965839 1120970 logs.go:276] 0 containers: []
	W0729 19:46:02.965850 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:46:02.965857 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:46:02.965924 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:45:59.195460 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:01.195742 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:03.196310 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:00.138417 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:02.637927 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:01.278771 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:03.279480 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:05.778549 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:03.003125 1120970 cri.go:89] found id: ""
	I0729 19:46:03.003152 1120970 logs.go:276] 0 containers: []
	W0729 19:46:03.003161 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:46:03.003168 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:46:03.003222 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:46:03.042782 1120970 cri.go:89] found id: ""
	I0729 19:46:03.042816 1120970 logs.go:276] 0 containers: []
	W0729 19:46:03.042827 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:46:03.042835 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:46:03.042922 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:46:03.082857 1120970 cri.go:89] found id: ""
	I0729 19:46:03.082891 1120970 logs.go:276] 0 containers: []
	W0729 19:46:03.082910 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:46:03.082918 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:46:03.082975 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:46:03.118096 1120970 cri.go:89] found id: ""
	I0729 19:46:03.118127 1120970 logs.go:276] 0 containers: []
	W0729 19:46:03.118147 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:46:03.118156 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:46:03.118228 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:46:03.155950 1120970 cri.go:89] found id: ""
	I0729 19:46:03.155983 1120970 logs.go:276] 0 containers: []
	W0729 19:46:03.155995 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:46:03.156003 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:46:03.156076 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:46:03.192698 1120970 cri.go:89] found id: ""
	I0729 19:46:03.192729 1120970 logs.go:276] 0 containers: []
	W0729 19:46:03.192741 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:46:03.192749 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:46:03.192822 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:46:03.230228 1120970 cri.go:89] found id: ""
	I0729 19:46:03.230261 1120970 logs.go:276] 0 containers: []
	W0729 19:46:03.230275 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:46:03.230292 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:46:03.230310 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:46:03.269169 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:46:03.269204 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:46:03.325724 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:46:03.325765 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:46:03.339955 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:46:03.339986 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:46:03.415795 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:46:03.415823 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:46:03.415839 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:46:06.002947 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:46:06.017334 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:46:06.017422 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:46:06.051132 1120970 cri.go:89] found id: ""
	I0729 19:46:06.051161 1120970 logs.go:276] 0 containers: []
	W0729 19:46:06.051169 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:46:06.051182 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:46:06.051248 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:46:06.085156 1120970 cri.go:89] found id: ""
	I0729 19:46:06.085185 1120970 logs.go:276] 0 containers: []
	W0729 19:46:06.085194 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:46:06.085200 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:46:06.085252 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:46:06.122263 1120970 cri.go:89] found id: ""
	I0729 19:46:06.122296 1120970 logs.go:276] 0 containers: []
	W0729 19:46:06.122303 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:46:06.122309 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:46:06.122377 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:46:06.158066 1120970 cri.go:89] found id: ""
	I0729 19:46:06.158093 1120970 logs.go:276] 0 containers: []
	W0729 19:46:06.158102 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:46:06.158109 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:46:06.158161 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:46:06.193082 1120970 cri.go:89] found id: ""
	I0729 19:46:06.193109 1120970 logs.go:276] 0 containers: []
	W0729 19:46:06.193117 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:46:06.193125 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:46:06.193188 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:46:06.226239 1120970 cri.go:89] found id: ""
	I0729 19:46:06.226276 1120970 logs.go:276] 0 containers: []
	W0729 19:46:06.226285 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:46:06.226292 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:46:06.226346 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:46:06.262648 1120970 cri.go:89] found id: ""
	I0729 19:46:06.262686 1120970 logs.go:276] 0 containers: []
	W0729 19:46:06.262697 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:46:06.262703 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:46:06.262769 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:46:06.304018 1120970 cri.go:89] found id: ""
	I0729 19:46:06.304047 1120970 logs.go:276] 0 containers: []
	W0729 19:46:06.304056 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:46:06.304066 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:46:06.304078 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:46:06.345240 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:46:06.345269 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:46:06.399728 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:46:06.399768 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:46:06.415271 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:46:06.415312 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:46:06.492320 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:46:06.492342 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:46:06.492361 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:46:05.695149 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:08.196040 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:05.136979 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:07.137588 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:09.140728 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:08.278537 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:10.278751 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:09.070966 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:46:09.084876 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:46:09.084957 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:46:09.123177 1120970 cri.go:89] found id: ""
	I0729 19:46:09.123209 1120970 logs.go:276] 0 containers: []
	W0729 19:46:09.123220 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:46:09.123227 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:46:09.123300 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:46:09.162546 1120970 cri.go:89] found id: ""
	I0729 19:46:09.162593 1120970 logs.go:276] 0 containers: []
	W0729 19:46:09.162605 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:46:09.162614 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:46:09.162682 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:46:09.198047 1120970 cri.go:89] found id: ""
	I0729 19:46:09.198075 1120970 logs.go:276] 0 containers: []
	W0729 19:46:09.198084 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:46:09.198091 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:46:09.198165 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:46:09.231929 1120970 cri.go:89] found id: ""
	I0729 19:46:09.231962 1120970 logs.go:276] 0 containers: []
	W0729 19:46:09.231973 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:46:09.231982 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:46:09.232051 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:46:09.269543 1120970 cri.go:89] found id: ""
	I0729 19:46:09.269574 1120970 logs.go:276] 0 containers: []
	W0729 19:46:09.269585 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:46:09.269593 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:46:09.269665 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:46:09.304012 1120970 cri.go:89] found id: ""
	I0729 19:46:09.304042 1120970 logs.go:276] 0 containers: []
	W0729 19:46:09.304051 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:46:09.304057 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:46:09.304110 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:46:09.340266 1120970 cri.go:89] found id: ""
	I0729 19:46:09.340302 1120970 logs.go:276] 0 containers: []
	W0729 19:46:09.340315 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:46:09.340323 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:46:09.340402 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:46:09.373855 1120970 cri.go:89] found id: ""
	I0729 19:46:09.373884 1120970 logs.go:276] 0 containers: []
	W0729 19:46:09.373892 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:46:09.373902 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:46:09.373916 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:46:09.434007 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:46:09.434047 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:46:09.448138 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:46:09.448168 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:46:09.523836 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:46:09.523866 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:46:09.523884 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:46:09.605562 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:46:09.605602 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:46:12.147573 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:46:12.162219 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:46:12.162307 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:46:12.197420 1120970 cri.go:89] found id: ""
	I0729 19:46:12.197446 1120970 logs.go:276] 0 containers: []
	W0729 19:46:12.197454 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:46:12.197460 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:46:12.197511 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:46:12.236008 1120970 cri.go:89] found id: ""
	I0729 19:46:12.236042 1120970 logs.go:276] 0 containers: []
	W0729 19:46:12.236052 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:46:12.236058 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:46:12.236125 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:46:12.279184 1120970 cri.go:89] found id: ""
	I0729 19:46:12.279208 1120970 logs.go:276] 0 containers: []
	W0729 19:46:12.279216 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:46:12.279222 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:46:12.279273 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:46:12.319020 1120970 cri.go:89] found id: ""
	I0729 19:46:12.319061 1120970 logs.go:276] 0 containers: []
	W0729 19:46:12.319072 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:46:12.319083 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:46:12.319140 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:46:12.354552 1120970 cri.go:89] found id: ""
	I0729 19:46:12.354591 1120970 logs.go:276] 0 containers: []
	W0729 19:46:12.354600 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:46:12.354606 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:46:12.354664 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:46:12.389196 1120970 cri.go:89] found id: ""
	I0729 19:46:12.389232 1120970 logs.go:276] 0 containers: []
	W0729 19:46:12.389242 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:46:12.389251 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:46:12.389351 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:46:12.425713 1120970 cri.go:89] found id: ""
	I0729 19:46:12.425751 1120970 logs.go:276] 0 containers: []
	W0729 19:46:12.425767 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:46:12.425776 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:46:12.425851 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:46:12.461092 1120970 cri.go:89] found id: ""
	I0729 19:46:12.461123 1120970 logs.go:276] 0 containers: []
	W0729 19:46:12.461132 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:46:12.461142 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:46:12.461162 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:46:12.537550 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:46:12.537594 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:46:12.578558 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:46:12.578597 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:46:12.629269 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:46:12.629310 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:46:12.644176 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:46:12.644202 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:46:12.717070 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:46:10.695776 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:12.696260 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:11.637812 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:14.137356 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:12.778309 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:15.278853 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:15.218239 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:46:15.232163 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:46:15.232236 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:46:15.268490 1120970 cri.go:89] found id: ""
	I0729 19:46:15.268520 1120970 logs.go:276] 0 containers: []
	W0729 19:46:15.268532 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:46:15.268539 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:46:15.268621 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:46:15.303437 1120970 cri.go:89] found id: ""
	I0729 19:46:15.303473 1120970 logs.go:276] 0 containers: []
	W0729 19:46:15.303485 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:46:15.303493 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:46:15.303557 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:46:15.340676 1120970 cri.go:89] found id: ""
	I0729 19:46:15.340706 1120970 logs.go:276] 0 containers: []
	W0729 19:46:15.340717 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:46:15.340725 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:46:15.340798 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:46:15.376731 1120970 cri.go:89] found id: ""
	I0729 19:46:15.376764 1120970 logs.go:276] 0 containers: []
	W0729 19:46:15.376775 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:46:15.376783 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:46:15.376854 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:46:15.412493 1120970 cri.go:89] found id: ""
	I0729 19:46:15.412524 1120970 logs.go:276] 0 containers: []
	W0729 19:46:15.412533 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:46:15.412541 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:46:15.412614 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:46:15.448795 1120970 cri.go:89] found id: ""
	I0729 19:46:15.448830 1120970 logs.go:276] 0 containers: []
	W0729 19:46:15.448842 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:46:15.448850 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:46:15.448923 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:46:15.484048 1120970 cri.go:89] found id: ""
	I0729 19:46:15.484082 1120970 logs.go:276] 0 containers: []
	W0729 19:46:15.484100 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:46:15.484108 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:46:15.484172 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:46:15.520340 1120970 cri.go:89] found id: ""
	I0729 19:46:15.520370 1120970 logs.go:276] 0 containers: []
	W0729 19:46:15.520380 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:46:15.520389 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:46:15.520408 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:46:15.568837 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:46:15.568877 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:46:15.582958 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:46:15.582993 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:46:15.653880 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:46:15.653901 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:46:15.653920 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:46:15.732652 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:46:15.732691 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:46:15.194855 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:17.196069 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:16.137961 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:18.139896 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:17.779000 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:19.779635 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:18.273795 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:46:18.288991 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:46:18.289066 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:46:18.327583 1120970 cri.go:89] found id: ""
	I0729 19:46:18.327619 1120970 logs.go:276] 0 containers: []
	W0729 19:46:18.327631 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:46:18.327639 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:46:18.327716 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:46:18.361476 1120970 cri.go:89] found id: ""
	I0729 19:46:18.361504 1120970 logs.go:276] 0 containers: []
	W0729 19:46:18.361515 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:46:18.361523 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:46:18.361590 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:46:18.401842 1120970 cri.go:89] found id: ""
	I0729 19:46:18.401873 1120970 logs.go:276] 0 containers: []
	W0729 19:46:18.401884 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:46:18.401892 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:46:18.401965 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:46:18.439870 1120970 cri.go:89] found id: ""
	I0729 19:46:18.439905 1120970 logs.go:276] 0 containers: []
	W0729 19:46:18.439920 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:46:18.439929 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:46:18.440015 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:46:18.474916 1120970 cri.go:89] found id: ""
	I0729 19:46:18.474944 1120970 logs.go:276] 0 containers: []
	W0729 19:46:18.474953 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:46:18.474960 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:46:18.475033 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:46:18.509957 1120970 cri.go:89] found id: ""
	I0729 19:46:18.509984 1120970 logs.go:276] 0 containers: []
	W0729 19:46:18.509993 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:46:18.509999 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:46:18.510064 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:46:18.545521 1120970 cri.go:89] found id: ""
	I0729 19:46:18.545551 1120970 logs.go:276] 0 containers: []
	W0729 19:46:18.545564 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:46:18.545573 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:46:18.545646 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:46:18.579041 1120970 cri.go:89] found id: ""
	I0729 19:46:18.579072 1120970 logs.go:276] 0 containers: []
	W0729 19:46:18.579080 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:46:18.579091 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:46:18.579104 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:46:18.653041 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:46:18.653063 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:46:18.653077 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:46:18.732969 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:46:18.733035 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:46:18.773700 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:46:18.773735 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:46:18.826511 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:46:18.826553 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:46:21.340974 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:46:21.354608 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:46:21.354671 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:46:21.388765 1120970 cri.go:89] found id: ""
	I0729 19:46:21.388795 1120970 logs.go:276] 0 containers: []
	W0729 19:46:21.388806 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:46:21.388814 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:46:21.388909 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:46:21.426734 1120970 cri.go:89] found id: ""
	I0729 19:46:21.426764 1120970 logs.go:276] 0 containers: []
	W0729 19:46:21.426776 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:46:21.426784 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:46:21.426861 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:46:21.462965 1120970 cri.go:89] found id: ""
	I0729 19:46:21.462999 1120970 logs.go:276] 0 containers: []
	W0729 19:46:21.463010 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:46:21.463018 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:46:21.463087 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:46:21.496933 1120970 cri.go:89] found id: ""
	I0729 19:46:21.496961 1120970 logs.go:276] 0 containers: []
	W0729 19:46:21.496972 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:46:21.496980 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:46:21.497043 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:46:21.532648 1120970 cri.go:89] found id: ""
	I0729 19:46:21.532682 1120970 logs.go:276] 0 containers: []
	W0729 19:46:21.532695 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:46:21.532703 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:46:21.532777 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:46:21.566507 1120970 cri.go:89] found id: ""
	I0729 19:46:21.566545 1120970 logs.go:276] 0 containers: []
	W0729 19:46:21.566556 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:46:21.566567 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:46:21.566652 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:46:21.605591 1120970 cri.go:89] found id: ""
	I0729 19:46:21.605624 1120970 logs.go:276] 0 containers: []
	W0729 19:46:21.605635 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:46:21.605644 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:46:21.605711 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:46:21.639979 1120970 cri.go:89] found id: ""
	I0729 19:46:21.640004 1120970 logs.go:276] 0 containers: []
	W0729 19:46:21.640012 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:46:21.640020 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:46:21.640035 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:46:21.694405 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:46:21.694450 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:46:21.708616 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:46:21.708647 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:46:21.778528 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:46:21.778567 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:46:21.778583 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:46:21.859626 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:46:21.859661 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:46:19.696385 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:22.195265 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:20.638331 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:23.138907 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:21.779848 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:24.278815 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:24.397520 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:46:24.412579 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:46:24.412673 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:46:24.452586 1120970 cri.go:89] found id: ""
	I0729 19:46:24.452621 1120970 logs.go:276] 0 containers: []
	W0729 19:46:24.452633 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:46:24.452640 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:46:24.452856 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:46:24.487706 1120970 cri.go:89] found id: ""
	I0729 19:46:24.487739 1120970 logs.go:276] 0 containers: []
	W0729 19:46:24.487750 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:46:24.487758 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:46:24.487828 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:46:24.528798 1120970 cri.go:89] found id: ""
	I0729 19:46:24.528832 1120970 logs.go:276] 0 containers: []
	W0729 19:46:24.528844 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:46:24.528852 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:46:24.528926 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:46:24.566429 1120970 cri.go:89] found id: ""
	I0729 19:46:24.566464 1120970 logs.go:276] 0 containers: []
	W0729 19:46:24.566484 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:46:24.566497 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:46:24.566561 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:46:24.601216 1120970 cri.go:89] found id: ""
	I0729 19:46:24.601242 1120970 logs.go:276] 0 containers: []
	W0729 19:46:24.601249 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:46:24.601255 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:46:24.601318 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:46:24.635591 1120970 cri.go:89] found id: ""
	I0729 19:46:24.635636 1120970 logs.go:276] 0 containers: []
	W0729 19:46:24.635648 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:46:24.635655 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:46:24.635723 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:46:24.670674 1120970 cri.go:89] found id: ""
	I0729 19:46:24.670705 1120970 logs.go:276] 0 containers: []
	W0729 19:46:24.670717 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:46:24.670724 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:46:24.670795 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:46:24.704820 1120970 cri.go:89] found id: ""
	I0729 19:46:24.704850 1120970 logs.go:276] 0 containers: []
	W0729 19:46:24.704861 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:46:24.704873 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:46:24.704889 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:46:24.787954 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:46:24.787989 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:46:24.849396 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:46:24.849433 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:46:24.900920 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:46:24.900956 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:46:24.915540 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:46:24.915572 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:46:24.986146 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:46:27.487069 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:46:27.500718 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:46:27.500802 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:46:27.535156 1120970 cri.go:89] found id: ""
	I0729 19:46:27.535188 1120970 logs.go:276] 0 containers: []
	W0729 19:46:27.535199 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:46:27.535206 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:46:27.535272 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:46:27.570613 1120970 cri.go:89] found id: ""
	I0729 19:46:27.570647 1120970 logs.go:276] 0 containers: []
	W0729 19:46:27.570658 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:46:27.570666 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:46:27.570726 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:46:27.605503 1120970 cri.go:89] found id: ""
	I0729 19:46:27.605540 1120970 logs.go:276] 0 containers: []
	W0729 19:46:27.605552 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:46:27.605560 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:46:27.605628 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:46:27.638179 1120970 cri.go:89] found id: ""
	I0729 19:46:27.638202 1120970 logs.go:276] 0 containers: []
	W0729 19:46:27.638209 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:46:27.638215 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:46:27.638265 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:46:27.671019 1120970 cri.go:89] found id: ""
	I0729 19:46:27.671049 1120970 logs.go:276] 0 containers: []
	W0729 19:46:27.671059 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:46:27.671067 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:46:27.671133 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:46:27.704126 1120970 cri.go:89] found id: ""
	I0729 19:46:27.704148 1120970 logs.go:276] 0 containers: []
	W0729 19:46:27.704155 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:46:27.704161 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:46:27.704217 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:46:27.736106 1120970 cri.go:89] found id: ""
	I0729 19:46:27.736137 1120970 logs.go:276] 0 containers: []
	W0729 19:46:27.736148 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:46:27.736162 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:46:27.736234 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:46:27.775615 1120970 cri.go:89] found id: ""
	I0729 19:46:27.775644 1120970 logs.go:276] 0 containers: []
	W0729 19:46:27.775655 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:46:27.775666 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:46:27.775681 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:46:27.817852 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:46:27.817882 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:46:27.867280 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:46:27.867319 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:46:27.880533 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:46:27.880558 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:46:27.952098 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:46:27.952120 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:46:27.952138 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:46:24.195374 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:26.696327 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:25.637615 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:28.138222 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:26.779021 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:29.279227 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
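(The pod_ready.go:102 lines interleaved here come from three separate test processes, each polling the Ready condition of its metrics-server pod, which stays "False" throughout this window. A hedged one-liner to inspect the same condition by hand, assuming kubectl access to the affected cluster; the pod name is copied from the log, the jsonpath query is only an illustration, not part of the test harness.)

    # Print the Ready condition that pod_ready.go keeps polling for this pod.
    kubectl -n kube-system get pod metrics-server-569cc877fc-jsvnd \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'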
	I0729 19:46:30.534052 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:46:30.560617 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:46:30.560704 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:46:30.594317 1120970 cri.go:89] found id: ""
	I0729 19:46:30.594354 1120970 logs.go:276] 0 containers: []
	W0729 19:46:30.594365 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:46:30.594372 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:46:30.594438 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:46:30.629175 1120970 cri.go:89] found id: ""
	I0729 19:46:30.629202 1120970 logs.go:276] 0 containers: []
	W0729 19:46:30.629213 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:46:30.629278 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:46:30.629358 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:46:30.663173 1120970 cri.go:89] found id: ""
	I0729 19:46:30.663199 1120970 logs.go:276] 0 containers: []
	W0729 19:46:30.663207 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:46:30.663212 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:46:30.663271 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:46:30.695709 1120970 cri.go:89] found id: ""
	I0729 19:46:30.695729 1120970 logs.go:276] 0 containers: []
	W0729 19:46:30.695738 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:46:30.695745 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:46:30.695808 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:46:30.726555 1120970 cri.go:89] found id: ""
	I0729 19:46:30.726582 1120970 logs.go:276] 0 containers: []
	W0729 19:46:30.726589 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:46:30.726597 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:46:30.726658 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:46:30.759818 1120970 cri.go:89] found id: ""
	I0729 19:46:30.759847 1120970 logs.go:276] 0 containers: []
	W0729 19:46:30.759859 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:46:30.759865 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:46:30.759928 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:46:30.794006 1120970 cri.go:89] found id: ""
	I0729 19:46:30.794038 1120970 logs.go:276] 0 containers: []
	W0729 19:46:30.794051 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:46:30.794058 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:46:30.794127 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:46:30.825707 1120970 cri.go:89] found id: ""
	I0729 19:46:30.825735 1120970 logs.go:276] 0 containers: []
	W0729 19:46:30.825744 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:46:30.825753 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:46:30.825767 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:46:30.877517 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:46:30.877553 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:46:30.890777 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:46:30.890811 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:46:30.956702 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:46:30.956732 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:46:30.956747 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:46:31.039080 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:46:31.039118 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:46:29.195305 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:31.694814 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:33.696603 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:30.638472 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:33.138085 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:31.279889 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:33.779333 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:33.580120 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:46:33.595087 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:46:33.595152 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:46:33.636347 1120970 cri.go:89] found id: ""
	I0729 19:46:33.636374 1120970 logs.go:276] 0 containers: []
	W0729 19:46:33.636385 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:46:33.636392 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:46:33.636451 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:46:33.674180 1120970 cri.go:89] found id: ""
	I0729 19:46:33.674207 1120970 logs.go:276] 0 containers: []
	W0729 19:46:33.674215 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:46:33.674222 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:46:33.674281 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:46:33.709549 1120970 cri.go:89] found id: ""
	I0729 19:46:33.709572 1120970 logs.go:276] 0 containers: []
	W0729 19:46:33.709581 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:46:33.709593 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:46:33.709651 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:46:33.742803 1120970 cri.go:89] found id: ""
	I0729 19:46:33.742833 1120970 logs.go:276] 0 containers: []
	W0729 19:46:33.742854 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:46:33.742863 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:46:33.742931 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:46:33.776301 1120970 cri.go:89] found id: ""
	I0729 19:46:33.776329 1120970 logs.go:276] 0 containers: []
	W0729 19:46:33.776336 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:46:33.776342 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:46:33.776412 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:46:33.818972 1120970 cri.go:89] found id: ""
	I0729 19:46:33.819001 1120970 logs.go:276] 0 containers: []
	W0729 19:46:33.819009 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:46:33.819016 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:46:33.819084 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:46:33.857970 1120970 cri.go:89] found id: ""
	I0729 19:46:33.858002 1120970 logs.go:276] 0 containers: []
	W0729 19:46:33.858022 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:46:33.858028 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:46:33.858113 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:46:33.896207 1120970 cri.go:89] found id: ""
	I0729 19:46:33.896237 1120970 logs.go:276] 0 containers: []
	W0729 19:46:33.896248 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:46:33.896261 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:46:33.896276 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:46:33.976843 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:46:33.976879 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:46:34.015642 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:46:34.015671 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:46:34.066095 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:46:34.066133 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:46:34.079616 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:46:34.079649 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:46:34.150666 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
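(Every "describe nodes" attempt above fails with "The connection to the server localhost:8443 was refused", i.e. the kubeconfig points at an apiserver that is not listening. A small sketch for confirming that directly on the node, assuming shell access; the pgrep pattern mirrors the probe in the log, the ss check is an added illustration.)

    # Is anything listening on the port the kubeconfig targets?
    sudo ss -ltn 'sport = :8443'
    # Mirrors the probe in the log: is a kube-apiserver process running at all?
    sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo "kube-apiserver is not running"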
	I0729 19:46:36.651722 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:46:36.665599 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:46:36.665673 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:46:36.702807 1120970 cri.go:89] found id: ""
	I0729 19:46:36.702872 1120970 logs.go:276] 0 containers: []
	W0729 19:46:36.702897 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:46:36.702907 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:46:36.702978 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:46:36.739552 1120970 cri.go:89] found id: ""
	I0729 19:46:36.739576 1120970 logs.go:276] 0 containers: []
	W0729 19:46:36.739585 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:46:36.739591 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:46:36.739643 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:46:36.774989 1120970 cri.go:89] found id: ""
	I0729 19:46:36.775017 1120970 logs.go:276] 0 containers: []
	W0729 19:46:36.775028 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:46:36.775036 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:46:36.775108 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:46:36.814984 1120970 cri.go:89] found id: ""
	I0729 19:46:36.815017 1120970 logs.go:276] 0 containers: []
	W0729 19:46:36.815034 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:46:36.815044 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:46:36.815113 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:46:36.848075 1120970 cri.go:89] found id: ""
	I0729 19:46:36.848116 1120970 logs.go:276] 0 containers: []
	W0729 19:46:36.848127 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:46:36.848136 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:46:36.848206 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:46:36.880504 1120970 cri.go:89] found id: ""
	I0729 19:46:36.880535 1120970 logs.go:276] 0 containers: []
	W0729 19:46:36.880544 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:46:36.880557 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:46:36.880615 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:46:36.914716 1120970 cri.go:89] found id: ""
	I0729 19:46:36.914744 1120970 logs.go:276] 0 containers: []
	W0729 19:46:36.914755 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:46:36.914763 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:46:36.914831 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:46:36.958975 1120970 cri.go:89] found id: ""
	I0729 19:46:36.959005 1120970 logs.go:276] 0 containers: []
	W0729 19:46:36.959016 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:46:36.959029 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:46:36.959046 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:46:37.018208 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:46:37.018244 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:46:37.042496 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:46:37.042537 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:46:37.112833 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:46:37.112861 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:46:37.112877 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:46:37.191572 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:46:37.191616 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:46:36.195356 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:38.694730 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:35.637513 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:38.137458 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:36.278153 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:38.778586 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:39.736044 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:46:39.749645 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:46:39.749719 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:46:39.786131 1120970 cri.go:89] found id: ""
	I0729 19:46:39.786155 1120970 logs.go:276] 0 containers: []
	W0729 19:46:39.786166 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:46:39.786174 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:46:39.786237 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:46:39.820470 1120970 cri.go:89] found id: ""
	I0729 19:46:39.820499 1120970 logs.go:276] 0 containers: []
	W0729 19:46:39.820509 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:46:39.820516 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:46:39.820583 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:46:39.854119 1120970 cri.go:89] found id: ""
	I0729 19:46:39.854148 1120970 logs.go:276] 0 containers: []
	W0729 19:46:39.854157 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:46:39.854163 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:46:39.854218 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:46:39.894676 1120970 cri.go:89] found id: ""
	I0729 19:46:39.894707 1120970 logs.go:276] 0 containers: []
	W0729 19:46:39.894714 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:46:39.894721 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:46:39.894789 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:46:39.932651 1120970 cri.go:89] found id: ""
	I0729 19:46:39.932685 1120970 logs.go:276] 0 containers: []
	W0729 19:46:39.932697 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:46:39.932705 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:46:39.932776 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:46:39.968119 1120970 cri.go:89] found id: ""
	I0729 19:46:39.968153 1120970 logs.go:276] 0 containers: []
	W0729 19:46:39.968165 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:46:39.968174 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:46:39.968242 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:46:40.004137 1120970 cri.go:89] found id: ""
	I0729 19:46:40.004167 1120970 logs.go:276] 0 containers: []
	W0729 19:46:40.004175 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:46:40.004181 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:46:40.004252 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:46:40.042519 1120970 cri.go:89] found id: ""
	I0729 19:46:40.042552 1120970 logs.go:276] 0 containers: []
	W0729 19:46:40.042563 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:46:40.042577 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:46:40.042601 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:46:40.118691 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:46:40.118720 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:46:40.118733 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:46:40.198249 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:46:40.198279 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:46:40.236828 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:46:40.236861 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:46:40.290890 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:46:40.290920 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
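(Each failed cycle above gathers the same four sources: the kubelet and CRI-O journals, recent kernel warnings, and the CRI container table. A compact, slightly simplified sketch of that collection, assuming shell access to the node; the commands are copied from the Run: lines in the log, and the output file names are invented for illustration.)

    # Collect the same diagnostics the log gathers on each failed cycle.
    sudo journalctl -u kubelet -n 400 > kubelet.log
    sudo journalctl -u crio -n 400 > crio.log
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400 > dmesg.log
    sudo sh -c 'crictl ps -a || docker ps -a' > containers.log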
	I0729 19:46:42.804834 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:46:42.818516 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:46:42.818608 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:46:42.855519 1120970 cri.go:89] found id: ""
	I0729 19:46:42.855553 1120970 logs.go:276] 0 containers: []
	W0729 19:46:42.855565 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:46:42.855573 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:46:42.855634 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:46:42.891795 1120970 cri.go:89] found id: ""
	I0729 19:46:42.891827 1120970 logs.go:276] 0 containers: []
	W0729 19:46:42.891837 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:46:42.891845 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:46:42.891912 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:46:42.925308 1120970 cri.go:89] found id: ""
	I0729 19:46:42.925341 1120970 logs.go:276] 0 containers: []
	W0729 19:46:42.925352 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:46:42.925359 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:46:42.925428 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:46:42.961943 1120970 cri.go:89] found id: ""
	I0729 19:46:42.961968 1120970 logs.go:276] 0 containers: []
	W0729 19:46:42.961976 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:46:42.961984 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:46:42.962034 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:46:41.194992 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:43.195814 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:40.138881 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:42.637095 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:44.637746 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:41.278451 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:43.279669 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:45.778954 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:42.994246 1120970 cri.go:89] found id: ""
	I0729 19:46:42.994276 1120970 logs.go:276] 0 containers: []
	W0729 19:46:42.994284 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:46:42.994290 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:46:42.994406 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:46:43.027914 1120970 cri.go:89] found id: ""
	I0729 19:46:43.027943 1120970 logs.go:276] 0 containers: []
	W0729 19:46:43.027953 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:46:43.027962 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:46:43.028029 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:46:43.064274 1120970 cri.go:89] found id: ""
	I0729 19:46:43.064308 1120970 logs.go:276] 0 containers: []
	W0729 19:46:43.064319 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:46:43.064328 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:46:43.064402 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:46:43.104273 1120970 cri.go:89] found id: ""
	I0729 19:46:43.104303 1120970 logs.go:276] 0 containers: []
	W0729 19:46:43.104313 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:46:43.104324 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:46:43.104342 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:46:43.175951 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:46:43.175978 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:46:43.175995 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:46:43.253386 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:46:43.253421 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:46:43.293276 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:46:43.293304 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:46:43.345865 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:46:43.345896 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:46:45.861099 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:46:45.875854 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:46:45.875925 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:46:45.914780 1120970 cri.go:89] found id: ""
	I0729 19:46:45.914815 1120970 logs.go:276] 0 containers: []
	W0729 19:46:45.914827 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:46:45.914837 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:46:45.914925 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:46:45.952575 1120970 cri.go:89] found id: ""
	I0729 19:46:45.952607 1120970 logs.go:276] 0 containers: []
	W0729 19:46:45.952616 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:46:45.952622 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:46:45.952676 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:46:45.993298 1120970 cri.go:89] found id: ""
	I0729 19:46:45.993331 1120970 logs.go:276] 0 containers: []
	W0729 19:46:45.993338 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:46:45.993344 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:46:45.993400 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:46:46.033190 1120970 cri.go:89] found id: ""
	I0729 19:46:46.033216 1120970 logs.go:276] 0 containers: []
	W0729 19:46:46.033225 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:46:46.033230 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:46:46.033283 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:46:46.068694 1120970 cri.go:89] found id: ""
	I0729 19:46:46.068728 1120970 logs.go:276] 0 containers: []
	W0729 19:46:46.068737 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:46:46.068743 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:46:46.068796 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:46:46.101678 1120970 cri.go:89] found id: ""
	I0729 19:46:46.101716 1120970 logs.go:276] 0 containers: []
	W0729 19:46:46.101726 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:46:46.101733 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:46:46.101788 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:46:46.141669 1120970 cri.go:89] found id: ""
	I0729 19:46:46.141702 1120970 logs.go:276] 0 containers: []
	W0729 19:46:46.141713 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:46:46.141721 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:46:46.141780 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:46:46.173182 1120970 cri.go:89] found id: ""
	I0729 19:46:46.173213 1120970 logs.go:276] 0 containers: []
	W0729 19:46:46.173224 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:46:46.173235 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:46:46.173252 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:46:46.224615 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:46:46.224660 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:46:46.237889 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:46:46.237915 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:46:46.312446 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:46:46.312473 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:46:46.312489 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:46:46.389168 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:46:46.389206 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:46:45.196687 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:47.694428 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:46.638398 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:48.639437 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:48.277740 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:50.278638 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:48.930620 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:46:48.944038 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:46:48.944101 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:46:48.979672 1120970 cri.go:89] found id: ""
	I0729 19:46:48.979710 1120970 logs.go:276] 0 containers: []
	W0729 19:46:48.979722 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:46:48.979730 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:46:48.979804 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:46:49.014931 1120970 cri.go:89] found id: ""
	I0729 19:46:49.014967 1120970 logs.go:276] 0 containers: []
	W0729 19:46:49.014980 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:46:49.015006 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:46:49.015078 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:46:49.050867 1120970 cri.go:89] found id: ""
	I0729 19:46:49.050903 1120970 logs.go:276] 0 containers: []
	W0729 19:46:49.050916 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:46:49.050924 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:46:49.050992 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:46:49.085479 1120970 cri.go:89] found id: ""
	I0729 19:46:49.085514 1120970 logs.go:276] 0 containers: []
	W0729 19:46:49.085521 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:46:49.085529 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:46:49.085604 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:46:49.118570 1120970 cri.go:89] found id: ""
	I0729 19:46:49.118597 1120970 logs.go:276] 0 containers: []
	W0729 19:46:49.118605 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:46:49.118611 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:46:49.118664 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:46:49.153581 1120970 cri.go:89] found id: ""
	I0729 19:46:49.153612 1120970 logs.go:276] 0 containers: []
	W0729 19:46:49.153624 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:46:49.153632 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:46:49.153702 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:46:49.187178 1120970 cri.go:89] found id: ""
	I0729 19:46:49.187207 1120970 logs.go:276] 0 containers: []
	W0729 19:46:49.187215 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:46:49.187221 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:46:49.187280 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:46:49.223132 1120970 cri.go:89] found id: ""
	I0729 19:46:49.223163 1120970 logs.go:276] 0 containers: []
	W0729 19:46:49.223173 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:46:49.223185 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:46:49.223200 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:46:49.274160 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:46:49.274189 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:46:49.288399 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:46:49.288431 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:46:49.358452 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:46:49.358478 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:46:49.358496 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:46:49.436711 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:46:49.436745 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:46:51.977377 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:46:51.991042 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:46:51.991102 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:46:52.031425 1120970 cri.go:89] found id: ""
	I0729 19:46:52.031467 1120970 logs.go:276] 0 containers: []
	W0729 19:46:52.031477 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:46:52.031482 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:46:52.031557 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:46:52.069014 1120970 cri.go:89] found id: ""
	I0729 19:46:52.069045 1120970 logs.go:276] 0 containers: []
	W0729 19:46:52.069056 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:46:52.069064 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:46:52.069137 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:46:52.101974 1120970 cri.go:89] found id: ""
	I0729 19:46:52.102000 1120970 logs.go:276] 0 containers: []
	W0729 19:46:52.102008 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:46:52.102014 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:46:52.102079 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:46:52.136232 1120970 cri.go:89] found id: ""
	I0729 19:46:52.136261 1120970 logs.go:276] 0 containers: []
	W0729 19:46:52.136271 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:46:52.136280 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:46:52.136344 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:46:52.173555 1120970 cri.go:89] found id: ""
	I0729 19:46:52.173585 1120970 logs.go:276] 0 containers: []
	W0729 19:46:52.173602 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:46:52.173611 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:46:52.173675 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:46:52.208764 1120970 cri.go:89] found id: ""
	I0729 19:46:52.208791 1120970 logs.go:276] 0 containers: []
	W0729 19:46:52.208799 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:46:52.208805 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:46:52.208863 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:46:52.241514 1120970 cri.go:89] found id: ""
	I0729 19:46:52.241541 1120970 logs.go:276] 0 containers: []
	W0729 19:46:52.241557 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:46:52.241564 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:46:52.241639 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:46:52.277726 1120970 cri.go:89] found id: ""
	I0729 19:46:52.277753 1120970 logs.go:276] 0 containers: []
	W0729 19:46:52.277764 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:46:52.277775 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:46:52.277789 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:46:52.344894 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:46:52.344916 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:46:52.344931 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:46:52.421492 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:46:52.421527 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:46:52.460896 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:46:52.460934 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:46:52.509921 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:46:52.509960 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:46:49.695616 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:51.696510 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:51.138012 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:53.138676 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:52.280019 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:54.778157 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:55.024046 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:46:55.037609 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:46:55.037681 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:46:55.071059 1120970 cri.go:89] found id: ""
	I0729 19:46:55.071086 1120970 logs.go:276] 0 containers: []
	W0729 19:46:55.071094 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:46:55.071102 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:46:55.071162 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:46:55.106634 1120970 cri.go:89] found id: ""
	I0729 19:46:55.106660 1120970 logs.go:276] 0 containers: []
	W0729 19:46:55.106669 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:46:55.106675 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:46:55.106737 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:46:55.138821 1120970 cri.go:89] found id: ""
	I0729 19:46:55.138858 1120970 logs.go:276] 0 containers: []
	W0729 19:46:55.138870 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:46:55.138878 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:46:55.138941 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:46:55.173846 1120970 cri.go:89] found id: ""
	I0729 19:46:55.173893 1120970 logs.go:276] 0 containers: []
	W0729 19:46:55.173904 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:46:55.173913 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:46:55.173978 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:46:55.211853 1120970 cri.go:89] found id: ""
	I0729 19:46:55.211878 1120970 logs.go:276] 0 containers: []
	W0729 19:46:55.211885 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:46:55.211891 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:46:55.211941 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:46:55.245432 1120970 cri.go:89] found id: ""
	I0729 19:46:55.245470 1120970 logs.go:276] 0 containers: []
	W0729 19:46:55.245481 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:46:55.245489 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:46:55.245557 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:46:55.286752 1120970 cri.go:89] found id: ""
	I0729 19:46:55.286777 1120970 logs.go:276] 0 containers: []
	W0729 19:46:55.286785 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:46:55.286791 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:46:55.286841 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:46:55.328070 1120970 cri.go:89] found id: ""
	I0729 19:46:55.328100 1120970 logs.go:276] 0 containers: []
	W0729 19:46:55.328119 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:46:55.328133 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:46:55.328151 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:46:55.341257 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:46:55.341285 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:46:55.410966 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:46:55.410989 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:46:55.411008 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:46:55.486615 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:46:55.486653 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:46:55.523615 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:46:55.523653 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:46:54.195887 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:56.703055 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:55.138951 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:57.638887 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:56.778215 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:59.278483 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:58.074596 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:46:58.088302 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:46:58.088396 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:46:58.124557 1120970 cri.go:89] found id: ""
	I0729 19:46:58.124589 1120970 logs.go:276] 0 containers: []
	W0729 19:46:58.124600 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:46:58.124608 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:46:58.124680 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:46:58.160107 1120970 cri.go:89] found id: ""
	I0729 19:46:58.160142 1120970 logs.go:276] 0 containers: []
	W0729 19:46:58.160151 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:46:58.160157 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:46:58.160214 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:46:58.195522 1120970 cri.go:89] found id: ""
	I0729 19:46:58.195553 1120970 logs.go:276] 0 containers: []
	W0729 19:46:58.195564 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:46:58.195572 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:46:58.195637 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:46:58.232307 1120970 cri.go:89] found id: ""
	I0729 19:46:58.232338 1120970 logs.go:276] 0 containers: []
	W0729 19:46:58.232348 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:46:58.232355 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:46:58.232419 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:46:58.271551 1120970 cri.go:89] found id: ""
	I0729 19:46:58.271602 1120970 logs.go:276] 0 containers: []
	W0729 19:46:58.271614 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:46:58.271622 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:46:58.271701 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:46:58.307833 1120970 cri.go:89] found id: ""
	I0729 19:46:58.307864 1120970 logs.go:276] 0 containers: []
	W0729 19:46:58.307875 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:46:58.307884 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:46:58.307951 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:46:58.341961 1120970 cri.go:89] found id: ""
	I0729 19:46:58.341989 1120970 logs.go:276] 0 containers: []
	W0729 19:46:58.341998 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:46:58.342004 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:46:58.342058 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:46:58.379923 1120970 cri.go:89] found id: ""
	I0729 19:46:58.379962 1120970 logs.go:276] 0 containers: []
	W0729 19:46:58.379972 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:46:58.379982 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:46:58.379997 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:46:58.423276 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:46:58.423310 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:46:58.479021 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:46:58.479063 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:46:58.493544 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:46:58.493578 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:46:58.562634 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:46:58.562663 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:46:58.562684 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:47:01.145327 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:47:01.158997 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:47:01.159060 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:47:01.196272 1120970 cri.go:89] found id: ""
	I0729 19:47:01.196298 1120970 logs.go:276] 0 containers: []
	W0729 19:47:01.196306 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:47:01.196312 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:47:01.196364 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:47:01.238138 1120970 cri.go:89] found id: ""
	I0729 19:47:01.238167 1120970 logs.go:276] 0 containers: []
	W0729 19:47:01.238177 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:47:01.238185 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:47:01.238249 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:47:01.276497 1120970 cri.go:89] found id: ""
	I0729 19:47:01.276525 1120970 logs.go:276] 0 containers: []
	W0729 19:47:01.276535 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:47:01.276543 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:47:01.276607 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:47:01.309092 1120970 cri.go:89] found id: ""
	I0729 19:47:01.309121 1120970 logs.go:276] 0 containers: []
	W0729 19:47:01.309130 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:47:01.309135 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:47:01.309189 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:47:01.340172 1120970 cri.go:89] found id: ""
	I0729 19:47:01.340202 1120970 logs.go:276] 0 containers: []
	W0729 19:47:01.340211 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:47:01.340217 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:47:01.340277 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:47:01.377905 1120970 cri.go:89] found id: ""
	I0729 19:47:01.377941 1120970 logs.go:276] 0 containers: []
	W0729 19:47:01.377953 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:47:01.377961 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:47:01.378034 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:47:01.414735 1120970 cri.go:89] found id: ""
	I0729 19:47:01.414767 1120970 logs.go:276] 0 containers: []
	W0729 19:47:01.414780 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:47:01.414789 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:47:01.414880 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:47:01.455743 1120970 cri.go:89] found id: ""
	I0729 19:47:01.455768 1120970 logs.go:276] 0 containers: []
	W0729 19:47:01.455776 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:47:01.455786 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:47:01.455799 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:47:01.507105 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:47:01.507141 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:47:01.520437 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:47:01.520465 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:47:01.590724 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:47:01.590746 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:47:01.590763 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:47:01.675343 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:47:01.675378 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:46:59.195744 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:01.695905 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:00.138760 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:02.139418 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:04.637243 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:01.278715 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:03.279321 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:05.778276 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:04.219800 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:47:04.234604 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:47:04.234684 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:47:04.267782 1120970 cri.go:89] found id: ""
	I0729 19:47:04.267810 1120970 logs.go:276] 0 containers: []
	W0729 19:47:04.267822 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:47:04.267830 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:47:04.267897 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:47:04.302373 1120970 cri.go:89] found id: ""
	I0729 19:47:04.302402 1120970 logs.go:276] 0 containers: []
	W0729 19:47:04.302413 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:47:04.302420 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:47:04.302484 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:47:04.334998 1120970 cri.go:89] found id: ""
	I0729 19:47:04.335030 1120970 logs.go:276] 0 containers: []
	W0729 19:47:04.335041 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:47:04.335049 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:47:04.335105 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:47:04.370596 1120970 cri.go:89] found id: ""
	I0729 19:47:04.370624 1120970 logs.go:276] 0 containers: []
	W0729 19:47:04.370631 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:47:04.370638 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:47:04.370695 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:47:04.405912 1120970 cri.go:89] found id: ""
	I0729 19:47:04.405945 1120970 logs.go:276] 0 containers: []
	W0729 19:47:04.405957 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:47:04.405966 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:47:04.406044 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:47:04.439856 1120970 cri.go:89] found id: ""
	I0729 19:47:04.439881 1120970 logs.go:276] 0 containers: []
	W0729 19:47:04.439898 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:47:04.439905 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:47:04.439976 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:47:04.473561 1120970 cri.go:89] found id: ""
	I0729 19:47:04.473587 1120970 logs.go:276] 0 containers: []
	W0729 19:47:04.473595 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:47:04.473601 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:47:04.473662 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:47:04.510181 1120970 cri.go:89] found id: ""
	I0729 19:47:04.510207 1120970 logs.go:276] 0 containers: []
	W0729 19:47:04.510217 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:47:04.510226 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:47:04.510239 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:47:04.559448 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:47:04.559485 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:47:04.573752 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:47:04.573782 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:47:04.641008 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:47:04.641030 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:47:04.641046 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:47:04.725252 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:47:04.725293 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:47:07.266379 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:47:07.280725 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:47:07.280801 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:47:07.321856 1120970 cri.go:89] found id: ""
	I0729 19:47:07.321886 1120970 logs.go:276] 0 containers: []
	W0729 19:47:07.321894 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:47:07.321900 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:47:07.321986 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:47:07.355102 1120970 cri.go:89] found id: ""
	I0729 19:47:07.355130 1120970 logs.go:276] 0 containers: []
	W0729 19:47:07.355138 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:47:07.355144 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:47:07.355203 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:47:07.394720 1120970 cri.go:89] found id: ""
	I0729 19:47:07.394749 1120970 logs.go:276] 0 containers: []
	W0729 19:47:07.394762 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:47:07.394771 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:47:07.394829 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:47:07.431002 1120970 cri.go:89] found id: ""
	I0729 19:47:07.431042 1120970 logs.go:276] 0 containers: []
	W0729 19:47:07.431055 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:47:07.431063 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:47:07.431132 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:47:07.467818 1120970 cri.go:89] found id: ""
	I0729 19:47:07.467855 1120970 logs.go:276] 0 containers: []
	W0729 19:47:07.467864 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:47:07.467873 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:47:07.467942 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:47:07.504285 1120970 cri.go:89] found id: ""
	I0729 19:47:07.504316 1120970 logs.go:276] 0 containers: []
	W0729 19:47:07.504327 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:47:07.504335 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:47:07.504411 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:47:07.538246 1120970 cri.go:89] found id: ""
	I0729 19:47:07.538276 1120970 logs.go:276] 0 containers: []
	W0729 19:47:07.538284 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:47:07.538291 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:47:07.538351 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:47:07.573911 1120970 cri.go:89] found id: ""
	I0729 19:47:07.573939 1120970 logs.go:276] 0 containers: []
	W0729 19:47:07.573948 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:47:07.573957 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:47:07.573970 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:47:07.588083 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:47:07.588129 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:47:07.656169 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:47:07.656198 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:47:07.656216 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:47:07.740230 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:47:07.740264 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:47:07.780822 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:47:07.780856 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:47:04.195230 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:06.695090 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:06.637479 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:08.638410 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:08.278522 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:10.782193 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:10.336208 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:47:10.350233 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:47:10.350307 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:47:10.389155 1120970 cri.go:89] found id: ""
	I0729 19:47:10.389190 1120970 logs.go:276] 0 containers: []
	W0729 19:47:10.389202 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:47:10.389210 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:47:10.389277 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:47:10.421432 1120970 cri.go:89] found id: ""
	I0729 19:47:10.421466 1120970 logs.go:276] 0 containers: []
	W0729 19:47:10.421476 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:47:10.421482 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:47:10.421552 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:47:10.462530 1120970 cri.go:89] found id: ""
	I0729 19:47:10.462563 1120970 logs.go:276] 0 containers: []
	W0729 19:47:10.462572 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:47:10.462577 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:47:10.462640 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:47:10.499899 1120970 cri.go:89] found id: ""
	I0729 19:47:10.499927 1120970 logs.go:276] 0 containers: []
	W0729 19:47:10.499935 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:47:10.499945 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:47:10.500007 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:47:10.534022 1120970 cri.go:89] found id: ""
	I0729 19:47:10.534051 1120970 logs.go:276] 0 containers: []
	W0729 19:47:10.534060 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:47:10.534066 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:47:10.534119 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:47:10.568136 1120970 cri.go:89] found id: ""
	I0729 19:47:10.568166 1120970 logs.go:276] 0 containers: []
	W0729 19:47:10.568174 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:47:10.568181 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:47:10.568246 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:47:10.603887 1120970 cri.go:89] found id: ""
	I0729 19:47:10.603919 1120970 logs.go:276] 0 containers: []
	W0729 19:47:10.603930 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:47:10.603938 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:47:10.604005 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:47:10.639947 1120970 cri.go:89] found id: ""
	I0729 19:47:10.639974 1120970 logs.go:276] 0 containers: []
	W0729 19:47:10.639981 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:47:10.639989 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:47:10.640001 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:47:10.693113 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:47:10.693146 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:47:10.708099 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:47:10.708138 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:47:10.777587 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:47:10.777618 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:47:10.777634 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:47:10.872453 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:47:10.872499 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:47:09.195301 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:11.695021 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:13.697025 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:11.137420 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:13.137553 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:13.278601 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:15.779974 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:13.412398 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:47:13.426246 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:47:13.426308 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:47:13.463170 1120970 cri.go:89] found id: ""
	I0729 19:47:13.463202 1120970 logs.go:276] 0 containers: []
	W0729 19:47:13.463213 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:47:13.463220 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:47:13.463287 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:47:13.499102 1120970 cri.go:89] found id: ""
	I0729 19:47:13.499137 1120970 logs.go:276] 0 containers: []
	W0729 19:47:13.499146 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:47:13.499151 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:47:13.499235 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:47:13.531462 1120970 cri.go:89] found id: ""
	I0729 19:47:13.531514 1120970 logs.go:276] 0 containers: []
	W0729 19:47:13.531526 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:47:13.531534 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:47:13.531606 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:47:13.564632 1120970 cri.go:89] found id: ""
	I0729 19:47:13.564670 1120970 logs.go:276] 0 containers: []
	W0729 19:47:13.564681 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:47:13.564689 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:47:13.564745 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:47:13.596564 1120970 cri.go:89] found id: ""
	I0729 19:47:13.596591 1120970 logs.go:276] 0 containers: []
	W0729 19:47:13.596602 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:47:13.596610 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:47:13.596686 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:47:13.629682 1120970 cri.go:89] found id: ""
	I0729 19:47:13.629711 1120970 logs.go:276] 0 containers: []
	W0729 19:47:13.629721 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:47:13.629729 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:47:13.629791 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:47:13.664666 1120970 cri.go:89] found id: ""
	I0729 19:47:13.664693 1120970 logs.go:276] 0 containers: []
	W0729 19:47:13.664701 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:47:13.664708 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:47:13.664777 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:47:13.699238 1120970 cri.go:89] found id: ""
	I0729 19:47:13.699267 1120970 logs.go:276] 0 containers: []
	W0729 19:47:13.699277 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:47:13.699289 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:47:13.699304 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:47:13.751555 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:47:13.751588 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:47:13.766769 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:47:13.766801 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:47:13.834898 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:47:13.834918 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:47:13.834932 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:47:13.913907 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:47:13.913944 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:47:16.457229 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:47:16.470138 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:47:16.470222 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:47:16.504643 1120970 cri.go:89] found id: ""
	I0729 19:47:16.504679 1120970 logs.go:276] 0 containers: []
	W0729 19:47:16.504688 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:47:16.504693 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:47:16.504763 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:47:16.539328 1120970 cri.go:89] found id: ""
	I0729 19:47:16.539368 1120970 logs.go:276] 0 containers: []
	W0729 19:47:16.539379 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:47:16.539385 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:47:16.539446 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:47:16.597867 1120970 cri.go:89] found id: ""
	I0729 19:47:16.597893 1120970 logs.go:276] 0 containers: []
	W0729 19:47:16.597904 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:47:16.597911 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:47:16.597976 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:47:16.631728 1120970 cri.go:89] found id: ""
	I0729 19:47:16.631755 1120970 logs.go:276] 0 containers: []
	W0729 19:47:16.631768 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:47:16.631780 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:47:16.631842 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:47:16.668337 1120970 cri.go:89] found id: ""
	I0729 19:47:16.668377 1120970 logs.go:276] 0 containers: []
	W0729 19:47:16.668387 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:47:16.668395 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:47:16.668461 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:47:16.704808 1120970 cri.go:89] found id: ""
	I0729 19:47:16.704834 1120970 logs.go:276] 0 containers: []
	W0729 19:47:16.704844 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:47:16.704851 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:47:16.704911 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:47:16.743919 1120970 cri.go:89] found id: ""
	I0729 19:47:16.743948 1120970 logs.go:276] 0 containers: []
	W0729 19:47:16.743955 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:47:16.743961 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:47:16.744022 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:47:16.785240 1120970 cri.go:89] found id: ""
	I0729 19:47:16.785268 1120970 logs.go:276] 0 containers: []
	W0729 19:47:16.785279 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:47:16.785290 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:47:16.785306 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:47:16.838247 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:47:16.838288 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:47:16.851766 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:47:16.851797 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:47:16.928960 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:47:16.928986 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:47:16.929002 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:47:17.008260 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:47:17.008296 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:47:16.194957 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:18.196333 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:15.138916 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:17.637392 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:19.638484 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:17.781105 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:20.279439 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:19.555108 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:47:19.569838 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:47:19.569917 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:47:19.608358 1120970 cri.go:89] found id: ""
	I0729 19:47:19.608393 1120970 logs.go:276] 0 containers: []
	W0729 19:47:19.608405 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:47:19.608414 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:47:19.608475 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:47:19.644144 1120970 cri.go:89] found id: ""
	I0729 19:47:19.644173 1120970 logs.go:276] 0 containers: []
	W0729 19:47:19.644183 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:47:19.644191 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:47:19.644259 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:47:19.686316 1120970 cri.go:89] found id: ""
	I0729 19:47:19.686342 1120970 logs.go:276] 0 containers: []
	W0729 19:47:19.686353 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:47:19.686359 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:47:19.686419 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:47:19.722006 1120970 cri.go:89] found id: ""
	I0729 19:47:19.722034 1120970 logs.go:276] 0 containers: []
	W0729 19:47:19.722044 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:47:19.722052 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:47:19.722127 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:47:19.762767 1120970 cri.go:89] found id: ""
	I0729 19:47:19.762799 1120970 logs.go:276] 0 containers: []
	W0729 19:47:19.762811 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:47:19.762818 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:47:19.762904 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:47:19.802185 1120970 cri.go:89] found id: ""
	I0729 19:47:19.802217 1120970 logs.go:276] 0 containers: []
	W0729 19:47:19.802228 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:47:19.802238 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:47:19.802311 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:47:19.840001 1120970 cri.go:89] found id: ""
	I0729 19:47:19.840036 1120970 logs.go:276] 0 containers: []
	W0729 19:47:19.840048 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:47:19.840056 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:47:19.840117 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:47:19.877627 1120970 cri.go:89] found id: ""
	I0729 19:47:19.877657 1120970 logs.go:276] 0 containers: []
	W0729 19:47:19.877668 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:47:19.877681 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:47:19.877698 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:47:19.920673 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:47:19.920708 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:47:19.980004 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:47:19.980045 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:47:19.994679 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:47:19.994714 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:47:20.064864 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:47:20.064892 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:47:20.064910 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:47:22.650763 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:47:22.664998 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:47:22.665079 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:47:22.701576 1120970 cri.go:89] found id: ""
	I0729 19:47:22.701611 1120970 logs.go:276] 0 containers: []
	W0729 19:47:22.701620 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:47:22.701630 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:47:22.701689 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:47:22.744238 1120970 cri.go:89] found id: ""
	I0729 19:47:22.744268 1120970 logs.go:276] 0 containers: []
	W0729 19:47:22.744275 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:47:22.744287 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:47:22.744358 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:47:22.785947 1120970 cri.go:89] found id: ""
	I0729 19:47:22.785974 1120970 logs.go:276] 0 containers: []
	W0729 19:47:22.785982 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:47:22.785988 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:47:22.786047 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:47:22.823352 1120970 cri.go:89] found id: ""
	I0729 19:47:22.823379 1120970 logs.go:276] 0 containers: []
	W0729 19:47:22.823387 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:47:22.823394 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:47:22.823462 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:47:22.855676 1120970 cri.go:89] found id: ""
	I0729 19:47:22.855704 1120970 logs.go:276] 0 containers: []
	W0729 19:47:22.855710 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:47:22.855716 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:47:22.855773 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:47:22.891910 1120970 cri.go:89] found id: ""
	I0729 19:47:22.891943 1120970 logs.go:276] 0 containers: []
	W0729 19:47:22.891956 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:47:22.891964 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:47:22.892025 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:47:22.928605 1120970 cri.go:89] found id: ""
	I0729 19:47:22.928638 1120970 logs.go:276] 0 containers: []
	W0729 19:47:22.928648 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:47:22.928658 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:47:22.928728 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:47:20.196567 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:22.694908 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:22.137177 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:24.137629 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:22.778638 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:25.279261 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:22.985022 1120970 cri.go:89] found id: ""
	I0729 19:47:22.985059 1120970 logs.go:276] 0 containers: []
	W0729 19:47:22.985068 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:47:22.985088 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:47:22.985101 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:47:23.073062 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:47:23.073098 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:47:23.114995 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:47:23.115024 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:47:23.171536 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:47:23.171570 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:47:23.185192 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:47:23.185219 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:47:23.259355 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
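Every "describe nodes" attempt above fails the same way: kubectl pointed at localhost:8443 is refused because no apiserver container exists yet. A quick way to confirm that from the node is sketched below as an illustrative Go check; the address comes from the error text above, and the 2-second timeout is an assumption.

	// apiserver_check.go: illustrative sketch of the failing connectivity check.
	// localhost:8443 is the address shown in the "connection refused" errors above.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
		if err != nil {
			// Matches the repeated error in the log: nothing is listening yet.
			fmt.Println("apiserver not reachable:", err)
			return
		}
		conn.Close()
		fmt.Println("apiserver port is open")
	}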
	I0729 19:47:25.760046 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:47:25.774159 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:47:25.774245 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:47:25.808374 1120970 cri.go:89] found id: ""
	I0729 19:47:25.808406 1120970 logs.go:276] 0 containers: []
	W0729 19:47:25.808417 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:47:25.808424 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:47:25.808486 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:47:25.843623 1120970 cri.go:89] found id: ""
	I0729 19:47:25.843655 1120970 logs.go:276] 0 containers: []
	W0729 19:47:25.843666 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:47:25.843673 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:47:25.843774 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:47:25.880200 1120970 cri.go:89] found id: ""
	I0729 19:47:25.880233 1120970 logs.go:276] 0 containers: []
	W0729 19:47:25.880243 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:47:25.880250 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:47:25.880323 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:47:25.915349 1120970 cri.go:89] found id: ""
	I0729 19:47:25.915374 1120970 logs.go:276] 0 containers: []
	W0729 19:47:25.915381 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:47:25.915391 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:47:25.915444 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:47:25.948092 1120970 cri.go:89] found id: ""
	I0729 19:47:25.948134 1120970 logs.go:276] 0 containers: []
	W0729 19:47:25.948145 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:47:25.948153 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:47:25.948220 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:47:25.981836 1120970 cri.go:89] found id: ""
	I0729 19:47:25.981864 1120970 logs.go:276] 0 containers: []
	W0729 19:47:25.981874 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:47:25.981882 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:47:25.981967 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:47:26.014464 1120970 cri.go:89] found id: ""
	I0729 19:47:26.014494 1120970 logs.go:276] 0 containers: []
	W0729 19:47:26.014502 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:47:26.014515 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:47:26.014580 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:47:26.048607 1120970 cri.go:89] found id: ""
	I0729 19:47:26.048635 1120970 logs.go:276] 0 containers: []
	W0729 19:47:26.048646 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:47:26.048667 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:47:26.048683 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:47:26.100962 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:47:26.101002 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:47:26.116404 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:47:26.116434 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:47:26.183714 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:47:26.183734 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:47:26.183747 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:47:26.260308 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:47:26.260347 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
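	The block above is one full iteration of the probe that repeats throughout this log: the runner looks for a kube-apiserver process, asks CRI-O via crictl for each expected control-plane container (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet, kubernetes-dashboard), finds none, and falls back to collecting kubelet, dmesg, CRI-O and container-status logs; "describe nodes" fails because nothing is serving on localhost:8443. A rough shell sketch of a single iteration, reconstructed only from the commands logged here (the loop structure and variable names are illustrative, not minikube's actual source):

	    # Illustrative reconstruction of one probe iteration seen in the log above.
	    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	             kube-controller-manager kindnet kubernetes-dashboard; do
	      # List all CRI containers (any state) whose name matches the component.
	      ids=$(sudo crictl ps -a --quiet --name="$c")
	      [ -z "$ids" ] && echo "No container was found matching \"$c\""
	    done
	    # With no control-plane containers found, fall back to host-level logs.
	    sudo journalctl -u kubelet -n 400
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes \
	         --kubeconfig=/var/lib/minikube/kubeconfig   # fails: connection refused on localhost:8443
	    sudo journalctl -u crio -n 400
	    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a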
	I0729 19:47:24.695393 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:27.195561 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:26.137714 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:28.637781 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:27.778603 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:30.278476 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:28.802593 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:47:28.815317 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:47:28.815380 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:47:28.849448 1120970 cri.go:89] found id: ""
	I0729 19:47:28.849473 1120970 logs.go:276] 0 containers: []
	W0729 19:47:28.849480 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:47:28.849486 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:47:28.849544 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:47:28.888305 1120970 cri.go:89] found id: ""
	I0729 19:47:28.888342 1120970 logs.go:276] 0 containers: []
	W0729 19:47:28.888353 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:47:28.888360 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:47:28.888421 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:47:28.921000 1120970 cri.go:89] found id: ""
	I0729 19:47:28.921034 1120970 logs.go:276] 0 containers: []
	W0729 19:47:28.921045 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:47:28.921054 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:47:28.921116 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:47:28.953546 1120970 cri.go:89] found id: ""
	I0729 19:47:28.953574 1120970 logs.go:276] 0 containers: []
	W0729 19:47:28.953583 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:47:28.953589 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:47:28.953652 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:47:28.991203 1120970 cri.go:89] found id: ""
	I0729 19:47:28.991236 1120970 logs.go:276] 0 containers: []
	W0729 19:47:28.991248 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:47:28.991256 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:47:28.991329 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:47:29.026151 1120970 cri.go:89] found id: ""
	I0729 19:47:29.026183 1120970 logs.go:276] 0 containers: []
	W0729 19:47:29.026195 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:47:29.026203 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:47:29.026271 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:47:29.059654 1120970 cri.go:89] found id: ""
	I0729 19:47:29.059687 1120970 logs.go:276] 0 containers: []
	W0729 19:47:29.059695 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:47:29.059702 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:47:29.059756 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:47:29.091952 1120970 cri.go:89] found id: ""
	I0729 19:47:29.092001 1120970 logs.go:276] 0 containers: []
	W0729 19:47:29.092012 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:47:29.092024 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:47:29.092043 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:47:29.143511 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:47:29.143543 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:47:29.157752 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:47:29.157781 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:47:29.225599 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:47:29.225621 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:47:29.225634 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:47:29.311329 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:47:29.311370 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:47:31.850921 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:47:31.864594 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:47:31.864675 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:47:31.898580 1120970 cri.go:89] found id: ""
	I0729 19:47:31.898622 1120970 logs.go:276] 0 containers: []
	W0729 19:47:31.898631 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:47:31.898638 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:47:31.898693 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:47:31.932481 1120970 cri.go:89] found id: ""
	I0729 19:47:31.932514 1120970 logs.go:276] 0 containers: []
	W0729 19:47:31.932525 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:47:31.932533 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:47:31.932595 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:47:31.964820 1120970 cri.go:89] found id: ""
	I0729 19:47:31.964857 1120970 logs.go:276] 0 containers: []
	W0729 19:47:31.964868 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:47:31.964876 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:47:31.964957 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:47:31.996854 1120970 cri.go:89] found id: ""
	I0729 19:47:31.996889 1120970 logs.go:276] 0 containers: []
	W0729 19:47:31.996900 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:47:31.996908 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:47:31.996975 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:47:32.031808 1120970 cri.go:89] found id: ""
	I0729 19:47:32.031843 1120970 logs.go:276] 0 containers: []
	W0729 19:47:32.031854 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:47:32.031864 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:47:32.031934 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:47:32.064563 1120970 cri.go:89] found id: ""
	I0729 19:47:32.064593 1120970 logs.go:276] 0 containers: []
	W0729 19:47:32.064608 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:47:32.064615 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:47:32.064677 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:47:32.102811 1120970 cri.go:89] found id: ""
	I0729 19:47:32.102859 1120970 logs.go:276] 0 containers: []
	W0729 19:47:32.102871 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:47:32.102879 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:47:32.102952 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:47:32.136770 1120970 cri.go:89] found id: ""
	I0729 19:47:32.136798 1120970 logs.go:276] 0 containers: []
	W0729 19:47:32.136808 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:47:32.136819 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:47:32.136838 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:47:32.189334 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:47:32.189371 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:47:32.204039 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:47:32.204076 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:47:32.274139 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:47:32.274172 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:47:32.274187 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:47:32.350191 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:47:32.350228 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:47:29.196922 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:31.200725 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:33.695374 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:30.637898 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:32.638342 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:34.639225 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:32.279116 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:34.780505 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:34.889718 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:47:34.903796 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:47:34.903877 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:47:34.938860 1120970 cri.go:89] found id: ""
	I0729 19:47:34.938893 1120970 logs.go:276] 0 containers: []
	W0729 19:47:34.938904 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:47:34.938912 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:47:34.938980 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:47:34.970501 1120970 cri.go:89] found id: ""
	I0729 19:47:34.970544 1120970 logs.go:276] 0 containers: []
	W0729 19:47:34.970553 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:47:34.970559 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:47:34.970626 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:47:35.006915 1120970 cri.go:89] found id: ""
	I0729 19:47:35.006943 1120970 logs.go:276] 0 containers: []
	W0729 19:47:35.006950 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:47:35.006957 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:47:35.007020 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:47:35.040827 1120970 cri.go:89] found id: ""
	I0729 19:47:35.040855 1120970 logs.go:276] 0 containers: []
	W0729 19:47:35.040862 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:47:35.040869 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:47:35.040918 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:47:35.075497 1120970 cri.go:89] found id: ""
	I0729 19:47:35.075527 1120970 logs.go:276] 0 containers: []
	W0729 19:47:35.075537 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:47:35.075544 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:47:35.075598 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:47:35.111265 1120970 cri.go:89] found id: ""
	I0729 19:47:35.111293 1120970 logs.go:276] 0 containers: []
	W0729 19:47:35.111302 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:47:35.111308 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:47:35.111363 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:47:35.145728 1120970 cri.go:89] found id: ""
	I0729 19:47:35.145756 1120970 logs.go:276] 0 containers: []
	W0729 19:47:35.145763 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:47:35.145769 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:47:35.145821 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:47:35.185050 1120970 cri.go:89] found id: ""
	I0729 19:47:35.185078 1120970 logs.go:276] 0 containers: []
	W0729 19:47:35.185088 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:47:35.185100 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:47:35.185117 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:47:35.236835 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:47:35.236867 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:47:35.251263 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:47:35.251290 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:47:35.325888 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:47:35.325912 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:47:35.325925 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:47:35.404779 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:47:35.404819 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:47:37.944941 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:47:37.960885 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:47:37.960954 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:47:35.695786 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:37.696015 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:37.136815 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:39.137763 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:37.278790 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:39.779285 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:38.007612 1120970 cri.go:89] found id: ""
	I0729 19:47:38.007639 1120970 logs.go:276] 0 containers: []
	W0729 19:47:38.007648 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:47:38.007655 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:47:38.007721 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:47:38.044568 1120970 cri.go:89] found id: ""
	I0729 19:47:38.044610 1120970 logs.go:276] 0 containers: []
	W0729 19:47:38.044621 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:47:38.044628 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:47:38.044698 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:47:38.085186 1120970 cri.go:89] found id: ""
	I0729 19:47:38.085217 1120970 logs.go:276] 0 containers: []
	W0729 19:47:38.085227 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:47:38.085235 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:47:38.085303 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:47:38.123039 1120970 cri.go:89] found id: ""
	I0729 19:47:38.123070 1120970 logs.go:276] 0 containers: []
	W0729 19:47:38.123082 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:47:38.123090 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:47:38.123158 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:47:38.166191 1120970 cri.go:89] found id: ""
	I0729 19:47:38.166220 1120970 logs.go:276] 0 containers: []
	W0729 19:47:38.166229 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:47:38.166237 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:47:38.166301 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:47:38.204138 1120970 cri.go:89] found id: ""
	I0729 19:47:38.204170 1120970 logs.go:276] 0 containers: []
	W0729 19:47:38.204179 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:47:38.204186 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:47:38.204286 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:47:38.241599 1120970 cri.go:89] found id: ""
	I0729 19:47:38.241629 1120970 logs.go:276] 0 containers: []
	W0729 19:47:38.241638 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:47:38.241643 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:47:38.241695 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:47:38.276986 1120970 cri.go:89] found id: ""
	I0729 19:47:38.277013 1120970 logs.go:276] 0 containers: []
	W0729 19:47:38.277021 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:47:38.277030 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:47:38.277042 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:47:38.330925 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:47:38.330971 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:47:38.345416 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:47:38.345455 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:47:38.420010 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:47:38.420041 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:47:38.420059 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:47:38.506198 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:47:38.506243 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:47:41.048957 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:47:41.062950 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:47:41.063027 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:47:41.108956 1120970 cri.go:89] found id: ""
	I0729 19:47:41.108987 1120970 logs.go:276] 0 containers: []
	W0729 19:47:41.108995 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:47:41.109002 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:47:41.109068 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:47:41.146952 1120970 cri.go:89] found id: ""
	I0729 19:47:41.146984 1120970 logs.go:276] 0 containers: []
	W0729 19:47:41.146994 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:47:41.147002 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:47:41.147068 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:47:41.190277 1120970 cri.go:89] found id: ""
	I0729 19:47:41.190310 1120970 logs.go:276] 0 containers: []
	W0729 19:47:41.190321 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:47:41.190329 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:47:41.190410 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:47:41.226733 1120970 cri.go:89] found id: ""
	I0729 19:47:41.226762 1120970 logs.go:276] 0 containers: []
	W0729 19:47:41.226770 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:47:41.226777 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:47:41.226862 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:47:41.260761 1120970 cri.go:89] found id: ""
	I0729 19:47:41.260790 1120970 logs.go:276] 0 containers: []
	W0729 19:47:41.260798 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:47:41.260804 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:47:41.260871 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:47:41.296325 1120970 cri.go:89] found id: ""
	I0729 19:47:41.296356 1120970 logs.go:276] 0 containers: []
	W0729 19:47:41.296367 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:47:41.296376 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:47:41.296435 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:47:41.329613 1120970 cri.go:89] found id: ""
	I0729 19:47:41.329642 1120970 logs.go:276] 0 containers: []
	W0729 19:47:41.329651 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:47:41.329657 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:47:41.329717 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:47:41.365182 1120970 cri.go:89] found id: ""
	I0729 19:47:41.365212 1120970 logs.go:276] 0 containers: []
	W0729 19:47:41.365220 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:47:41.365229 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:47:41.365243 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:47:41.416107 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:47:41.416143 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:47:41.429529 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:47:41.429562 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:47:41.499546 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:47:41.499568 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:47:41.499582 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:47:41.582010 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:47:41.582049 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:47:40.195271 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:42.698072 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:41.142911 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:43.637826 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:42.278481 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:44.278595 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:44.122162 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:47:44.136767 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:47:44.136850 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:47:44.171574 1120970 cri.go:89] found id: ""
	I0729 19:47:44.171610 1120970 logs.go:276] 0 containers: []
	W0729 19:47:44.171621 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:47:44.171629 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:47:44.171699 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:47:44.206974 1120970 cri.go:89] found id: ""
	I0729 19:47:44.207004 1120970 logs.go:276] 0 containers: []
	W0729 19:47:44.207013 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:47:44.207019 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:47:44.207068 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:47:44.240412 1120970 cri.go:89] found id: ""
	I0729 19:47:44.240438 1120970 logs.go:276] 0 containers: []
	W0729 19:47:44.240449 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:47:44.240457 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:47:44.240521 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:47:44.274434 1120970 cri.go:89] found id: ""
	I0729 19:47:44.274464 1120970 logs.go:276] 0 containers: []
	W0729 19:47:44.274475 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:47:44.274482 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:47:44.274553 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:47:44.313302 1120970 cri.go:89] found id: ""
	I0729 19:47:44.313330 1120970 logs.go:276] 0 containers: []
	W0729 19:47:44.313339 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:47:44.313354 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:47:44.313426 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:47:44.344853 1120970 cri.go:89] found id: ""
	I0729 19:47:44.344885 1120970 logs.go:276] 0 containers: []
	W0729 19:47:44.344895 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:47:44.344903 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:47:44.344970 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:47:44.378055 1120970 cri.go:89] found id: ""
	I0729 19:47:44.378089 1120970 logs.go:276] 0 containers: []
	W0729 19:47:44.378101 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:47:44.378109 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:47:44.378176 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:47:44.412734 1120970 cri.go:89] found id: ""
	I0729 19:47:44.412762 1120970 logs.go:276] 0 containers: []
	W0729 19:47:44.412772 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:47:44.412782 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:47:44.412795 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:47:44.468125 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:47:44.468157 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:47:44.482896 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:47:44.482923 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:47:44.551222 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:47:44.551249 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:47:44.551270 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:47:44.630413 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:47:44.630455 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:47:47.172322 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:47:47.186383 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:47:47.186463 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:47:47.221577 1120970 cri.go:89] found id: ""
	I0729 19:47:47.221610 1120970 logs.go:276] 0 containers: []
	W0729 19:47:47.221617 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:47:47.221623 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:47:47.221686 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:47:47.260164 1120970 cri.go:89] found id: ""
	I0729 19:47:47.260207 1120970 logs.go:276] 0 containers: []
	W0729 19:47:47.260227 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:47:47.260235 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:47:47.260303 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:47:47.297101 1120970 cri.go:89] found id: ""
	I0729 19:47:47.297130 1120970 logs.go:276] 0 containers: []
	W0729 19:47:47.297139 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:47:47.297148 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:47:47.297211 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:47:47.332429 1120970 cri.go:89] found id: ""
	I0729 19:47:47.332464 1120970 logs.go:276] 0 containers: []
	W0729 19:47:47.332474 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:47:47.332484 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:47:47.332538 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:47:47.366021 1120970 cri.go:89] found id: ""
	I0729 19:47:47.366055 1120970 logs.go:276] 0 containers: []
	W0729 19:47:47.366065 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:47:47.366074 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:47:47.366144 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:47:47.401278 1120970 cri.go:89] found id: ""
	I0729 19:47:47.401307 1120970 logs.go:276] 0 containers: []
	W0729 19:47:47.401315 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:47:47.401321 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:47:47.401395 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:47:47.435717 1120970 cri.go:89] found id: ""
	I0729 19:47:47.435748 1120970 logs.go:276] 0 containers: []
	W0729 19:47:47.435756 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:47:47.435770 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:47:47.435835 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:47:47.472120 1120970 cri.go:89] found id: ""
	I0729 19:47:47.472149 1120970 logs.go:276] 0 containers: []
	W0729 19:47:47.472157 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:47:47.472167 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:47:47.472181 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:47:47.529466 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:47:47.529503 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:47:47.544072 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:47:47.544102 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:47:47.614456 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:47:47.614478 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:47:47.614499 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:47:47.693271 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:47:47.693305 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:47:45.195129 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:47.196302 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:45.638102 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:47.639278 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:46.778610 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:48.778746 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:50.232417 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:47:50.246080 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:47:50.246154 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:47:50.285256 1120970 cri.go:89] found id: ""
	I0729 19:47:50.285284 1120970 logs.go:276] 0 containers: []
	W0729 19:47:50.285294 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:47:50.285302 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:47:50.285364 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:47:50.319443 1120970 cri.go:89] found id: ""
	I0729 19:47:50.319469 1120970 logs.go:276] 0 containers: []
	W0729 19:47:50.319476 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:47:50.319482 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:47:50.319555 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:47:50.356465 1120970 cri.go:89] found id: ""
	I0729 19:47:50.356495 1120970 logs.go:276] 0 containers: []
	W0729 19:47:50.356505 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:47:50.356512 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:47:50.356578 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:47:50.393920 1120970 cri.go:89] found id: ""
	I0729 19:47:50.393954 1120970 logs.go:276] 0 containers: []
	W0729 19:47:50.393965 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:47:50.393973 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:47:50.394052 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:47:50.430287 1120970 cri.go:89] found id: ""
	I0729 19:47:50.430320 1120970 logs.go:276] 0 containers: []
	W0729 19:47:50.430333 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:47:50.430341 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:47:50.430411 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:47:50.465501 1120970 cri.go:89] found id: ""
	I0729 19:47:50.465528 1120970 logs.go:276] 0 containers: []
	W0729 19:47:50.465536 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:47:50.465542 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:47:50.465595 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:47:50.504012 1120970 cri.go:89] found id: ""
	I0729 19:47:50.504042 1120970 logs.go:276] 0 containers: []
	W0729 19:47:50.504051 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:47:50.504063 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:47:50.504122 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:47:50.545117 1120970 cri.go:89] found id: ""
	I0729 19:47:50.545151 1120970 logs.go:276] 0 containers: []
	W0729 19:47:50.545163 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:47:50.545175 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:47:50.545198 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:47:50.618183 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:47:50.618213 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:47:50.618232 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:47:50.697577 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:47:50.697611 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:47:50.745910 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:47:50.745949 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:47:50.797458 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:47:50.797501 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:47:49.694395 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:51.697714 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:50.138539 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:52.143316 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:54.637975 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:51.279127 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:53.779610 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:53.311907 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:47:53.326666 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:47:53.326734 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:47:53.361564 1120970 cri.go:89] found id: ""
	I0729 19:47:53.361596 1120970 logs.go:276] 0 containers: []
	W0729 19:47:53.361614 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:47:53.361621 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:47:53.361685 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:47:53.397867 1120970 cri.go:89] found id: ""
	I0729 19:47:53.397899 1120970 logs.go:276] 0 containers: []
	W0729 19:47:53.397910 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:47:53.397918 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:47:53.398023 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:47:53.438721 1120970 cri.go:89] found id: ""
	I0729 19:47:53.438752 1120970 logs.go:276] 0 containers: []
	W0729 19:47:53.438764 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:47:53.438771 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:47:53.438840 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:47:53.477746 1120970 cri.go:89] found id: ""
	I0729 19:47:53.477776 1120970 logs.go:276] 0 containers: []
	W0729 19:47:53.477787 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:47:53.477794 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:47:53.477863 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:47:53.510899 1120970 cri.go:89] found id: ""
	I0729 19:47:53.510928 1120970 logs.go:276] 0 containers: []
	W0729 19:47:53.510936 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:47:53.510941 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:47:53.510994 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:47:53.545749 1120970 cri.go:89] found id: ""
	I0729 19:47:53.545786 1120970 logs.go:276] 0 containers: []
	W0729 19:47:53.545799 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:47:53.545807 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:47:53.545883 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:47:53.585542 1120970 cri.go:89] found id: ""
	I0729 19:47:53.585575 1120970 logs.go:276] 0 containers: []
	W0729 19:47:53.585586 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:47:53.585593 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:47:53.585666 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:47:53.617974 1120970 cri.go:89] found id: ""
	I0729 19:47:53.618006 1120970 logs.go:276] 0 containers: []
	W0729 19:47:53.618014 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:47:53.618024 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:47:53.618036 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:47:53.670860 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:47:53.670897 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:47:53.685089 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:47:53.685120 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:47:53.760570 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:47:53.760598 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:47:53.760611 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:47:53.848973 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:47:53.849018 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:47:56.394206 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:47:56.409087 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:47:56.409167 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:47:56.447553 1120970 cri.go:89] found id: ""
	I0729 19:47:56.447589 1120970 logs.go:276] 0 containers: []
	W0729 19:47:56.447607 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:47:56.447615 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:47:56.447694 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:47:56.485948 1120970 cri.go:89] found id: ""
	I0729 19:47:56.485978 1120970 logs.go:276] 0 containers: []
	W0729 19:47:56.485986 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:47:56.485992 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:47:56.486061 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:47:56.521722 1120970 cri.go:89] found id: ""
	I0729 19:47:56.521762 1120970 logs.go:276] 0 containers: []
	W0729 19:47:56.521784 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:47:56.521792 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:47:56.521855 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:47:56.557379 1120970 cri.go:89] found id: ""
	I0729 19:47:56.557414 1120970 logs.go:276] 0 containers: []
	W0729 19:47:56.557425 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:47:56.557433 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:47:56.557488 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:47:56.595198 1120970 cri.go:89] found id: ""
	I0729 19:47:56.595225 1120970 logs.go:276] 0 containers: []
	W0729 19:47:56.595233 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:47:56.595240 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:47:56.595306 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:47:56.629298 1120970 cri.go:89] found id: ""
	I0729 19:47:56.629330 1120970 logs.go:276] 0 containers: []
	W0729 19:47:56.629337 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:47:56.629344 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:47:56.629410 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:47:56.663401 1120970 cri.go:89] found id: ""
	I0729 19:47:56.663434 1120970 logs.go:276] 0 containers: []
	W0729 19:47:56.663445 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:47:56.663453 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:47:56.663519 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:47:56.699622 1120970 cri.go:89] found id: ""
	I0729 19:47:56.699651 1120970 logs.go:276] 0 containers: []
	W0729 19:47:56.699661 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:47:56.699672 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:47:56.699688 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:47:56.739680 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:47:56.739713 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:47:56.794605 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:47:56.794647 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:47:56.824479 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:47:56.824510 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:47:56.889186 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:47:56.889209 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:47:56.889224 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:47:54.196350 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:56.696572 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:57.137366 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:59.638403 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:56.278603 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:58.280193 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:00.778204 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:59.472943 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:47:59.488574 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:47:59.488657 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:47:59.528870 1120970 cri.go:89] found id: ""
	I0729 19:47:59.528910 1120970 logs.go:276] 0 containers: []
	W0729 19:47:59.528921 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:47:59.528930 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:47:59.529001 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:47:59.565299 1120970 cri.go:89] found id: ""
	I0729 19:47:59.565331 1120970 logs.go:276] 0 containers: []
	W0729 19:47:59.565343 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:47:59.565351 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:47:59.565419 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:47:59.604951 1120970 cri.go:89] found id: ""
	I0729 19:47:59.604985 1120970 logs.go:276] 0 containers: []
	W0729 19:47:59.604996 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:47:59.605005 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:47:59.605076 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:47:59.639094 1120970 cri.go:89] found id: ""
	I0729 19:47:59.639121 1120970 logs.go:276] 0 containers: []
	W0729 19:47:59.639130 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:47:59.639138 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:47:59.639205 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:47:59.674360 1120970 cri.go:89] found id: ""
	I0729 19:47:59.674392 1120970 logs.go:276] 0 containers: []
	W0729 19:47:59.674401 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:47:59.674407 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:47:59.674462 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:47:59.712926 1120970 cri.go:89] found id: ""
	I0729 19:47:59.712950 1120970 logs.go:276] 0 containers: []
	W0729 19:47:59.712959 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:47:59.712965 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:47:59.713026 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:47:59.750493 1120970 cri.go:89] found id: ""
	I0729 19:47:59.750524 1120970 logs.go:276] 0 containers: []
	W0729 19:47:59.750532 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:47:59.750539 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:47:59.750603 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:47:59.790635 1120970 cri.go:89] found id: ""
	I0729 19:47:59.790663 1120970 logs.go:276] 0 containers: []
	W0729 19:47:59.790672 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:47:59.790687 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:47:59.790703 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:47:59.844160 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:47:59.844194 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:47:59.858123 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:47:59.858152 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:47:59.931561 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:47:59.931592 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:47:59.931609 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:48:00.014902 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:48:00.014947 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:48:02.555856 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:48:02.572781 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:48:02.572852 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:48:02.611005 1120970 cri.go:89] found id: ""
	I0729 19:48:02.611033 1120970 logs.go:276] 0 containers: []
	W0729 19:48:02.611043 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:48:02.611049 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:48:02.611101 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:48:02.652844 1120970 cri.go:89] found id: ""
	I0729 19:48:02.652870 1120970 logs.go:276] 0 containers: []
	W0729 19:48:02.652876 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:48:02.652883 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:48:02.652937 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:48:02.694690 1120970 cri.go:89] found id: ""
	I0729 19:48:02.694719 1120970 logs.go:276] 0 containers: []
	W0729 19:48:02.694729 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:48:02.694738 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:48:02.694799 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:48:02.729527 1120970 cri.go:89] found id: ""
	I0729 19:48:02.729558 1120970 logs.go:276] 0 containers: []
	W0729 19:48:02.729569 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:48:02.729576 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:48:02.729649 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:48:02.763460 1120970 cri.go:89] found id: ""
	I0729 19:48:02.763488 1120970 logs.go:276] 0 containers: []
	W0729 19:48:02.763497 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:48:02.763503 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:48:02.763556 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:48:02.798268 1120970 cri.go:89] found id: ""
	I0729 19:48:02.798294 1120970 logs.go:276] 0 containers: []
	W0729 19:48:02.798302 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:48:02.798309 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:48:02.798360 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:48:02.837540 1120970 cri.go:89] found id: ""
	I0729 19:48:02.837579 1120970 logs.go:276] 0 containers: []
	W0729 19:48:02.837591 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:48:02.837605 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:48:02.837672 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:48:02.873574 1120970 cri.go:89] found id: ""
	I0729 19:48:02.873612 1120970 logs.go:276] 0 containers: []
	W0729 19:48:02.873624 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:48:02.873646 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:48:02.873663 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:48:02.926260 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:48:02.926296 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:48:02.940593 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:48:02.940618 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 19:47:59.195148 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:01.195230 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:03.196163 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:02.139034 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:04.637691 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:02.778540 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:04.781529 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	W0729 19:48:03.015778 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:48:03.015800 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:48:03.015818 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:48:03.099824 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:48:03.099859 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:48:05.639291 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:48:05.652370 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:48:05.652431 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:48:05.686594 1120970 cri.go:89] found id: ""
	I0729 19:48:05.686624 1120970 logs.go:276] 0 containers: []
	W0729 19:48:05.686633 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:48:05.686640 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:48:05.686701 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:48:05.722162 1120970 cri.go:89] found id: ""
	I0729 19:48:05.722192 1120970 logs.go:276] 0 containers: []
	W0729 19:48:05.722209 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:48:05.722216 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:48:05.722284 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:48:05.754309 1120970 cri.go:89] found id: ""
	I0729 19:48:05.754338 1120970 logs.go:276] 0 containers: []
	W0729 19:48:05.754349 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:48:05.754357 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:48:05.754449 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:48:05.786934 1120970 cri.go:89] found id: ""
	I0729 19:48:05.786962 1120970 logs.go:276] 0 containers: []
	W0729 19:48:05.786968 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:48:05.786974 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:48:05.787032 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:48:05.821454 1120970 cri.go:89] found id: ""
	I0729 19:48:05.821487 1120970 logs.go:276] 0 containers: []
	W0729 19:48:05.821498 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:48:05.821506 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:48:05.821575 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:48:05.855436 1120970 cri.go:89] found id: ""
	I0729 19:48:05.855467 1120970 logs.go:276] 0 containers: []
	W0729 19:48:05.855478 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:48:05.855486 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:48:05.855551 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:48:05.887414 1120970 cri.go:89] found id: ""
	I0729 19:48:05.887447 1120970 logs.go:276] 0 containers: []
	W0729 19:48:05.887466 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:48:05.887477 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:48:05.887549 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:48:05.924173 1120970 cri.go:89] found id: ""
	I0729 19:48:05.924200 1120970 logs.go:276] 0 containers: []
	W0729 19:48:05.924208 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:48:05.924218 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:48:05.924231 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:48:05.977839 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:48:05.977872 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:48:05.991324 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:48:05.991359 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:48:06.065904 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:48:06.065931 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:48:06.065949 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:48:06.149225 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:48:06.149258 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:48:05.196530 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:07.695302 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:06.640464 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:09.137577 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:07.277286 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:09.278994 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:08.689901 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:48:08.705008 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:48:08.705073 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:48:08.746191 1120970 cri.go:89] found id: ""
	I0729 19:48:08.746222 1120970 logs.go:276] 0 containers: []
	W0729 19:48:08.746232 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:48:08.746240 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:48:08.746306 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:48:08.792092 1120970 cri.go:89] found id: ""
	I0729 19:48:08.792120 1120970 logs.go:276] 0 containers: []
	W0729 19:48:08.792130 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:48:08.792137 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:48:08.792196 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:48:08.831535 1120970 cri.go:89] found id: ""
	I0729 19:48:08.831567 1120970 logs.go:276] 0 containers: []
	W0729 19:48:08.831577 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:48:08.831585 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:48:08.831650 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:48:08.871544 1120970 cri.go:89] found id: ""
	I0729 19:48:08.871576 1120970 logs.go:276] 0 containers: []
	W0729 19:48:08.871587 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:48:08.871594 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:48:08.871661 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:48:08.909562 1120970 cri.go:89] found id: ""
	I0729 19:48:08.909594 1120970 logs.go:276] 0 containers: []
	W0729 19:48:08.909611 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:48:08.909621 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:48:08.909698 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:48:08.953074 1120970 cri.go:89] found id: ""
	I0729 19:48:08.953109 1120970 logs.go:276] 0 containers: []
	W0729 19:48:08.953121 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:48:08.953130 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:48:08.953202 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:48:08.992361 1120970 cri.go:89] found id: ""
	I0729 19:48:08.992400 1120970 logs.go:276] 0 containers: []
	W0729 19:48:08.992412 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:48:08.992421 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:48:08.992488 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:48:09.046065 1120970 cri.go:89] found id: ""
	I0729 19:48:09.046093 1120970 logs.go:276] 0 containers: []
	W0729 19:48:09.046101 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:48:09.046113 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:48:09.046134 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:48:09.103453 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:48:09.103494 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:48:09.117220 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:48:09.117245 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:48:09.188222 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:48:09.188252 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:48:09.188270 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:48:09.271640 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:48:09.271677 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:48:11.812430 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:48:11.827291 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:48:11.827387 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:48:11.865062 1120970 cri.go:89] found id: ""
	I0729 19:48:11.865099 1120970 logs.go:276] 0 containers: []
	W0729 19:48:11.865111 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:48:11.865120 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:48:11.865212 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:48:11.899431 1120970 cri.go:89] found id: ""
	I0729 19:48:11.899465 1120970 logs.go:276] 0 containers: []
	W0729 19:48:11.899475 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:48:11.899483 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:48:11.899547 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:48:11.933796 1120970 cri.go:89] found id: ""
	I0729 19:48:11.933831 1120970 logs.go:276] 0 containers: []
	W0729 19:48:11.933843 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:48:11.933851 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:48:11.933920 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:48:11.976911 1120970 cri.go:89] found id: ""
	I0729 19:48:11.976941 1120970 logs.go:276] 0 containers: []
	W0729 19:48:11.976951 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:48:11.976958 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:48:11.977020 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:48:12.012692 1120970 cri.go:89] found id: ""
	I0729 19:48:12.012723 1120970 logs.go:276] 0 containers: []
	W0729 19:48:12.012732 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:48:12.012738 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:48:12.012801 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:48:12.049648 1120970 cri.go:89] found id: ""
	I0729 19:48:12.049684 1120970 logs.go:276] 0 containers: []
	W0729 19:48:12.049695 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:48:12.049704 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:48:12.049771 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:48:12.093629 1120970 cri.go:89] found id: ""
	I0729 19:48:12.093662 1120970 logs.go:276] 0 containers: []
	W0729 19:48:12.093673 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:48:12.093682 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:48:12.093752 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:48:12.130835 1120970 cri.go:89] found id: ""
	I0729 19:48:12.130887 1120970 logs.go:276] 0 containers: []
	W0729 19:48:12.130899 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:48:12.130912 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:48:12.130930 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:48:12.168464 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:48:12.168494 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:48:12.224722 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:48:12.224767 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:48:12.238454 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:48:12.238491 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:48:12.309122 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:48:12.309156 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:48:12.309171 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:48:10.195555 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:12.196093 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:11.638217 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:14.137267 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:11.778922 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:13.779268 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:14.892160 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:48:14.906036 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:48:14.906105 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:48:14.939106 1120970 cri.go:89] found id: ""
	I0729 19:48:14.939136 1120970 logs.go:276] 0 containers: []
	W0729 19:48:14.939144 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:48:14.939151 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:48:14.939218 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:48:14.973776 1120970 cri.go:89] found id: ""
	I0729 19:48:14.973806 1120970 logs.go:276] 0 containers: []
	W0729 19:48:14.973817 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:48:14.973825 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:48:14.973887 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:48:15.004448 1120970 cri.go:89] found id: ""
	I0729 19:48:15.004475 1120970 logs.go:276] 0 containers: []
	W0729 19:48:15.004483 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:48:15.004489 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:48:15.004556 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:48:15.038066 1120970 cri.go:89] found id: ""
	I0729 19:48:15.038093 1120970 logs.go:276] 0 containers: []
	W0729 19:48:15.038101 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:48:15.038110 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:48:15.038174 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:48:15.070539 1120970 cri.go:89] found id: ""
	I0729 19:48:15.070568 1120970 logs.go:276] 0 containers: []
	W0729 19:48:15.070577 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:48:15.070585 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:48:15.070646 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:48:15.103880 1120970 cri.go:89] found id: ""
	I0729 19:48:15.103922 1120970 logs.go:276] 0 containers: []
	W0729 19:48:15.103934 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:48:15.103943 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:48:15.104013 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:48:15.140762 1120970 cri.go:89] found id: ""
	I0729 19:48:15.140785 1120970 logs.go:276] 0 containers: []
	W0729 19:48:15.140792 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:48:15.140798 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:48:15.140850 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:48:15.174376 1120970 cri.go:89] found id: ""
	I0729 19:48:15.174411 1120970 logs.go:276] 0 containers: []
	W0729 19:48:15.174422 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:48:15.174434 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:48:15.174457 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:48:15.231283 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:48:15.231319 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:48:15.245103 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:48:15.245131 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:48:15.317664 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:48:15.317685 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:48:15.317701 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:48:15.404545 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:48:15.404600 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:48:17.949406 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:48:17.963001 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:48:17.963084 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:48:14.697767 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:17.194300 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:16.137773 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:16.632390 1120280 pod_ready.go:81] duration metric: took 4m0.001130574s for pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace to be "Ready" ...
	E0729 19:48:16.632416 1120280 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace to be "Ready" (will not retry!)
	I0729 19:48:16.632439 1120280 pod_ready.go:38] duration metric: took 4m10.712020611s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 19:48:16.632469 1120280 kubeadm.go:597] duration metric: took 4m18.568642855s to restartPrimaryControlPlane
	W0729 19:48:16.632566 1120280 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0729 19:48:16.632597 1120280 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0729 19:48:16.279567 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:18.280676 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:20.779399 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:18.003227 1120970 cri.go:89] found id: ""
	I0729 19:48:18.003263 1120970 logs.go:276] 0 containers: []
	W0729 19:48:18.003274 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:48:18.003284 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:48:18.003363 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:48:18.037680 1120970 cri.go:89] found id: ""
	I0729 19:48:18.037716 1120970 logs.go:276] 0 containers: []
	W0729 19:48:18.037727 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:48:18.037736 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:48:18.037804 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:48:18.081360 1120970 cri.go:89] found id: ""
	I0729 19:48:18.081393 1120970 logs.go:276] 0 containers: []
	W0729 19:48:18.081403 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:48:18.081412 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:48:18.081479 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:48:18.115582 1120970 cri.go:89] found id: ""
	I0729 19:48:18.115619 1120970 logs.go:276] 0 containers: []
	W0729 19:48:18.115630 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:48:18.115639 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:48:18.115708 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:48:18.159771 1120970 cri.go:89] found id: ""
	I0729 19:48:18.159807 1120970 logs.go:276] 0 containers: []
	W0729 19:48:18.159818 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:48:18.159826 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:48:18.159899 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:48:18.206073 1120970 cri.go:89] found id: ""
	I0729 19:48:18.206100 1120970 logs.go:276] 0 containers: []
	W0729 19:48:18.206107 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:48:18.206113 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:48:18.206173 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:48:18.241841 1120970 cri.go:89] found id: ""
	I0729 19:48:18.241880 1120970 logs.go:276] 0 containers: []
	W0729 19:48:18.241892 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:48:18.241900 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:48:18.241969 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:48:18.280068 1120970 cri.go:89] found id: ""
	I0729 19:48:18.280099 1120970 logs.go:276] 0 containers: []
	W0729 19:48:18.280110 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:48:18.280123 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:48:18.280143 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:48:18.360236 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:48:18.360268 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:48:18.360285 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:48:18.447648 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:48:18.447693 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:48:18.489625 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:48:18.489663 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:48:18.543428 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:48:18.543476 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:48:21.058220 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:48:21.073079 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:48:21.073168 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:48:21.111334 1120970 cri.go:89] found id: ""
	I0729 19:48:21.111377 1120970 logs.go:276] 0 containers: []
	W0729 19:48:21.111389 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:48:21.111398 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:48:21.111462 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:48:21.144757 1120970 cri.go:89] found id: ""
	I0729 19:48:21.144788 1120970 logs.go:276] 0 containers: []
	W0729 19:48:21.144798 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:48:21.144806 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:48:21.144872 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:48:21.178887 1120970 cri.go:89] found id: ""
	I0729 19:48:21.178919 1120970 logs.go:276] 0 containers: []
	W0729 19:48:21.178927 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:48:21.178934 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:48:21.179000 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:48:21.216561 1120970 cri.go:89] found id: ""
	I0729 19:48:21.216589 1120970 logs.go:276] 0 containers: []
	W0729 19:48:21.216605 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:48:21.216612 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:48:21.216679 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:48:21.252564 1120970 cri.go:89] found id: ""
	I0729 19:48:21.252601 1120970 logs.go:276] 0 containers: []
	W0729 19:48:21.252612 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:48:21.252621 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:48:21.252692 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:48:21.287372 1120970 cri.go:89] found id: ""
	I0729 19:48:21.287399 1120970 logs.go:276] 0 containers: []
	W0729 19:48:21.287410 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:48:21.287418 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:48:21.287482 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:48:21.325121 1120970 cri.go:89] found id: ""
	I0729 19:48:21.325159 1120970 logs.go:276] 0 containers: []
	W0729 19:48:21.325169 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:48:21.325177 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:48:21.325248 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:48:21.359113 1120970 cri.go:89] found id: ""
	I0729 19:48:21.359145 1120970 logs.go:276] 0 containers: []
	W0729 19:48:21.359156 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:48:21.359169 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:48:21.359185 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:48:21.416196 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:48:21.416233 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:48:21.430635 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:48:21.430668 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:48:21.498436 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:48:21.498461 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:48:21.498478 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:48:21.578602 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:48:21.578643 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:48:19.195857 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:21.202391 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:23.696778 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:23.278313 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:25.279270 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:24.117802 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:48:24.132716 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:48:24.132796 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:48:24.168658 1120970 cri.go:89] found id: ""
	I0729 19:48:24.168689 1120970 logs.go:276] 0 containers: []
	W0729 19:48:24.168698 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:48:24.168703 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:48:24.168763 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:48:24.211499 1120970 cri.go:89] found id: ""
	I0729 19:48:24.211533 1120970 logs.go:276] 0 containers: []
	W0729 19:48:24.211543 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:48:24.211551 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:48:24.211622 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:48:24.244579 1120970 cri.go:89] found id: ""
	I0729 19:48:24.244607 1120970 logs.go:276] 0 containers: []
	W0729 19:48:24.244616 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:48:24.244622 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:48:24.244680 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:48:24.278356 1120970 cri.go:89] found id: ""
	I0729 19:48:24.278386 1120970 logs.go:276] 0 containers: []
	W0729 19:48:24.278396 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:48:24.278404 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:48:24.278469 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:48:24.314725 1120970 cri.go:89] found id: ""
	I0729 19:48:24.314760 1120970 logs.go:276] 0 containers: []
	W0729 19:48:24.314771 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:48:24.314779 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:48:24.314870 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:48:24.349743 1120970 cri.go:89] found id: ""
	I0729 19:48:24.349772 1120970 logs.go:276] 0 containers: []
	W0729 19:48:24.349781 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:48:24.349788 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:48:24.349863 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:48:24.382484 1120970 cri.go:89] found id: ""
	I0729 19:48:24.382511 1120970 logs.go:276] 0 containers: []
	W0729 19:48:24.382521 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:48:24.382529 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:48:24.382606 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:48:24.418986 1120970 cri.go:89] found id: ""
	I0729 19:48:24.419013 1120970 logs.go:276] 0 containers: []
	W0729 19:48:24.419020 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:48:24.419030 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:48:24.419052 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:48:24.456725 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:48:24.456762 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:48:24.508592 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:48:24.508628 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:48:24.521610 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:48:24.521642 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:48:24.591015 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:48:24.591041 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:48:24.591058 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:48:27.170099 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:48:27.183543 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:48:27.183619 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:48:27.218044 1120970 cri.go:89] found id: ""
	I0729 19:48:27.218075 1120970 logs.go:276] 0 containers: []
	W0729 19:48:27.218083 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:48:27.218090 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:48:27.218154 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:48:27.251613 1120970 cri.go:89] found id: ""
	I0729 19:48:27.251638 1120970 logs.go:276] 0 containers: []
	W0729 19:48:27.251646 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:48:27.251651 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:48:27.251707 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:48:27.291540 1120970 cri.go:89] found id: ""
	I0729 19:48:27.291569 1120970 logs.go:276] 0 containers: []
	W0729 19:48:27.291578 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:48:27.291586 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:48:27.291650 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:48:27.322921 1120970 cri.go:89] found id: ""
	I0729 19:48:27.322956 1120970 logs.go:276] 0 containers: []
	W0729 19:48:27.322965 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:48:27.322973 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:48:27.323042 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:48:27.360337 1120970 cri.go:89] found id: ""
	I0729 19:48:27.360370 1120970 logs.go:276] 0 containers: []
	W0729 19:48:27.360381 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:48:27.360389 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:48:27.360448 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:48:27.398445 1120970 cri.go:89] found id: ""
	I0729 19:48:27.398490 1120970 logs.go:276] 0 containers: []
	W0729 19:48:27.398502 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:48:27.398510 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:48:27.398577 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:48:27.432147 1120970 cri.go:89] found id: ""
	I0729 19:48:27.432176 1120970 logs.go:276] 0 containers: []
	W0729 19:48:27.432184 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:48:27.432191 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:48:27.432260 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:48:27.471347 1120970 cri.go:89] found id: ""
	I0729 19:48:27.471380 1120970 logs.go:276] 0 containers: []
	W0729 19:48:27.471392 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:48:27.471404 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:48:27.471421 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:48:27.526997 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:48:27.527032 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:48:27.541189 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:48:27.541219 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:48:27.612270 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:48:27.612293 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:48:27.612310 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:48:27.688940 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:48:27.688979 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
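The block above is minikube probing a v1.20.0 control plane that never came up: each core component is queried through crictl, and when every query returns empty it falls back to collecting kubelet, dmesg, describe-nodes, CRI-O and container-status output. A minimal sketch of the same probe, built only from the commands visible in this log (assumes crictl is on the node's PATH):

for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
         kube-controller-manager kindnet kubernetes-dashboard; do
  # crictl ps -a lists all containers; --name filters, --quiet prints IDs only
  ids=$(sudo crictl ps -a --quiet --name="$c")
  [ -z "$ids" ] && echo "No container was found matching \"$c\""
done

In this run the cycle repeats roughly every three seconds until the restart budget is exhausted (see the 4m2.8s restartPrimaryControlPlane line further down).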
	I0729 19:48:26.195903 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:28.696936 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:27.778151 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:30.278900 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:30.228578 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:48:30.241827 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:48:30.241896 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:48:30.275201 1120970 cri.go:89] found id: ""
	I0729 19:48:30.275230 1120970 logs.go:276] 0 containers: []
	W0729 19:48:30.275241 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:48:30.275249 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:48:30.275305 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:48:30.313499 1120970 cri.go:89] found id: ""
	I0729 19:48:30.313526 1120970 logs.go:276] 0 containers: []
	W0729 19:48:30.313534 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:48:30.313540 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:48:30.313593 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:48:30.348036 1120970 cri.go:89] found id: ""
	I0729 19:48:30.348063 1120970 logs.go:276] 0 containers: []
	W0729 19:48:30.348072 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:48:30.348078 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:48:30.348148 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:48:30.383104 1120970 cri.go:89] found id: ""
	I0729 19:48:30.383135 1120970 logs.go:276] 0 containers: []
	W0729 19:48:30.383147 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:48:30.383155 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:48:30.383244 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:48:30.421367 1120970 cri.go:89] found id: ""
	I0729 19:48:30.421395 1120970 logs.go:276] 0 containers: []
	W0729 19:48:30.421404 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:48:30.421418 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:48:30.421484 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:48:30.460712 1120970 cri.go:89] found id: ""
	I0729 19:48:30.460746 1120970 logs.go:276] 0 containers: []
	W0729 19:48:30.460758 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:48:30.460767 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:48:30.460832 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:48:30.503728 1120970 cri.go:89] found id: ""
	I0729 19:48:30.503757 1120970 logs.go:276] 0 containers: []
	W0729 19:48:30.503769 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:48:30.503777 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:48:30.503842 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:48:30.544605 1120970 cri.go:89] found id: ""
	I0729 19:48:30.544639 1120970 logs.go:276] 0 containers: []
	W0729 19:48:30.544651 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:48:30.544663 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:48:30.544680 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:48:30.559616 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:48:30.559652 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:48:30.634554 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:48:30.634578 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:48:30.634599 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:48:30.717930 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:48:30.717968 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:48:30.759109 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:48:30.759140 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:48:31.194967 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:33.195033 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:32.777218 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:34.777917 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:33.313550 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:48:33.327425 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:48:33.327483 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:48:33.369009 1120970 cri.go:89] found id: ""
	I0729 19:48:33.369037 1120970 logs.go:276] 0 containers: []
	W0729 19:48:33.369047 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:48:33.369054 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:48:33.369121 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:48:33.406459 1120970 cri.go:89] found id: ""
	I0729 19:48:33.406491 1120970 logs.go:276] 0 containers: []
	W0729 19:48:33.406501 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:48:33.406509 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:48:33.406579 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:48:33.444176 1120970 cri.go:89] found id: ""
	I0729 19:48:33.444210 1120970 logs.go:276] 0 containers: []
	W0729 19:48:33.444222 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:48:33.444230 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:48:33.444297 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:48:33.482882 1120970 cri.go:89] found id: ""
	I0729 19:48:33.482977 1120970 logs.go:276] 0 containers: []
	W0729 19:48:33.482994 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:48:33.483002 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:48:33.483070 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:48:33.516972 1120970 cri.go:89] found id: ""
	I0729 19:48:33.516999 1120970 logs.go:276] 0 containers: []
	W0729 19:48:33.517009 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:48:33.517015 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:48:33.517077 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:48:33.557559 1120970 cri.go:89] found id: ""
	I0729 19:48:33.557598 1120970 logs.go:276] 0 containers: []
	W0729 19:48:33.557620 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:48:33.557629 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:48:33.557699 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:48:33.592756 1120970 cri.go:89] found id: ""
	I0729 19:48:33.592786 1120970 logs.go:276] 0 containers: []
	W0729 19:48:33.592793 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:48:33.592799 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:48:33.592858 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:48:33.626104 1120970 cri.go:89] found id: ""
	I0729 19:48:33.626136 1120970 logs.go:276] 0 containers: []
	W0729 19:48:33.626147 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:48:33.626158 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:48:33.626175 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:48:33.680456 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:48:33.680498 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:48:33.694700 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:48:33.694732 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:48:33.770833 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:48:33.770863 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:48:33.770881 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:48:33.847537 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:48:33.847571 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:48:36.390251 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:48:36.403265 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:48:36.403377 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:48:36.437189 1120970 cri.go:89] found id: ""
	I0729 19:48:36.437216 1120970 logs.go:276] 0 containers: []
	W0729 19:48:36.437227 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:48:36.437235 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:48:36.437296 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:48:36.471025 1120970 cri.go:89] found id: ""
	I0729 19:48:36.471056 1120970 logs.go:276] 0 containers: []
	W0729 19:48:36.471067 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:48:36.471083 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:48:36.471143 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:48:36.504736 1120970 cri.go:89] found id: ""
	I0729 19:48:36.504767 1120970 logs.go:276] 0 containers: []
	W0729 19:48:36.504779 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:48:36.504787 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:48:36.504852 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:48:36.537866 1120970 cri.go:89] found id: ""
	I0729 19:48:36.537893 1120970 logs.go:276] 0 containers: []
	W0729 19:48:36.537903 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:48:36.537911 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:48:36.537974 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:48:36.574083 1120970 cri.go:89] found id: ""
	I0729 19:48:36.574116 1120970 logs.go:276] 0 containers: []
	W0729 19:48:36.574127 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:48:36.574136 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:48:36.574199 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:48:36.613130 1120970 cri.go:89] found id: ""
	I0729 19:48:36.613160 1120970 logs.go:276] 0 containers: []
	W0729 19:48:36.613172 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:48:36.613179 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:48:36.613244 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:48:36.649617 1120970 cri.go:89] found id: ""
	I0729 19:48:36.649644 1120970 logs.go:276] 0 containers: []
	W0729 19:48:36.649655 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:48:36.649663 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:48:36.649731 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:48:36.688729 1120970 cri.go:89] found id: ""
	I0729 19:48:36.688765 1120970 logs.go:276] 0 containers: []
	W0729 19:48:36.688777 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:48:36.688790 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:48:36.688807 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:48:36.741483 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:48:36.741524 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:48:36.759730 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:48:36.759777 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:48:36.847102 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:48:36.847129 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:48:36.847148 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:48:36.928364 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:48:36.928403 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:48:35.695788 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:38.195691 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:36.780250 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:38.272543 1120587 pod_ready.go:81] duration metric: took 4m0.000382733s for pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace to be "Ready" ...
	E0729 19:48:38.272574 1120587 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace to be "Ready" (will not retry!)
	I0729 19:48:38.272595 1120587 pod_ready.go:38] duration metric: took 4m12.412522427s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 19:48:38.272622 1120587 kubeadm.go:597] duration metric: took 4m20.569295588s to restartPrimaryControlPlane
	W0729 19:48:38.272693 1120587 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0729 19:48:38.272722 1120587 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0729 19:48:39.468501 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:48:39.482102 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:48:39.482180 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:48:39.522722 1120970 cri.go:89] found id: ""
	I0729 19:48:39.522754 1120970 logs.go:276] 0 containers: []
	W0729 19:48:39.522763 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:48:39.522769 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:48:39.522824 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:48:39.561057 1120970 cri.go:89] found id: ""
	I0729 19:48:39.561088 1120970 logs.go:276] 0 containers: []
	W0729 19:48:39.561098 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:48:39.561106 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:48:39.561185 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:48:39.599802 1120970 cri.go:89] found id: ""
	I0729 19:48:39.599831 1120970 logs.go:276] 0 containers: []
	W0729 19:48:39.599840 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:48:39.599848 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:48:39.599920 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:48:39.634935 1120970 cri.go:89] found id: ""
	I0729 19:48:39.634966 1120970 logs.go:276] 0 containers: []
	W0729 19:48:39.634978 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:48:39.634986 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:48:39.635054 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:48:39.670682 1120970 cri.go:89] found id: ""
	I0729 19:48:39.670713 1120970 logs.go:276] 0 containers: []
	W0729 19:48:39.670721 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:48:39.670728 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:48:39.670798 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:48:39.705988 1120970 cri.go:89] found id: ""
	I0729 19:48:39.706024 1120970 logs.go:276] 0 containers: []
	W0729 19:48:39.706034 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:48:39.706042 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:48:39.706112 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:48:39.743886 1120970 cri.go:89] found id: ""
	I0729 19:48:39.743919 1120970 logs.go:276] 0 containers: []
	W0729 19:48:39.743931 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:48:39.743938 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:48:39.744007 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:48:39.781966 1120970 cri.go:89] found id: ""
	I0729 19:48:39.782000 1120970 logs.go:276] 0 containers: []
	W0729 19:48:39.782011 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:48:39.782023 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:48:39.782040 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:48:39.836034 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:48:39.836074 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:48:39.849330 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:48:39.849365 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:48:39.922803 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:48:39.922832 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:48:39.922860 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:48:40.006015 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:48:40.006061 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:48:42.556277 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:48:42.569657 1120970 kubeadm.go:597] duration metric: took 4m2.867642237s to restartPrimaryControlPlane
	W0729 19:48:42.569742 1120970 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0729 19:48:42.569773 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0729 19:48:40.695917 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:43.195442 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:43.033878 1120970 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 19:48:43.048499 1120970 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 19:48:43.058936 1120970 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 19:48:43.070746 1120970 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 19:48:43.070766 1120970 kubeadm.go:157] found existing configuration files:
	
	I0729 19:48:43.070814 1120970 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 19:48:43.079568 1120970 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 19:48:43.079631 1120970 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 19:48:43.088576 1120970 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 19:48:43.097654 1120970 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 19:48:43.097723 1120970 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 19:48:43.107155 1120970 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 19:48:43.117105 1120970 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 19:48:43.117152 1120970 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 19:48:43.126933 1120970 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 19:48:43.136114 1120970 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 19:48:43.136162 1120970 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 19:48:43.145196 1120970 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 19:48:43.365894 1120970 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
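After the restart attempt times out, each profile takes the same fallback: wipe the old control plane and run kubeadm init again. A sketch of that sequence for the v1.20.0 profile, reconstructed from the commands logged above (what minikube executes over SSH, not a recommended manual procedure; the --ignore-preflight-errors list is abbreviated here, the full flag is in the Start line above):

# 1. Tear down whatever kubeadm state is left (command as logged).
sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" \
  kubeadm reset --cri-socket /var/run/crio/crio.sock --force

# 2. Drop kubeconfig files that no longer reference the expected endpoint.
for f in admin kubelet controller-manager scheduler; do
  sudo grep -q https://control-plane.minikube.internal:8443 /etc/kubernetes/$f.conf \
    || sudo rm -f /etc/kubernetes/$f.conf
done

# 3. Re-initialize from the generated config, tolerating pre-existing
#    manifests, data directories and the already-bound kubelet port.
sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" \
  kubeadm init --config /var/tmp/minikube/kubeadm.yaml \
  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,Port-10250,Swap,NumCPU,Mem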
	I0729 19:48:45.695643 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:47.696055 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:48.051556 1120280 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.418935975s)
	I0729 19:48:48.051634 1120280 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 19:48:48.066832 1120280 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 19:48:48.076768 1120280 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 19:48:48.086203 1120280 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 19:48:48.086224 1120280 kubeadm.go:157] found existing configuration files:
	
	I0729 19:48:48.086269 1120280 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 19:48:48.095286 1120280 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 19:48:48.095344 1120280 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 19:48:48.104238 1120280 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 19:48:48.113232 1120280 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 19:48:48.113287 1120280 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 19:48:48.122679 1120280 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 19:48:48.131511 1120280 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 19:48:48.131565 1120280 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 19:48:48.140110 1120280 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 19:48:48.148601 1120280 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 19:48:48.148650 1120280 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 19:48:48.157410 1120280 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 19:48:48.352715 1120280 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 19:48:50.195418 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:52.696285 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:56.332520 1120280 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0729 19:48:56.332571 1120280 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 19:48:56.332675 1120280 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 19:48:56.332770 1120280 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 19:48:56.332853 1120280 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0729 19:48:56.332967 1120280 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 19:48:56.334322 1120280 out.go:204]   - Generating certificates and keys ...
	I0729 19:48:56.334409 1120280 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 19:48:56.334490 1120280 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 19:48:56.334605 1120280 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0729 19:48:56.334688 1120280 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0729 19:48:56.334798 1120280 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0729 19:48:56.334897 1120280 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0729 19:48:56.334984 1120280 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0729 19:48:56.335060 1120280 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0729 19:48:56.335161 1120280 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0729 19:48:56.335270 1120280 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0729 19:48:56.335324 1120280 kubeadm.go:310] [certs] Using the existing "sa" key
	I0729 19:48:56.335374 1120280 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 19:48:56.335423 1120280 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 19:48:56.335473 1120280 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0729 19:48:56.335532 1120280 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 19:48:56.335614 1120280 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 19:48:56.335675 1120280 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 19:48:56.335785 1120280 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 19:48:56.335884 1120280 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 19:48:56.336979 1120280 out.go:204]   - Booting up control plane ...
	I0729 19:48:56.337065 1120280 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 19:48:56.337133 1120280 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 19:48:56.337201 1120280 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 19:48:56.337326 1120280 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 19:48:56.337427 1120280 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 19:48:56.337498 1120280 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 19:48:56.337647 1120280 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0729 19:48:56.337714 1120280 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0729 19:48:56.337762 1120280 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.952649ms
	I0729 19:48:56.337821 1120280 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0729 19:48:56.337868 1120280 kubeadm.go:310] [api-check] The API server is healthy after 5.002178003s
	I0729 19:48:56.337955 1120280 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0729 19:48:56.338084 1120280 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0729 19:48:56.338139 1120280 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0729 19:48:56.338289 1120280 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-358053 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0729 19:48:56.338342 1120280 kubeadm.go:310] [bootstrap-token] Using token: 4fomec.1511vtef88eg64ao
	I0729 19:48:56.339522 1120280 out.go:204]   - Configuring RBAC rules ...
	I0729 19:48:56.339612 1120280 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0729 19:48:56.339681 1120280 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0729 19:48:56.339857 1120280 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0729 19:48:56.339995 1120280 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0729 19:48:56.340156 1120280 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0729 19:48:56.340283 1120280 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0729 19:48:56.340438 1120280 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0729 19:48:56.340511 1120280 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0729 19:48:56.340575 1120280 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0729 19:48:56.340585 1120280 kubeadm.go:310] 
	I0729 19:48:56.340671 1120280 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0729 19:48:56.340681 1120280 kubeadm.go:310] 
	I0729 19:48:56.340762 1120280 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0729 19:48:56.340781 1120280 kubeadm.go:310] 
	I0729 19:48:56.340812 1120280 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0729 19:48:56.340861 1120280 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0729 19:48:56.340904 1120280 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0729 19:48:56.340907 1120280 kubeadm.go:310] 
	I0729 19:48:56.340972 1120280 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0729 19:48:56.340978 1120280 kubeadm.go:310] 
	I0729 19:48:56.341034 1120280 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0729 19:48:56.341038 1120280 kubeadm.go:310] 
	I0729 19:48:56.341083 1120280 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0729 19:48:56.341151 1120280 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0729 19:48:56.341209 1120280 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0729 19:48:56.341219 1120280 kubeadm.go:310] 
	I0729 19:48:56.341285 1120280 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0729 19:48:56.341369 1120280 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0729 19:48:56.341376 1120280 kubeadm.go:310] 
	I0729 19:48:56.341454 1120280 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 4fomec.1511vtef88eg64ao \
	I0729 19:48:56.341602 1120280 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:53e118302d1159bf0acd8cfe005edb508199276288ff17d6630ff7f7307f10b1 \
	I0729 19:48:56.341636 1120280 kubeadm.go:310] 	--control-plane 
	I0729 19:48:56.341642 1120280 kubeadm.go:310] 
	I0729 19:48:56.341752 1120280 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0729 19:48:56.341769 1120280 kubeadm.go:310] 
	I0729 19:48:56.341886 1120280 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 4fomec.1511vtef88eg64ao \
	I0729 19:48:56.342018 1120280 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:53e118302d1159bf0acd8cfe005edb508199276288ff17d6630ff7f7307f10b1 
	I0729 19:48:56.342034 1120280 cni.go:84] Creating CNI manager for ""
	I0729 19:48:56.342044 1120280 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 19:48:56.343241 1120280 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 19:48:55.195151 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:57.195200 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:56.344247 1120280 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 19:48:56.355941 1120280 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
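The 496-byte payload written to /etc/cni/net.d/1-k8s.conflist is not reproduced in the log. Purely for orientation, a hypothetical minimal bridge conflist of roughly that shape (every field value here is an assumption, not the file minikube actually wrote):

sudo mkdir -p /etc/cni/net.d
# Write a bridge CNI config with host-local IPAM and port-mapping support.
sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
EOF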
	I0729 19:48:56.377835 1120280 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0729 19:48:56.377932 1120280 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:48:56.377958 1120280 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-358053 minikube.k8s.io/updated_at=2024_07_29T19_48_56_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=ff26f50543a99741e8a988a34136ffea2b41dbf0 minikube.k8s.io/name=embed-certs-358053 minikube.k8s.io/primary=true
	I0729 19:48:56.394308 1120280 ops.go:34] apiserver oom_adj: -16
	I0729 19:48:56.575183 1120280 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:48:57.076094 1120280 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:48:57.575985 1120280 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:48:58.075805 1120280 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:48:58.576183 1120280 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:48:59.075390 1120280 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:48:59.576159 1120280 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:48:59.195343 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:49:01.696180 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:49:00.075628 1120280 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:00.575675 1120280 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:01.075529 1120280 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:01.576070 1120280 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:02.076065 1120280 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:02.575283 1120280 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:03.076139 1120280 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:03.575717 1120280 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:04.076142 1120280 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:04.575998 1120280 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:04.194697 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:49:06.195094 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:49:08.695788 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:49:05.075222 1120280 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:05.575723 1120280 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:06.075652 1120280 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:06.575680 1120280 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:07.075645 1120280 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:07.575900 1120280 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:08.075951 1120280 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:08.576178 1120280 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:09.076094 1120280 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:09.575480 1120280 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:10.075954 1120280 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:10.185328 1120280 kubeadm.go:1113] duration metric: took 13.807462033s to wait for elevateKubeSystemPrivileges
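The burst of "kubectl get sa default" calls between 19:48:56 and 19:49:10 above is the elevateKubeSystemPrivileges wait: having just issued the minikube-rbac ClusterRoleBinding that grants cluster-admin to kube-system:default, minikube polls until the default ServiceAccount exists (the controller-manager creates it asynchronously once the API server is up). A rough equivalent of that loop, using the binary path and kubeconfig from the log:

# Poll until the default ServiceAccount is visible (sketch of the loop above).
until sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default \
    --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
  sleep 0.5   # attempts in the log are roughly 500 ms apart
done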
	I0729 19:49:10.185372 1120280 kubeadm.go:394] duration metric: took 5m12.173830361s to StartCluster
	I0729 19:49:10.185408 1120280 settings.go:142] acquiring lock: {Name:mk8657322241b3b1f65443d6cee1b2ccb99f315e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 19:49:10.185614 1120280 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19312-1055011/kubeconfig
	I0729 19:49:10.188419 1120280 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-1055011/kubeconfig: {Name:mkf834b33d9b214f3561db5b8f8958d26700afbb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 19:49:10.188761 1120280 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.201 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 19:49:10.188839 1120280 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0729 19:49:10.188929 1120280 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-358053"
	I0729 19:49:10.188939 1120280 config.go:182] Loaded profile config "embed-certs-358053": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 19:49:10.188968 1120280 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-358053"
	I0729 19:49:10.188957 1120280 addons.go:69] Setting default-storageclass=true in profile "embed-certs-358053"
	W0729 19:49:10.188978 1120280 addons.go:243] addon storage-provisioner should already be in state true
	I0729 19:49:10.188967 1120280 addons.go:69] Setting metrics-server=true in profile "embed-certs-358053"
	I0729 19:49:10.189017 1120280 addons.go:234] Setting addon metrics-server=true in "embed-certs-358053"
	I0729 19:49:10.189016 1120280 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-358053"
	I0729 19:49:10.189023 1120280 host.go:66] Checking if "embed-certs-358053" exists ...
	W0729 19:49:10.189026 1120280 addons.go:243] addon metrics-server should already be in state true
	I0729 19:49:10.189059 1120280 host.go:66] Checking if "embed-certs-358053" exists ...
	I0729 19:49:10.189460 1120280 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:49:10.189461 1120280 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:49:10.189493 1120280 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:49:10.189464 1120280 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:49:10.189513 1120280 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:49:10.189539 1120280 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:49:10.192359 1120280 out.go:177] * Verifying Kubernetes components...
	I0729 19:49:10.193480 1120280 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 19:49:10.210772 1120280 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43059
	I0729 19:49:10.210789 1120280 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37187
	I0729 19:49:10.210777 1120280 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43007
	I0729 19:49:10.211410 1120280 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:49:10.211444 1120280 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:49:10.211415 1120280 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:49:10.211943 1120280 main.go:141] libmachine: Using API Version  1
	I0729 19:49:10.211961 1120280 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:49:10.212067 1120280 main.go:141] libmachine: Using API Version  1
	I0729 19:49:10.212082 1120280 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:49:10.212104 1120280 main.go:141] libmachine: Using API Version  1
	I0729 19:49:10.212129 1120280 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:49:10.212485 1120280 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:49:10.212490 1120280 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:49:10.212517 1120280 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:49:10.213028 1120280 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:49:10.213061 1120280 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:49:10.213275 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetState
	I0729 19:49:10.213666 1120280 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:49:10.213693 1120280 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:49:10.217668 1120280 addons.go:234] Setting addon default-storageclass=true in "embed-certs-358053"
	W0729 19:49:10.217694 1120280 addons.go:243] addon default-storageclass should already be in state true
	I0729 19:49:10.217729 1120280 host.go:66] Checking if "embed-certs-358053" exists ...
	I0729 19:49:10.218106 1120280 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:49:10.218134 1120280 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:49:10.233308 1120280 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34717
	I0729 19:49:10.233515 1120280 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45983
	I0729 19:49:10.233923 1120280 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:49:10.234065 1120280 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:49:10.234486 1120280 main.go:141] libmachine: Using API Version  1
	I0729 19:49:10.234511 1120280 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:49:10.234622 1120280 main.go:141] libmachine: Using API Version  1
	I0729 19:49:10.234646 1120280 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:49:10.234881 1120280 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:49:10.235095 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetState
	I0729 19:49:10.235124 1120280 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:49:10.236407 1120280 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37239
	I0729 19:49:10.236417 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetState
	I0729 19:49:10.236976 1120280 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:49:10.237510 1120280 main.go:141] libmachine: Using API Version  1
	I0729 19:49:10.237529 1120280 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:49:10.237603 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .DriverName
	I0729 19:49:10.238068 1120280 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:49:10.238462 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .DriverName
	I0729 19:49:10.238685 1120280 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:49:10.238717 1120280 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:49:10.239583 1120280 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0729 19:49:10.240247 1120280 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 19:49:09.758990 1120587 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.486239671s)
	I0729 19:49:09.759083 1120587 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 19:49:09.774752 1120587 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 19:49:09.785968 1120587 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 19:49:09.796242 1120587 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 19:49:09.796267 1120587 kubeadm.go:157] found existing configuration files:
	
	I0729 19:49:09.796320 1120587 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0729 19:49:09.805373 1120587 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 19:49:09.805446 1120587 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 19:49:09.814418 1120587 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0729 19:49:09.822923 1120587 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 19:49:09.822977 1120587 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 19:49:09.831784 1120587 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0729 19:49:09.840631 1120587 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 19:49:09.840670 1120587 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 19:49:09.850149 1120587 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0729 19:49:09.858648 1120587 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 19:49:09.858685 1120587 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
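The cleanup sequence above follows one pattern per file: grep each kubeconfig under /etc/kubernetes for the expected control-plane endpoint (https://control-plane.minikube.internal:8444) and remove the file when the endpoint is absent, so the kubeadm init that follows regenerates it. Below is a minimal local Go sketch of that idea, not minikube's actual kubeadm.go (which performs the same checks over SSH); the file list and endpoint are taken from the log, everything else is illustrative.

// Sketch only: keep a kubeconfig if it already points at the expected
// control-plane endpoint, otherwise delete it so "kubeadm init" rewrites it.
package main

import (
	"fmt"
	"os"
	"strings"
)

func cleanStaleConfigs(endpoint string, files []string) error {
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil {
			continue // missing file is fine; kubeadm init will create it
		}
		if strings.Contains(string(data), endpoint) {
			continue // already references the right endpoint, keep it
		}
		fmt.Printf("%q does not reference %s - removing\n", f, endpoint)
		if err := os.Remove(f); err != nil {
			return err
		}
	}
	return nil
}

func main() {
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	if err := cleanStaleConfigs("https://control-plane.minikube.internal:8444", files); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}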
	I0729 19:49:09.868191 1120587 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 19:49:09.918324 1120587 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0729 19:49:09.918439 1120587 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 19:49:10.082807 1120587 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 19:49:10.082977 1120587 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 19:49:10.083133 1120587 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0729 19:49:10.346327 1120587 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 19:49:10.347784 1120587 out.go:204]   - Generating certificates and keys ...
	I0729 19:49:10.347895 1120587 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 19:49:10.347974 1120587 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 19:49:10.348065 1120587 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0729 19:49:10.348152 1120587 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0729 19:49:10.348236 1120587 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0729 19:49:10.348312 1120587 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0729 19:49:10.348395 1120587 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0729 19:49:10.348479 1120587 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0729 19:49:10.348573 1120587 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0729 19:49:10.348672 1120587 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0729 19:49:10.348726 1120587 kubeadm.go:310] [certs] Using the existing "sa" key
	I0729 19:49:10.348806 1120587 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 19:49:10.558934 1120587 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 19:49:10.733434 1120587 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0729 19:49:11.026079 1120587 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 19:49:11.159826 1120587 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 19:49:11.277696 1120587 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 19:49:11.278383 1120587 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 19:49:11.281036 1120587 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 19:49:10.240921 1120280 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0729 19:49:10.240936 1120280 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0729 19:49:10.240952 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHHostname
	I0729 19:49:10.241651 1120280 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 19:49:10.241674 1120280 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0729 19:49:10.241693 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHHostname
	I0729 19:49:10.245407 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:49:10.245440 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:49:10.245923 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:9e:78", ip: ""} in network mk-embed-certs-358053: {Iface:virbr3 ExpiryTime:2024-07-29 20:43:42 +0000 UTC Type:0 Mac:52:54:00:b7:9e:78 Iaid: IPaddr:192.168.61.201 Prefix:24 Hostname:embed-certs-358053 Clientid:01:52:54:00:b7:9e:78}
	I0729 19:49:10.245922 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:9e:78", ip: ""} in network mk-embed-certs-358053: {Iface:virbr3 ExpiryTime:2024-07-29 20:43:42 +0000 UTC Type:0 Mac:52:54:00:b7:9e:78 Iaid: IPaddr:192.168.61.201 Prefix:24 Hostname:embed-certs-358053 Clientid:01:52:54:00:b7:9e:78}
	I0729 19:49:10.245947 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined IP address 192.168.61.201 and MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:49:10.245967 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined IP address 192.168.61.201 and MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:49:10.246145 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHPort
	I0729 19:49:10.246329 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHKeyPath
	I0729 19:49:10.246372 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHPort
	I0729 19:49:10.246511 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHUsername
	I0729 19:49:10.246672 1120280 sshutil.go:53] new ssh client: &{IP:192.168.61.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/embed-certs-358053/id_rsa Username:docker}
	I0729 19:49:10.246688 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHKeyPath
	I0729 19:49:10.246866 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHUsername
	I0729 19:49:10.246988 1120280 sshutil.go:53] new ssh client: &{IP:192.168.61.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/embed-certs-358053/id_rsa Username:docker}
	I0729 19:49:10.256682 1120280 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43353
	I0729 19:49:10.257146 1120280 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:49:10.257747 1120280 main.go:141] libmachine: Using API Version  1
	I0729 19:49:10.257760 1120280 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:49:10.258021 1120280 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:49:10.258264 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetState
	I0729 19:49:10.260096 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .DriverName
	I0729 19:49:10.260305 1120280 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0729 19:49:10.260322 1120280 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0729 19:49:10.260341 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHHostname
	I0729 19:49:10.263479 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:49:10.263914 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:9e:78", ip: ""} in network mk-embed-certs-358053: {Iface:virbr3 ExpiryTime:2024-07-29 20:43:42 +0000 UTC Type:0 Mac:52:54:00:b7:9e:78 Iaid: IPaddr:192.168.61.201 Prefix:24 Hostname:embed-certs-358053 Clientid:01:52:54:00:b7:9e:78}
	I0729 19:49:10.263942 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined IP address 192.168.61.201 and MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:49:10.264099 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHPort
	I0729 19:49:10.264270 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHKeyPath
	I0729 19:49:10.264457 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHUsername
	I0729 19:49:10.264566 1120280 sshutil.go:53] new ssh client: &{IP:192.168.61.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/embed-certs-358053/id_rsa Username:docker}
	I0729 19:49:10.461598 1120280 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 19:49:10.483007 1120280 node_ready.go:35] waiting up to 6m0s for node "embed-certs-358053" to be "Ready" ...
	I0729 19:49:10.492573 1120280 node_ready.go:49] node "embed-certs-358053" has status "Ready":"True"
	I0729 19:49:10.492601 1120280 node_ready.go:38] duration metric: took 9.562848ms for node "embed-certs-358053" to be "Ready" ...
	I0729 19:49:10.492611 1120280 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 19:49:10.498908 1120280 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-62wzl" in "kube-system" namespace to be "Ready" ...
	I0729 19:49:10.574473 1120280 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0729 19:49:10.574500 1120280 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0729 19:49:10.596936 1120280 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 19:49:10.598355 1120280 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0729 19:49:10.598373 1120280 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0729 19:49:10.618403 1120280 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 19:49:10.618430 1120280 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0729 19:49:10.642761 1120280 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 19:49:10.717699 1120280 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0729 19:49:11.218300 1120280 main.go:141] libmachine: Making call to close driver server
	I0729 19:49:11.218321 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .Close
	I0729 19:49:11.218615 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | Closing plugin on server side
	I0729 19:49:11.218664 1120280 main.go:141] libmachine: Successfully made call to close driver server
	I0729 19:49:11.218676 1120280 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 19:49:11.218687 1120280 main.go:141] libmachine: Making call to close driver server
	I0729 19:49:11.218695 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .Close
	I0729 19:49:11.219043 1120280 main.go:141] libmachine: Successfully made call to close driver server
	I0729 19:49:11.219060 1120280 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 19:49:11.758222 1120280 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.115410935s)
	I0729 19:49:11.758294 1120280 main.go:141] libmachine: Making call to close driver server
	I0729 19:49:11.758311 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .Close
	I0729 19:49:11.758416 1120280 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.040630579s)
	I0729 19:49:11.758489 1120280 main.go:141] libmachine: Making call to close driver server
	I0729 19:49:11.758534 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .Close
	I0729 19:49:11.758645 1120280 main.go:141] libmachine: Successfully made call to close driver server
	I0729 19:49:11.758666 1120280 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 19:49:11.758677 1120280 main.go:141] libmachine: Making call to close driver server
	I0729 19:49:11.758684 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .Close
	I0729 19:49:11.759085 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | Closing plugin on server side
	I0729 19:49:11.759123 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | Closing plugin on server side
	I0729 19:49:11.759133 1120280 main.go:141] libmachine: Successfully made call to close driver server
	I0729 19:49:11.759140 1120280 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 19:49:11.759151 1120280 addons.go:475] Verifying addon metrics-server=true in "embed-certs-358053"
	I0729 19:49:11.759242 1120280 main.go:141] libmachine: Successfully made call to close driver server
	I0729 19:49:11.759251 1120280 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 19:49:11.759265 1120280 main.go:141] libmachine: Making call to close driver server
	I0729 19:49:11.759273 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .Close
	I0729 19:49:11.759556 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | Closing plugin on server side
	I0729 19:49:11.759551 1120280 main.go:141] libmachine: Successfully made call to close driver server
	I0729 19:49:11.759576 1120280 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 19:49:11.821869 1120280 main.go:141] libmachine: Making call to close driver server
	I0729 19:49:11.821904 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .Close
	I0729 19:49:11.822218 1120280 main.go:141] libmachine: Successfully made call to close driver server
	I0729 19:49:11.822239 1120280 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 19:49:11.822278 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | Closing plugin on server side
	I0729 19:49:11.825097 1120280 out.go:177] * Enabled addons: storage-provisioner, metrics-server, default-storageclass
	I0729 19:49:10.696468 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:49:12.696754 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:49:11.826501 1120280 addons.go:510] duration metric: took 1.63766283s for enable addons: enabled=[storage-provisioner metrics-server default-storageclass]
	I0729 19:49:12.505464 1120280 pod_ready.go:102] pod "coredns-7db6d8ff4d-62wzl" in "kube-system" namespace has status "Ready":"False"
	I0729 19:49:13.005934 1120280 pod_ready.go:92] pod "coredns-7db6d8ff4d-62wzl" in "kube-system" namespace has status "Ready":"True"
	I0729 19:49:13.005962 1120280 pod_ready.go:81] duration metric: took 2.507029118s for pod "coredns-7db6d8ff4d-62wzl" in "kube-system" namespace to be "Ready" ...
	I0729 19:49:13.005972 1120280 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-rnpqh" in "kube-system" namespace to be "Ready" ...
	I0729 19:49:13.010162 1120280 pod_ready.go:92] pod "coredns-7db6d8ff4d-rnpqh" in "kube-system" namespace has status "Ready":"True"
	I0729 19:49:13.010183 1120280 pod_ready.go:81] duration metric: took 4.204506ms for pod "coredns-7db6d8ff4d-rnpqh" in "kube-system" namespace to be "Ready" ...
	I0729 19:49:13.010191 1120280 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-358053" in "kube-system" namespace to be "Ready" ...
	I0729 19:49:13.013871 1120280 pod_ready.go:92] pod "etcd-embed-certs-358053" in "kube-system" namespace has status "Ready":"True"
	I0729 19:49:13.013888 1120280 pod_ready.go:81] duration metric: took 3.691352ms for pod "etcd-embed-certs-358053" in "kube-system" namespace to be "Ready" ...
	I0729 19:49:13.013895 1120280 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-358053" in "kube-system" namespace to be "Ready" ...
	I0729 19:49:13.017787 1120280 pod_ready.go:92] pod "kube-apiserver-embed-certs-358053" in "kube-system" namespace has status "Ready":"True"
	I0729 19:49:13.017804 1120280 pod_ready.go:81] duration metric: took 3.903153ms for pod "kube-apiserver-embed-certs-358053" in "kube-system" namespace to be "Ready" ...
	I0729 19:49:13.017812 1120280 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-358053" in "kube-system" namespace to be "Ready" ...
	I0729 19:49:13.021807 1120280 pod_ready.go:92] pod "kube-controller-manager-embed-certs-358053" in "kube-system" namespace has status "Ready":"True"
	I0729 19:49:13.021826 1120280 pod_ready.go:81] duration metric: took 4.00839ms for pod "kube-controller-manager-embed-certs-358053" in "kube-system" namespace to be "Ready" ...
	I0729 19:49:13.021834 1120280 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-phmxr" in "kube-system" namespace to be "Ready" ...
	I0729 19:49:13.404663 1120280 pod_ready.go:92] pod "kube-proxy-phmxr" in "kube-system" namespace has status "Ready":"True"
	I0729 19:49:13.404691 1120280 pod_ready.go:81] duration metric: took 382.850052ms for pod "kube-proxy-phmxr" in "kube-system" namespace to be "Ready" ...
	I0729 19:49:13.404703 1120280 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-358053" in "kube-system" namespace to be "Ready" ...
	I0729 19:49:13.803883 1120280 pod_ready.go:92] pod "kube-scheduler-embed-certs-358053" in "kube-system" namespace has status "Ready":"True"
	I0729 19:49:13.803913 1120280 pod_ready.go:81] duration metric: took 399.201369ms for pod "kube-scheduler-embed-certs-358053" in "kube-system" namespace to be "Ready" ...
	I0729 19:49:13.803924 1120280 pod_ready.go:38] duration metric: took 3.31130157s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 19:49:13.803944 1120280 api_server.go:52] waiting for apiserver process to appear ...
	I0729 19:49:13.804012 1120280 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:49:13.819097 1120280 api_server.go:72] duration metric: took 3.63029481s to wait for apiserver process to appear ...
	I0729 19:49:13.819127 1120280 api_server.go:88] waiting for apiserver healthz status ...
	I0729 19:49:13.819158 1120280 api_server.go:253] Checking apiserver healthz at https://192.168.61.201:8443/healthz ...
	I0729 19:49:13.825125 1120280 api_server.go:279] https://192.168.61.201:8443/healthz returned 200:
	ok
	I0729 19:49:13.826172 1120280 api_server.go:141] control plane version: v1.30.3
	I0729 19:49:13.826197 1120280 api_server.go:131] duration metric: took 7.062144ms to wait for apiserver health ...
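The healthz wait logged above amounts to polling https://192.168.61.201:8443/healthz until it answers HTTP 200 with body "ok". The Go sketch below is a rough illustration of that loop (not minikube's api_server.go); it assumes a self-signed apiserver certificate, hence the skipped TLS verification, and the URL and timeout are placeholders.

// Sketch only: poll the apiserver healthz endpoint until it reports "ok".
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
				return nil
			}
		}
		time.Sleep(time.Second)
	}
	return fmt.Errorf("apiserver did not report healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.61.201:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("apiserver healthy")
}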
	I0729 19:49:13.826206 1120280 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 19:49:14.006726 1120280 system_pods.go:59] 9 kube-system pods found
	I0729 19:49:14.006762 1120280 system_pods.go:61] "coredns-7db6d8ff4d-62wzl" [c0cf63a3-98a8-4107-8b51-3b9a39695a6c] Running
	I0729 19:49:14.006769 1120280 system_pods.go:61] "coredns-7db6d8ff4d-rnpqh" [fd0f6d7f-a55a-4556-b5e3-8ed4e555aaea] Running
	I0729 19:49:14.006774 1120280 system_pods.go:61] "etcd-embed-certs-358053" [b4e6558f-195a-449e-83fb-3ad49f1f80b0] Running
	I0729 19:49:14.006780 1120280 system_pods.go:61] "kube-apiserver-embed-certs-358053" [8ce54a21-879a-44f6-9209-699b22fe60a3] Running
	I0729 19:49:14.006786 1120280 system_pods.go:61] "kube-controller-manager-embed-certs-358053" [658a8652-2864-4825-8239-cfbe96e604ab] Running
	I0729 19:49:14.006790 1120280 system_pods.go:61] "kube-proxy-phmxr" [73020161-bb80-445c-ae4f-d1486e18a32e] Running
	I0729 19:49:14.006795 1120280 system_pods.go:61] "kube-scheduler-embed-certs-358053" [f7734e37-b41d-495a-8098-c721b9d56d7c] Running
	I0729 19:49:14.006805 1120280 system_pods.go:61] "metrics-server-569cc877fc-gpz72" [cb992ca6-11f3-4826-b701-6789d3e3e9c0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 19:49:14.006810 1120280 system_pods.go:61] "storage-provisioner" [7c484501-fa8b-4d2d-b7c7-faea3b6b0891] Running
	I0729 19:49:14.006823 1120280 system_pods.go:74] duration metric: took 180.607932ms to wait for pod list to return data ...
	I0729 19:49:14.006836 1120280 default_sa.go:34] waiting for default service account to be created ...
	I0729 19:49:14.203009 1120280 default_sa.go:45] found service account: "default"
	I0729 19:49:14.203034 1120280 default_sa.go:55] duration metric: took 196.19138ms for default service account to be created ...
	I0729 19:49:14.203043 1120280 system_pods.go:116] waiting for k8s-apps to be running ...
	I0729 19:49:14.407217 1120280 system_pods.go:86] 9 kube-system pods found
	I0729 19:49:14.407253 1120280 system_pods.go:89] "coredns-7db6d8ff4d-62wzl" [c0cf63a3-98a8-4107-8b51-3b9a39695a6c] Running
	I0729 19:49:14.407261 1120280 system_pods.go:89] "coredns-7db6d8ff4d-rnpqh" [fd0f6d7f-a55a-4556-b5e3-8ed4e555aaea] Running
	I0729 19:49:14.407267 1120280 system_pods.go:89] "etcd-embed-certs-358053" [b4e6558f-195a-449e-83fb-3ad49f1f80b0] Running
	I0729 19:49:14.407273 1120280 system_pods.go:89] "kube-apiserver-embed-certs-358053" [8ce54a21-879a-44f6-9209-699b22fe60a3] Running
	I0729 19:49:14.407279 1120280 system_pods.go:89] "kube-controller-manager-embed-certs-358053" [658a8652-2864-4825-8239-cfbe96e604ab] Running
	I0729 19:49:14.407285 1120280 system_pods.go:89] "kube-proxy-phmxr" [73020161-bb80-445c-ae4f-d1486e18a32e] Running
	I0729 19:49:14.407291 1120280 system_pods.go:89] "kube-scheduler-embed-certs-358053" [f7734e37-b41d-495a-8098-c721b9d56d7c] Running
	I0729 19:49:14.407305 1120280 system_pods.go:89] "metrics-server-569cc877fc-gpz72" [cb992ca6-11f3-4826-b701-6789d3e3e9c0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 19:49:14.407316 1120280 system_pods.go:89] "storage-provisioner" [7c484501-fa8b-4d2d-b7c7-faea3b6b0891] Running
	I0729 19:49:14.407327 1120280 system_pods.go:126] duration metric: took 204.276761ms to wait for k8s-apps to be running ...
	I0729 19:49:14.407338 1120280 system_svc.go:44] waiting for kubelet service to be running ....
	I0729 19:49:14.407396 1120280 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 19:49:14.422219 1120280 system_svc.go:56] duration metric: took 14.869175ms WaitForService to wait for kubelet
	I0729 19:49:14.422258 1120280 kubeadm.go:582] duration metric: took 4.233462765s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 19:49:14.422285 1120280 node_conditions.go:102] verifying NodePressure condition ...
	I0729 19:49:14.603042 1120280 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 19:49:14.603067 1120280 node_conditions.go:123] node cpu capacity is 2
	I0729 19:49:14.603079 1120280 node_conditions.go:105] duration metric: took 180.789494ms to run NodePressure ...
	I0729 19:49:14.603091 1120280 start.go:241] waiting for startup goroutines ...
	I0729 19:49:14.603098 1120280 start.go:246] waiting for cluster config update ...
	I0729 19:49:14.603108 1120280 start.go:255] writing updated cluster config ...
	I0729 19:49:14.603448 1120280 ssh_runner.go:195] Run: rm -f paused
	I0729 19:49:14.669359 1120280 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0729 19:49:14.671285 1120280 out.go:177] * Done! kubectl is now configured to use "embed-certs-358053" cluster and "default" namespace by default
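The pod_ready waits earlier in this block check each system-critical pod's PodReady condition before the cluster is declared usable. A hedged client-go sketch of that check follows; it is not minikube's pod_ready.go, it needs k8s.io/client-go in go.mod, and the kubeconfig path and pod name are placeholders taken loosely from the log.

// Sketch only: poll a kube-system pod until its PodReady condition is True.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // placeholder path
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for i := 0; i < 60; i++ { // poll for up to roughly a minute
		pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "etcd-embed-certs-358053", metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(time.Second)
	}
	fmt.Println("timed out waiting for pod to become Ready")
}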
	I0729 19:49:11.282743 1120587 out.go:204]   - Booting up control plane ...
	I0729 19:49:11.282887 1120587 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 19:49:11.283393 1120587 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 19:49:11.285899 1120587 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 19:49:11.306343 1120587 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 19:49:11.308692 1120587 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 19:49:11.308776 1120587 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 19:49:11.454703 1120587 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0729 19:49:11.454809 1120587 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0729 19:49:11.957070 1120587 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.339287ms
	I0729 19:49:11.957173 1120587 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0729 19:49:16.958829 1120587 kubeadm.go:310] [api-check] The API server is healthy after 5.001114911s
	I0729 19:49:16.975545 1120587 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0729 19:49:16.992433 1120587 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0729 19:49:17.029655 1120587 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0729 19:49:17.029911 1120587 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-024652 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0729 19:49:17.039761 1120587 kubeadm.go:310] [bootstrap-token] Using token: wivqw5.o681p65fyob7uctp
	I0729 19:49:17.040967 1120587 out.go:204]   - Configuring RBAC rules ...
	I0729 19:49:17.041098 1120587 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0729 19:49:17.047095 1120587 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0729 19:49:17.054741 1120587 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0729 19:49:17.057791 1120587 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0729 19:49:17.064906 1120587 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0729 19:49:17.068354 1120587 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0729 19:49:17.365660 1120587 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0729 19:49:17.803646 1120587 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0729 19:49:18.365942 1120587 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0729 19:49:18.367149 1120587 kubeadm.go:310] 
	I0729 19:49:18.367230 1120587 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0729 19:49:18.367239 1120587 kubeadm.go:310] 
	I0729 19:49:18.367301 1120587 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0729 19:49:18.367308 1120587 kubeadm.go:310] 
	I0729 19:49:18.367356 1120587 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0729 19:49:18.367435 1120587 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0729 19:49:18.367484 1120587 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0729 19:49:18.367490 1120587 kubeadm.go:310] 
	I0729 19:49:18.367564 1120587 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0729 19:49:18.367580 1120587 kubeadm.go:310] 
	I0729 19:49:18.367670 1120587 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0729 19:49:18.367689 1120587 kubeadm.go:310] 
	I0729 19:49:18.367767 1120587 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0729 19:49:18.367886 1120587 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0729 19:49:18.367990 1120587 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0729 19:49:18.368004 1120587 kubeadm.go:310] 
	I0729 19:49:18.368134 1120587 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0729 19:49:18.368245 1120587 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0729 19:49:18.368255 1120587 kubeadm.go:310] 
	I0729 19:49:18.368374 1120587 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token wivqw5.o681p65fyob7uctp \
	I0729 19:49:18.368509 1120587 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:53e118302d1159bf0acd8cfe005edb508199276288ff17d6630ff7f7307f10b1 \
	I0729 19:49:18.368547 1120587 kubeadm.go:310] 	--control-plane 
	I0729 19:49:18.368555 1120587 kubeadm.go:310] 
	I0729 19:49:18.368665 1120587 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0729 19:49:18.368675 1120587 kubeadm.go:310] 
	I0729 19:49:18.368786 1120587 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token wivqw5.o681p65fyob7uctp \
	I0729 19:49:18.368926 1120587 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:53e118302d1159bf0acd8cfe005edb508199276288ff17d6630ff7f7307f10b1 
	I0729 19:49:18.369333 1120587 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 19:49:18.369382 1120587 cni.go:84] Creating CNI manager for ""
	I0729 19:49:18.369398 1120587 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 19:49:18.371718 1120587 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 19:49:15.194685 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:49:17.195094 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:49:18.372851 1120587 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 19:49:18.385204 1120587 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0729 19:49:18.404504 1120587 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0729 19:49:18.404610 1120587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:18.404616 1120587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-024652 minikube.k8s.io/updated_at=2024_07_29T19_49_18_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=ff26f50543a99741e8a988a34136ffea2b41dbf0 minikube.k8s.io/name=default-k8s-diff-port-024652 minikube.k8s.io/primary=true
	I0729 19:49:18.442539 1120587 ops.go:34] apiserver oom_adj: -16
	I0729 19:49:18.580986 1120587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:19.081106 1120587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:19.581681 1120587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:20.081254 1120587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:20.581320 1120587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:21.081977 1120587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:19.195234 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:49:21.694987 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:49:23.695591 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:49:21.581543 1120587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:22.081511 1120587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:22.581732 1120587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:23.081975 1120587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:23.581374 1120587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:24.081970 1120587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:24.581928 1120587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:25.081446 1120587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:25.581218 1120587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:26.081680 1120587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:25.695771 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:49:27.698874 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:49:26.581008 1120587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:27.081974 1120587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:27.581500 1120587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:28.082002 1120587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:28.581979 1120587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:29.081223 1120587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:29.581078 1120587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:30.081834 1120587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:30.581191 1120587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:31.081737 1120587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:31.581832 1120587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:31.661893 1120587 kubeadm.go:1113] duration metric: took 13.257342088s to wait for elevateKubeSystemPrivileges
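The burst of "kubectl get sa default" calls above is minikube retrying roughly every half second until the default service account exists, which is the signal that kube-system is ready for the minikube-rbac clusterrolebinding created earlier. The Go sketch below illustrates that retry loop only; the binary and kubeconfig paths are copied from the log as assumptions, and the timeout is arbitrary.

// Sketch only: retry "kubectl get sa default" until it succeeds or times out.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func waitForDefaultSA(kubectl, kubeconfig string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command(kubectl, "get", "sa", "default", "--kubeconfig="+kubeconfig)
		if err := cmd.Run(); err == nil {
			return nil // default service account exists
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("default service account not created within %s", timeout)
}

func main() {
	err := waitForDefaultSA(
		"/var/lib/minikube/binaries/v1.30.3/kubectl",
		"/var/lib/minikube/kubeconfig",
		2*time.Minute,
	)
	fmt.Println(err)
}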
	I0729 19:49:31.661933 1120587 kubeadm.go:394] duration metric: took 5m14.024337116s to StartCluster
	I0729 19:49:31.661952 1120587 settings.go:142] acquiring lock: {Name:mk8657322241b3b1f65443d6cee1b2ccb99f315e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 19:49:31.662031 1120587 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19312-1055011/kubeconfig
	I0729 19:49:31.663828 1120587 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-1055011/kubeconfig: {Name:mkf834b33d9b214f3561db5b8f8958d26700afbb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 19:49:31.664068 1120587 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.100 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 19:49:31.664116 1120587 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0729 19:49:31.664229 1120587 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-024652"
	I0729 19:49:31.664249 1120587 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-024652"
	I0729 19:49:31.664265 1120587 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-024652"
	W0729 19:49:31.664274 1120587 addons.go:243] addon storage-provisioner should already be in state true
	I0729 19:49:31.664265 1120587 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-024652"
	I0729 19:49:31.664286 1120587 config.go:182] Loaded profile config "default-k8s-diff-port-024652": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 19:49:31.664293 1120587 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-024652"
	I0729 19:49:31.664313 1120587 host.go:66] Checking if "default-k8s-diff-port-024652" exists ...
	I0729 19:49:31.664318 1120587 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-024652"
	W0729 19:49:31.664330 1120587 addons.go:243] addon metrics-server should already be in state true
	I0729 19:49:31.664370 1120587 host.go:66] Checking if "default-k8s-diff-port-024652" exists ...
	I0729 19:49:31.664689 1120587 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:49:31.664724 1120587 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:49:31.664775 1120587 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:49:31.664778 1120587 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:49:31.664817 1120587 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:49:31.664827 1120587 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:49:31.665472 1120587 out.go:177] * Verifying Kubernetes components...
	I0729 19:49:31.666773 1120587 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 19:49:31.684886 1120587 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36885
	I0729 19:49:31.684948 1120587 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40365
	I0729 19:49:31.685049 1120587 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46525
	I0729 19:49:31.685394 1120587 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:49:31.685443 1120587 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:49:31.685506 1120587 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:49:31.685916 1120587 main.go:141] libmachine: Using API Version  1
	I0729 19:49:31.685936 1120587 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:49:31.685961 1120587 main.go:141] libmachine: Using API Version  1
	I0729 19:49:31.685982 1120587 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:49:31.686343 1120587 main.go:141] libmachine: Using API Version  1
	I0729 19:49:31.686363 1120587 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:49:31.686378 1120587 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:49:31.686367 1120587 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:49:31.686564 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetState
	I0729 19:49:31.686713 1120587 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:49:31.687028 1120587 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:49:31.687071 1120587 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:49:31.687291 1120587 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:49:31.687340 1120587 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:49:31.690159 1120587 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-024652"
	W0729 19:49:31.690177 1120587 addons.go:243] addon default-storageclass should already be in state true
	I0729 19:49:31.690208 1120587 host.go:66] Checking if "default-k8s-diff-port-024652" exists ...
	I0729 19:49:31.690543 1120587 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:49:31.690586 1120587 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:49:31.705387 1120587 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41375
	I0729 19:49:31.705778 1120587 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34099
	I0729 19:49:31.706027 1120587 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:49:31.706144 1120587 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:49:31.706207 1120587 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33381
	I0729 19:49:31.706633 1120587 main.go:141] libmachine: Using API Version  1
	I0729 19:49:31.706652 1120587 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:49:31.706730 1120587 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:49:31.706990 1120587 main.go:141] libmachine: Using API Version  1
	I0729 19:49:31.707009 1120587 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:49:31.707198 1120587 main.go:141] libmachine: Using API Version  1
	I0729 19:49:31.707218 1120587 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:49:31.707376 1120587 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:49:31.707429 1120587 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:49:31.707627 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetState
	I0729 19:49:31.707689 1120587 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:49:31.707861 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetState
	I0729 19:49:31.708016 1120587 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:49:31.708065 1120587 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:49:31.710254 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .DriverName
	I0729 19:49:31.710315 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .DriverName
	I0729 19:49:31.711981 1120587 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 19:49:31.711996 1120587 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0729 19:49:31.713155 1120587 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0729 19:49:31.713179 1120587 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0729 19:49:31.713201 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHHostname
	I0729 19:49:31.713255 1120587 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 19:49:31.713270 1120587 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0729 19:49:31.713289 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHHostname
	I0729 19:49:31.717458 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:49:31.718017 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:73:cb", ip: ""} in network mk-default-k8s-diff-port-024652: {Iface:virbr4 ExpiryTime:2024-07-29 20:44:03 +0000 UTC Type:0 Mac:52:54:00:4c:73:cb Iaid: IPaddr:192.168.72.100 Prefix:24 Hostname:default-k8s-diff-port-024652 Clientid:01:52:54:00:4c:73:cb}
	I0729 19:49:31.718042 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined IP address 192.168.72.100 and MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:49:31.718355 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHPort
	I0729 19:49:31.718503 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHKeyPath
	I0729 19:49:31.718555 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:49:31.718750 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHUsername
	I0729 19:49:31.718888 1120587 sshutil.go:53] new ssh client: &{IP:192.168.72.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/default-k8s-diff-port-024652/id_rsa Username:docker}
	I0729 19:49:31.719190 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHPort
	I0729 19:49:31.719242 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:73:cb", ip: ""} in network mk-default-k8s-diff-port-024652: {Iface:virbr4 ExpiryTime:2024-07-29 20:44:03 +0000 UTC Type:0 Mac:52:54:00:4c:73:cb Iaid: IPaddr:192.168.72.100 Prefix:24 Hostname:default-k8s-diff-port-024652 Clientid:01:52:54:00:4c:73:cb}
	I0729 19:49:31.719255 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined IP address 192.168.72.100 and MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:49:31.719400 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHKeyPath
	I0729 19:49:31.719536 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHUsername
	I0729 19:49:31.719630 1120587 sshutil.go:53] new ssh client: &{IP:192.168.72.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/default-k8s-diff-port-024652/id_rsa Username:docker}
	I0729 19:49:31.726052 1120587 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42897
	I0729 19:49:31.726530 1120587 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:49:31.727089 1120587 main.go:141] libmachine: Using API Version  1
	I0729 19:49:31.727106 1120587 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:49:31.727404 1120587 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:49:31.727585 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetState
	I0729 19:49:31.729111 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .DriverName
	I0729 19:49:31.729730 1120587 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0729 19:49:31.729832 1120587 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0729 19:49:31.729853 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHHostname
	I0729 19:49:31.733855 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:49:31.734290 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:73:cb", ip: ""} in network mk-default-k8s-diff-port-024652: {Iface:virbr4 ExpiryTime:2024-07-29 20:44:03 +0000 UTC Type:0 Mac:52:54:00:4c:73:cb Iaid: IPaddr:192.168.72.100 Prefix:24 Hostname:default-k8s-diff-port-024652 Clientid:01:52:54:00:4c:73:cb}
	I0729 19:49:31.734307 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined IP address 192.168.72.100 and MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:49:31.734528 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHPort
	I0729 19:49:31.734735 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHKeyPath
	I0729 19:49:31.734923 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHUsername
	I0729 19:49:31.735104 1120587 sshutil.go:53] new ssh client: &{IP:192.168.72.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/default-k8s-diff-port-024652/id_rsa Username:docker}
	I0729 19:49:31.896299 1120587 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 19:49:31.916363 1120587 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-024652" to be "Ready" ...
	I0729 19:49:31.946258 1120587 node_ready.go:49] node "default-k8s-diff-port-024652" has status "Ready":"True"
	I0729 19:49:31.946286 1120587 node_ready.go:38] duration metric: took 29.887552ms for node "default-k8s-diff-port-024652" to be "Ready" ...
	I0729 19:49:31.946297 1120587 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 19:49:31.986320 1120587 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0729 19:49:31.986901 1120587 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-wqbpm" in "kube-system" namespace to be "Ready" ...
	I0729 19:49:32.008401 1120587 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0729 19:49:32.008420 1120587 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0729 19:49:32.033950 1120587 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 19:49:32.060771 1120587 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0729 19:49:32.060808 1120587 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0729 19:49:32.108557 1120587 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 19:49:32.108587 1120587 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0729 19:49:32.153081 1120587 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 19:49:32.234814 1120587 main.go:141] libmachine: Making call to close driver server
	I0729 19:49:32.234854 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .Close
	I0729 19:49:32.235187 1120587 main.go:141] libmachine: Successfully made call to close driver server
	I0729 19:49:32.235247 1120587 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 19:49:32.235260 1120587 main.go:141] libmachine: Making call to close driver server
	I0729 19:49:32.235259 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | Closing plugin on server side
	I0729 19:49:32.235270 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .Close
	I0729 19:49:32.235530 1120587 main.go:141] libmachine: Successfully made call to close driver server
	I0729 19:49:32.235546 1120587 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 19:49:32.240556 1120587 main.go:141] libmachine: Making call to close driver server
	I0729 19:49:32.240572 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .Close
	I0729 19:49:32.240859 1120587 main.go:141] libmachine: Successfully made call to close driver server
	I0729 19:49:32.240880 1120587 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 19:49:32.240887 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | Closing plugin on server side
	I0729 19:49:32.510172 1120587 main.go:141] libmachine: Making call to close driver server
	I0729 19:49:32.510201 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .Close
	I0729 19:49:32.510518 1120587 main.go:141] libmachine: Successfully made call to close driver server
	I0729 19:49:32.510535 1120587 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 19:49:32.510558 1120587 main.go:141] libmachine: Making call to close driver server
	I0729 19:49:32.510566 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .Close
	I0729 19:49:32.511002 1120587 main.go:141] libmachine: Successfully made call to close driver server
	I0729 19:49:32.511031 1120587 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 19:49:32.511053 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | Closing plugin on server side
	I0729 19:49:32.755803 1120587 main.go:141] libmachine: Making call to close driver server
	I0729 19:49:32.755828 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .Close
	I0729 19:49:32.756119 1120587 main.go:141] libmachine: Successfully made call to close driver server
	I0729 19:49:32.756135 1120587 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 19:49:32.756144 1120587 main.go:141] libmachine: Making call to close driver server
	I0729 19:49:32.756151 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .Close
	I0729 19:49:32.756432 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | Closing plugin on server side
	I0729 19:49:32.756476 1120587 main.go:141] libmachine: Successfully made call to close driver server
	I0729 19:49:32.756488 1120587 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 19:49:32.756502 1120587 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-024652"
	I0729 19:49:32.758693 1120587 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0729 19:49:29.689616 1119948 pod_ready.go:81] duration metric: took 4m0.001003902s for pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace to be "Ready" ...
	E0729 19:49:29.689644 1119948 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace to be "Ready" (will not retry!)
	I0729 19:49:29.689670 1119948 pod_ready.go:38] duration metric: took 4m12.210774413s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 19:49:29.689724 1119948 kubeadm.go:597] duration metric: took 4m20.557808792s to restartPrimaryControlPlane
	W0729 19:49:29.689815 1119948 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0729 19:49:29.689855 1119948 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0729 19:49:32.759744 1120587 addons.go:510] duration metric: took 1.095628452s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0729 19:49:33.998542 1120587 pod_ready.go:102] pod "coredns-7db6d8ff4d-wqbpm" in "kube-system" namespace has status "Ready":"False"
	I0729 19:49:34.993504 1120587 pod_ready.go:92] pod "coredns-7db6d8ff4d-wqbpm" in "kube-system" namespace has status "Ready":"True"
	I0729 19:49:34.993529 1120587 pod_ready.go:81] duration metric: took 3.006601304s for pod "coredns-7db6d8ff4d-wqbpm" in "kube-system" namespace to be "Ready" ...
	I0729 19:49:34.993538 1120587 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-z8mxw" in "kube-system" namespace to be "Ready" ...
	I0729 19:49:34.999514 1120587 pod_ready.go:92] pod "coredns-7db6d8ff4d-z8mxw" in "kube-system" namespace has status "Ready":"True"
	I0729 19:49:34.999543 1120587 pod_ready.go:81] duration metric: took 5.998397ms for pod "coredns-7db6d8ff4d-z8mxw" in "kube-system" namespace to be "Ready" ...
	I0729 19:49:34.999556 1120587 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-024652" in "kube-system" namespace to be "Ready" ...
	I0729 19:49:35.004591 1120587 pod_ready.go:92] pod "etcd-default-k8s-diff-port-024652" in "kube-system" namespace has status "Ready":"True"
	I0729 19:49:35.004615 1120587 pod_ready.go:81] duration metric: took 5.050736ms for pod "etcd-default-k8s-diff-port-024652" in "kube-system" namespace to be "Ready" ...
	I0729 19:49:35.004626 1120587 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-024652" in "kube-system" namespace to be "Ready" ...
	I0729 19:49:35.009617 1120587 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-024652" in "kube-system" namespace has status "Ready":"True"
	I0729 19:49:35.009639 1120587 pod_ready.go:81] duration metric: took 5.004922ms for pod "kube-apiserver-default-k8s-diff-port-024652" in "kube-system" namespace to be "Ready" ...
	I0729 19:49:35.009649 1120587 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-024652" in "kube-system" namespace to be "Ready" ...
	I0729 19:49:35.015860 1120587 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-024652" in "kube-system" namespace has status "Ready":"True"
	I0729 19:49:35.015879 1120587 pod_ready.go:81] duration metric: took 6.221932ms for pod "kube-controller-manager-default-k8s-diff-port-024652" in "kube-system" namespace to be "Ready" ...
	I0729 19:49:35.015887 1120587 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-wfr8f" in "kube-system" namespace to be "Ready" ...
	I0729 19:49:35.392558 1120587 pod_ready.go:92] pod "kube-proxy-wfr8f" in "kube-system" namespace has status "Ready":"True"
	I0729 19:49:35.392595 1120587 pod_ready.go:81] duration metric: took 376.701757ms for pod "kube-proxy-wfr8f" in "kube-system" namespace to be "Ready" ...
	I0729 19:49:35.392604 1120587 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-024652" in "kube-system" namespace to be "Ready" ...
	I0729 19:49:35.791324 1120587 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-024652" in "kube-system" namespace has status "Ready":"True"
	I0729 19:49:35.791357 1120587 pod_ready.go:81] duration metric: took 398.744718ms for pod "kube-scheduler-default-k8s-diff-port-024652" in "kube-system" namespace to be "Ready" ...
	I0729 19:49:35.791368 1120587 pod_ready.go:38] duration metric: took 3.84505744s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 19:49:35.791389 1120587 api_server.go:52] waiting for apiserver process to appear ...
	I0729 19:49:35.791451 1120587 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:49:35.808765 1120587 api_server.go:72] duration metric: took 4.144664884s to wait for apiserver process to appear ...
	I0729 19:49:35.808795 1120587 api_server.go:88] waiting for apiserver healthz status ...
	I0729 19:49:35.808816 1120587 api_server.go:253] Checking apiserver healthz at https://192.168.72.100:8444/healthz ...
	I0729 19:49:35.813053 1120587 api_server.go:279] https://192.168.72.100:8444/healthz returned 200:
	ok
	I0729 19:49:35.814108 1120587 api_server.go:141] control plane version: v1.30.3
	I0729 19:49:35.814129 1120587 api_server.go:131] duration metric: took 5.326691ms to wait for apiserver health ...
	I0729 19:49:35.814135 1120587 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 19:49:35.994230 1120587 system_pods.go:59] 9 kube-system pods found
	I0729 19:49:35.994267 1120587 system_pods.go:61] "coredns-7db6d8ff4d-wqbpm" [96db74e9-67ca-4065-8758-a27a14b6d3d5] Running
	I0729 19:49:35.994274 1120587 system_pods.go:61] "coredns-7db6d8ff4d-z8mxw" [12aa4a13-f4af-4cda-b099-5e0e44836300] Running
	I0729 19:49:35.994280 1120587 system_pods.go:61] "etcd-default-k8s-diff-port-024652" [6c733608-bc36-40a8-a6d1-2fa10ee45ef7] Running
	I0729 19:49:35.994285 1120587 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-024652" [755ccaaa-70fc-4d21-bf24-55638ea6778a] Running
	I0729 19:49:35.994293 1120587 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-024652" [1ed4cda3-7de9-4562-be52-b2a5f3490979] Running
	I0729 19:49:35.994300 1120587 system_pods.go:61] "kube-proxy-wfr8f" [86699d3a-0843-4b82-b772-23c8f5b7c88a] Running
	I0729 19:49:35.994305 1120587 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-024652" [d51619f9-c388-4ca5-a3e7-2028f0f76d9a] Running
	I0729 19:49:35.994314 1120587 system_pods.go:61] "metrics-server-569cc877fc-rp2fk" [826ffadd-1c1c-4666-8c09-f43a82262912] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 19:49:35.994318 1120587 system_pods.go:61] "storage-provisioner" [ce612854-895f-44d4-8c33-30c3a7eff802] Running
	I0729 19:49:35.994329 1120587 system_pods.go:74] duration metric: took 180.186983ms to wait for pod list to return data ...
	I0729 19:49:35.994339 1120587 default_sa.go:34] waiting for default service account to be created ...
	I0729 19:49:36.191025 1120587 default_sa.go:45] found service account: "default"
	I0729 19:49:36.191057 1120587 default_sa.go:55] duration metric: took 196.710231ms for default service account to be created ...
	I0729 19:49:36.191066 1120587 system_pods.go:116] waiting for k8s-apps to be running ...
	I0729 19:49:36.395188 1120587 system_pods.go:86] 9 kube-system pods found
	I0729 19:49:36.395218 1120587 system_pods.go:89] "coredns-7db6d8ff4d-wqbpm" [96db74e9-67ca-4065-8758-a27a14b6d3d5] Running
	I0729 19:49:36.395224 1120587 system_pods.go:89] "coredns-7db6d8ff4d-z8mxw" [12aa4a13-f4af-4cda-b099-5e0e44836300] Running
	I0729 19:49:36.395229 1120587 system_pods.go:89] "etcd-default-k8s-diff-port-024652" [6c733608-bc36-40a8-a6d1-2fa10ee45ef7] Running
	I0729 19:49:36.395233 1120587 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-024652" [755ccaaa-70fc-4d21-bf24-55638ea6778a] Running
	I0729 19:49:36.395237 1120587 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-024652" [1ed4cda3-7de9-4562-be52-b2a5f3490979] Running
	I0729 19:49:36.395241 1120587 system_pods.go:89] "kube-proxy-wfr8f" [86699d3a-0843-4b82-b772-23c8f5b7c88a] Running
	I0729 19:49:36.395245 1120587 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-024652" [d51619f9-c388-4ca5-a3e7-2028f0f76d9a] Running
	I0729 19:49:36.395257 1120587 system_pods.go:89] "metrics-server-569cc877fc-rp2fk" [826ffadd-1c1c-4666-8c09-f43a82262912] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 19:49:36.395262 1120587 system_pods.go:89] "storage-provisioner" [ce612854-895f-44d4-8c33-30c3a7eff802] Running
	I0729 19:49:36.395272 1120587 system_pods.go:126] duration metric: took 204.199685ms to wait for k8s-apps to be running ...
	I0729 19:49:36.395280 1120587 system_svc.go:44] waiting for kubelet service to be running ....
	I0729 19:49:36.395327 1120587 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 19:49:36.414410 1120587 system_svc.go:56] duration metric: took 19.116999ms WaitForService to wait for kubelet
	I0729 19:49:36.414442 1120587 kubeadm.go:582] duration metric: took 4.750347675s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 19:49:36.414470 1120587 node_conditions.go:102] verifying NodePressure condition ...
	I0729 19:49:36.591019 1120587 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 19:49:36.591045 1120587 node_conditions.go:123] node cpu capacity is 2
	I0729 19:49:36.591058 1120587 node_conditions.go:105] duration metric: took 176.580075ms to run NodePressure ...
	I0729 19:49:36.591069 1120587 start.go:241] waiting for startup goroutines ...
	I0729 19:49:36.591076 1120587 start.go:246] waiting for cluster config update ...
	I0729 19:49:36.591086 1120587 start.go:255] writing updated cluster config ...
	I0729 19:49:36.591330 1120587 ssh_runner.go:195] Run: rm -f paused
	I0729 19:49:36.641571 1120587 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0729 19:49:36.643324 1120587 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-024652" cluster and "default" namespace by default
	I0729 19:49:55.819640 1119948 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.129754186s)
	I0729 19:49:55.819736 1119948 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 19:49:55.857245 1119948 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 19:49:55.874823 1119948 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 19:49:55.887767 1119948 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 19:49:55.887786 1119948 kubeadm.go:157] found existing configuration files:
	
	I0729 19:49:55.887826 1119948 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 19:49:55.898598 1119948 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 19:49:55.898659 1119948 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 19:49:55.919811 1119948 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 19:49:55.929490 1119948 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 19:49:55.929557 1119948 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 19:49:55.938832 1119948 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 19:49:55.952638 1119948 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 19:49:55.952698 1119948 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 19:49:55.965512 1119948 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 19:49:55.975116 1119948 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 19:49:55.975180 1119948 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 19:49:55.984448 1119948 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 19:49:56.040488 1119948 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0-beta.0
	I0729 19:49:56.040619 1119948 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 19:49:56.161648 1119948 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 19:49:56.161792 1119948 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 19:49:56.161913 1119948 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0729 19:49:56.171626 1119948 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 19:49:56.173709 1119948 out.go:204]   - Generating certificates and keys ...
	I0729 19:49:56.173830 1119948 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 19:49:56.173928 1119948 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 19:49:56.174047 1119948 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0729 19:49:56.174143 1119948 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0729 19:49:56.174232 1119948 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0729 19:49:56.174302 1119948 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0729 19:49:56.174382 1119948 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0729 19:49:56.174453 1119948 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0729 19:49:56.174572 1119948 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0729 19:49:56.174694 1119948 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0729 19:49:56.174750 1119948 kubeadm.go:310] [certs] Using the existing "sa" key
	I0729 19:49:56.174830 1119948 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 19:49:56.246122 1119948 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 19:49:56.355960 1119948 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0729 19:49:56.420777 1119948 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 19:49:56.496969 1119948 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 19:49:56.583932 1119948 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 19:49:56.584470 1119948 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 19:49:56.587115 1119948 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 19:49:56.588779 1119948 out.go:204]   - Booting up control plane ...
	I0729 19:49:56.588912 1119948 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 19:49:56.588986 1119948 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 19:49:56.589041 1119948 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 19:49:56.608126 1119948 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 19:49:56.614632 1119948 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 19:49:56.614696 1119948 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 19:49:56.754879 1119948 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0729 19:49:56.754999 1119948 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0729 19:49:57.257324 1119948 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.327954ms
	I0729 19:49:57.257465 1119948 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0729 19:50:02.762738 1119948 kubeadm.go:310] [api-check] The API server is healthy after 5.503528666s
	I0729 19:50:02.774459 1119948 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0729 19:50:02.788865 1119948 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0729 19:50:02.826192 1119948 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0729 19:50:02.826457 1119948 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-843792 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0729 19:50:02.839359 1119948 kubeadm.go:310] [bootstrap-token] Using token: yaj2k6.6nijnxczu3nl8yfv
	I0729 19:50:02.840952 1119948 out.go:204]   - Configuring RBAC rules ...
	I0729 19:50:02.841087 1119948 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0729 19:50:02.846969 1119948 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0729 19:50:02.861696 1119948 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0729 19:50:02.866680 1119948 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0729 19:50:02.871113 1119948 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0729 19:50:02.875148 1119948 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0729 19:50:03.170084 1119948 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0729 19:50:03.622188 1119948 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0729 19:50:04.170979 1119948 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0729 19:50:04.171916 1119948 kubeadm.go:310] 
	I0729 19:50:04.172017 1119948 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0729 19:50:04.172027 1119948 kubeadm.go:310] 
	I0729 19:50:04.172139 1119948 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0729 19:50:04.172149 1119948 kubeadm.go:310] 
	I0729 19:50:04.172183 1119948 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0729 19:50:04.172258 1119948 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0729 19:50:04.172337 1119948 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0729 19:50:04.172356 1119948 kubeadm.go:310] 
	I0729 19:50:04.172451 1119948 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0729 19:50:04.172480 1119948 kubeadm.go:310] 
	I0729 19:50:04.172570 1119948 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0729 19:50:04.172581 1119948 kubeadm.go:310] 
	I0729 19:50:04.172652 1119948 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0729 19:50:04.172755 1119948 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0729 19:50:04.172861 1119948 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0729 19:50:04.172876 1119948 kubeadm.go:310] 
	I0729 19:50:04.172944 1119948 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0729 19:50:04.173046 1119948 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0729 19:50:04.173056 1119948 kubeadm.go:310] 
	I0729 19:50:04.173171 1119948 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token yaj2k6.6nijnxczu3nl8yfv \
	I0729 19:50:04.173307 1119948 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:53e118302d1159bf0acd8cfe005edb508199276288ff17d6630ff7f7307f10b1 \
	I0729 19:50:04.173330 1119948 kubeadm.go:310] 	--control-plane 
	I0729 19:50:04.173334 1119948 kubeadm.go:310] 
	I0729 19:50:04.173405 1119948 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0729 19:50:04.173411 1119948 kubeadm.go:310] 
	I0729 19:50:04.173493 1119948 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token yaj2k6.6nijnxczu3nl8yfv \
	I0729 19:50:04.173666 1119948 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:53e118302d1159bf0acd8cfe005edb508199276288ff17d6630ff7f7307f10b1 
	I0729 19:50:04.175016 1119948 kubeadm.go:310] W0729 19:49:56.020841    2986 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0729 19:50:04.175395 1119948 kubeadm.go:310] W0729 19:49:56.021779    2986 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0729 19:50:04.175537 1119948 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 19:50:04.175567 1119948 cni.go:84] Creating CNI manager for ""
	I0729 19:50:04.175577 1119948 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 19:50:04.177050 1119948 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 19:50:04.178074 1119948 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 19:50:04.189753 1119948 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0729 19:50:04.212891 1119948 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0729 19:50:04.213003 1119948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:50:04.213014 1119948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-843792 minikube.k8s.io/updated_at=2024_07_29T19_50_04_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=ff26f50543a99741e8a988a34136ffea2b41dbf0 minikube.k8s.io/name=no-preload-843792 minikube.k8s.io/primary=true
	I0729 19:50:04.241948 1119948 ops.go:34] apiserver oom_adj: -16
	I0729 19:50:04.470011 1119948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:50:04.970139 1119948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:50:05.470618 1119948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:50:05.970968 1119948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:50:06.471036 1119948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:50:06.970260 1119948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:50:07.470060 1119948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:50:07.970455 1119948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:50:08.091380 1119948 kubeadm.go:1113] duration metric: took 3.878454801s to wait for elevateKubeSystemPrivileges
	I0729 19:50:08.091420 1119948 kubeadm.go:394] duration metric: took 4m59.009669918s to StartCluster
	I0729 19:50:08.091442 1119948 settings.go:142] acquiring lock: {Name:mk8657322241b3b1f65443d6cee1b2ccb99f315e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 19:50:08.091531 1119948 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19312-1055011/kubeconfig
	I0729 19:50:08.093926 1119948 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-1055011/kubeconfig: {Name:mkf834b33d9b214f3561db5b8f8958d26700afbb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 19:50:08.094254 1119948 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.248 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 19:50:08.094349 1119948 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0729 19:50:08.094445 1119948 addons.go:69] Setting storage-provisioner=true in profile "no-preload-843792"
	I0729 19:50:08.094490 1119948 addons.go:234] Setting addon storage-provisioner=true in "no-preload-843792"
	I0729 19:50:08.094489 1119948 config.go:182] Loaded profile config "no-preload-843792": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	W0729 19:50:08.094502 1119948 addons.go:243] addon storage-provisioner should already be in state true
	I0729 19:50:08.094506 1119948 addons.go:69] Setting default-storageclass=true in profile "no-preload-843792"
	I0729 19:50:08.094537 1119948 host.go:66] Checking if "no-preload-843792" exists ...
	I0729 19:50:08.094545 1119948 addons.go:69] Setting metrics-server=true in profile "no-preload-843792"
	I0729 19:50:08.094555 1119948 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-843792"
	I0729 19:50:08.094567 1119948 addons.go:234] Setting addon metrics-server=true in "no-preload-843792"
	W0729 19:50:08.094576 1119948 addons.go:243] addon metrics-server should already be in state true
	I0729 19:50:08.094606 1119948 host.go:66] Checking if "no-preload-843792" exists ...
	I0729 19:50:08.094992 1119948 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:50:08.095014 1119948 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:50:08.094991 1119948 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:50:08.095032 1119948 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:50:08.095032 1119948 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:50:08.095053 1119948 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:50:08.095990 1119948 out.go:177] * Verifying Kubernetes components...
	I0729 19:50:08.097297 1119948 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 19:50:08.111086 1119948 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39031
	I0729 19:50:08.111172 1119948 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35951
	I0729 19:50:08.111530 1119948 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:50:08.111611 1119948 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:50:08.112076 1119948 main.go:141] libmachine: Using API Version  1
	I0729 19:50:08.112096 1119948 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:50:08.112212 1119948 main.go:141] libmachine: Using API Version  1
	I0729 19:50:08.112236 1119948 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:50:08.112601 1119948 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:50:08.112598 1119948 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:50:08.113192 1119948 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:50:08.113222 1119948 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:50:08.113195 1119948 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:50:08.113331 1119948 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:50:08.113688 1119948 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43039
	I0729 19:50:08.114065 1119948 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:50:08.114550 1119948 main.go:141] libmachine: Using API Version  1
	I0729 19:50:08.114573 1119948 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:50:08.115130 1119948 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:50:08.115340 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetState
	I0729 19:50:08.118967 1119948 addons.go:234] Setting addon default-storageclass=true in "no-preload-843792"
	W0729 19:50:08.118988 1119948 addons.go:243] addon default-storageclass should already be in state true
	I0729 19:50:08.119018 1119948 host.go:66] Checking if "no-preload-843792" exists ...
	I0729 19:50:08.119367 1119948 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:50:08.119391 1119948 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:50:08.131330 1119948 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34509
	I0729 19:50:08.131868 1119948 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:50:08.132155 1119948 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44961
	I0729 19:50:08.132404 1119948 main.go:141] libmachine: Using API Version  1
	I0729 19:50:08.132427 1119948 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:50:08.132485 1119948 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:50:08.132795 1119948 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:50:08.133148 1119948 main.go:141] libmachine: Using API Version  1
	I0729 19:50:08.133167 1119948 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:50:08.133169 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetState
	I0729 19:50:08.133541 1119948 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:50:08.133802 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetState
	I0729 19:50:08.135456 1119948 main.go:141] libmachine: (no-preload-843792) Calling .DriverName
	I0729 19:50:08.135939 1119948 main.go:141] libmachine: (no-preload-843792) Calling .DriverName
	I0729 19:50:08.137341 1119948 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 19:50:08.137345 1119948 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0729 19:50:08.139247 1119948 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0729 19:50:08.139281 1119948 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0729 19:50:08.139303 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHHostname
	I0729 19:50:08.139373 1119948 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 19:50:08.139393 1119948 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0729 19:50:08.139411 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHHostname
	I0729 19:50:08.143427 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:50:08.143462 1119948 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40183
	I0729 19:50:08.143636 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:50:08.143916 1119948 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:50:08.143982 1119948 main.go:141] libmachine: (no-preload-843792) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:0e:8c", ip: ""} in network mk-no-preload-843792: {Iface:virbr2 ExpiryTime:2024-07-29 20:44:43 +0000 UTC Type:0 Mac:52:54:00:ae:0e:8c Iaid: IPaddr:192.168.50.248 Prefix:24 Hostname:no-preload-843792 Clientid:01:52:54:00:ae:0e:8c}
	I0729 19:50:08.143994 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined IP address 192.168.50.248 and MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:50:08.144028 1119948 main.go:141] libmachine: (no-preload-843792) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:0e:8c", ip: ""} in network mk-no-preload-843792: {Iface:virbr2 ExpiryTime:2024-07-29 20:44:43 +0000 UTC Type:0 Mac:52:54:00:ae:0e:8c Iaid: IPaddr:192.168.50.248 Prefix:24 Hostname:no-preload-843792 Clientid:01:52:54:00:ae:0e:8c}
	I0729 19:50:08.144061 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined IP address 192.168.50.248 and MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:50:08.144375 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHPort
	I0729 19:50:08.144420 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHPort
	I0729 19:50:08.144425 1119948 main.go:141] libmachine: Using API Version  1
	I0729 19:50:08.144437 1119948 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:50:08.144564 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHKeyPath
	I0729 19:50:08.144608 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHKeyPath
	I0729 19:50:08.144771 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHUsername
	I0729 19:50:08.144802 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHUsername
	I0729 19:50:08.144836 1119948 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:50:08.144947 1119948 sshutil.go:53] new ssh client: &{IP:192.168.50.248 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/no-preload-843792/id_rsa Username:docker}
	I0729 19:50:08.144951 1119948 sshutil.go:53] new ssh client: &{IP:192.168.50.248 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/no-preload-843792/id_rsa Username:docker}
	I0729 19:50:08.145438 1119948 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:50:08.145468 1119948 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:50:08.162100 1119948 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46055
	I0729 19:50:08.162705 1119948 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:50:08.163290 1119948 main.go:141] libmachine: Using API Version  1
	I0729 19:50:08.163312 1119948 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:50:08.163700 1119948 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:50:08.163887 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetState
	I0729 19:50:08.165757 1119948 main.go:141] libmachine: (no-preload-843792) Calling .DriverName
	I0729 19:50:08.165967 1119948 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0729 19:50:08.165983 1119948 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0729 19:50:08.166000 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHHostname
	I0729 19:50:08.169065 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:50:08.169515 1119948 main.go:141] libmachine: (no-preload-843792) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:0e:8c", ip: ""} in network mk-no-preload-843792: {Iface:virbr2 ExpiryTime:2024-07-29 20:44:43 +0000 UTC Type:0 Mac:52:54:00:ae:0e:8c Iaid: IPaddr:192.168.50.248 Prefix:24 Hostname:no-preload-843792 Clientid:01:52:54:00:ae:0e:8c}
	I0729 19:50:08.169535 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined IP address 192.168.50.248 and MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:50:08.169694 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHPort
	I0729 19:50:08.169850 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHKeyPath
	I0729 19:50:08.170030 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHUsername
	I0729 19:50:08.170144 1119948 sshutil.go:53] new ssh client: &{IP:192.168.50.248 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/no-preload-843792/id_rsa Username:docker}
	I0729 19:50:08.279563 1119948 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 19:50:08.297004 1119948 node_ready.go:35] waiting up to 6m0s for node "no-preload-843792" to be "Ready" ...
	I0729 19:50:08.308403 1119948 node_ready.go:49] node "no-preload-843792" has status "Ready":"True"
	I0729 19:50:08.308428 1119948 node_ready.go:38] duration metric: took 11.381814ms for node "no-preload-843792" to be "Ready" ...
	I0729 19:50:08.308437 1119948 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 19:50:08.326920 1119948 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5cfdc65f69-ck5zf" in "kube-system" namespace to be "Ready" ...
	I0729 19:50:08.394482 1119948 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0729 19:50:08.394511 1119948 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0729 19:50:08.431819 1119948 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0729 19:50:08.431850 1119948 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0729 19:50:08.432280 1119948 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 19:50:08.452951 1119948 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0729 19:50:08.512078 1119948 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 19:50:08.512110 1119948 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0729 19:50:08.636490 1119948 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 19:50:09.357187 1119948 main.go:141] libmachine: Making call to close driver server
	I0729 19:50:09.357212 1119948 main.go:141] libmachine: (no-preload-843792) Calling .Close
	I0729 19:50:09.357248 1119948 main.go:141] libmachine: Making call to close driver server
	I0729 19:50:09.357274 1119948 main.go:141] libmachine: (no-preload-843792) Calling .Close
	I0729 19:50:09.357564 1119948 main.go:141] libmachine: (no-preload-843792) DBG | Closing plugin on server side
	I0729 19:50:09.357633 1119948 main.go:141] libmachine: (no-preload-843792) DBG | Closing plugin on server side
	I0729 19:50:09.357646 1119948 main.go:141] libmachine: Successfully made call to close driver server
	I0729 19:50:09.357646 1119948 main.go:141] libmachine: Successfully made call to close driver server
	I0729 19:50:09.357659 1119948 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 19:50:09.357662 1119948 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 19:50:09.357671 1119948 main.go:141] libmachine: Making call to close driver server
	I0729 19:50:09.357679 1119948 main.go:141] libmachine: (no-preload-843792) Calling .Close
	I0729 19:50:09.357682 1119948 main.go:141] libmachine: Making call to close driver server
	I0729 19:50:09.357690 1119948 main.go:141] libmachine: (no-preload-843792) Calling .Close
	I0729 19:50:09.358945 1119948 main.go:141] libmachine: (no-preload-843792) DBG | Closing plugin on server side
	I0729 19:50:09.358969 1119948 main.go:141] libmachine: Successfully made call to close driver server
	I0729 19:50:09.359019 1119948 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 19:50:09.359042 1119948 main.go:141] libmachine: (no-preload-843792) DBG | Closing plugin on server side
	I0729 19:50:09.358989 1119948 main.go:141] libmachine: Successfully made call to close driver server
	I0729 19:50:09.359074 1119948 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 19:50:09.419421 1119948 main.go:141] libmachine: Making call to close driver server
	I0729 19:50:09.419445 1119948 main.go:141] libmachine: (no-preload-843792) Calling .Close
	I0729 19:50:09.419864 1119948 main.go:141] libmachine: (no-preload-843792) DBG | Closing plugin on server side
	I0729 19:50:09.419868 1119948 main.go:141] libmachine: Successfully made call to close driver server
	I0729 19:50:09.419905 1119948 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 19:50:09.938758 1119948 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.302197805s)
	I0729 19:50:09.938827 1119948 main.go:141] libmachine: Making call to close driver server
	I0729 19:50:09.938854 1119948 main.go:141] libmachine: (no-preload-843792) Calling .Close
	I0729 19:50:09.939241 1119948 main.go:141] libmachine: Successfully made call to close driver server
	I0729 19:50:09.939260 1119948 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 19:50:09.939270 1119948 main.go:141] libmachine: (no-preload-843792) DBG | Closing plugin on server side
	I0729 19:50:09.939273 1119948 main.go:141] libmachine: Making call to close driver server
	I0729 19:50:09.939284 1119948 main.go:141] libmachine: (no-preload-843792) Calling .Close
	I0729 19:50:09.939509 1119948 main.go:141] libmachine: Successfully made call to close driver server
	I0729 19:50:09.939526 1119948 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 19:50:09.939540 1119948 addons.go:475] Verifying addon metrics-server=true in "no-preload-843792"
	I0729 19:50:09.939558 1119948 main.go:141] libmachine: (no-preload-843792) DBG | Closing plugin on server side
	I0729 19:50:09.941050 1119948 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0729 19:50:09.942006 1119948 addons.go:510] duration metric: took 1.847661826s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0729 19:50:10.334878 1119948 pod_ready.go:102] pod "coredns-5cfdc65f69-ck5zf" in "kube-system" namespace has status "Ready":"False"
	I0729 19:50:12.834554 1119948 pod_ready.go:102] pod "coredns-5cfdc65f69-ck5zf" in "kube-system" namespace has status "Ready":"False"
	I0729 19:50:15.334388 1119948 pod_ready.go:102] pod "coredns-5cfdc65f69-ck5zf" in "kube-system" namespace has status "Ready":"False"
	I0729 19:50:16.843448 1119948 pod_ready.go:92] pod "coredns-5cfdc65f69-ck5zf" in "kube-system" namespace has status "Ready":"True"
	I0729 19:50:16.843480 1119948 pod_ready.go:81] duration metric: took 8.516527239s for pod "coredns-5cfdc65f69-ck5zf" in "kube-system" namespace to be "Ready" ...
	I0729 19:50:16.843494 1119948 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-843792" in "kube-system" namespace to be "Ready" ...
	I0729 19:50:16.847567 1119948 pod_ready.go:92] pod "etcd-no-preload-843792" in "kube-system" namespace has status "Ready":"True"
	I0729 19:50:16.847588 1119948 pod_ready.go:81] duration metric: took 4.086961ms for pod "etcd-no-preload-843792" in "kube-system" namespace to be "Ready" ...
	I0729 19:50:16.847597 1119948 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-843792" in "kube-system" namespace to be "Ready" ...
	I0729 19:50:16.857374 1119948 pod_ready.go:92] pod "kube-apiserver-no-preload-843792" in "kube-system" namespace has status "Ready":"True"
	I0729 19:50:16.857395 1119948 pod_ready.go:81] duration metric: took 9.790628ms for pod "kube-apiserver-no-preload-843792" in "kube-system" namespace to be "Ready" ...
	I0729 19:50:16.857403 1119948 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-843792" in "kube-system" namespace to be "Ready" ...
	I0729 19:50:16.861971 1119948 pod_ready.go:92] pod "kube-controller-manager-no-preload-843792" in "kube-system" namespace has status "Ready":"True"
	I0729 19:50:16.861990 1119948 pod_ready.go:81] duration metric: took 4.580287ms for pod "kube-controller-manager-no-preload-843792" in "kube-system" namespace to be "Ready" ...
	I0729 19:50:16.861998 1119948 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-843792" in "kube-system" namespace to be "Ready" ...
	I0729 19:50:16.865992 1119948 pod_ready.go:92] pod "kube-scheduler-no-preload-843792" in "kube-system" namespace has status "Ready":"True"
	I0729 19:50:16.866006 1119948 pod_ready.go:81] duration metric: took 4.002585ms for pod "kube-scheduler-no-preload-843792" in "kube-system" namespace to be "Ready" ...
	I0729 19:50:16.866012 1119948 pod_ready.go:38] duration metric: took 8.557565808s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 19:50:16.866026 1119948 api_server.go:52] waiting for apiserver process to appear ...
	I0729 19:50:16.866069 1119948 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:50:16.881797 1119948 api_server.go:72] duration metric: took 8.787509233s to wait for apiserver process to appear ...
	I0729 19:50:16.881817 1119948 api_server.go:88] waiting for apiserver healthz status ...
	I0729 19:50:16.881835 1119948 api_server.go:253] Checking apiserver healthz at https://192.168.50.248:8443/healthz ...
	I0729 19:50:16.886007 1119948 api_server.go:279] https://192.168.50.248:8443/healthz returned 200:
	ok
	I0729 19:50:16.886862 1119948 api_server.go:141] control plane version: v1.31.0-beta.0
	I0729 19:50:16.886882 1119948 api_server.go:131] duration metric: took 5.057536ms to wait for apiserver health ...
	I0729 19:50:16.886891 1119948 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 19:50:17.034651 1119948 system_pods.go:59] 9 kube-system pods found
	I0729 19:50:17.034684 1119948 system_pods.go:61] "coredns-5cfdc65f69-bk2nx" [662b0879-7c15-4ec3-a6b6-e49fd9597dcf] Running
	I0729 19:50:17.034689 1119948 system_pods.go:61] "coredns-5cfdc65f69-ck5zf" [ad6c9c9b-740c-464d-85c2-a9ae44663f63] Running
	I0729 19:50:17.034693 1119948 system_pods.go:61] "etcd-no-preload-843792" [e4cba264-21e2-499e-9768-417b316f6a04] Running
	I0729 19:50:17.034696 1119948 system_pods.go:61] "kube-apiserver-no-preload-843792" [24c2bd0e-2029-4985-836a-599ad2a2a7ab] Running
	I0729 19:50:17.034700 1119948 system_pods.go:61] "kube-controller-manager-no-preload-843792" [fb7ec8d7-5d48-428a-af99-f031d747fe2b] Running
	I0729 19:50:17.034704 1119948 system_pods.go:61] "kube-proxy-8hbrf" [3b64c7b2-cbed-4c0e-bc1b-2cef107b115c] Running
	I0729 19:50:17.034706 1119948 system_pods.go:61] "kube-scheduler-no-preload-843792" [fc166fdd-59e8-41f0-909c-71044da69f34] Running
	I0729 19:50:17.034712 1119948 system_pods.go:61] "metrics-server-78fcd8795b-fzt2k" [180acfb0-ec43-4f2e-b04a-048253d4b79e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 19:50:17.034716 1119948 system_pods.go:61] "storage-provisioner" [ee09516d-7ef7-4d66-9acf-7fd4cde3c673] Running
	I0729 19:50:17.034723 1119948 system_pods.go:74] duration metric: took 147.826766ms to wait for pod list to return data ...
	I0729 19:50:17.034731 1119948 default_sa.go:34] waiting for default service account to be created ...
	I0729 19:50:17.231811 1119948 default_sa.go:45] found service account: "default"
	I0729 19:50:17.231841 1119948 default_sa.go:55] duration metric: took 197.103306ms for default service account to be created ...
	I0729 19:50:17.231852 1119948 system_pods.go:116] waiting for k8s-apps to be running ...
	I0729 19:50:17.435766 1119948 system_pods.go:86] 9 kube-system pods found
	I0729 19:50:17.435801 1119948 system_pods.go:89] "coredns-5cfdc65f69-bk2nx" [662b0879-7c15-4ec3-a6b6-e49fd9597dcf] Running
	I0729 19:50:17.435809 1119948 system_pods.go:89] "coredns-5cfdc65f69-ck5zf" [ad6c9c9b-740c-464d-85c2-a9ae44663f63] Running
	I0729 19:50:17.435816 1119948 system_pods.go:89] "etcd-no-preload-843792" [e4cba264-21e2-499e-9768-417b316f6a04] Running
	I0729 19:50:17.435822 1119948 system_pods.go:89] "kube-apiserver-no-preload-843792" [24c2bd0e-2029-4985-836a-599ad2a2a7ab] Running
	I0729 19:50:17.435828 1119948 system_pods.go:89] "kube-controller-manager-no-preload-843792" [fb7ec8d7-5d48-428a-af99-f031d747fe2b] Running
	I0729 19:50:17.435835 1119948 system_pods.go:89] "kube-proxy-8hbrf" [3b64c7b2-cbed-4c0e-bc1b-2cef107b115c] Running
	I0729 19:50:17.435841 1119948 system_pods.go:89] "kube-scheduler-no-preload-843792" [fc166fdd-59e8-41f0-909c-71044da69f34] Running
	I0729 19:50:17.435849 1119948 system_pods.go:89] "metrics-server-78fcd8795b-fzt2k" [180acfb0-ec43-4f2e-b04a-048253d4b79e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 19:50:17.435856 1119948 system_pods.go:89] "storage-provisioner" [ee09516d-7ef7-4d66-9acf-7fd4cde3c673] Running
	I0729 19:50:17.435867 1119948 system_pods.go:126] duration metric: took 204.008054ms to wait for k8s-apps to be running ...
	I0729 19:50:17.435875 1119948 system_svc.go:44] waiting for kubelet service to be running ....
	I0729 19:50:17.435926 1119948 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 19:50:17.451816 1119948 system_svc.go:56] duration metric: took 15.929502ms WaitForService to wait for kubelet
	I0729 19:50:17.451848 1119948 kubeadm.go:582] duration metric: took 9.357563402s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 19:50:17.451872 1119948 node_conditions.go:102] verifying NodePressure condition ...
	I0729 19:50:17.632427 1119948 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 19:50:17.632465 1119948 node_conditions.go:123] node cpu capacity is 2
	I0729 19:50:17.632481 1119948 node_conditions.go:105] duration metric: took 180.602976ms to run NodePressure ...
	I0729 19:50:17.632497 1119948 start.go:241] waiting for startup goroutines ...
	I0729 19:50:17.632506 1119948 start.go:246] waiting for cluster config update ...
	I0729 19:50:17.632525 1119948 start.go:255] writing updated cluster config ...
	I0729 19:50:17.632908 1119948 ssh_runner.go:195] Run: rm -f paused
	I0729 19:50:17.687540 1119948 start.go:600] kubectl: 1.30.3, cluster: 1.31.0-beta.0 (minor skew: 1)
	I0729 19:50:17.689409 1119948 out.go:177] * Done! kubectl is now configured to use "no-preload-843792" cluster and "default" namespace by default
	I0729 19:50:40.036000 1120970 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0729 19:50:40.036324 1120970 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0729 19:50:40.038447 1120970 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0729 19:50:40.038603 1120970 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 19:50:40.038790 1120970 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 19:50:40.039225 1120970 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 19:50:40.039617 1120970 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0729 19:50:40.039731 1120970 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 19:50:40.041420 1120970 out.go:204]   - Generating certificates and keys ...
	I0729 19:50:40.041522 1120970 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 19:50:40.041589 1120970 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 19:50:40.041712 1120970 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0729 19:50:40.041810 1120970 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0729 19:50:40.041935 1120970 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0729 19:50:40.042019 1120970 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0729 19:50:40.042111 1120970 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0729 19:50:40.042190 1120970 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0729 19:50:40.042285 1120970 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0729 19:50:40.042401 1120970 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0729 19:50:40.042465 1120970 kubeadm.go:310] [certs] Using the existing "sa" key
	I0729 19:50:40.042535 1120970 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 19:50:40.042581 1120970 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 19:50:40.042628 1120970 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 19:50:40.042698 1120970 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 19:50:40.042781 1120970 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 19:50:40.042934 1120970 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 19:50:40.043061 1120970 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 19:50:40.043128 1120970 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 19:50:40.043208 1120970 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 19:50:40.044637 1120970 out.go:204]   - Booting up control plane ...
	I0729 19:50:40.044750 1120970 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 19:50:40.044847 1120970 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 19:50:40.044908 1120970 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 19:50:40.044976 1120970 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 19:50:40.045145 1120970 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0729 19:50:40.045212 1120970 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0729 19:50:40.045276 1120970 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 19:50:40.045442 1120970 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 19:50:40.045511 1120970 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 19:50:40.045697 1120970 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 19:50:40.045797 1120970 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 19:50:40.046043 1120970 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 19:50:40.046153 1120970 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 19:50:40.046441 1120970 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 19:50:40.046567 1120970 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 19:50:40.046878 1120970 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 19:50:40.046894 1120970 kubeadm.go:310] 
	I0729 19:50:40.046945 1120970 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0729 19:50:40.047019 1120970 kubeadm.go:310] 		timed out waiting for the condition
	I0729 19:50:40.047039 1120970 kubeadm.go:310] 
	I0729 19:50:40.047104 1120970 kubeadm.go:310] 	This error is likely caused by:
	I0729 19:50:40.047158 1120970 kubeadm.go:310] 		- The kubelet is not running
	I0729 19:50:40.047301 1120970 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0729 19:50:40.047312 1120970 kubeadm.go:310] 
	I0729 19:50:40.047465 1120970 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0729 19:50:40.047513 1120970 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0729 19:50:40.047558 1120970 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0729 19:50:40.047567 1120970 kubeadm.go:310] 
	I0729 19:50:40.047728 1120970 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0729 19:50:40.047859 1120970 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0729 19:50:40.047870 1120970 kubeadm.go:310] 
	I0729 19:50:40.048028 1120970 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0729 19:50:40.048161 1120970 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0729 19:50:40.048274 1120970 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0729 19:50:40.048387 1120970 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0729 19:50:40.048422 1120970 kubeadm.go:310] 
	W0729 19:50:40.048546 1120970 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0729 19:50:40.048632 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0729 19:50:40.512123 1120970 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 19:50:40.526973 1120970 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 19:50:40.540285 1120970 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 19:50:40.540322 1120970 kubeadm.go:157] found existing configuration files:
	
	I0729 19:50:40.540390 1120970 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 19:50:40.550130 1120970 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 19:50:40.550188 1120970 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 19:50:40.560312 1120970 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 19:50:40.570460 1120970 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 19:50:40.570513 1120970 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 19:50:40.579979 1120970 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 19:50:40.589806 1120970 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 19:50:40.589848 1120970 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 19:50:40.599351 1120970 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 19:50:40.609134 1120970 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 19:50:40.609190 1120970 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 19:50:40.618767 1120970 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 19:50:40.686644 1120970 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0729 19:50:40.686775 1120970 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 19:50:40.844131 1120970 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 19:50:40.844252 1120970 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 19:50:40.844357 1120970 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0729 19:50:41.018497 1120970 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 19:50:41.020295 1120970 out.go:204]   - Generating certificates and keys ...
	I0729 19:50:41.020404 1120970 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 19:50:41.020471 1120970 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 19:50:41.020559 1120970 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0729 19:50:41.020614 1120970 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0729 19:50:41.020675 1120970 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0729 19:50:41.020720 1120970 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0729 19:50:41.021041 1120970 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0729 19:50:41.021463 1120970 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0729 19:50:41.021868 1120970 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0729 19:50:41.022329 1120970 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0729 19:50:41.022411 1120970 kubeadm.go:310] [certs] Using the existing "sa" key
	I0729 19:50:41.022503 1120970 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 19:50:41.204952 1120970 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 19:50:41.438572 1120970 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 19:50:41.878587 1120970 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 19:50:42.428806 1120970 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 19:50:42.447931 1120970 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 19:50:42.448990 1120970 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 19:50:42.449131 1120970 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 19:50:42.580942 1120970 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 19:50:42.582493 1120970 out.go:204]   - Booting up control plane ...
	I0729 19:50:42.582600 1120970 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 19:50:42.589862 1120970 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 19:50:42.590833 1120970 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 19:50:42.591685 1120970 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 19:50:42.594079 1120970 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0729 19:51:22.596326 1120970 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0729 19:51:22.596639 1120970 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 19:51:22.596846 1120970 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 19:51:27.597439 1120970 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 19:51:27.597671 1120970 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 19:51:37.598638 1120970 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 19:51:37.598811 1120970 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 19:51:57.599401 1120970 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 19:51:57.599704 1120970 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 19:52:37.597710 1120970 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 19:52:37.597992 1120970 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 19:52:37.598034 1120970 kubeadm.go:310] 
	I0729 19:52:37.598090 1120970 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0729 19:52:37.598166 1120970 kubeadm.go:310] 		timed out waiting for the condition
	I0729 19:52:37.598179 1120970 kubeadm.go:310] 
	I0729 19:52:37.598228 1120970 kubeadm.go:310] 	This error is likely caused by:
	I0729 19:52:37.598326 1120970 kubeadm.go:310] 		- The kubelet is not running
	I0729 19:52:37.598515 1120970 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0729 19:52:37.598528 1120970 kubeadm.go:310] 
	I0729 19:52:37.598671 1120970 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0729 19:52:37.598715 1120970 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0729 19:52:37.598777 1120970 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0729 19:52:37.598806 1120970 kubeadm.go:310] 
	I0729 19:52:37.598984 1120970 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0729 19:52:37.599100 1120970 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0729 19:52:37.599114 1120970 kubeadm.go:310] 
	I0729 19:52:37.599266 1120970 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0729 19:52:37.599393 1120970 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0729 19:52:37.599499 1120970 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0729 19:52:37.599617 1120970 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0729 19:52:37.599637 1120970 kubeadm.go:310] 
	I0729 19:52:37.600349 1120970 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 19:52:37.600471 1120970 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0729 19:52:37.600641 1120970 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0729 19:52:37.600707 1120970 kubeadm.go:394] duration metric: took 7m57.951835284s to StartCluster
	I0729 19:52:37.600799 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:52:37.600929 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:52:37.643870 1120970 cri.go:89] found id: ""
	I0729 19:52:37.643913 1120970 logs.go:276] 0 containers: []
	W0729 19:52:37.643921 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:52:37.643928 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:52:37.643993 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:52:37.679484 1120970 cri.go:89] found id: ""
	I0729 19:52:37.679519 1120970 logs.go:276] 0 containers: []
	W0729 19:52:37.679529 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:52:37.679535 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:52:37.679602 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:52:37.716326 1120970 cri.go:89] found id: ""
	I0729 19:52:37.716358 1120970 logs.go:276] 0 containers: []
	W0729 19:52:37.716366 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:52:37.716372 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:52:37.716427 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:52:37.751441 1120970 cri.go:89] found id: ""
	I0729 19:52:37.751468 1120970 logs.go:276] 0 containers: []
	W0729 19:52:37.751477 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:52:37.751483 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:52:37.751555 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:52:37.791309 1120970 cri.go:89] found id: ""
	I0729 19:52:37.791334 1120970 logs.go:276] 0 containers: []
	W0729 19:52:37.791343 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:52:37.791354 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:52:37.791409 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:52:37.824637 1120970 cri.go:89] found id: ""
	I0729 19:52:37.824664 1120970 logs.go:276] 0 containers: []
	W0729 19:52:37.824674 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:52:37.824682 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:52:37.824749 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:52:37.863031 1120970 cri.go:89] found id: ""
	I0729 19:52:37.863060 1120970 logs.go:276] 0 containers: []
	W0729 19:52:37.863068 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:52:37.863074 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:52:37.863134 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:52:37.905864 1120970 cri.go:89] found id: ""
	I0729 19:52:37.905918 1120970 logs.go:276] 0 containers: []
	W0729 19:52:37.905931 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:52:37.905945 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:52:37.905965 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:52:37.958561 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:52:37.958601 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:52:37.983602 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:52:37.983635 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:52:38.080775 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:52:38.080810 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:52:38.080827 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:52:38.185475 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:52:38.185512 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0729 19:52:38.227581 1120970 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0729 19:52:38.227653 1120970 out.go:239] * 
	W0729 19:52:38.227722 1120970 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0729 19:52:38.227748 1120970 out.go:239] * 
	W0729 19:52:38.228777 1120970 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 19:52:38.231684 1120970 out.go:177] 
	W0729 19:52:38.232618 1120970 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0729 19:52:38.232683 1120970 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0729 19:52:38.232710 1120970 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0729 19:52:38.234472 1120970 out.go:177] 
	
	
	==> CRI-O <==
	Jul 29 19:59:19 no-preload-843792 crio[719]: time="2024-07-29 19:59:19.718581222Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722283159718540736,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100741,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=23462669-e9fb-4832-893c-affa19a268c2 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 19:59:19 no-preload-843792 crio[719]: time="2024-07-29 19:59:19.719485811Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f1e18637-820a-4de6-aaca-937e8fa8c5a8 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 19:59:19 no-preload-843792 crio[719]: time="2024-07-29 19:59:19.719560845Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f1e18637-820a-4de6-aaca-937e8fa8c5a8 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 19:59:19 no-preload-843792 crio[719]: time="2024-07-29 19:59:19.719796936Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:772f7ef98746fa60a8e2262a85311c9fe639aef2d98e574b9f00f587e1144972,PodSandboxId:b3ab63ee2ceea51c76cf7f6dcdea29046098a25614c04587d8962a5de293229f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722282610130299421,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee09516d-7ef7-4d66-9acf-7fd4cde3c673,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ba81073ec15900cb92fea4c791e913ad8305447171071a15b0477799633b0c4,PodSandboxId:a1bc8706f98e05e7203230723d8725567bbc724002ee85086cdcb016e69252dc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:1722282609834951055,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8hbrf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b64c7b2-cbed-4c0e-bc1b-2cef107b115c,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /de
v/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d437b9d8891a8dcb3834a850c710ba90c5ae7d4802c66e3f55c23f1383db1e1,PodSandboxId:9da01f8ff1e9c9a29b650e4606779d9b1d435ec799cf144d9e930b8039287cea,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722282609508840856,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-ck5zf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad6c9c9b-740c-464d-85c2-a9ae44663f63,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"conta
inerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6181b7c2844e78311a5d7a07a4b2f9fceb8bfe0a05da76b1a870e281cd4dd91b,PodSandboxId:f797c6b1fabcdfd932d66830426deba87b2af65cad778b8d128cbe6bfc376b46,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722282609410875480,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-bk2nx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 662b0879-7c15-4ec3-a6b6-e49fd9597dc
f,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0921f30a2e42fb71936409c30c53c0b6b856a24a57bcb95bea0e609961da6de,PodSandboxId:392ed8effcf659a2dcb125408b455a45336fc5acabc8a09def67149c4e3f3415,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1722282597866134775,Labels:ma
p[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-843792,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7e90e534d1ee4da28c8e37201501ec1,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80c048960842df86f1ad88dd2498dd475f902142b8f50fe265072e88d15b6e1b,PodSandboxId:3e09c1e3540af7124bd624cb9ccb03e795f432f4ca434c9556a92ea79120d3c0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1722282597824476874,Labels:map[string]stri
ng{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-843792,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c861cce481c417dae420092bf20933b2,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44953b90e4fb7accc9705cc1f9fed98ecc10f90ffbf1591894de47953c20f23c,PodSandboxId:0f3fdb075b25a3355aa129142676af9c2b366e189e0fde3631f5716a7d89540e,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1722282597790113356,Labels:map[string]string{io.kubernetes.contai
ner.name: etcd,io.kubernetes.pod.name: etcd-no-preload-843792,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd1c585c541aadf31eb1a7ad5f096350,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b23b493276c6a6ea9ac497cb471850b3cdbc0080e08065f384170870dab57e2e,PodSandboxId:0d973e41faa6bf987a0848649278cb35cbdc9ddef747abbbb9aac6209d71bda9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1722282597719967957,Labels:map[string]string{io.kubernetes.container.name: kube-controller
-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-843792,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8970eba08cb5dcc05a1fff54b1a9d707,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9da00e1c3c330c41b9bd6c72c7b3746a5971698d0adc79c92379011377b4bbf,PodSandboxId:139bad8d2bd15055ca3b3bcbdb34f6c5d92594ca20085742d7ac1e5e744b4d73,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_EXITED,CreatedAt:1722282311568803086,Labels:map[string]string{io.kubernetes.container.name: kube-apiser
ver,io.kubernetes.pod.name: kube-apiserver-no-preload-843792,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c861cce481c417dae420092bf20933b2,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f1e18637-820a-4de6-aaca-937e8fa8c5a8 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 19:59:19 no-preload-843792 crio[719]: time="2024-07-29 19:59:19.763279136Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e3392536-f002-4c2a-a113-08b916a5c35d name=/runtime.v1.RuntimeService/Version
	Jul 29 19:59:19 no-preload-843792 crio[719]: time="2024-07-29 19:59:19.763419298Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e3392536-f002-4c2a-a113-08b916a5c35d name=/runtime.v1.RuntimeService/Version
	Jul 29 19:59:19 no-preload-843792 crio[719]: time="2024-07-29 19:59:19.764313062Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=87bae68c-278d-40e7-a1f6-4bd62d36e81c name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 19:59:19 no-preload-843792 crio[719]: time="2024-07-29 19:59:19.764681171Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722283159764660421,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100741,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=87bae68c-278d-40e7-a1f6-4bd62d36e81c name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 19:59:19 no-preload-843792 crio[719]: time="2024-07-29 19:59:19.765169056Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=85057c17-7035-406e-83f0-e8f315c3ffd9 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 19:59:19 no-preload-843792 crio[719]: time="2024-07-29 19:59:19.765248979Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=85057c17-7035-406e-83f0-e8f315c3ffd9 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 19:59:19 no-preload-843792 crio[719]: time="2024-07-29 19:59:19.765447357Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:772f7ef98746fa60a8e2262a85311c9fe639aef2d98e574b9f00f587e1144972,PodSandboxId:b3ab63ee2ceea51c76cf7f6dcdea29046098a25614c04587d8962a5de293229f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722282610130299421,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee09516d-7ef7-4d66-9acf-7fd4cde3c673,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ba81073ec15900cb92fea4c791e913ad8305447171071a15b0477799633b0c4,PodSandboxId:a1bc8706f98e05e7203230723d8725567bbc724002ee85086cdcb016e69252dc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:1722282609834951055,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8hbrf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b64c7b2-cbed-4c0e-bc1b-2cef107b115c,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /de
v/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d437b9d8891a8dcb3834a850c710ba90c5ae7d4802c66e3f55c23f1383db1e1,PodSandboxId:9da01f8ff1e9c9a29b650e4606779d9b1d435ec799cf144d9e930b8039287cea,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722282609508840856,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-ck5zf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad6c9c9b-740c-464d-85c2-a9ae44663f63,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"conta
inerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6181b7c2844e78311a5d7a07a4b2f9fceb8bfe0a05da76b1a870e281cd4dd91b,PodSandboxId:f797c6b1fabcdfd932d66830426deba87b2af65cad778b8d128cbe6bfc376b46,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722282609410875480,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-bk2nx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 662b0879-7c15-4ec3-a6b6-e49fd9597dc
f,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0921f30a2e42fb71936409c30c53c0b6b856a24a57bcb95bea0e609961da6de,PodSandboxId:392ed8effcf659a2dcb125408b455a45336fc5acabc8a09def67149c4e3f3415,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1722282597866134775,Labels:ma
p[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-843792,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7e90e534d1ee4da28c8e37201501ec1,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80c048960842df86f1ad88dd2498dd475f902142b8f50fe265072e88d15b6e1b,PodSandboxId:3e09c1e3540af7124bd624cb9ccb03e795f432f4ca434c9556a92ea79120d3c0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1722282597824476874,Labels:map[string]stri
ng{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-843792,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c861cce481c417dae420092bf20933b2,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44953b90e4fb7accc9705cc1f9fed98ecc10f90ffbf1591894de47953c20f23c,PodSandboxId:0f3fdb075b25a3355aa129142676af9c2b366e189e0fde3631f5716a7d89540e,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1722282597790113356,Labels:map[string]string{io.kubernetes.contai
ner.name: etcd,io.kubernetes.pod.name: etcd-no-preload-843792,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd1c585c541aadf31eb1a7ad5f096350,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b23b493276c6a6ea9ac497cb471850b3cdbc0080e08065f384170870dab57e2e,PodSandboxId:0d973e41faa6bf987a0848649278cb35cbdc9ddef747abbbb9aac6209d71bda9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1722282597719967957,Labels:map[string]string{io.kubernetes.container.name: kube-controller
-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-843792,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8970eba08cb5dcc05a1fff54b1a9d707,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9da00e1c3c330c41b9bd6c72c7b3746a5971698d0adc79c92379011377b4bbf,PodSandboxId:139bad8d2bd15055ca3b3bcbdb34f6c5d92594ca20085742d7ac1e5e744b4d73,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_EXITED,CreatedAt:1722282311568803086,Labels:map[string]string{io.kubernetes.container.name: kube-apiser
ver,io.kubernetes.pod.name: kube-apiserver-no-preload-843792,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c861cce481c417dae420092bf20933b2,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=85057c17-7035-406e-83f0-e8f315c3ffd9 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 19:59:19 no-preload-843792 crio[719]: time="2024-07-29 19:59:19.808486139Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=53faaf1f-c487-4655-a377-6e972c1336ec name=/runtime.v1.RuntimeService/Version
	Jul 29 19:59:19 no-preload-843792 crio[719]: time="2024-07-29 19:59:19.808596699Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=53faaf1f-c487-4655-a377-6e972c1336ec name=/runtime.v1.RuntimeService/Version
	Jul 29 19:59:19 no-preload-843792 crio[719]: time="2024-07-29 19:59:19.809452569Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0d6b7156-7090-402d-a052-e589dfd317b5 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 19:59:19 no-preload-843792 crio[719]: time="2024-07-29 19:59:19.809866856Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722283159809842283,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100741,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0d6b7156-7090-402d-a052-e589dfd317b5 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 19:59:19 no-preload-843792 crio[719]: time="2024-07-29 19:59:19.810363988Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f3cb3c05-3ec7-4c78-b380-b43327ea1066 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 19:59:19 no-preload-843792 crio[719]: time="2024-07-29 19:59:19.810444080Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f3cb3c05-3ec7-4c78-b380-b43327ea1066 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 19:59:19 no-preload-843792 crio[719]: time="2024-07-29 19:59:19.810668459Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:772f7ef98746fa60a8e2262a85311c9fe639aef2d98e574b9f00f587e1144972,PodSandboxId:b3ab63ee2ceea51c76cf7f6dcdea29046098a25614c04587d8962a5de293229f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722282610130299421,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee09516d-7ef7-4d66-9acf-7fd4cde3c673,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ba81073ec15900cb92fea4c791e913ad8305447171071a15b0477799633b0c4,PodSandboxId:a1bc8706f98e05e7203230723d8725567bbc724002ee85086cdcb016e69252dc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:1722282609834951055,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8hbrf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b64c7b2-cbed-4c0e-bc1b-2cef107b115c,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /de
v/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d437b9d8891a8dcb3834a850c710ba90c5ae7d4802c66e3f55c23f1383db1e1,PodSandboxId:9da01f8ff1e9c9a29b650e4606779d9b1d435ec799cf144d9e930b8039287cea,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722282609508840856,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-ck5zf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad6c9c9b-740c-464d-85c2-a9ae44663f63,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"conta
inerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6181b7c2844e78311a5d7a07a4b2f9fceb8bfe0a05da76b1a870e281cd4dd91b,PodSandboxId:f797c6b1fabcdfd932d66830426deba87b2af65cad778b8d128cbe6bfc376b46,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722282609410875480,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-bk2nx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 662b0879-7c15-4ec3-a6b6-e49fd9597dc
f,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0921f30a2e42fb71936409c30c53c0b6b856a24a57bcb95bea0e609961da6de,PodSandboxId:392ed8effcf659a2dcb125408b455a45336fc5acabc8a09def67149c4e3f3415,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1722282597866134775,Labels:ma
p[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-843792,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7e90e534d1ee4da28c8e37201501ec1,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80c048960842df86f1ad88dd2498dd475f902142b8f50fe265072e88d15b6e1b,PodSandboxId:3e09c1e3540af7124bd624cb9ccb03e795f432f4ca434c9556a92ea79120d3c0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1722282597824476874,Labels:map[string]stri
ng{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-843792,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c861cce481c417dae420092bf20933b2,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44953b90e4fb7accc9705cc1f9fed98ecc10f90ffbf1591894de47953c20f23c,PodSandboxId:0f3fdb075b25a3355aa129142676af9c2b366e189e0fde3631f5716a7d89540e,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1722282597790113356,Labels:map[string]string{io.kubernetes.contai
ner.name: etcd,io.kubernetes.pod.name: etcd-no-preload-843792,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd1c585c541aadf31eb1a7ad5f096350,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b23b493276c6a6ea9ac497cb471850b3cdbc0080e08065f384170870dab57e2e,PodSandboxId:0d973e41faa6bf987a0848649278cb35cbdc9ddef747abbbb9aac6209d71bda9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1722282597719967957,Labels:map[string]string{io.kubernetes.container.name: kube-controller
-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-843792,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8970eba08cb5dcc05a1fff54b1a9d707,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9da00e1c3c330c41b9bd6c72c7b3746a5971698d0adc79c92379011377b4bbf,PodSandboxId:139bad8d2bd15055ca3b3bcbdb34f6c5d92594ca20085742d7ac1e5e744b4d73,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_EXITED,CreatedAt:1722282311568803086,Labels:map[string]string{io.kubernetes.container.name: kube-apiser
ver,io.kubernetes.pod.name: kube-apiserver-no-preload-843792,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c861cce481c417dae420092bf20933b2,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f3cb3c05-3ec7-4c78-b380-b43327ea1066 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 19:59:19 no-preload-843792 crio[719]: time="2024-07-29 19:59:19.844088692Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4b3ff8b6-f4dc-4f1b-9fe6-9baca65dbe8d name=/runtime.v1.RuntimeService/Version
	Jul 29 19:59:19 no-preload-843792 crio[719]: time="2024-07-29 19:59:19.844178235Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4b3ff8b6-f4dc-4f1b-9fe6-9baca65dbe8d name=/runtime.v1.RuntimeService/Version
	Jul 29 19:59:19 no-preload-843792 crio[719]: time="2024-07-29 19:59:19.845074837Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a323d784-43b7-4df8-9f42-140881daa880 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 19:59:19 no-preload-843792 crio[719]: time="2024-07-29 19:59:19.845594616Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722283159845571981,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100741,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a323d784-43b7-4df8-9f42-140881daa880 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 19:59:19 no-preload-843792 crio[719]: time="2024-07-29 19:59:19.846501492Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6f431389-d4da-4304-91f3-c1acdee02706 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 19:59:19 no-preload-843792 crio[719]: time="2024-07-29 19:59:19.846552028Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6f431389-d4da-4304-91f3-c1acdee02706 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 19:59:19 no-preload-843792 crio[719]: time="2024-07-29 19:59:19.846733504Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:772f7ef98746fa60a8e2262a85311c9fe639aef2d98e574b9f00f587e1144972,PodSandboxId:b3ab63ee2ceea51c76cf7f6dcdea29046098a25614c04587d8962a5de293229f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722282610130299421,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee09516d-7ef7-4d66-9acf-7fd4cde3c673,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ba81073ec15900cb92fea4c791e913ad8305447171071a15b0477799633b0c4,PodSandboxId:a1bc8706f98e05e7203230723d8725567bbc724002ee85086cdcb016e69252dc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:1722282609834951055,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8hbrf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b64c7b2-cbed-4c0e-bc1b-2cef107b115c,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /de
v/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d437b9d8891a8dcb3834a850c710ba90c5ae7d4802c66e3f55c23f1383db1e1,PodSandboxId:9da01f8ff1e9c9a29b650e4606779d9b1d435ec799cf144d9e930b8039287cea,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722282609508840856,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-ck5zf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad6c9c9b-740c-464d-85c2-a9ae44663f63,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"conta
inerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6181b7c2844e78311a5d7a07a4b2f9fceb8bfe0a05da76b1a870e281cd4dd91b,PodSandboxId:f797c6b1fabcdfd932d66830426deba87b2af65cad778b8d128cbe6bfc376b46,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722282609410875480,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-bk2nx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 662b0879-7c15-4ec3-a6b6-e49fd9597dc
f,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0921f30a2e42fb71936409c30c53c0b6b856a24a57bcb95bea0e609961da6de,PodSandboxId:392ed8effcf659a2dcb125408b455a45336fc5acabc8a09def67149c4e3f3415,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1722282597866134775,Labels:ma
p[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-843792,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7e90e534d1ee4da28c8e37201501ec1,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80c048960842df86f1ad88dd2498dd475f902142b8f50fe265072e88d15b6e1b,PodSandboxId:3e09c1e3540af7124bd624cb9ccb03e795f432f4ca434c9556a92ea79120d3c0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1722282597824476874,Labels:map[string]stri
ng{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-843792,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c861cce481c417dae420092bf20933b2,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44953b90e4fb7accc9705cc1f9fed98ecc10f90ffbf1591894de47953c20f23c,PodSandboxId:0f3fdb075b25a3355aa129142676af9c2b366e189e0fde3631f5716a7d89540e,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1722282597790113356,Labels:map[string]string{io.kubernetes.contai
ner.name: etcd,io.kubernetes.pod.name: etcd-no-preload-843792,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd1c585c541aadf31eb1a7ad5f096350,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b23b493276c6a6ea9ac497cb471850b3cdbc0080e08065f384170870dab57e2e,PodSandboxId:0d973e41faa6bf987a0848649278cb35cbdc9ddef747abbbb9aac6209d71bda9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1722282597719967957,Labels:map[string]string{io.kubernetes.container.name: kube-controller
-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-843792,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8970eba08cb5dcc05a1fff54b1a9d707,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9da00e1c3c330c41b9bd6c72c7b3746a5971698d0adc79c92379011377b4bbf,PodSandboxId:139bad8d2bd15055ca3b3bcbdb34f6c5d92594ca20085742d7ac1e5e744b4d73,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_EXITED,CreatedAt:1722282311568803086,Labels:map[string]string{io.kubernetes.container.name: kube-apiser
ver,io.kubernetes.pod.name: kube-apiserver-no-preload-843792,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c861cce481c417dae420092bf20933b2,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6f431389-d4da-4304-91f3-c1acdee02706 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	772f7ef98746f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   b3ab63ee2ceea       storage-provisioner
	4ba81073ec159       c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899   9 minutes ago       Running             kube-proxy                0                   a1bc8706f98e0       kube-proxy-8hbrf
	1d437b9d8891a       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   9da01f8ff1e9c       coredns-5cfdc65f69-ck5zf
	6181b7c2844e7       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   f797c6b1fabcd       coredns-5cfdc65f69-bk2nx
	b0921f30a2e42       d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b   9 minutes ago       Running             kube-scheduler            2                   392ed8effcf65       kube-scheduler-no-preload-843792
	80c048960842d       f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938   9 minutes ago       Running             kube-apiserver            2                   3e09c1e3540af       kube-apiserver-no-preload-843792
	44953b90e4fb7       cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa   9 minutes ago       Running             etcd                      2                   0f3fdb075b25a       etcd-no-preload-843792
	b23b493276c6a       63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5   9 minutes ago       Running             kube-controller-manager   2                   0d973e41faa6b       kube-controller-manager-no-preload-843792
	f9da00e1c3c33       f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938   14 minutes ago      Exited              kube-apiserver            1                   139bad8d2bd15       kube-apiserver-no-preload-843792
	
	
	==> coredns [1d437b9d8891a8dcb3834a850c710ba90c5ae7d4802c66e3f55c23f1383db1e1] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [6181b7c2844e78311a5d7a07a4b2f9fceb8bfe0a05da76b1a870e281cd4dd91b] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               no-preload-843792
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-843792
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ff26f50543a99741e8a988a34136ffea2b41dbf0
	                    minikube.k8s.io/name=no-preload-843792
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_29T19_50_04_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 19:50:00 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-843792
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 19:59:13 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 19:55:20 +0000   Mon, 29 Jul 2024 19:49:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 19:55:20 +0000   Mon, 29 Jul 2024 19:49:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 19:55:20 +0000   Mon, 29 Jul 2024 19:49:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 19:55:20 +0000   Mon, 29 Jul 2024 19:50:00 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.248
	  Hostname:    no-preload-843792
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 731d84f987e647f0962ad04553af0b38
	  System UUID:                731d84f9-87e6-47f0-962a-d04553af0b38
	  Boot ID:                    cfb8dee4-3bb7-481c-9b07-74f42c91c88e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5cfdc65f69-bk2nx                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m12s
	  kube-system                 coredns-5cfdc65f69-ck5zf                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m12s
	  kube-system                 etcd-no-preload-843792                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m17s
	  kube-system                 kube-apiserver-no-preload-843792             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m17s
	  kube-system                 kube-controller-manager-no-preload-843792    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m17s
	  kube-system                 kube-proxy-8hbrf                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m12s
	  kube-system                 kube-scheduler-no-preload-843792             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m17s
	  kube-system                 metrics-server-78fcd8795b-fzt2k              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m11s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m11s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m9s                   kube-proxy       
	  Normal  Starting                 9m23s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m23s (x8 over 9m23s)  kubelet          Node no-preload-843792 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m23s (x8 over 9m23s)  kubelet          Node no-preload-843792 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m23s (x7 over 9m23s)  kubelet          Node no-preload-843792 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m23s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 9m17s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  9m17s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m17s                  kubelet          Node no-preload-843792 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m17s                  kubelet          Node no-preload-843792 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m17s                  kubelet          Node no-preload-843792 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m12s                  node-controller  Node no-preload-843792 event: Registered Node no-preload-843792 in Controller
	
	
	==> dmesg <==
	[  +0.061282] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.197307] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.480976] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.626482] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.479075] systemd-fstab-generator[634]: Ignoring "noauto" option for root device
	[  +0.066913] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.051707] systemd-fstab-generator[646]: Ignoring "noauto" option for root device
	[  +0.188053] systemd-fstab-generator[660]: Ignoring "noauto" option for root device
	[  +0.136120] systemd-fstab-generator[672]: Ignoring "noauto" option for root device
	[  +0.295274] systemd-fstab-generator[702]: Ignoring "noauto" option for root device
	[Jul29 19:45] systemd-fstab-generator[1227]: Ignoring "noauto" option for root device
	[  +0.059576] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.157803] systemd-fstab-generator[1350]: Ignoring "noauto" option for root device
	[  +3.435967] kauditd_printk_skb: 97 callbacks suppressed
	[  +5.354712] kauditd_printk_skb: 53 callbacks suppressed
	[  +8.740976] kauditd_printk_skb: 30 callbacks suppressed
	[Jul29 19:49] systemd-fstab-generator[3012]: Ignoring "noauto" option for root device
	[  +0.073038] kauditd_printk_skb: 8 callbacks suppressed
	[Jul29 19:50] systemd-fstab-generator[3335]: Ignoring "noauto" option for root device
	[  +0.096485] kauditd_printk_skb: 54 callbacks suppressed
	[  +4.851222] systemd-fstab-generator[3458]: Ignoring "noauto" option for root device
	[  +0.579935] kauditd_printk_skb: 34 callbacks suppressed
	[  +5.523859] kauditd_printk_skb: 60 callbacks suppressed
	
	
	==> etcd [44953b90e4fb7accc9705cc1f9fed98ecc10f90ffbf1591894de47953c20f23c] <==
	{"level":"info","ts":"2024-07-29T19:49:58.165021Z","caller":"embed/etcd.go:727","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-29T19:49:58.165358Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"b63441e4e9d891b","initial-advertise-peer-urls":["https://192.168.50.248:2380"],"listen-peer-urls":["https://192.168.50.248:2380"],"advertise-client-urls":["https://192.168.50.248:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.248:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-29T19:49:58.165162Z","caller":"embed/etcd.go:598","msg":"serving peer traffic","address":"192.168.50.248:2380"}
	{"level":"info","ts":"2024-07-29T19:49:58.167612Z","caller":"embed/etcd.go:858","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-29T19:49:58.169011Z","caller":"embed/etcd.go:570","msg":"cmux::serve","address":"192.168.50.248:2380"}
	{"level":"info","ts":"2024-07-29T19:49:58.725003Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b63441e4e9d891b is starting a new election at term 1"}
	{"level":"info","ts":"2024-07-29T19:49:58.725199Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b63441e4e9d891b became pre-candidate at term 1"}
	{"level":"info","ts":"2024-07-29T19:49:58.725312Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b63441e4e9d891b received MsgPreVoteResp from b63441e4e9d891b at term 1"}
	{"level":"info","ts":"2024-07-29T19:49:58.725343Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b63441e4e9d891b became candidate at term 2"}
	{"level":"info","ts":"2024-07-29T19:49:58.726974Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b63441e4e9d891b received MsgVoteResp from b63441e4e9d891b at term 2"}
	{"level":"info","ts":"2024-07-29T19:49:58.727098Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b63441e4e9d891b became leader at term 2"}
	{"level":"info","ts":"2024-07-29T19:49:58.727128Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b63441e4e9d891b elected leader b63441e4e9d891b at term 2"}
	{"level":"info","ts":"2024-07-29T19:49:58.737391Z","caller":"etcdserver/server.go:2628","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T19:49:58.740622Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"b63441e4e9d891b","local-member-attributes":"{Name:no-preload-843792 ClientURLs:[https://192.168.50.248:2379]}","request-path":"/0/members/b63441e4e9d891b/attributes","cluster-id":"cc455d5c8c0bfc1b","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-29T19:49:58.740813Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T19:49:58.741709Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T19:49:58.744685Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-07-29T19:49:58.750756Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.248:2379"}
	{"level":"info","ts":"2024-07-29T19:49:58.754164Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-29T19:49:58.75429Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-29T19:49:58.75541Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-07-29T19:49:58.763587Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-29T19:49:58.76757Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"cc455d5c8c0bfc1b","local-member-id":"b63441e4e9d891b","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T19:49:58.772095Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T19:49:58.772214Z","caller":"etcdserver/server.go:2652","msg":"cluster version is updated","cluster-version":"3.5"}
	
	
	==> kernel <==
	 19:59:20 up 14 min,  0 users,  load average: 0.23, 0.29, 0.20
	Linux no-preload-843792 5.10.207 #1 SMP Tue Jul 23 04:25:44 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [80c048960842df86f1ad88dd2498dd475f902142b8f50fe265072e88d15b6e1b] <==
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0729 19:55:01.385818       1 handler_proxy.go:99] no RequestInfo found in the context
	E0729 19:55:01.385987       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0729 19:55:01.386964       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0729 19:55:01.388231       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0729 19:56:01.388236       1 handler_proxy.go:99] no RequestInfo found in the context
	E0729 19:56:01.388422       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0729 19:56:01.389588       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0729 19:56:01.389710       1 handler_proxy.go:99] no RequestInfo found in the context
	E0729 19:56:01.389742       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0729 19:56:01.391042       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0729 19:58:01.390841       1 handler_proxy.go:99] no RequestInfo found in the context
	E0729 19:58:01.391083       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0729 19:58:01.391160       1 handler_proxy.go:99] no RequestInfo found in the context
	E0729 19:58:01.391192       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0729 19:58:01.392235       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0729 19:58:01.392292       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
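
	The repeated 503s in this block mean the aggregated v1beta1.metrics.k8s.io API, which is served by metrics-server, never became available; that matches the ImagePullBackOff for the metrics-server pod visible in the kubelet log further down. A minimal way to confirm the APIService condition from the same context (a manual follow-up, not part of the test run) is:

	  kubectl --context no-preload-843792 get apiservice v1beta1.metrics.k8s.io -o jsonpath='{.status.conditions[?(@.type=="Available")].message}'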
	
	
	==> kube-apiserver [f9da00e1c3c330c41b9bd6c72c7b3746a5971698d0adc79c92379011377b4bbf] <==
	W0729 19:49:51.533139       1 logging.go:55] [core] [Channel #31 SubChannel #32]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:49:51.547795       1 logging.go:55] [core] [Channel #103 SubChannel #104]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:49:51.599225       1 logging.go:55] [core] [Channel #85 SubChannel #86]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:49:51.620998       1 logging.go:55] [core] [Channel #160 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:49:51.693447       1 logging.go:55] [core] [Channel #88 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:49:51.797227       1 logging.go:55] [core] [Channel #127 SubChannel #128]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:49:51.816144       1 logging.go:55] [core] [Channel #58 SubChannel #59]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:49:51.850594       1 logging.go:55] [core] [Channel #121 SubChannel #122]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:49:51.895356       1 logging.go:55] [core] [Channel #82 SubChannel #83]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:49:51.895487       1 logging.go:55] [core] [Channel #205 SubChannel #206]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:49:51.908098       1 logging.go:55] [core] [Channel #100 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:49:51.911673       1 logging.go:55] [core] [Channel #130 SubChannel #131]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:49:51.922177       1 logging.go:55] [core] [Channel #64 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:49:51.926678       1 logging.go:55] [core] [Channel #28 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:49:51.930291       1 logging.go:55] [core] [Channel #148 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:49:51.967525       1 logging.go:55] [core] [Channel #91 SubChannel #92]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:49:52.035743       1 logging.go:55] [core] [Channel #79 SubChannel #80]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:49:52.069236       1 logging.go:55] [core] [Channel #175 SubChannel #176]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:49:52.109784       1 logging.go:55] [core] [Channel #151 SubChannel #152]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:49:52.131731       1 logging.go:55] [core] [Channel #46 SubChannel #47]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:49:52.197099       1 logging.go:55] [core] [Channel #184 SubChannel #185]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:49:52.231149       1 logging.go:55] [core] [Channel #142 SubChannel #143]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:49:52.609783       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:49:52.669133       1 logging.go:55] [core] [Channel #76 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:49:53.222596       1 logging.go:55] [core] [Channel #205 SubChannel #206]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [b23b493276c6a6ea9ac497cb471850b3cdbc0080e08065f384170870dab57e2e] <==
	E0729 19:54:08.279207       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0729 19:54:08.341614       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 19:54:38.285078       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0729 19:54:38.348690       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 19:55:08.291775       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0729 19:55:08.357521       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0729 19:55:20.253576       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="no-preload-843792"
	E0729 19:55:38.303112       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0729 19:55:38.365649       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0729 19:56:05.516152       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-78fcd8795b" duration="441.46µs"
	E0729 19:56:08.311484       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0729 19:56:08.374245       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0729 19:56:18.512419       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-78fcd8795b" duration="171.116µs"
	E0729 19:56:38.318168       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0729 19:56:38.382450       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 19:57:08.325465       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0729 19:57:08.392165       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 19:57:38.332507       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0729 19:57:38.399355       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 19:58:08.340481       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0729 19:58:08.408718       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 19:58:38.348342       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0729 19:58:38.417256       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 19:59:08.355796       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0729 19:59:08.427046       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
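
	The recurring "stale GroupVersion discovery: metrics.k8s.io/v1beta1" errors are the resource-quota and garbage-collector controllers failing discovery against the same unavailable aggregated API. As a sketch (assuming the cluster is still reachable), hitting the aggregated endpoint directly should reproduce the 503 those controllers are seeing:

	  kubectl --context no-preload-843792 get --raw /apis/metrics.k8s.io/v1beta1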
	
	
	==> kube-proxy [4ba81073ec15900cb92fea4c791e913ad8305447171071a15b0477799633b0c4] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0729 19:50:10.239727       1 proxier.go:705] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0729 19:50:10.271001       1 server.go:682] "Successfully retrieved node IP(s)" IPs=["192.168.50.248"]
	E0729 19:50:10.271088       1 server.go:235] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0729 19:50:10.327478       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0729 19:50:10.327628       1 server.go:246] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0729 19:50:10.327682       1 server_linux.go:170] "Using iptables Proxier"
	I0729 19:50:10.332283       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0729 19:50:10.332599       1 server.go:488] "Version info" version="v1.31.0-beta.0"
	I0729 19:50:10.332628       1 server.go:490] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 19:50:10.335645       1 config.go:197] "Starting service config controller"
	I0729 19:50:10.335677       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0729 19:50:10.335703       1 config.go:104] "Starting endpoint slice config controller"
	I0729 19:50:10.335708       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0729 19:50:10.339385       1 config.go:326] "Starting node config controller"
	I0729 19:50:10.339451       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0729 19:50:10.436697       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0729 19:50:10.436834       1 shared_informer.go:320] Caches are synced for service config
	I0729 19:50:10.439581       1 shared_informer.go:320] Caches are synced for node config
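
	The partial error at the top of this block appears to be the tail of the same nftables cleanup failure ("Operation not supported") shown in full for ip6, after which kube-proxy falls back to the iptables Proxier in single-stack IPv4 mode. To verify the NAT rules it programs on the node, one option (a sketch, assuming the VM is still up) is:

	  minikube -p no-preload-843792 ssh -- sudo iptables -t nat -L KUBE-SERVICES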
	
	
	==> kube-scheduler [b0921f30a2e42fb71936409c30c53c0b6b856a24a57bcb95bea0e609961da6de] <==
	W0729 19:50:01.374603       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0729 19:50:01.374657       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0729 19:50:01.390184       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0729 19:50:01.390245       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0729 19:50:01.497150       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0729 19:50:01.497320       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0729 19:50:01.516710       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0729 19:50:01.516798       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0729 19:50:01.534178       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0729 19:50:01.534305       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0729 19:50:01.536269       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0729 19:50:01.536365       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0729 19:50:01.575516       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0729 19:50:01.575776       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0729 19:50:01.593553       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0729 19:50:01.593612       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0729 19:50:01.673114       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0729 19:50:01.673213       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0729 19:50:01.674040       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0729 19:50:01.674085       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0729 19:50:01.744186       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0729 19:50:01.744294       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0729 19:50:01.866802       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0729 19:50:01.866955       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I0729 19:50:04.311427       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
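
	The scheduler's "forbidden" list/watch errors here are most likely transient bootstrap races, occurring before the apiserver has finished reconciling the default RBAC policy; the "Caches are synced" line at 19:50:04 indicates it eventually got past them. A quick sanity check that the scheduler pod stayed healthy afterwards (a manual sketch, assuming standard kubeadm static-pod labels):

	  kubectl --context no-preload-843792 -n kube-system get pods -l component=kube-scheduler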
	
	
	==> kubelet <==
	Jul 29 19:57:03 no-preload-843792 kubelet[3342]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 19:57:03 no-preload-843792 kubelet[3342]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 19:57:03 no-preload-843792 kubelet[3342]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 19:57:03 no-preload-843792 kubelet[3342]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 19:57:07 no-preload-843792 kubelet[3342]: E0729 19:57:07.495206    3342 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-fzt2k" podUID="180acfb0-ec43-4f2e-b04a-048253d4b79e"
	Jul 29 19:57:20 no-preload-843792 kubelet[3342]: E0729 19:57:20.495173    3342 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-fzt2k" podUID="180acfb0-ec43-4f2e-b04a-048253d4b79e"
	Jul 29 19:57:35 no-preload-843792 kubelet[3342]: E0729 19:57:35.494358    3342 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-fzt2k" podUID="180acfb0-ec43-4f2e-b04a-048253d4b79e"
	Jul 29 19:57:50 no-preload-843792 kubelet[3342]: E0729 19:57:50.495246    3342 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-fzt2k" podUID="180acfb0-ec43-4f2e-b04a-048253d4b79e"
	Jul 29 19:58:01 no-preload-843792 kubelet[3342]: E0729 19:58:01.497558    3342 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-fzt2k" podUID="180acfb0-ec43-4f2e-b04a-048253d4b79e"
	Jul 29 19:58:03 no-preload-843792 kubelet[3342]: E0729 19:58:03.517148    3342 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 19:58:03 no-preload-843792 kubelet[3342]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 19:58:03 no-preload-843792 kubelet[3342]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 19:58:03 no-preload-843792 kubelet[3342]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 19:58:03 no-preload-843792 kubelet[3342]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 19:58:14 no-preload-843792 kubelet[3342]: E0729 19:58:14.494991    3342 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-fzt2k" podUID="180acfb0-ec43-4f2e-b04a-048253d4b79e"
	Jul 29 19:58:26 no-preload-843792 kubelet[3342]: E0729 19:58:26.494455    3342 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-fzt2k" podUID="180acfb0-ec43-4f2e-b04a-048253d4b79e"
	Jul 29 19:58:39 no-preload-843792 kubelet[3342]: E0729 19:58:39.495111    3342 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-fzt2k" podUID="180acfb0-ec43-4f2e-b04a-048253d4b79e"
	Jul 29 19:58:52 no-preload-843792 kubelet[3342]: E0729 19:58:52.494480    3342 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-fzt2k" podUID="180acfb0-ec43-4f2e-b04a-048253d4b79e"
	Jul 29 19:59:03 no-preload-843792 kubelet[3342]: E0729 19:59:03.516431    3342 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 19:59:03 no-preload-843792 kubelet[3342]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 19:59:03 no-preload-843792 kubelet[3342]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 19:59:03 no-preload-843792 kubelet[3342]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 19:59:03 no-preload-843792 kubelet[3342]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 19:59:07 no-preload-843792 kubelet[3342]: E0729 19:59:07.497082    3342 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-fzt2k" podUID="180acfb0-ec43-4f2e-b04a-048253d4b79e"
	Jul 29 19:59:20 no-preload-843792 kubelet[3342]: E0729 19:59:20.500200    3342 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-fzt2k" podUID="180acfb0-ec43-4f2e-b04a-048253d4b79e"
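
	Two distinct issues repeat in the kubelet log: the ip6tables canary failure (the ip6 nat table is not available in this guest, which is generally benign on a single-stack IPv4 node) and the ImagePullBackOff for metrics-server, whose image reference points at the unresolvable fake.domain registry. To see the underlying pull events rather than just the back-off summary (a sketch, assuming the pod still exists):

	  kubectl --context no-preload-843792 -n kube-system get events --field-selector involvedObject.name=metrics-server-78fcd8795b-fzt2k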
	
	
	==> storage-provisioner [772f7ef98746fa60a8e2262a85311c9fe639aef2d98e574b9f00f587e1144972] <==
	I0729 19:50:10.289851       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0729 19:50:10.313626       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0729 19:50:10.314679       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0729 19:50:10.331819       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0729 19:50:10.333966       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"f94761bd-7a88-4939-8065-5bbf4aab4fd1", APIVersion:"v1", ResourceVersion:"437", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-843792_d334da6e-db34-4902-adca-e8b8fcb9b075 became leader
	I0729 19:50:10.334245       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-843792_d334da6e-db34-4902-adca-e8b8fcb9b075!
	I0729 19:50:10.443107       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-843792_d334da6e-db34-4902-adca-e8b8fcb9b075!
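
	The storage provisioner acquired the kube-system/k8s.io-minikube-hostpath leader-election lease and started normally. The election record it writes (a leader annotation on that Endpoints object) can be inspected directly, assuming the cluster is still reachable:

	  kubectl --context no-preload-843792 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml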
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-843792 -n no-preload-843792
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-843792 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-78fcd8795b-fzt2k
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-843792 describe pod metrics-server-78fcd8795b-fzt2k
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-843792 describe pod metrics-server-78fcd8795b-fzt2k: exit status 1 (62.820451ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-78fcd8795b-fzt2k" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-843792 describe pod metrics-server-78fcd8795b-fzt2k: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (544.17s)
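
	Note on the post-mortem above: the pod listing used -A and found metrics-server-78fcd8795b-fzt2k in kube-system, but the describe command omitted a namespace and therefore looked in default, which is why it reports NotFound even though the pod exists. A manual follow-up targeting the right namespace would be:

	  kubectl --context no-preload-843792 -n kube-system describe pod metrics-server-78fcd8795b-fzt2k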

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.37s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.65:8443: connect: connection refused
E0729 19:53:00.968863 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/functional-728029/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.65:8443: connect: connection refused
E0729 19:53:16.410564 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/calico-184620/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.65:8443: connect: connection refused
E0729 19:53:19.654897 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/auto-184620/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.65:8443: connect: connection refused
E0729 19:53:37.184687 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/addons-685520/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.65:8443: connect: connection refused
E0729 19:53:44.130931 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/custom-flannel-184620/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.65:8443: connect: connection refused
E0729 19:53:58.073000 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/kindnet-184620/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.65:8443: connect: connection refused
E0729 19:54:14.564717 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/enable-default-cni-184620/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.65:8443: connect: connection refused
E0729 19:54:39.456929 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/calico-184620/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.65:8443: connect: connection refused
E0729 19:55:01.508849 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/flannel-184620/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.65:8443: connect: connection refused
E0729 19:55:07.177172 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/custom-flannel-184620/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.65:8443: connect: connection refused
E0729 19:55:34.135085 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/addons-685520/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.65:8443: connect: connection refused
E0729 19:55:37.609945 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/enable-default-cni-184620/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.65:8443: connect: connection refused
E0729 19:55:46.437902 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/bridge-184620/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.65:8443: connect: connection refused
E0729 19:56:24.553081 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/flannel-184620/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.65:8443: connect: connection refused
[the warning above repeated 32 consecutive times]
E0729 19:56:56.609741 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/auto-184620/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.65:8443: connect: connection refused
[the warning above repeated 13 consecutive times]
E0729 19:57:09.482568 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/bridge-184620/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.65:8443: connect: connection refused
[the warning above repeated 26 consecutive times]
E0729 19:57:35.028117 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/kindnet-184620/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.65:8443: connect: connection refused
[the warning above repeated 26 consecutive times]
E0729 19:58:00.969085 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/functional-728029/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.65:8443: connect: connection refused
[the warning above repeated 15 consecutive times]
E0729 19:58:16.411082 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/calico-184620/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.65:8443: connect: connection refused
[the warning above repeated 28 consecutive times]
E0729 19:58:44.131312 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/custom-flannel-184620/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.65:8443: connect: connection refused
[previous warning repeated 29 more times]
E0729 19:59:14.564806 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/enable-default-cni-184620/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.65:8443: connect: connection refused
[previous warning repeated 46 more times]
E0729 20:00:01.508533 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/flannel-184620/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.65:8443: connect: connection refused
[previous warning repeated 32 more times]
E0729 20:00:34.134703 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/addons-685520/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.65:8443: connect: connection refused
[previous warning repeated 11 more times]
E0729 20:00:46.437352 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/bridge-184620/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.65:8443: connect: connection refused
[previous warning repeated 17 more times]
E0729 20:01:04.014897 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/functional-728029/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.65:8443: connect: connection refused
start_stop_delete_test.go:274: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-021528 -n old-k8s-version-021528
start_stop_delete_test.go:274: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-021528 -n old-k8s-version-021528: exit status 2 (230.572952ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:274: status error: exit status 2 (may be ok)
start_stop_delete_test.go:274: "old-k8s-version-021528" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-021528 -n old-k8s-version-021528
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-021528 -n old-k8s-version-021528: exit status 2 (222.951405ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-021528 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-021528 logs -n 25: (1.576240722s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-184620 sudo cat                              | bridge-184620                | jenkins | v1.33.1 | 29 Jul 24 19:36 UTC | 29 Jul 24 19:36 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p bridge-184620 sudo                                  | bridge-184620                | jenkins | v1.33.1 | 29 Jul 24 19:36 UTC | 29 Jul 24 19:36 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p bridge-184620 sudo                                  | bridge-184620                | jenkins | v1.33.1 | 29 Jul 24 19:36 UTC | 29 Jul 24 19:36 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-184620 sudo                                  | bridge-184620                | jenkins | v1.33.1 | 29 Jul 24 19:36 UTC | 29 Jul 24 19:36 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-184620 sudo find                             | bridge-184620                | jenkins | v1.33.1 | 29 Jul 24 19:36 UTC | 29 Jul 24 19:36 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-184620 sudo crio                             | bridge-184620                | jenkins | v1.33.1 | 29 Jul 24 19:36 UTC | 29 Jul 24 19:36 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-184620                                       | bridge-184620                | jenkins | v1.33.1 | 29 Jul 24 19:36 UTC | 29 Jul 24 19:36 UTC |
	| delete  | -p                                                     | disable-driver-mounts-251895 | jenkins | v1.33.1 | 29 Jul 24 19:36 UTC | 29 Jul 24 19:36 UTC |
	|         | disable-driver-mounts-251895                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-024652 | jenkins | v1.33.1 | 29 Jul 24 19:36 UTC | 29 Jul 24 19:37 UTC |
	|         | default-k8s-diff-port-024652                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-843792             | no-preload-843792            | jenkins | v1.33.1 | 29 Jul 24 19:36 UTC | 29 Jul 24 19:36 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-843792                                   | no-preload-843792            | jenkins | v1.33.1 | 29 Jul 24 19:36 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-358053            | embed-certs-358053           | jenkins | v1.33.1 | 29 Jul 24 19:36 UTC | 29 Jul 24 19:36 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-358053                                  | embed-certs-358053           | jenkins | v1.33.1 | 29 Jul 24 19:36 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-024652  | default-k8s-diff-port-024652 | jenkins | v1.33.1 | 29 Jul 24 19:37 UTC | 29 Jul 24 19:37 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-024652 | jenkins | v1.33.1 | 29 Jul 24 19:37 UTC |                     |
	|         | default-k8s-diff-port-024652                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-843792                  | no-preload-843792            | jenkins | v1.33.1 | 29 Jul 24 19:38 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-843792 --memory=2200                     | no-preload-843792            | jenkins | v1.33.1 | 29 Jul 24 19:38 UTC | 29 Jul 24 19:50 UTC |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-021528        | old-k8s-version-021528       | jenkins | v1.33.1 | 29 Jul 24 19:39 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-358053                 | embed-certs-358053           | jenkins | v1.33.1 | 29 Jul 24 19:39 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-358053                                  | embed-certs-358053           | jenkins | v1.33.1 | 29 Jul 24 19:39 UTC | 29 Jul 24 19:49 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-024652       | default-k8s-diff-port-024652 | jenkins | v1.33.1 | 29 Jul 24 19:39 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-024652 | jenkins | v1.33.1 | 29 Jul 24 19:39 UTC | 29 Jul 24 19:49 UTC |
	|         | default-k8s-diff-port-024652                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-021528                              | old-k8s-version-021528       | jenkins | v1.33.1 | 29 Jul 24 19:40 UTC | 29 Jul 24 19:40 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-021528             | old-k8s-version-021528       | jenkins | v1.33.1 | 29 Jul 24 19:40 UTC | 29 Jul 24 19:40 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-021528                              | old-k8s-version-021528       | jenkins | v1.33.1 | 29 Jul 24 19:40 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 19:40:57
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 19:40:57.978681 1120970 out.go:291] Setting OutFile to fd 1 ...
	I0729 19:40:57.978791 1120970 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 19:40:57.978802 1120970 out.go:304] Setting ErrFile to fd 2...
	I0729 19:40:57.978806 1120970 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 19:40:57.979026 1120970 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19312-1055011/.minikube/bin
	I0729 19:40:57.979596 1120970 out.go:298] Setting JSON to false
	I0729 19:40:57.980589 1120970 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":12210,"bootTime":1722269848,"procs":198,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 19:40:57.980644 1120970 start.go:139] virtualization: kvm guest
	I0729 19:40:57.982865 1120970 out.go:177] * [old-k8s-version-021528] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 19:40:57.984265 1120970 out.go:177]   - MINIKUBE_LOCATION=19312
	I0729 19:40:57.984290 1120970 notify.go:220] Checking for updates...
	I0729 19:40:57.986747 1120970 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 19:40:57.987926 1120970 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19312-1055011/kubeconfig
	I0729 19:40:57.989034 1120970 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19312-1055011/.minikube
	I0729 19:40:57.990155 1120970 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 19:40:57.991151 1120970 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 19:40:57.992788 1120970 config.go:182] Loaded profile config "old-k8s-version-021528": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0729 19:40:57.993431 1120970 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:40:57.993513 1120970 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:40:58.008423 1120970 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35781
	I0729 19:40:58.008809 1120970 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:40:58.009278 1120970 main.go:141] libmachine: Using API Version  1
	I0729 19:40:58.009298 1120970 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:40:58.009623 1120970 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:40:58.009801 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .DriverName
	I0729 19:40:58.011523 1120970 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0729 19:40:58.012638 1120970 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 19:40:58.012915 1120970 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:40:58.012949 1120970 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:40:58.027302 1120970 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38245
	I0729 19:40:58.027641 1120970 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:40:58.028112 1120970 main.go:141] libmachine: Using API Version  1
	I0729 19:40:58.028144 1120970 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:40:58.028470 1120970 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:40:58.028677 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .DriverName
	I0729 19:40:58.062833 1120970 out.go:177] * Using the kvm2 driver based on existing profile
	I0729 19:40:58.064034 1120970 start.go:297] selected driver: kvm2
	I0729 19:40:58.064048 1120970 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-021528 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-021528 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.65 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280
h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 19:40:58.064180 1120970 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 19:40:58.065210 1120970 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 19:40:58.065308 1120970 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19312-1055011/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 19:40:58.079987 1120970 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0729 19:40:58.080369 1120970 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 19:40:58.080432 1120970 cni.go:84] Creating CNI manager for ""
	I0729 19:40:58.080446 1120970 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 19:40:58.080487 1120970 start.go:340] cluster config:
	{Name:old-k8s-version-021528 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-021528 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.65 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p20
00.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 19:40:58.080598 1120970 iso.go:125] acquiring lock: {Name:mk0af61c0fec1fd47930e548d03010a532c687b7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 19:40:58.082281 1120970 out.go:177] * Starting "old-k8s-version-021528" primary control-plane node in "old-k8s-version-021528" cluster
	I0729 19:40:58.083538 1120970 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0729 19:40:58.083567 1120970 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0729 19:40:58.083574 1120970 cache.go:56] Caching tarball of preloaded images
	I0729 19:40:58.083648 1120970 preload.go:172] Found /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 19:40:58.083657 1120970 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0729 19:40:58.083744 1120970 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/old-k8s-version-021528/config.json ...
	I0729 19:40:58.083909 1120970 start.go:360] acquireMachinesLock for old-k8s-version-021528: {Name:mk0d8d947666df844b5fc2c0e0eebbfed69b4140 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 19:40:58.743070 1119948 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.248:22: connect: no route to host
	I0729 19:41:01.815162 1119948 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.248:22: connect: no route to host
	I0729 19:41:07.895109 1119948 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.248:22: connect: no route to host
	I0729 19:41:10.967163 1119948 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.248:22: connect: no route to host
	I0729 19:41:17.047104 1119948 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.248:22: connect: no route to host
	I0729 19:41:20.119110 1119948 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.248:22: connect: no route to host
	I0729 19:41:26.199071 1119948 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.248:22: connect: no route to host
	I0729 19:41:29.271169 1119948 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.248:22: connect: no route to host
	I0729 19:41:35.351112 1119948 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.248:22: connect: no route to host
	I0729 19:41:38.423168 1119948 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.248:22: connect: no route to host
	I0729 19:41:44.503138 1119948 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.248:22: connect: no route to host
	I0729 19:41:47.575152 1119948 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.248:22: connect: no route to host
	I0729 19:41:53.655149 1119948 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.248:22: connect: no route to host
	I0729 19:41:56.727131 1119948 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.248:22: connect: no route to host
	I0729 19:42:02.807132 1119948 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.248:22: connect: no route to host
	I0729 19:42:05.879122 1119948 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.248:22: connect: no route to host
	I0729 19:42:11.959162 1119948 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.248:22: connect: no route to host
	I0729 19:42:15.031086 1119948 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.248:22: connect: no route to host
	I0729 19:42:21.111136 1119948 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.248:22: connect: no route to host
	I0729 19:42:24.183135 1119948 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.248:22: connect: no route to host
	I0729 19:42:30.263164 1119948 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.248:22: connect: no route to host
	I0729 19:42:33.335133 1119948 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.248:22: connect: no route to host
	I0729 19:42:39.415119 1119948 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.248:22: connect: no route to host
	I0729 19:42:42.487148 1119948 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.248:22: connect: no route to host
	I0729 19:42:48.567136 1119948 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.248:22: connect: no route to host
	I0729 19:42:51.639137 1119948 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.248:22: connect: no route to host
	I0729 19:42:57.719135 1119948 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.248:22: connect: no route to host
	I0729 19:43:00.791072 1119948 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.248:22: connect: no route to host
	I0729 19:43:06.871163 1119948 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.248:22: connect: no route to host
	I0729 19:43:09.943159 1119948 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.248:22: connect: no route to host
	I0729 19:43:16.023117 1119948 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.248:22: connect: no route to host
	I0729 19:43:19.095170 1119948 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.248:22: connect: no route to host
	I0729 19:43:25.175078 1119948 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.248:22: connect: no route to host
	I0729 19:43:28.247100 1119948 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.248:22: connect: no route to host
	I0729 19:43:31.250338 1120280 start.go:364] duration metric: took 4m11.087175718s to acquireMachinesLock for "embed-certs-358053"
	I0729 19:43:31.250404 1120280 start.go:96] Skipping create...Using existing machine configuration
	I0729 19:43:31.250411 1120280 fix.go:54] fixHost starting: 
	I0729 19:43:31.250743 1120280 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:43:31.250772 1120280 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:43:31.266386 1120280 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36427
	I0729 19:43:31.266811 1120280 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:43:31.267264 1120280 main.go:141] libmachine: Using API Version  1
	I0729 19:43:31.267290 1120280 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:43:31.267606 1120280 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:43:31.267776 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .DriverName
	I0729 19:43:31.267930 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetState
	I0729 19:43:31.269434 1120280 fix.go:112] recreateIfNeeded on embed-certs-358053: state=Stopped err=<nil>
	I0729 19:43:31.269469 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .DriverName
	W0729 19:43:31.269649 1120280 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 19:43:31.271498 1120280 out.go:177] * Restarting existing kvm2 VM for "embed-certs-358053" ...
	I0729 19:43:31.248030 1119948 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 19:43:31.248063 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetMachineName
	I0729 19:43:31.248357 1119948 buildroot.go:166] provisioning hostname "no-preload-843792"
	I0729 19:43:31.248385 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetMachineName
	I0729 19:43:31.248542 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHHostname
	I0729 19:43:31.250201 1119948 machine.go:97] duration metric: took 4m37.426219796s to provisionDockerMachine
	I0729 19:43:31.250243 1119948 fix.go:56] duration metric: took 4m37.44720731s for fixHost
	I0729 19:43:31.250251 1119948 start.go:83] releasing machines lock for "no-preload-843792", held for 4m37.4472306s
	W0729 19:43:31.250275 1119948 start.go:714] error starting host: provision: host is not running
	W0729 19:43:31.250399 1119948 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0729 19:43:31.250411 1119948 start.go:729] Will try again in 5 seconds ...
	I0729 19:43:31.272835 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .Start
	I0729 19:43:31.272957 1120280 main.go:141] libmachine: (embed-certs-358053) Ensuring networks are active...
	I0729 19:43:31.273784 1120280 main.go:141] libmachine: (embed-certs-358053) Ensuring network default is active
	I0729 19:43:31.274173 1120280 main.go:141] libmachine: (embed-certs-358053) Ensuring network mk-embed-certs-358053 is active
	I0729 19:43:31.274533 1120280 main.go:141] libmachine: (embed-certs-358053) Getting domain xml...
	I0729 19:43:31.275353 1120280 main.go:141] libmachine: (embed-certs-358053) Creating domain...
	I0729 19:43:32.452915 1120280 main.go:141] libmachine: (embed-certs-358053) Waiting to get IP...
	I0729 19:43:32.453981 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:32.454389 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | unable to find current IP address of domain embed-certs-358053 in network mk-embed-certs-358053
	I0729 19:43:32.454483 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | I0729 19:43:32.454365 1121493 retry.go:31] will retry after 241.453693ms: waiting for machine to come up
	I0729 19:43:32.697915 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:32.698300 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | unable to find current IP address of domain embed-certs-358053 in network mk-embed-certs-358053
	I0729 19:43:32.698331 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | I0729 19:43:32.698251 1121493 retry.go:31] will retry after 239.33532ms: waiting for machine to come up
	I0729 19:43:32.939708 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:32.940293 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | unable to find current IP address of domain embed-certs-358053 in network mk-embed-certs-358053
	I0729 19:43:32.940318 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | I0729 19:43:32.940236 1121493 retry.go:31] will retry after 446.993297ms: waiting for machine to come up
	I0729 19:43:33.388724 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:33.389127 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | unable to find current IP address of domain embed-certs-358053 in network mk-embed-certs-358053
	I0729 19:43:33.389158 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | I0729 19:43:33.389070 1121493 retry.go:31] will retry after 422.446887ms: waiting for machine to come up
	I0729 19:43:33.812596 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:33.813022 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | unable to find current IP address of domain embed-certs-358053 in network mk-embed-certs-358053
	I0729 19:43:33.813051 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | I0729 19:43:33.812969 1121493 retry.go:31] will retry after 539.971993ms: waiting for machine to come up
	I0729 19:43:34.354683 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:34.355036 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | unable to find current IP address of domain embed-certs-358053 in network mk-embed-certs-358053
	I0729 19:43:34.355070 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | I0729 19:43:34.354984 1121493 retry.go:31] will retry after 804.005911ms: waiting for machine to come up
	I0729 19:43:36.252290 1119948 start.go:360] acquireMachinesLock for no-preload-843792: {Name:mk0d8d947666df844b5fc2c0e0eebbfed69b4140 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 19:43:35.161115 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:35.161468 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | unable to find current IP address of domain embed-certs-358053 in network mk-embed-certs-358053
	I0729 19:43:35.161505 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | I0729 19:43:35.161430 1121493 retry.go:31] will retry after 1.057061094s: waiting for machine to come up
	I0729 19:43:36.220062 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:36.220425 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | unable to find current IP address of domain embed-certs-358053 in network mk-embed-certs-358053
	I0729 19:43:36.220450 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | I0729 19:43:36.220375 1121493 retry.go:31] will retry after 1.460606435s: waiting for machine to come up
	I0729 19:43:37.683178 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:37.683636 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | unable to find current IP address of domain embed-certs-358053 in network mk-embed-certs-358053
	I0729 19:43:37.683655 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | I0729 19:43:37.683597 1121493 retry.go:31] will retry after 1.732527981s: waiting for machine to come up
	I0729 19:43:39.418519 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:39.418954 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | unable to find current IP address of domain embed-certs-358053 in network mk-embed-certs-358053
	I0729 19:43:39.418977 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | I0729 19:43:39.418904 1121493 retry.go:31] will retry after 2.125686576s: waiting for machine to come up
	I0729 19:43:41.547132 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:41.547733 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | unable to find current IP address of domain embed-certs-358053 in network mk-embed-certs-358053
	I0729 19:43:41.547761 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | I0729 19:43:41.547675 1121493 retry.go:31] will retry after 2.335461887s: waiting for machine to come up
	I0729 19:43:43.884901 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:43.885306 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | unable to find current IP address of domain embed-certs-358053 in network mk-embed-certs-358053
	I0729 19:43:43.885329 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | I0729 19:43:43.885251 1121493 retry.go:31] will retry after 2.493920061s: waiting for machine to come up
	I0729 19:43:46.380895 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:46.381249 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | unable to find current IP address of domain embed-certs-358053 in network mk-embed-certs-358053
	I0729 19:43:46.381283 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | I0729 19:43:46.381209 1121493 retry.go:31] will retry after 4.001159351s: waiting for machine to come up
	I0729 19:43:51.915678 1120587 start.go:364] duration metric: took 3m55.652628622s to acquireMachinesLock for "default-k8s-diff-port-024652"
	I0729 19:43:51.915763 1120587 start.go:96] Skipping create...Using existing machine configuration
	I0729 19:43:51.915773 1120587 fix.go:54] fixHost starting: 
	I0729 19:43:51.916253 1120587 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:43:51.916303 1120587 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:43:51.933248 1120587 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36959
	I0729 19:43:51.933631 1120587 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:43:51.934146 1120587 main.go:141] libmachine: Using API Version  1
	I0729 19:43:51.934178 1120587 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:43:51.934512 1120587 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:43:51.934710 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .DriverName
	I0729 19:43:51.934882 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetState
	I0729 19:43:51.936266 1120587 fix.go:112] recreateIfNeeded on default-k8s-diff-port-024652: state=Stopped err=<nil>
	I0729 19:43:51.936294 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .DriverName
	W0729 19:43:51.936471 1120587 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 19:43:51.938542 1120587 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-024652" ...
	I0729 19:43:50.387313 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:50.387631 1120280 main.go:141] libmachine: (embed-certs-358053) Found IP for machine: 192.168.61.201
	I0729 19:43:50.387649 1120280 main.go:141] libmachine: (embed-certs-358053) Reserving static IP address...
	I0729 19:43:50.387673 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has current primary IP address 192.168.61.201 and MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:50.388059 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | found host DHCP lease matching {name: "embed-certs-358053", mac: "52:54:00:b7:9e:78", ip: "192.168.61.201"} in network mk-embed-certs-358053: {Iface:virbr3 ExpiryTime:2024-07-29 20:43:42 +0000 UTC Type:0 Mac:52:54:00:b7:9e:78 Iaid: IPaddr:192.168.61.201 Prefix:24 Hostname:embed-certs-358053 Clientid:01:52:54:00:b7:9e:78}
	I0729 19:43:50.388088 1120280 main.go:141] libmachine: (embed-certs-358053) Reserved static IP address: 192.168.61.201
	I0729 19:43:50.388122 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | skip adding static IP to network mk-embed-certs-358053 - found existing host DHCP lease matching {name: "embed-certs-358053", mac: "52:54:00:b7:9e:78", ip: "192.168.61.201"}
	I0729 19:43:50.388140 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | Getting to WaitForSSH function...
	I0729 19:43:50.388153 1120280 main.go:141] libmachine: (embed-certs-358053) Waiting for SSH to be available...
	I0729 19:43:50.389891 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:50.390221 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:9e:78", ip: ""} in network mk-embed-certs-358053: {Iface:virbr3 ExpiryTime:2024-07-29 20:43:42 +0000 UTC Type:0 Mac:52:54:00:b7:9e:78 Iaid: IPaddr:192.168.61.201 Prefix:24 Hostname:embed-certs-358053 Clientid:01:52:54:00:b7:9e:78}
	I0729 19:43:50.390251 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined IP address 192.168.61.201 and MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:50.390327 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | Using SSH client type: external
	I0729 19:43:50.390358 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | Using SSH private key: /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/embed-certs-358053/id_rsa (-rw-------)
	I0729 19:43:50.390384 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.201 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/embed-certs-358053/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 19:43:50.390394 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | About to run SSH command:
	I0729 19:43:50.390403 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | exit 0
	I0729 19:43:50.519000 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | SSH cmd err, output: <nil>: 
	I0729 19:43:50.519409 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetConfigRaw
	I0729 19:43:50.520046 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetIP
	I0729 19:43:50.522297 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:50.522663 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:9e:78", ip: ""} in network mk-embed-certs-358053: {Iface:virbr3 ExpiryTime:2024-07-29 20:43:42 +0000 UTC Type:0 Mac:52:54:00:b7:9e:78 Iaid: IPaddr:192.168.61.201 Prefix:24 Hostname:embed-certs-358053 Clientid:01:52:54:00:b7:9e:78}
	I0729 19:43:50.522692 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined IP address 192.168.61.201 and MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:50.522946 1120280 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/embed-certs-358053/config.json ...
	I0729 19:43:50.523145 1120280 machine.go:94] provisionDockerMachine start ...
	I0729 19:43:50.523164 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .DriverName
	I0729 19:43:50.523346 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHHostname
	I0729 19:43:50.525235 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:50.525608 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:9e:78", ip: ""} in network mk-embed-certs-358053: {Iface:virbr3 ExpiryTime:2024-07-29 20:43:42 +0000 UTC Type:0 Mac:52:54:00:b7:9e:78 Iaid: IPaddr:192.168.61.201 Prefix:24 Hostname:embed-certs-358053 Clientid:01:52:54:00:b7:9e:78}
	I0729 19:43:50.525625 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined IP address 192.168.61.201 and MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:50.525729 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHPort
	I0729 19:43:50.525897 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHKeyPath
	I0729 19:43:50.526188 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHKeyPath
	I0729 19:43:50.526332 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHUsername
	I0729 19:43:50.526523 1120280 main.go:141] libmachine: Using SSH client type: native
	I0729 19:43:50.526751 1120280 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.201 22 <nil> <nil>}
	I0729 19:43:50.526765 1120280 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 19:43:50.639176 1120280 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0729 19:43:50.639206 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetMachineName
	I0729 19:43:50.639463 1120280 buildroot.go:166] provisioning hostname "embed-certs-358053"
	I0729 19:43:50.639489 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetMachineName
	I0729 19:43:50.639652 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHHostname
	I0729 19:43:50.642218 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:50.642546 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:9e:78", ip: ""} in network mk-embed-certs-358053: {Iface:virbr3 ExpiryTime:2024-07-29 20:43:42 +0000 UTC Type:0 Mac:52:54:00:b7:9e:78 Iaid: IPaddr:192.168.61.201 Prefix:24 Hostname:embed-certs-358053 Clientid:01:52:54:00:b7:9e:78}
	I0729 19:43:50.642573 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined IP address 192.168.61.201 and MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:50.642704 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHPort
	I0729 19:43:50.642896 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHKeyPath
	I0729 19:43:50.643034 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHKeyPath
	I0729 19:43:50.643188 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHUsername
	I0729 19:43:50.643396 1120280 main.go:141] libmachine: Using SSH client type: native
	I0729 19:43:50.643599 1120280 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.201 22 <nil> <nil>}
	I0729 19:43:50.643615 1120280 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-358053 && echo "embed-certs-358053" | sudo tee /etc/hostname
	I0729 19:43:50.775163 1120280 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-358053
	
	I0729 19:43:50.775200 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHHostname
	I0729 19:43:50.777834 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:50.778140 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:9e:78", ip: ""} in network mk-embed-certs-358053: {Iface:virbr3 ExpiryTime:2024-07-29 20:43:42 +0000 UTC Type:0 Mac:52:54:00:b7:9e:78 Iaid: IPaddr:192.168.61.201 Prefix:24 Hostname:embed-certs-358053 Clientid:01:52:54:00:b7:9e:78}
	I0729 19:43:50.778166 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined IP address 192.168.61.201 and MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:50.778337 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHPort
	I0729 19:43:50.778536 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHKeyPath
	I0729 19:43:50.778687 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHKeyPath
	I0729 19:43:50.778818 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHUsername
	I0729 19:43:50.778984 1120280 main.go:141] libmachine: Using SSH client type: native
	I0729 19:43:50.779150 1120280 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.201 22 <nil> <nil>}
	I0729 19:43:50.779164 1120280 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-358053' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-358053/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-358053' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 19:43:50.899709 1120280 main.go:141] libmachine: SSH cmd err, output: <nil>: 
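The shell fragment just above is how minikube makes the new hostname resolve locally: if no /etc/hosts entry already matches, it rewrites (or appends) the 127.0.1.1 line. A minimal Go sketch of the same rewrite, assuming a hypothetical helper name and a scratch file path rather than the real /etc/hosts:

```go
package main

import (
	"fmt"
	"os"
	"regexp"
	"strings"
)

// ensureHostsEntry rewrites the 127.0.1.1 line in the given hosts file so it
// points at hostname, appending a new entry if none exists. The file path and
// function name are illustrative, not minikube's own.
func ensureHostsEntry(path, hostname string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	// Already mapped to this hostname? Then there is nothing to do.
	if regexp.MustCompile(`(?m)\s`+regexp.QuoteMeta(hostname)+`$`).Match(data) {
		return nil
	}
	line127 := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	var out string
	if line127.Match(data) {
		out = line127.ReplaceAllString(string(data), "127.0.1.1 "+hostname)
	} else {
		out = strings.TrimRight(string(data), "\n") + "\n127.0.1.1 " + hostname + "\n"
	}
	return os.WriteFile(path, []byte(out), 0644)
}

func main() {
	if err := ensureHostsEntry("/tmp/hosts-demo", "embed-certs-358053"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```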
	I0729 19:43:50.899756 1120280 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19312-1055011/.minikube CaCertPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19312-1055011/.minikube}
	I0729 19:43:50.899791 1120280 buildroot.go:174] setting up certificates
	I0729 19:43:50.899806 1120280 provision.go:84] configureAuth start
	I0729 19:43:50.899821 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetMachineName
	I0729 19:43:50.900090 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetIP
	I0729 19:43:50.902304 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:50.902663 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:9e:78", ip: ""} in network mk-embed-certs-358053: {Iface:virbr3 ExpiryTime:2024-07-29 20:43:42 +0000 UTC Type:0 Mac:52:54:00:b7:9e:78 Iaid: IPaddr:192.168.61.201 Prefix:24 Hostname:embed-certs-358053 Clientid:01:52:54:00:b7:9e:78}
	I0729 19:43:50.902695 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined IP address 192.168.61.201 and MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:50.902787 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHHostname
	I0729 19:43:50.904815 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:50.905150 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:9e:78", ip: ""} in network mk-embed-certs-358053: {Iface:virbr3 ExpiryTime:2024-07-29 20:43:42 +0000 UTC Type:0 Mac:52:54:00:b7:9e:78 Iaid: IPaddr:192.168.61.201 Prefix:24 Hostname:embed-certs-358053 Clientid:01:52:54:00:b7:9e:78}
	I0729 19:43:50.905170 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined IP address 192.168.61.201 and MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:50.905279 1120280 provision.go:143] copyHostCerts
	I0729 19:43:50.905350 1120280 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.pem, removing ...
	I0729 19:43:50.905366 1120280 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.pem
	I0729 19:43:50.905446 1120280 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.pem (1082 bytes)
	I0729 19:43:50.905561 1120280 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-1055011/.minikube/cert.pem, removing ...
	I0729 19:43:50.905573 1120280 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-1055011/.minikube/cert.pem
	I0729 19:43:50.905626 1120280 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19312-1055011/.minikube/cert.pem (1123 bytes)
	I0729 19:43:50.905704 1120280 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-1055011/.minikube/key.pem, removing ...
	I0729 19:43:50.905713 1120280 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-1055011/.minikube/key.pem
	I0729 19:43:50.905746 1120280 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19312-1055011/.minikube/key.pem (1679 bytes)
	I0729 19:43:50.905815 1120280 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca-key.pem org=jenkins.embed-certs-358053 san=[127.0.0.1 192.168.61.201 embed-certs-358053 localhost minikube]
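Here provision.go generates the machine's server certificate with both IP and DNS SANs. As an illustration of what such a certificate carries, the Go sketch below uses crypto/x509 with the SANs and organization from the log line; it self-signs for brevity, whereas the real flow signs with the ca.pem/ca-key.pem pair named above.

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Illustrative only: a self-signed server certificate with the same kinds
	// of SANs seen in the log (IP addresses plus DNS names).
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-358053"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.61.201")},
		DNSNames:     []string{"embed-certs-358053", "localhost", "minikube"},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
```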
	I0729 19:43:51.198616 1120280 provision.go:177] copyRemoteCerts
	I0729 19:43:51.198692 1120280 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 19:43:51.198734 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHHostname
	I0729 19:43:51.201272 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:51.201527 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:9e:78", ip: ""} in network mk-embed-certs-358053: {Iface:virbr3 ExpiryTime:2024-07-29 20:43:42 +0000 UTC Type:0 Mac:52:54:00:b7:9e:78 Iaid: IPaddr:192.168.61.201 Prefix:24 Hostname:embed-certs-358053 Clientid:01:52:54:00:b7:9e:78}
	I0729 19:43:51.201556 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined IP address 192.168.61.201 and MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:51.201681 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHPort
	I0729 19:43:51.201876 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHKeyPath
	I0729 19:43:51.202054 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHUsername
	I0729 19:43:51.202170 1120280 sshutil.go:53] new ssh client: &{IP:192.168.61.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/embed-certs-358053/id_rsa Username:docker}
	I0729 19:43:51.290007 1120280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0729 19:43:51.316649 1120280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0729 19:43:51.340617 1120280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0729 19:43:51.363465 1120280 provision.go:87] duration metric: took 463.642377ms to configureAuth
	I0729 19:43:51.363495 1120280 buildroot.go:189] setting minikube options for container-runtime
	I0729 19:43:51.363700 1120280 config.go:182] Loaded profile config "embed-certs-358053": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 19:43:51.363813 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHHostname
	I0729 19:43:51.366478 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:51.366931 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:9e:78", ip: ""} in network mk-embed-certs-358053: {Iface:virbr3 ExpiryTime:2024-07-29 20:43:42 +0000 UTC Type:0 Mac:52:54:00:b7:9e:78 Iaid: IPaddr:192.168.61.201 Prefix:24 Hostname:embed-certs-358053 Clientid:01:52:54:00:b7:9e:78}
	I0729 19:43:51.366973 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined IP address 192.168.61.201 and MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:51.367080 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHPort
	I0729 19:43:51.367280 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHKeyPath
	I0729 19:43:51.367445 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHKeyPath
	I0729 19:43:51.367619 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHUsername
	I0729 19:43:51.367818 1120280 main.go:141] libmachine: Using SSH client type: native
	I0729 19:43:51.368013 1120280 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.201 22 <nil> <nil>}
	I0729 19:43:51.368034 1120280 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 19:43:51.670667 1120280 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 19:43:51.670700 1120280 machine.go:97] duration metric: took 1.147540887s to provisionDockerMachine
	I0729 19:43:51.670716 1120280 start.go:293] postStartSetup for "embed-certs-358053" (driver="kvm2")
	I0729 19:43:51.670728 1120280 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 19:43:51.670746 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .DriverName
	I0729 19:43:51.671114 1120280 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 19:43:51.671146 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHHostname
	I0729 19:43:51.673820 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:51.674154 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:9e:78", ip: ""} in network mk-embed-certs-358053: {Iface:virbr3 ExpiryTime:2024-07-29 20:43:42 +0000 UTC Type:0 Mac:52:54:00:b7:9e:78 Iaid: IPaddr:192.168.61.201 Prefix:24 Hostname:embed-certs-358053 Clientid:01:52:54:00:b7:9e:78}
	I0729 19:43:51.674218 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined IP address 192.168.61.201 and MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:51.674406 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHPort
	I0729 19:43:51.674602 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHKeyPath
	I0729 19:43:51.674761 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHUsername
	I0729 19:43:51.674918 1120280 sshutil.go:53] new ssh client: &{IP:192.168.61.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/embed-certs-358053/id_rsa Username:docker}
	I0729 19:43:51.762013 1120280 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 19:43:51.766211 1120280 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 19:43:51.766238 1120280 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-1055011/.minikube/addons for local assets ...
	I0729 19:43:51.766308 1120280 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-1055011/.minikube/files for local assets ...
	I0729 19:43:51.766408 1120280 filesync.go:149] local asset: /home/jenkins/minikube-integration/19312-1055011/.minikube/files/etc/ssl/certs/10622722.pem -> 10622722.pem in /etc/ssl/certs
	I0729 19:43:51.766506 1120280 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 19:43:51.776086 1120280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/files/etc/ssl/certs/10622722.pem --> /etc/ssl/certs/10622722.pem (1708 bytes)
	I0729 19:43:51.800248 1120280 start.go:296] duration metric: took 129.516946ms for postStartSetup
	I0729 19:43:51.800288 1120280 fix.go:56] duration metric: took 20.54987709s for fixHost
	I0729 19:43:51.800332 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHHostname
	I0729 19:43:51.802828 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:51.803134 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:9e:78", ip: ""} in network mk-embed-certs-358053: {Iface:virbr3 ExpiryTime:2024-07-29 20:43:42 +0000 UTC Type:0 Mac:52:54:00:b7:9e:78 Iaid: IPaddr:192.168.61.201 Prefix:24 Hostname:embed-certs-358053 Clientid:01:52:54:00:b7:9e:78}
	I0729 19:43:51.803155 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined IP address 192.168.61.201 and MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:51.803324 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHPort
	I0729 19:43:51.803552 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHKeyPath
	I0729 19:43:51.803729 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHKeyPath
	I0729 19:43:51.803867 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHUsername
	I0729 19:43:51.804024 1120280 main.go:141] libmachine: Using SSH client type: native
	I0729 19:43:51.804205 1120280 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.201 22 <nil> <nil>}
	I0729 19:43:51.804216 1120280 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0729 19:43:51.915515 1120280 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722282231.873780587
	
	I0729 19:43:51.915538 1120280 fix.go:216] guest clock: 1722282231.873780587
	I0729 19:43:51.915546 1120280 fix.go:229] Guest: 2024-07-29 19:43:51.873780587 +0000 UTC Remote: 2024-07-29 19:43:51.800292219 +0000 UTC m=+271.768915474 (delta=73.488368ms)
	I0729 19:43:51.915567 1120280 fix.go:200] guest clock delta is within tolerance: 73.488368ms
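The clock check above runs `date +%s.%N` in the guest and compares the result with the host's wall clock. A small Go sketch of that comparison, reusing the two timestamps from the log; the one-second tolerance is an assumption for the example, not necessarily the value minikube applies.

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock turns the output of `date +%s.%N` into a time.Time.
// It assumes the usual nine-digit fractional part.
func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1722282231.873780587") // guest value from the log
	if err != nil {
		panic(err)
	}
	host := time.Unix(1722282231, 800292219) // remote timestamp from the same log line
	delta := guest.Sub(host)
	const tolerance = 1 * time.Second // assumed tolerance for this sketch
	fmt.Printf("delta=%v within %v: %v\n", delta, tolerance, delta > -tolerance && delta < tolerance)
}
```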
	I0729 19:43:51.915573 1120280 start.go:83] releasing machines lock for "embed-certs-358053", held for 20.665188917s
	I0729 19:43:51.915605 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .DriverName
	I0729 19:43:51.915924 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetIP
	I0729 19:43:51.918637 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:51.919022 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:9e:78", ip: ""} in network mk-embed-certs-358053: {Iface:virbr3 ExpiryTime:2024-07-29 20:43:42 +0000 UTC Type:0 Mac:52:54:00:b7:9e:78 Iaid: IPaddr:192.168.61.201 Prefix:24 Hostname:embed-certs-358053 Clientid:01:52:54:00:b7:9e:78}
	I0729 19:43:51.919050 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined IP address 192.168.61.201 and MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:51.919227 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .DriverName
	I0729 19:43:51.919791 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .DriverName
	I0729 19:43:51.920007 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .DriverName
	I0729 19:43:51.920098 1120280 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 19:43:51.920165 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHHostname
	I0729 19:43:51.920246 1120280 ssh_runner.go:195] Run: cat /version.json
	I0729 19:43:51.920267 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHHostname
	I0729 19:43:51.922800 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:51.923102 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:51.923134 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:9e:78", ip: ""} in network mk-embed-certs-358053: {Iface:virbr3 ExpiryTime:2024-07-29 20:43:42 +0000 UTC Type:0 Mac:52:54:00:b7:9e:78 Iaid: IPaddr:192.168.61.201 Prefix:24 Hostname:embed-certs-358053 Clientid:01:52:54:00:b7:9e:78}
	I0729 19:43:51.923173 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined IP address 192.168.61.201 and MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:51.923250 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHPort
	I0729 19:43:51.923437 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHKeyPath
	I0729 19:43:51.923595 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:9e:78", ip: ""} in network mk-embed-certs-358053: {Iface:virbr3 ExpiryTime:2024-07-29 20:43:42 +0000 UTC Type:0 Mac:52:54:00:b7:9e:78 Iaid: IPaddr:192.168.61.201 Prefix:24 Hostname:embed-certs-358053 Clientid:01:52:54:00:b7:9e:78}
	I0729 19:43:51.923615 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined IP address 192.168.61.201 and MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:51.923720 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHUsername
	I0729 19:43:51.923798 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHPort
	I0729 19:43:51.923873 1120280 sshutil.go:53] new ssh client: &{IP:192.168.61.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/embed-certs-358053/id_rsa Username:docker}
	I0729 19:43:51.923942 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHKeyPath
	I0729 19:43:51.924064 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHUsername
	I0729 19:43:51.924215 1120280 sshutil.go:53] new ssh client: &{IP:192.168.61.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/embed-certs-358053/id_rsa Username:docker}
	I0729 19:43:52.004661 1120280 ssh_runner.go:195] Run: systemctl --version
	I0729 19:43:52.032553 1120280 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 19:43:52.185919 1120280 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 19:43:52.191975 1120280 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 19:43:52.192059 1120280 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 19:43:52.210254 1120280 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
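At this point minikube sweeps /etc/cni/net.d and renames any bridge or podman configuration so that only its own CNI config stays active. A Go sketch of that sweep, with the directory passed in so it can be tried against a scratch copy rather than the live node:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// disableCNIConfs renames bridge/podman CNI configs in dir so the runtime
// ignores them, mirroring the find/mv pipeline in the log above.
func disableCNIConfs(dir string) ([]string, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return nil, err
	}
	var disabled []string
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
			src := filepath.Join(dir, name)
			if err := os.Rename(src, src+".mk_disabled"); err != nil {
				return disabled, err
			}
			disabled = append(disabled, src)
		}
	}
	return disabled, nil
}

func main() {
	got, err := disableCNIConfs("/tmp/cni-demo") // illustrative path
	fmt.Println(got, err)
}
```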
	I0729 19:43:52.210276 1120280 start.go:495] detecting cgroup driver to use...
	I0729 19:43:52.210351 1120280 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 19:43:52.225580 1120280 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 19:43:52.238434 1120280 docker.go:217] disabling cri-docker service (if available) ...
	I0729 19:43:52.238501 1120280 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 19:43:52.252395 1120280 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 19:43:52.265503 1120280 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 19:43:52.376377 1120280 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 19:43:52.561796 1120280 docker.go:233] disabling docker service ...
	I0729 19:43:52.561859 1120280 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 19:43:52.579022 1120280 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 19:43:52.594679 1120280 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 19:43:52.734891 1120280 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 19:43:52.870161 1120280 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 19:43:52.884258 1120280 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 19:43:52.903923 1120280 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0729 19:43:52.903986 1120280 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:43:52.914530 1120280 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 19:43:52.914598 1120280 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:43:52.925740 1120280 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:43:52.936722 1120280 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:43:52.947290 1120280 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 19:43:52.959757 1120280 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:43:52.971452 1120280 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:43:52.990080 1120280 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
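The run of sed commands above pins the pause image and the cgroup manager in the CRI-O drop-in (/etc/crio/crio.conf.d/02-crio.conf). A Go sketch of the first two edits expressed as plain regex rewrites over the file contents; reading and writing the file itself is left out to keep the example short:

```go
package main

import (
	"fmt"
	"regexp"
)

// rewriteCrioConf forces pause_image and cgroup_manager in a crio.conf.d
// drop-in, the same substitutions the sed commands in the log perform.
func rewriteCrioConf(conf, pauseImage, cgroupManager string) string {
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "`+pauseImage+`"`)
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "`+cgroupManager+`"`)
	return conf
}

func main() {
	in := "[crio.image]\npause_image = \"registry.k8s.io/pause:3.8\"\n" +
		"[crio.runtime]\ncgroup_manager = \"systemd\"\n"
	fmt.Print(rewriteCrioConf(in, "registry.k8s.io/pause:3.9", "cgroupfs"))
}
```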
	I0729 19:43:53.000701 1120280 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 19:43:53.010165 1120280 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 19:43:53.010271 1120280 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 19:43:53.023594 1120280 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
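The netfilter probe failed only because br_netfilter was not loaded yet, so minikube loads the module and then enables IPv4 forwarding. A Go sketch of that ordering; it shells out to modprobe and writes /proc directly, so it only does anything useful when run as root on Linux:

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

// ensureNetfilter mirrors the sequence above: if the bridge-nf-call-iptables
// sysctl is missing, load br_netfilter, then enable IPv4 forwarding.
func ensureNetfilter() error {
	if _, err := os.Stat("/proc/sys/net/bridge/bridge-nf-call-iptables"); os.IsNotExist(err) {
		if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
			return fmt.Errorf("modprobe br_netfilter: %v: %s", err, out)
		}
	}
	return os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0644)
}

func main() {
	if err := ensureNetfilter(); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```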
	I0729 19:43:53.034500 1120280 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 19:43:53.173490 1120280 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 19:43:53.327789 1120280 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 19:43:53.327894 1120280 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 19:43:53.332682 1120280 start.go:563] Will wait 60s for crictl version
	I0729 19:43:53.332738 1120280 ssh_runner.go:195] Run: which crictl
	I0729 19:43:53.337397 1120280 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 19:43:53.387722 1120280 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 19:43:53.387824 1120280 ssh_runner.go:195] Run: crio --version
	I0729 19:43:53.416029 1120280 ssh_runner.go:195] Run: crio --version
	I0729 19:43:53.447686 1120280 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0729 19:43:53.448960 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetIP
	I0729 19:43:53.451993 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:53.452334 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:9e:78", ip: ""} in network mk-embed-certs-358053: {Iface:virbr3 ExpiryTime:2024-07-29 20:43:42 +0000 UTC Type:0 Mac:52:54:00:b7:9e:78 Iaid: IPaddr:192.168.61.201 Prefix:24 Hostname:embed-certs-358053 Clientid:01:52:54:00:b7:9e:78}
	I0729 19:43:53.452360 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined IP address 192.168.61.201 and MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:53.452626 1120280 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0729 19:43:53.456620 1120280 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 19:43:53.469521 1120280 kubeadm.go:883] updating cluster {Name:embed-certs-358053 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.3 ClusterName:embed-certs-358053 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.201 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:
false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 19:43:53.469668 1120280 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 19:43:53.469726 1120280 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 19:43:53.510724 1120280 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0729 19:43:53.510793 1120280 ssh_runner.go:195] Run: which lz4
	I0729 19:43:53.515039 1120280 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0729 19:43:53.519349 1120280 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0729 19:43:53.519386 1120280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0729 19:43:54.962294 1120280 crio.go:462] duration metric: took 1.447300807s to copy over tarball
	I0729 19:43:54.962368 1120280 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0729 19:43:51.939977 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .Start
	I0729 19:43:51.940180 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Ensuring networks are active...
	I0729 19:43:51.940939 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Ensuring network default is active
	I0729 19:43:51.941232 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Ensuring network mk-default-k8s-diff-port-024652 is active
	I0729 19:43:51.941605 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Getting domain xml...
	I0729 19:43:51.942289 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Creating domain...
	I0729 19:43:53.197317 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Waiting to get IP...
	I0729 19:43:53.198285 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:43:53.198646 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | unable to find current IP address of domain default-k8s-diff-port-024652 in network mk-default-k8s-diff-port-024652
	I0729 19:43:53.198704 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | I0729 19:43:53.198613 1121645 retry.go:31] will retry after 305.319923ms: waiting for machine to come up
	I0729 19:43:53.505183 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:43:53.505680 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | unable to find current IP address of domain default-k8s-diff-port-024652 in network mk-default-k8s-diff-port-024652
	I0729 19:43:53.505711 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | I0729 19:43:53.505645 1121645 retry.go:31] will retry after 271.282913ms: waiting for machine to come up
	I0729 19:43:53.778388 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:43:53.778870 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | unable to find current IP address of domain default-k8s-diff-port-024652 in network mk-default-k8s-diff-port-024652
	I0729 19:43:53.778902 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | I0729 19:43:53.778815 1121645 retry.go:31] will retry after 407.395474ms: waiting for machine to come up
	I0729 19:43:54.187668 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:43:54.188110 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | unable to find current IP address of domain default-k8s-diff-port-024652 in network mk-default-k8s-diff-port-024652
	I0729 19:43:54.188135 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | I0729 19:43:54.188063 1121645 retry.go:31] will retry after 515.272845ms: waiting for machine to come up
	I0729 19:43:54.704843 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:43:54.705358 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | unable to find current IP address of domain default-k8s-diff-port-024652 in network mk-default-k8s-diff-port-024652
	I0729 19:43:54.705386 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | I0729 19:43:54.705310 1121645 retry.go:31] will retry after 509.684919ms: waiting for machine to come up
	I0729 19:43:55.217156 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:43:55.217667 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | unable to find current IP address of domain default-k8s-diff-port-024652 in network mk-default-k8s-diff-port-024652
	I0729 19:43:55.217698 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | I0729 19:43:55.217604 1121645 retry.go:31] will retry after 728.323851ms: waiting for machine to come up
	I0729 19:43:55.947597 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:43:55.948121 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | unable to find current IP address of domain default-k8s-diff-port-024652 in network mk-default-k8s-diff-port-024652
	I0729 19:43:55.948155 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | I0729 19:43:55.948059 1121645 retry.go:31] will retry after 957.165998ms: waiting for machine to come up
	I0729 19:43:57.178620 1120280 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.216195072s)
	I0729 19:43:57.178653 1120280 crio.go:469] duration metric: took 2.216329763s to extract the tarball
	I0729 19:43:57.178660 1120280 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0729 19:43:57.216574 1120280 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 19:43:57.258341 1120280 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 19:43:57.258366 1120280 cache_images.go:84] Images are preloaded, skipping loading
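Both before and after extracting the preload tarball, minikube lists images with `sudo crictl images --output json` to decide whether the expected Kubernetes images are present. A Go sketch of that check; the JSON field names follow crictl's output format, and the image tag is the one named in the log above:

```go
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// hasImage asks crictl for its image list and reports whether the given tag
// is already present. Run as root on a node with CRI-O for real results.
func hasImage(tag string) (bool, error) {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		return false, err
	}
	var list struct {
		Images []struct {
			RepoTags []string `json:"repoTags"`
		} `json:"images"`
	}
	if err := json.Unmarshal(out, &list); err != nil {
		return false, err
	}
	for _, img := range list.Images {
		for _, t := range img.RepoTags {
			if t == tag {
				return true, nil
			}
		}
	}
	return false, nil
}

func main() {
	ok, err := hasImage("registry.k8s.io/kube-apiserver:v1.30.3")
	fmt.Println(ok, err)
}
```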
	I0729 19:43:57.258376 1120280 kubeadm.go:934] updating node { 192.168.61.201 8443 v1.30.3 crio true true} ...
	I0729 19:43:57.258500 1120280 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-358053 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.201
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:embed-certs-358053 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
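The kubelet unit drop-in shown above varies only in the Kubernetes version, node name and node IP. A sketch of rendering it with text/template, trimmed to those three fields; the template literal is an approximation of the drop-in, not minikube's actual template:

```go
package main

import (
	"os"
	"text/template"
)

// An approximation of the kubelet systemd drop-in from the log, with the
// node-specific values pulled out as template fields.
const dropIn = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(dropIn))
	_ = t.Execute(os.Stdout, map[string]string{
		"KubernetesVersion": "v1.30.3",
		"NodeName":          "embed-certs-358053",
		"NodeIP":            "192.168.61.201",
	})
}
```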
	I0729 19:43:57.258563 1120280 ssh_runner.go:195] Run: crio config
	I0729 19:43:57.304755 1120280 cni.go:84] Creating CNI manager for ""
	I0729 19:43:57.304779 1120280 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 19:43:57.304793 1120280 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 19:43:57.304818 1120280 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.201 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-358053 NodeName:embed-certs-358053 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.201"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.201 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 19:43:57.304975 1120280 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.201
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-358053"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.201
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.201"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0729 19:43:57.305058 1120280 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0729 19:43:57.314803 1120280 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 19:43:57.314914 1120280 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 19:43:57.324133 1120280 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0729 19:43:57.339975 1120280 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 19:43:57.355571 1120280 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0729 19:43:57.371806 1120280 ssh_runner.go:195] Run: grep 192.168.61.201	control-plane.minikube.internal$ /etc/hosts
	I0729 19:43:57.375459 1120280 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.201	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 19:43:57.386809 1120280 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 19:43:57.520182 1120280 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 19:43:57.536218 1120280 certs.go:68] Setting up /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/embed-certs-358053 for IP: 192.168.61.201
	I0729 19:43:57.536243 1120280 certs.go:194] generating shared ca certs ...
	I0729 19:43:57.536266 1120280 certs.go:226] acquiring lock for ca certs: {Name:mkd1f0b3d7e82ac23e713dd6b75409e103935b02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 19:43:57.536463 1120280 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.key
	I0729 19:43:57.536525 1120280 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/proxy-client-ca.key
	I0729 19:43:57.536539 1120280 certs.go:256] generating profile certs ...
	I0729 19:43:57.536702 1120280 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/embed-certs-358053/client.key
	I0729 19:43:57.536777 1120280 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/embed-certs-358053/apiserver.key.05ccddd9
	I0729 19:43:57.536836 1120280 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/embed-certs-358053/proxy-client.key
	I0729 19:43:57.537011 1120280 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/1062272.pem (1338 bytes)
	W0729 19:43:57.537060 1120280 certs.go:480] ignoring /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/1062272_empty.pem, impossibly tiny 0 bytes
	I0729 19:43:57.537074 1120280 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 19:43:57.537109 1120280 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem (1082 bytes)
	I0729 19:43:57.537147 1120280 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/cert.pem (1123 bytes)
	I0729 19:43:57.537184 1120280 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/key.pem (1679 bytes)
	I0729 19:43:57.537257 1120280 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/files/etc/ssl/certs/10622722.pem (1708 bytes)
	I0729 19:43:57.538120 1120280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 19:43:57.579679 1120280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0729 19:43:57.610390 1120280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 19:43:57.646234 1120280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0729 19:43:57.680120 1120280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/embed-certs-358053/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0729 19:43:57.709780 1120280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/embed-certs-358053/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0729 19:43:57.737251 1120280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/embed-certs-358053/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 19:43:57.760519 1120280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/embed-certs-358053/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0729 19:43:57.782760 1120280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/1062272.pem --> /usr/share/ca-certificates/1062272.pem (1338 bytes)
	I0729 19:43:57.806628 1120280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/files/etc/ssl/certs/10622722.pem --> /usr/share/ca-certificates/10622722.pem (1708 bytes)
	I0729 19:43:57.831360 1120280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 19:43:57.855485 1120280 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 19:43:57.873493 1120280 ssh_runner.go:195] Run: openssl version
	I0729 19:43:57.879376 1120280 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 19:43:57.891126 1120280 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 19:43:57.895458 1120280 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 18:17 /usr/share/ca-certificates/minikubeCA.pem
	I0729 19:43:57.895501 1120280 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 19:43:57.901015 1120280 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 19:43:57.911165 1120280 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1062272.pem && ln -fs /usr/share/ca-certificates/1062272.pem /etc/ssl/certs/1062272.pem"
	I0729 19:43:57.921336 1120280 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1062272.pem
	I0729 19:43:57.925539 1120280 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 18:30 /usr/share/ca-certificates/1062272.pem
	I0729 19:43:57.925601 1120280 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1062272.pem
	I0729 19:43:57.930932 1120280 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1062272.pem /etc/ssl/certs/51391683.0"
	I0729 19:43:57.941138 1120280 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10622722.pem && ln -fs /usr/share/ca-certificates/10622722.pem /etc/ssl/certs/10622722.pem"
	I0729 19:43:57.951312 1120280 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10622722.pem
	I0729 19:43:57.955655 1120280 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 18:30 /usr/share/ca-certificates/10622722.pem
	I0729 19:43:57.955699 1120280 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10622722.pem
	I0729 19:43:57.961057 1120280 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10622722.pem /etc/ssl/certs/3ec20f2e.0"
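Each CA certificate is copied under /usr/share/ca-certificates and then linked into /etc/ssl/certs under its OpenSSL subject-hash name, which is what the `openssl x509 -hash` plus `ln -fs` pairs above do. A Go sketch of one such install step; it shells out to openssl rather than reimplementing the hash, and the paths are just the ones from the log:

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCACert installs certPath into the trust directory under its OpenSSL
// subject-hash name (<hash>.0), mirroring the openssl/ln pair in the log.
func linkCACert(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // replace any stale link, like `ln -fs`
	return os.Symlink(certPath, link)
}

func main() {
	err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs")
	fmt.Println(err)
}
```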
	I0729 19:43:57.972742 1120280 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 19:43:57.977115 1120280 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 19:43:57.982921 1120280 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 19:43:57.988708 1120280 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 19:43:57.994618 1120280 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 19:43:58.000330 1120280 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 19:43:58.006024 1120280 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
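The series of `openssl x509 -checkend 86400` runs verifies that none of the control-plane certificates expires within the next day. The same check can be written in pure Go with crypto/x509; this sketch uses a path and window taken from the log lines above:

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d,
// the equivalent of `openssl x509 -checkend` used above.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	fmt.Println(soon, err)
}
```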
	I0729 19:43:58.011547 1120280 kubeadm.go:392] StartCluster: {Name:embed-certs-358053 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30
.3 ClusterName:embed-certs-358053 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.201 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fal
se MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 19:43:58.011676 1120280 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 19:43:58.011740 1120280 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 19:43:58.053520 1120280 cri.go:89] found id: ""
	I0729 19:43:58.053606 1120280 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 19:43:58.063799 1120280 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0729 19:43:58.063820 1120280 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0729 19:43:58.063881 1120280 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0729 19:43:58.073374 1120280 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0729 19:43:58.074705 1120280 kubeconfig.go:125] found "embed-certs-358053" server: "https://192.168.61.201:8443"
	I0729 19:43:58.077590 1120280 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0729 19:43:58.086714 1120280 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.201
	I0729 19:43:58.086751 1120280 kubeadm.go:1160] stopping kube-system containers ...
	I0729 19:43:58.086761 1120280 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0729 19:43:58.086809 1120280 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 19:43:58.119740 1120280 cri.go:89] found id: ""
	I0729 19:43:58.119800 1120280 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0729 19:43:58.136919 1120280 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 19:43:58.146634 1120280 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 19:43:58.146655 1120280 kubeadm.go:157] found existing configuration files:
	
	I0729 19:43:58.146732 1120280 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 19:43:58.155526 1120280 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 19:43:58.155590 1120280 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 19:43:58.165016 1120280 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 19:43:58.173988 1120280 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 19:43:58.174042 1120280 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 19:43:58.183138 1120280 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 19:43:58.191680 1120280 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 19:43:58.191733 1120280 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 19:43:58.200557 1120280 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 19:43:58.209338 1120280 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 19:43:58.209390 1120280 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 19:43:58.218439 1120280 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 19:43:58.227653 1120280 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 19:43:58.340033 1120280 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 19:43:59.181947 1120280 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0729 19:43:59.381372 1120280 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 19:43:59.452293 1120280 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
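The five commands above restart the control plane phase by phase instead of running a full "kubeadm init". A minimal sketch of that sequence, assuming the kubeadm binary and config paths shown in the log (it is only meaningful when run inside the node VM):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	kubeadm := "/var/lib/minikube/binaries/v1.30.3/kubeadm"
	cfg := "/var/tmp/minikube/kubeadm.yaml"
	// Phases in the same order the log shows: certs, kubeconfigs, kubelet,
	// static control-plane manifests, then local etcd.
	phases := [][]string{
		{"init", "phase", "certs", "all"},
		{"init", "phase", "kubeconfig", "all"},
		{"init", "phase", "kubelet-start"},
		{"init", "phase", "control-plane", "all"},
		{"init", "phase", "etcd", "local"},
	}
	for _, p := range phases {
		args := append(p, "--config", cfg)
		cmd := exec.Command("sudo", append([]string{kubeadm}, args...)...)
		if out, err := cmd.CombinedOutput(); err != nil {
			fmt.Printf("%v failed: %v\n%s\n", p, err, out)
			return
		}
	}
	fmt.Println("control-plane phases completed")
}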
	I0729 19:43:59.570731 1120280 api_server.go:52] waiting for apiserver process to appear ...
	I0729 19:43:59.570823 1120280 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:43:56.907408 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:43:56.907923 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | unable to find current IP address of domain default-k8s-diff-port-024652 in network mk-default-k8s-diff-port-024652
	I0729 19:43:56.907953 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | I0729 19:43:56.907850 1121645 retry.go:31] will retry after 1.254959813s: waiting for machine to come up
	I0729 19:43:58.163969 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:43:58.164402 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | unable to find current IP address of domain default-k8s-diff-port-024652 in network mk-default-k8s-diff-port-024652
	I0729 19:43:58.164435 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | I0729 19:43:58.164335 1121645 retry.go:31] will retry after 1.194411522s: waiting for machine to come up
	I0729 19:43:59.360034 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:43:59.360409 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | unable to find current IP address of domain default-k8s-diff-port-024652 in network mk-default-k8s-diff-port-024652
	I0729 19:43:59.360444 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | I0729 19:43:59.360350 1121645 retry.go:31] will retry after 1.691293374s: waiting for machine to come up
	I0729 19:44:01.054480 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:44:01.054922 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | unable to find current IP address of domain default-k8s-diff-port-024652 in network mk-default-k8s-diff-port-024652
	I0729 19:44:01.054993 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | I0729 19:44:01.054899 1121645 retry.go:31] will retry after 2.655959151s: waiting for machine to come up
	I0729 19:44:00.071291 1120280 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:00.571192 1120280 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:01.071004 1120280 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:01.086646 1120280 api_server.go:72] duration metric: took 1.515912855s to wait for apiserver process to appear ...
	I0729 19:44:01.086683 1120280 api_server.go:88] waiting for apiserver healthz status ...
	I0729 19:44:01.086713 1120280 api_server.go:253] Checking apiserver healthz at https://192.168.61.201:8443/healthz ...
	I0729 19:44:01.087274 1120280 api_server.go:269] stopped: https://192.168.61.201:8443/healthz: Get "https://192.168.61.201:8443/healthz": dial tcp 192.168.61.201:8443: connect: connection refused
	I0729 19:44:01.587598 1120280 api_server.go:253] Checking apiserver healthz at https://192.168.61.201:8443/healthz ...
	I0729 19:44:03.986744 1120280 api_server.go:279] https://192.168.61.201:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 19:44:03.986799 1120280 api_server.go:103] status: https://192.168.61.201:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 19:44:03.986814 1120280 api_server.go:253] Checking apiserver healthz at https://192.168.61.201:8443/healthz ...
	I0729 19:44:04.029552 1120280 api_server.go:279] https://192.168.61.201:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 19:44:04.029601 1120280 api_server.go:103] status: https://192.168.61.201:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 19:44:04.087847 1120280 api_server.go:253] Checking apiserver healthz at https://192.168.61.201:8443/healthz ...
	I0729 19:44:04.093457 1120280 api_server.go:279] https://192.168.61.201:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 19:44:04.093489 1120280 api_server.go:103] status: https://192.168.61.201:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 19:44:04.586941 1120280 api_server.go:253] Checking apiserver healthz at https://192.168.61.201:8443/healthz ...
	I0729 19:44:04.609655 1120280 api_server.go:279] https://192.168.61.201:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 19:44:04.609700 1120280 api_server.go:103] status: https://192.168.61.201:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 19:44:05.087081 1120280 api_server.go:253] Checking apiserver healthz at https://192.168.61.201:8443/healthz ...
	I0729 19:44:05.095282 1120280 api_server.go:279] https://192.168.61.201:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 19:44:05.095311 1120280 api_server.go:103] status: https://192.168.61.201:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 19:44:05.587782 1120280 api_server.go:253] Checking apiserver healthz at https://192.168.61.201:8443/healthz ...
	I0729 19:44:05.593073 1120280 api_server.go:279] https://192.168.61.201:8443/healthz returned 200:
	ok
	I0729 19:44:05.599042 1120280 api_server.go:141] control plane version: v1.30.3
	I0729 19:44:05.599067 1120280 api_server.go:131] duration metric: took 4.512376511s to wait for apiserver health ...
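The polling above treats a 403 (anonymous access before the RBAC bootstrap roles exist) and a 500 (poststarthooks still failing) as "not ready yet" and only stops on a 200 whose body is "ok". A minimal sketch of that loop; TLS verification is skipped purely for illustration, whereas minikube's real client trusts the cluster CA:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it returns 200 "ok"
// or the timeout expires, roughly what the api_server.go lines above record.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond) // the log shows roughly half-second retry intervals
	}
	return fmt.Errorf("apiserver at %s not healthy within %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.61.201:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}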
	I0729 19:44:05.599076 1120280 cni.go:84] Creating CNI manager for ""
	I0729 19:44:05.599082 1120280 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 19:44:05.600932 1120280 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 19:44:03.713856 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:44:03.714306 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | unable to find current IP address of domain default-k8s-diff-port-024652 in network mk-default-k8s-diff-port-024652
	I0729 19:44:03.714363 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | I0729 19:44:03.714249 1121645 retry.go:31] will retry after 2.793831058s: waiting for machine to come up
	I0729 19:44:05.602066 1120280 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 19:44:05.612274 1120280 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
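The scp step above drops a bridge conflist into /etc/cni/net.d. The 496-byte file's contents are not shown in the log, so the conflist below is a generic bridge + portmap example written to /tmp, an assumption for illustration rather than minikube's actual template:

package main

import (
	"fmt"
	"os"
)

// A typical CNI conflist for the bridge plugin; subnet and options are assumed.
const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
`

func main() {
	if err := os.WriteFile("/tmp/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("wrote /tmp/1-k8s.conflist (a node would read it from /etc/cni/net.d/)")
}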
	I0729 19:44:05.633293 1120280 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 19:44:05.646103 1120280 system_pods.go:59] 8 kube-system pods found
	I0729 19:44:05.646143 1120280 system_pods.go:61] "coredns-7db6d8ff4d-q6jm9" [a0770baf-766d-4903-a21f-6a4c1b74fb9e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0729 19:44:05.646153 1120280 system_pods.go:61] "etcd-embed-certs-358053" [cc03bfb3-c1d6-480a-b169-599b7599a5d1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0729 19:44:05.646163 1120280 system_pods.go:61] "kube-apiserver-embed-certs-358053" [8c45c66a-c954-4a84-9639-68210ad51a53] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0729 19:44:05.646174 1120280 system_pods.go:61] "kube-controller-manager-embed-certs-358053" [70266c42-fa7c-4936-b256-1eea65c57669] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0729 19:44:05.646181 1120280 system_pods.go:61] "kube-proxy-lb7hb" [e542b623-3db2-4be0-adf1-669932e6ac3d] Running
	I0729 19:44:05.646193 1120280 system_pods.go:61] "kube-scheduler-embed-certs-358053" [be79c03d-1e5a-46f5-a43a-671c37dea7d0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0729 19:44:05.646201 1120280 system_pods.go:61] "metrics-server-569cc877fc-jsvnd" [0494cc85-12fa-4afa-ab39-5c1fafcc45f8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 19:44:05.646209 1120280 system_pods.go:61] "storage-provisioner" [493de5d9-e761-49cb-b5f0-17d116b1a985] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0729 19:44:05.646221 1120280 system_pods.go:74] duration metric: took 12.906683ms to wait for pod list to return data ...
	I0729 19:44:05.646231 1120280 node_conditions.go:102] verifying NodePressure condition ...
	I0729 19:44:05.653103 1120280 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 19:44:05.653131 1120280 node_conditions.go:123] node cpu capacity is 2
	I0729 19:44:05.653161 1120280 node_conditions.go:105] duration metric: took 6.923325ms to run NodePressure ...
	I0729 19:44:05.653187 1120280 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 19:44:05.916138 1120280 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0729 19:44:05.920383 1120280 kubeadm.go:739] kubelet initialised
	I0729 19:44:05.920402 1120280 kubeadm.go:740] duration metric: took 4.239377ms waiting for restarted kubelet to initialise ...
	I0729 19:44:05.920410 1120280 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 19:44:05.925752 1120280 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-q6jm9" in "kube-system" namespace to be "Ready" ...
	I0729 19:44:07.932667 1120280 pod_ready.go:102] pod "coredns-7db6d8ff4d-q6jm9" in "kube-system" namespace has status "Ready":"False"
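The pod_ready.go lines poll each system-critical pod's Ready condition. A rough equivalent using kubectl, assumed to be on PATH with the embed-certs-358053 context available, waiting on the coredns pod named in the log:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Block until the pod's Ready condition is True, or give up after 4 minutes,
	// matching the 4m0s wait the log reports.
	cmd := exec.Command("kubectl", "--context", "embed-certs-358053",
		"-n", "kube-system", "wait", "--for=condition=Ready",
		"pod/coredns-7db6d8ff4d-q6jm9", "--timeout=4m0s")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		fmt.Println("pod did not become Ready:", err)
	}
}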
	I0729 19:44:06.511186 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:44:06.511552 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | unable to find current IP address of domain default-k8s-diff-port-024652 in network mk-default-k8s-diff-port-024652
	I0729 19:44:06.511583 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | I0729 19:44:06.511497 1121645 retry.go:31] will retry after 3.610819354s: waiting for machine to come up
	I0729 19:44:10.126488 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:44:10.126889 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Found IP for machine: 192.168.72.100
	I0729 19:44:10.126914 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Reserving static IP address...
	I0729 19:44:10.126927 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has current primary IP address 192.168.72.100 and MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:44:10.127289 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Reserved static IP address: 192.168.72.100
	I0729 19:44:10.127313 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Waiting for SSH to be available...
	I0729 19:44:10.127342 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-024652", mac: "52:54:00:4c:73:cb", ip: "192.168.72.100"} in network mk-default-k8s-diff-port-024652: {Iface:virbr4 ExpiryTime:2024-07-29 20:44:03 +0000 UTC Type:0 Mac:52:54:00:4c:73:cb Iaid: IPaddr:192.168.72.100 Prefix:24 Hostname:default-k8s-diff-port-024652 Clientid:01:52:54:00:4c:73:cb}
	I0729 19:44:10.127390 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | skip adding static IP to network mk-default-k8s-diff-port-024652 - found existing host DHCP lease matching {name: "default-k8s-diff-port-024652", mac: "52:54:00:4c:73:cb", ip: "192.168.72.100"}
	I0729 19:44:10.127406 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | Getting to WaitForSSH function...
	I0729 19:44:10.129180 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:44:10.129499 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:73:cb", ip: ""} in network mk-default-k8s-diff-port-024652: {Iface:virbr4 ExpiryTime:2024-07-29 20:44:03 +0000 UTC Type:0 Mac:52:54:00:4c:73:cb Iaid: IPaddr:192.168.72.100 Prefix:24 Hostname:default-k8s-diff-port-024652 Clientid:01:52:54:00:4c:73:cb}
	I0729 19:44:10.129528 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined IP address 192.168.72.100 and MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:44:10.129613 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | Using SSH client type: external
	I0729 19:44:10.129633 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | Using SSH private key: /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/default-k8s-diff-port-024652/id_rsa (-rw-------)
	I0729 19:44:10.129676 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.100 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/default-k8s-diff-port-024652/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 19:44:10.129688 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | About to run SSH command:
	I0729 19:44:10.129700 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | exit 0
	I0729 19:44:10.254662 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | SSH cmd err, output: <nil>: 
	I0729 19:44:10.255021 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetConfigRaw
	I0729 19:44:10.255656 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetIP
	I0729 19:44:10.257855 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:44:10.258219 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:73:cb", ip: ""} in network mk-default-k8s-diff-port-024652: {Iface:virbr4 ExpiryTime:2024-07-29 20:44:03 +0000 UTC Type:0 Mac:52:54:00:4c:73:cb Iaid: IPaddr:192.168.72.100 Prefix:24 Hostname:default-k8s-diff-port-024652 Clientid:01:52:54:00:4c:73:cb}
	I0729 19:44:10.258251 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined IP address 192.168.72.100 and MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:44:10.258526 1120587 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/default-k8s-diff-port-024652/config.json ...
	I0729 19:44:10.258713 1120587 machine.go:94] provisionDockerMachine start ...
	I0729 19:44:10.258733 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .DriverName
	I0729 19:44:10.258968 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHHostname
	I0729 19:44:10.260864 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:44:10.261120 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:73:cb", ip: ""} in network mk-default-k8s-diff-port-024652: {Iface:virbr4 ExpiryTime:2024-07-29 20:44:03 +0000 UTC Type:0 Mac:52:54:00:4c:73:cb Iaid: IPaddr:192.168.72.100 Prefix:24 Hostname:default-k8s-diff-port-024652 Clientid:01:52:54:00:4c:73:cb}
	I0729 19:44:10.261149 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined IP address 192.168.72.100 and MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:44:10.261275 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHPort
	I0729 19:44:10.261460 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHKeyPath
	I0729 19:44:10.261635 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHKeyPath
	I0729 19:44:10.261778 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHUsername
	I0729 19:44:10.261932 1120587 main.go:141] libmachine: Using SSH client type: native
	I0729 19:44:10.262111 1120587 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.100 22 <nil> <nil>}
	I0729 19:44:10.262121 1120587 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 19:44:10.371225 1120587 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0729 19:44:10.371261 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetMachineName
	I0729 19:44:10.371516 1120587 buildroot.go:166] provisioning hostname "default-k8s-diff-port-024652"
	I0729 19:44:10.371545 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetMachineName
	I0729 19:44:10.371756 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHHostname
	I0729 19:44:10.374071 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:44:10.374356 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:73:cb", ip: ""} in network mk-default-k8s-diff-port-024652: {Iface:virbr4 ExpiryTime:2024-07-29 20:44:03 +0000 UTC Type:0 Mac:52:54:00:4c:73:cb Iaid: IPaddr:192.168.72.100 Prefix:24 Hostname:default-k8s-diff-port-024652 Clientid:01:52:54:00:4c:73:cb}
	I0729 19:44:10.374391 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined IP address 192.168.72.100 and MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:44:10.374479 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHPort
	I0729 19:44:10.374654 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHKeyPath
	I0729 19:44:10.374808 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHKeyPath
	I0729 19:44:10.374933 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHUsername
	I0729 19:44:10.375126 1120587 main.go:141] libmachine: Using SSH client type: native
	I0729 19:44:10.375324 1120587 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.100 22 <nil> <nil>}
	I0729 19:44:10.375338 1120587 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-024652 && echo "default-k8s-diff-port-024652" | sudo tee /etc/hostname
	I0729 19:44:10.499041 1120587 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-024652
	
	I0729 19:44:10.499075 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHHostname
	I0729 19:44:10.501635 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:44:10.501943 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:73:cb", ip: ""} in network mk-default-k8s-diff-port-024652: {Iface:virbr4 ExpiryTime:2024-07-29 20:44:03 +0000 UTC Type:0 Mac:52:54:00:4c:73:cb Iaid: IPaddr:192.168.72.100 Prefix:24 Hostname:default-k8s-diff-port-024652 Clientid:01:52:54:00:4c:73:cb}
	I0729 19:44:10.501973 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined IP address 192.168.72.100 and MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:44:10.502136 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHPort
	I0729 19:44:10.502318 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHKeyPath
	I0729 19:44:10.502494 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHKeyPath
	I0729 19:44:10.502669 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHUsername
	I0729 19:44:10.502826 1120587 main.go:141] libmachine: Using SSH client type: native
	I0729 19:44:10.503019 1120587 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.100 22 <nil> <nil>}
	I0729 19:44:10.503042 1120587 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-024652' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-024652/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-024652' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 19:44:10.619637 1120587 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 19:44:10.619673 1120587 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19312-1055011/.minikube CaCertPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19312-1055011/.minikube}
	I0729 19:44:10.619708 1120587 buildroot.go:174] setting up certificates
	I0729 19:44:10.619719 1120587 provision.go:84] configureAuth start
	I0729 19:44:10.619728 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetMachineName
	I0729 19:44:10.620036 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetIP
	I0729 19:44:10.622502 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:44:10.622810 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:73:cb", ip: ""} in network mk-default-k8s-diff-port-024652: {Iface:virbr4 ExpiryTime:2024-07-29 20:44:03 +0000 UTC Type:0 Mac:52:54:00:4c:73:cb Iaid: IPaddr:192.168.72.100 Prefix:24 Hostname:default-k8s-diff-port-024652 Clientid:01:52:54:00:4c:73:cb}
	I0729 19:44:10.622841 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined IP address 192.168.72.100 and MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:44:10.622932 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHHostname
	I0729 19:44:10.625181 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:44:10.625508 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:73:cb", ip: ""} in network mk-default-k8s-diff-port-024652: {Iface:virbr4 ExpiryTime:2024-07-29 20:44:03 +0000 UTC Type:0 Mac:52:54:00:4c:73:cb Iaid: IPaddr:192.168.72.100 Prefix:24 Hostname:default-k8s-diff-port-024652 Clientid:01:52:54:00:4c:73:cb}
	I0729 19:44:10.625531 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined IP address 192.168.72.100 and MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:44:10.625681 1120587 provision.go:143] copyHostCerts
	I0729 19:44:10.625743 1120587 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.pem, removing ...
	I0729 19:44:10.625755 1120587 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.pem
	I0729 19:44:10.625825 1120587 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.pem (1082 bytes)
	I0729 19:44:10.625929 1120587 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-1055011/.minikube/cert.pem, removing ...
	I0729 19:44:10.625937 1120587 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-1055011/.minikube/cert.pem
	I0729 19:44:10.625960 1120587 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19312-1055011/.minikube/cert.pem (1123 bytes)
	I0729 19:44:10.626015 1120587 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-1055011/.minikube/key.pem, removing ...
	I0729 19:44:10.626021 1120587 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-1055011/.minikube/key.pem
	I0729 19:44:10.626042 1120587 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19312-1055011/.minikube/key.pem (1679 bytes)
	I0729 19:44:10.626089 1120587 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-024652 san=[127.0.0.1 192.168.72.100 default-k8s-diff-port-024652 localhost minikube]
	I0729 19:44:10.750576 1120587 provision.go:177] copyRemoteCerts
	I0729 19:44:10.750651 1120587 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 19:44:10.750713 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHHostname
	I0729 19:44:10.753390 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:44:10.753745 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:73:cb", ip: ""} in network mk-default-k8s-diff-port-024652: {Iface:virbr4 ExpiryTime:2024-07-29 20:44:03 +0000 UTC Type:0 Mac:52:54:00:4c:73:cb Iaid: IPaddr:192.168.72.100 Prefix:24 Hostname:default-k8s-diff-port-024652 Clientid:01:52:54:00:4c:73:cb}
	I0729 19:44:10.753791 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined IP address 192.168.72.100 and MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:44:10.753942 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHPort
	I0729 19:44:10.754149 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHKeyPath
	I0729 19:44:10.754330 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHUsername
	I0729 19:44:10.754514 1120587 sshutil.go:53] new ssh client: &{IP:192.168.72.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/default-k8s-diff-port-024652/id_rsa Username:docker}
	I0729 19:44:10.836524 1120587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0729 19:44:10.861913 1120587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0729 19:44:10.885539 1120587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0729 19:44:10.909851 1120587 provision.go:87] duration metric: took 290.118473ms to configureAuth
	I0729 19:44:10.909880 1120587 buildroot.go:189] setting minikube options for container-runtime
	I0729 19:44:10.910051 1120587 config.go:182] Loaded profile config "default-k8s-diff-port-024652": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 19:44:10.910127 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHHostname
	I0729 19:44:10.912662 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:44:10.912962 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:73:cb", ip: ""} in network mk-default-k8s-diff-port-024652: {Iface:virbr4 ExpiryTime:2024-07-29 20:44:03 +0000 UTC Type:0 Mac:52:54:00:4c:73:cb Iaid: IPaddr:192.168.72.100 Prefix:24 Hostname:default-k8s-diff-port-024652 Clientid:01:52:54:00:4c:73:cb}
	I0729 19:44:10.912993 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined IP address 192.168.72.100 and MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:44:10.913224 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHPort
	I0729 19:44:10.913429 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHKeyPath
	I0729 19:44:10.913601 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHKeyPath
	I0729 19:44:10.913744 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHUsername
	I0729 19:44:10.913882 1120587 main.go:141] libmachine: Using SSH client type: native
	I0729 19:44:10.914096 1120587 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.100 22 <nil> <nil>}
	I0729 19:44:10.914112 1120587 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 19:44:11.419483 1120970 start.go:364] duration metric: took 3m13.335541366s to acquireMachinesLock for "old-k8s-version-021528"
	I0729 19:44:11.419549 1120970 start.go:96] Skipping create...Using existing machine configuration
	I0729 19:44:11.419560 1120970 fix.go:54] fixHost starting: 
	I0729 19:44:11.419981 1120970 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:44:11.420020 1120970 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:44:11.437552 1120970 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44419
	I0729 19:44:11.437927 1120970 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:44:11.438424 1120970 main.go:141] libmachine: Using API Version  1
	I0729 19:44:11.438449 1120970 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:44:11.438787 1120970 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:44:11.438995 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .DriverName
	I0729 19:44:11.439201 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetState
	I0729 19:44:11.440476 1120970 fix.go:112] recreateIfNeeded on old-k8s-version-021528: state=Stopped err=<nil>
	I0729 19:44:11.440514 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .DriverName
	W0729 19:44:11.440692 1120970 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 19:44:11.442528 1120970 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-021528" ...
	I0729 19:44:11.181850 1120587 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 19:44:11.181877 1120587 machine.go:97] duration metric: took 923.15162ms to provisionDockerMachine
	I0729 19:44:11.181889 1120587 start.go:293] postStartSetup for "default-k8s-diff-port-024652" (driver="kvm2")
	I0729 19:44:11.181899 1120587 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 19:44:11.181914 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .DriverName
	I0729 19:44:11.182289 1120587 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 19:44:11.182322 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHHostname
	I0729 19:44:11.185275 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:44:11.185761 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:73:cb", ip: ""} in network mk-default-k8s-diff-port-024652: {Iface:virbr4 ExpiryTime:2024-07-29 20:44:03 +0000 UTC Type:0 Mac:52:54:00:4c:73:cb Iaid: IPaddr:192.168.72.100 Prefix:24 Hostname:default-k8s-diff-port-024652 Clientid:01:52:54:00:4c:73:cb}
	I0729 19:44:11.185791 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined IP address 192.168.72.100 and MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:44:11.186002 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHPort
	I0729 19:44:11.186282 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHKeyPath
	I0729 19:44:11.186467 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHUsername
	I0729 19:44:11.186620 1120587 sshutil.go:53] new ssh client: &{IP:192.168.72.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/default-k8s-diff-port-024652/id_rsa Username:docker}
	I0729 19:44:11.268993 1120587 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 19:44:11.273072 1120587 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 19:44:11.273093 1120587 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-1055011/.minikube/addons for local assets ...
	I0729 19:44:11.273161 1120587 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-1055011/.minikube/files for local assets ...
	I0729 19:44:11.273244 1120587 filesync.go:149] local asset: /home/jenkins/minikube-integration/19312-1055011/.minikube/files/etc/ssl/certs/10622722.pem -> 10622722.pem in /etc/ssl/certs
	I0729 19:44:11.273353 1120587 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 19:44:11.282258 1120587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/files/etc/ssl/certs/10622722.pem --> /etc/ssl/certs/10622722.pem (1708 bytes)
	I0729 19:44:11.305957 1120587 start.go:296] duration metric: took 124.053991ms for postStartSetup
	I0729 19:44:11.305998 1120587 fix.go:56] duration metric: took 19.39022657s for fixHost
	I0729 19:44:11.306024 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHHostname
	I0729 19:44:11.308452 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:44:11.308881 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:73:cb", ip: ""} in network mk-default-k8s-diff-port-024652: {Iface:virbr4 ExpiryTime:2024-07-29 20:44:03 +0000 UTC Type:0 Mac:52:54:00:4c:73:cb Iaid: IPaddr:192.168.72.100 Prefix:24 Hostname:default-k8s-diff-port-024652 Clientid:01:52:54:00:4c:73:cb}
	I0729 19:44:11.308902 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined IP address 192.168.72.100 and MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:44:11.309099 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHPort
	I0729 19:44:11.309321 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHKeyPath
	I0729 19:44:11.309507 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHKeyPath
	I0729 19:44:11.309646 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHUsername
	I0729 19:44:11.309836 1120587 main.go:141] libmachine: Using SSH client type: native
	I0729 19:44:11.310009 1120587 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.100 22 <nil> <nil>}
	I0729 19:44:11.310021 1120587 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0729 19:44:11.419338 1120587 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722282251.371238734
	
	I0729 19:44:11.419359 1120587 fix.go:216] guest clock: 1722282251.371238734
	I0729 19:44:11.419366 1120587 fix.go:229] Guest: 2024-07-29 19:44:11.371238734 +0000 UTC Remote: 2024-07-29 19:44:11.306004097 +0000 UTC m=+255.178971379 (delta=65.234637ms)
	I0729 19:44:11.419386 1120587 fix.go:200] guest clock delta is within tolerance: 65.234637ms
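The delta above is simply the guest's date output (seconds.nanoseconds) minus the host-side timestamp: 1722282251.371238734 - 1722282251.306004097 ≈ 65.23 ms. A small sketch of that comparison; the tolerance value is an assumption, since the log does not show the threshold fix.go applies:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Both instants are taken verbatim from the log lines above.
	guest := time.Unix(1722282251, 371238734)                              // guest clock over SSH
	remote := time.Date(2024, time.July, 29, 19, 44, 11, 306004097, time.UTC) // host-side wall clock
	delta := guest.Sub(remote)
	if delta < 0 {
		delta = -delta
	}
	fmt.Printf("guest clock delta: %v\n", delta) // ~65.234637ms, matching the log
	const tolerance = time.Second // assumed threshold; only a skew above it would force a clock resync
	fmt.Println("within tolerance:", delta <= tolerance)
}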
	I0729 19:44:11.419394 1120587 start.go:83] releasing machines lock for "default-k8s-diff-port-024652", held for 19.503660828s
	I0729 19:44:11.419418 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .DriverName
	I0729 19:44:11.419749 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetIP
	I0729 19:44:11.422054 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:44:11.422377 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:73:cb", ip: ""} in network mk-default-k8s-diff-port-024652: {Iface:virbr4 ExpiryTime:2024-07-29 20:44:03 +0000 UTC Type:0 Mac:52:54:00:4c:73:cb Iaid: IPaddr:192.168.72.100 Prefix:24 Hostname:default-k8s-diff-port-024652 Clientid:01:52:54:00:4c:73:cb}
	I0729 19:44:11.422421 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined IP address 192.168.72.100 and MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:44:11.422552 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .DriverName
	I0729 19:44:11.423087 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .DriverName
	I0729 19:44:11.423284 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .DriverName
	I0729 19:44:11.423358 1120587 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 19:44:11.423410 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHHostname
	I0729 19:44:11.423511 1120587 ssh_runner.go:195] Run: cat /version.json
	I0729 19:44:11.423540 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHHostname
	I0729 19:44:11.426070 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:44:11.426323 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:44:11.426440 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:73:cb", ip: ""} in network mk-default-k8s-diff-port-024652: {Iface:virbr4 ExpiryTime:2024-07-29 20:44:03 +0000 UTC Type:0 Mac:52:54:00:4c:73:cb Iaid: IPaddr:192.168.72.100 Prefix:24 Hostname:default-k8s-diff-port-024652 Clientid:01:52:54:00:4c:73:cb}
	I0729 19:44:11.426471 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined IP address 192.168.72.100 and MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:44:11.426579 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHPort
	I0729 19:44:11.426774 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHKeyPath
	I0729 19:44:11.426918 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHUsername
	I0729 19:44:11.426957 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:73:cb", ip: ""} in network mk-default-k8s-diff-port-024652: {Iface:virbr4 ExpiryTime:2024-07-29 20:44:03 +0000 UTC Type:0 Mac:52:54:00:4c:73:cb Iaid: IPaddr:192.168.72.100 Prefix:24 Hostname:default-k8s-diff-port-024652 Clientid:01:52:54:00:4c:73:cb}
	I0729 19:44:11.426981 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined IP address 192.168.72.100 and MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:44:11.427069 1120587 sshutil.go:53] new ssh client: &{IP:192.168.72.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/default-k8s-diff-port-024652/id_rsa Username:docker}
	I0729 19:44:11.427176 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHPort
	I0729 19:44:11.427343 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHKeyPath
	I0729 19:44:11.427534 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHUsername
	I0729 19:44:11.427700 1120587 sshutil.go:53] new ssh client: &{IP:192.168.72.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/default-k8s-diff-port-024652/id_rsa Username:docker}
	I0729 19:44:11.536440 1120587 ssh_runner.go:195] Run: systemctl --version
	I0729 19:44:11.542493 1120587 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 19:44:11.688795 1120587 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 19:44:11.696783 1120587 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 19:44:11.696855 1120587 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 19:44:11.717067 1120587 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 19:44:11.717091 1120587 start.go:495] detecting cgroup driver to use...
	I0729 19:44:11.717157 1120587 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 19:44:11.735056 1120587 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 19:44:11.748999 1120587 docker.go:217] disabling cri-docker service (if available) ...
	I0729 19:44:11.749061 1120587 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 19:44:11.764244 1120587 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 19:44:11.778072 1120587 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 19:44:11.893008 1120587 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 19:44:12.053939 1120587 docker.go:233] disabling docker service ...
	I0729 19:44:12.054035 1120587 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 19:44:12.068666 1120587 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 19:44:12.085766 1120587 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 19:44:12.232278 1120587 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 19:44:12.356403 1120587 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 19:44:12.370085 1120587 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 19:44:12.388817 1120587 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0729 19:44:12.388879 1120587 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:44:12.399945 1120587 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 19:44:12.400017 1120587 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:44:12.410117 1120587 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:44:12.422162 1120587 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:44:12.433170 1120587 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 19:44:12.444386 1120587 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:44:12.455009 1120587 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:44:12.472279 1120587 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
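
The sed invocations above edit /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image to registry.k8s.io/pause:3.9, switch cgroup_manager to cgroupfs, and adjust the conmon/sysctl settings. A rough Go equivalent of the first two substitutions, assuming the same line-oriented config layout (only the image name, cgroup driver, and patterns come from the log; the rest is an illustrative sketch, not minikube's code):

package main

import (
	"fmt"
	"regexp"
	"strings"
)

// rewriteCrioConf applies the same substitutions the first two sed commands
// perform: pin the pause image and force the cgroupfs cgroup manager.
func rewriteCrioConf(conf string) string {
	pause := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	cgroup := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
	conf = pause.ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
	conf = cgroup.ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	return conf
}

func main() {
	in := strings.Join([]string{
		`pause_image = "registry.k8s.io/pause:3.6"`,
		`cgroup_manager = "systemd"`,
	}, "\n")
	fmt.Println(rewriteCrioConf(in))
}
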
	I0729 19:44:12.482431 1120587 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 19:44:12.492028 1120587 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 19:44:12.492097 1120587 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 19:44:12.505966 1120587 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 19:44:12.515505 1120587 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 19:44:12.639691 1120587 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 19:44:12.781358 1120587 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 19:44:12.781427 1120587 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 19:44:12.786218 1120587 start.go:563] Will wait 60s for crictl version
	I0729 19:44:12.786312 1120587 ssh_runner.go:195] Run: which crictl
	I0729 19:44:12.790056 1120587 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 19:44:12.830355 1120587 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 19:44:12.830451 1120587 ssh_runner.go:195] Run: crio --version
	I0729 19:44:12.859119 1120587 ssh_runner.go:195] Run: crio --version
	I0729 19:44:12.892473 1120587 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0729 19:44:11.443772 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .Start
	I0729 19:44:11.443926 1120970 main.go:141] libmachine: (old-k8s-version-021528) Ensuring networks are active...
	I0729 19:44:11.444570 1120970 main.go:141] libmachine: (old-k8s-version-021528) Ensuring network default is active
	I0729 19:44:11.444890 1120970 main.go:141] libmachine: (old-k8s-version-021528) Ensuring network mk-old-k8s-version-021528 is active
	I0729 19:44:11.445234 1120970 main.go:141] libmachine: (old-k8s-version-021528) Getting domain xml...
	I0729 19:44:11.445994 1120970 main.go:141] libmachine: (old-k8s-version-021528) Creating domain...
	I0729 19:44:12.696734 1120970 main.go:141] libmachine: (old-k8s-version-021528) Waiting to get IP...
	I0729 19:44:12.697599 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:12.697967 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | unable to find current IP address of domain old-k8s-version-021528 in network mk-old-k8s-version-021528
	I0729 19:44:12.698075 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | I0729 19:44:12.697953 1121841 retry.go:31] will retry after 228.228482ms: waiting for machine to come up
	I0729 19:44:12.927713 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:12.928250 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | unable to find current IP address of domain old-k8s-version-021528 in network mk-old-k8s-version-021528
	I0729 19:44:12.928280 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | I0729 19:44:12.928204 1121841 retry.go:31] will retry after 241.659418ms: waiting for machine to come up
	I0729 19:44:10.432255 1120280 pod_ready.go:102] pod "coredns-7db6d8ff4d-q6jm9" in "kube-system" namespace has status "Ready":"False"
	I0729 19:44:12.932761 1120280 pod_ready.go:102] pod "coredns-7db6d8ff4d-q6jm9" in "kube-system" namespace has status "Ready":"False"
	I0729 19:44:14.934282 1120280 pod_ready.go:102] pod "coredns-7db6d8ff4d-q6jm9" in "kube-system" namespace has status "Ready":"False"
	I0729 19:44:12.893725 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetIP
	I0729 19:44:12.897014 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:44:12.897401 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:73:cb", ip: ""} in network mk-default-k8s-diff-port-024652: {Iface:virbr4 ExpiryTime:2024-07-29 20:44:03 +0000 UTC Type:0 Mac:52:54:00:4c:73:cb Iaid: IPaddr:192.168.72.100 Prefix:24 Hostname:default-k8s-diff-port-024652 Clientid:01:52:54:00:4c:73:cb}
	I0729 19:44:12.897431 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined IP address 192.168.72.100 and MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:44:12.897621 1120587 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0729 19:44:12.902155 1120587 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 19:44:12.915460 1120587 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-024652 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-024652 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.100 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 19:44:12.915581 1120587 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 19:44:12.915631 1120587 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 19:44:12.956377 1120587 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0729 19:44:12.956444 1120587 ssh_runner.go:195] Run: which lz4
	I0729 19:44:12.960415 1120587 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0729 19:44:12.964785 1120587 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0729 19:44:12.964819 1120587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0729 19:44:14.422427 1120587 crio.go:462] duration metric: took 1.462052598s to copy over tarball
	I0729 19:44:14.422514 1120587 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0729 19:44:13.171713 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:13.172206 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | unable to find current IP address of domain old-k8s-version-021528 in network mk-old-k8s-version-021528
	I0729 19:44:13.172234 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | I0729 19:44:13.172165 1121841 retry.go:31] will retry after 475.69466ms: waiting for machine to come up
	I0729 19:44:13.649741 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:13.650180 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | unable to find current IP address of domain old-k8s-version-021528 in network mk-old-k8s-version-021528
	I0729 19:44:13.650210 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | I0729 19:44:13.650126 1121841 retry.go:31] will retry after 556.03832ms: waiting for machine to come up
	I0729 19:44:14.207549 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:14.208045 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | unable to find current IP address of domain old-k8s-version-021528 in network mk-old-k8s-version-021528
	I0729 19:44:14.208080 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | I0729 19:44:14.207996 1121841 retry.go:31] will retry after 699.802636ms: waiting for machine to come up
	I0729 19:44:14.909153 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:14.909708 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | unable to find current IP address of domain old-k8s-version-021528 in network mk-old-k8s-version-021528
	I0729 19:44:14.909736 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | I0729 19:44:14.909677 1121841 retry.go:31] will retry after 756.053302ms: waiting for machine to come up
	I0729 19:44:15.667015 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:15.667487 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | unable to find current IP address of domain old-k8s-version-021528 in network mk-old-k8s-version-021528
	I0729 19:44:15.667518 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | I0729 19:44:15.667434 1121841 retry.go:31] will retry after 729.442111ms: waiting for machine to come up
	I0729 19:44:16.398540 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:16.399139 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | unable to find current IP address of domain old-k8s-version-021528 in network mk-old-k8s-version-021528
	I0729 19:44:16.399191 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | I0729 19:44:16.399060 1121841 retry.go:31] will retry after 1.131574034s: waiting for machine to come up
	I0729 19:44:17.531966 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:17.532448 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | unable to find current IP address of domain old-k8s-version-021528 in network mk-old-k8s-version-021528
	I0729 19:44:17.532480 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | I0729 19:44:17.532380 1121841 retry.go:31] will retry after 1.546547994s: waiting for machine to come up
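
The repeated "will retry after ...: waiting for machine to come up" lines are a polling loop: the kvm2 driver asks libvirt for the domain's DHCP lease and, while none exists, sleeps for a growing, slightly randomized interval before trying again. A simplified sketch of that pattern follows; the lookup function is a stand-in, not the driver's real lease query.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitForIP polls lookup until it returns an address or attempts run out,
// sleeping for a growing, jittered interval between tries.
func waitForIP(lookup func() (string, error), attempts int) (string, error) {
	delay := 200 * time.Millisecond
	for i := 0; i < attempts; i++ {
		if ip, err := lookup(); err == nil {
			return ip, nil
		}
		jitter := time.Duration(rand.Int63n(int64(delay) / 2))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", delay+jitter)
		time.Sleep(delay + jitter)
		delay *= 2
	}
	return "", errors.New("machine never reported an IP address")
}

func main() {
	calls := 0
	ip, err := waitForIP(func() (string, error) {
		calls++
		if calls < 4 {
			return "", errors.New("no DHCP lease yet")
		}
		return "192.168.72.100", nil
	}, 10)
	fmt.Println(ip, err)
}
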
	I0729 19:44:15.433310 1120280 pod_ready.go:92] pod "coredns-7db6d8ff4d-q6jm9" in "kube-system" namespace has status "Ready":"True"
	I0729 19:44:15.433336 1120280 pod_ready.go:81] duration metric: took 9.507558167s for pod "coredns-7db6d8ff4d-q6jm9" in "kube-system" namespace to be "Ready" ...
	I0729 19:44:15.433353 1120280 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-358053" in "kube-system" namespace to be "Ready" ...
	I0729 19:44:15.438725 1120280 pod_ready.go:92] pod "etcd-embed-certs-358053" in "kube-system" namespace has status "Ready":"True"
	I0729 19:44:15.438747 1120280 pod_ready.go:81] duration metric: took 5.385786ms for pod "etcd-embed-certs-358053" in "kube-system" namespace to be "Ready" ...
	I0729 19:44:15.438758 1120280 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-358053" in "kube-system" namespace to be "Ready" ...
	I0729 19:44:15.444196 1120280 pod_ready.go:92] pod "kube-apiserver-embed-certs-358053" in "kube-system" namespace has status "Ready":"True"
	I0729 19:44:15.444214 1120280 pod_ready.go:81] duration metric: took 5.447798ms for pod "kube-apiserver-embed-certs-358053" in "kube-system" namespace to be "Ready" ...
	I0729 19:44:15.444228 1120280 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-358053" in "kube-system" namespace to be "Ready" ...
	I0729 19:44:16.452748 1120280 pod_ready.go:92] pod "kube-controller-manager-embed-certs-358053" in "kube-system" namespace has status "Ready":"True"
	I0729 19:44:16.452772 1120280 pod_ready.go:81] duration metric: took 1.00853566s for pod "kube-controller-manager-embed-certs-358053" in "kube-system" namespace to be "Ready" ...
	I0729 19:44:16.452784 1120280 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-lb7hb" in "kube-system" namespace to be "Ready" ...
	I0729 19:44:16.458635 1120280 pod_ready.go:92] pod "kube-proxy-lb7hb" in "kube-system" namespace has status "Ready":"True"
	I0729 19:44:16.458653 1120280 pod_ready.go:81] duration metric: took 5.862242ms for pod "kube-proxy-lb7hb" in "kube-system" namespace to be "Ready" ...
	I0729 19:44:16.458662 1120280 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-358053" in "kube-system" namespace to be "Ready" ...
	I0729 19:44:16.631200 1120280 pod_ready.go:92] pod "kube-scheduler-embed-certs-358053" in "kube-system" namespace has status "Ready":"True"
	I0729 19:44:16.631229 1120280 pod_ready.go:81] duration metric: took 172.559322ms for pod "kube-scheduler-embed-certs-358053" in "kube-system" namespace to be "Ready" ...
	I0729 19:44:16.631242 1120280 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace to be "Ready" ...
	I0729 19:44:18.638680 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
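
The pod_ready.go lines implement the usual "wait until the pod's Ready condition is True" loop against kube-system. A compact client-go sketch of the same check; the kubeconfig path is a placeholder, and this shows the general pattern rather than minikube's own helper.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's Ready condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "etcd-embed-certs-358053", metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to become Ready")
}
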
	I0729 19:44:16.739626 1120587 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.317075688s)
	I0729 19:44:16.739689 1120587 crio.go:469] duration metric: took 2.317215237s to extract the tarball
	I0729 19:44:16.739702 1120587 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0729 19:44:16.777698 1120587 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 19:44:16.825740 1120587 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 19:44:16.825768 1120587 cache_images.go:84] Images are preloaded, skipping loading
	I0729 19:44:16.825777 1120587 kubeadm.go:934] updating node { 192.168.72.100 8444 v1.30.3 crio true true} ...
	I0729 19:44:16.825933 1120587 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-024652 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.100
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-024652 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 19:44:16.826030 1120587 ssh_runner.go:195] Run: crio config
	I0729 19:44:16.873727 1120587 cni.go:84] Creating CNI manager for ""
	I0729 19:44:16.873752 1120587 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 19:44:16.873764 1120587 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 19:44:16.873791 1120587 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.100 APIServerPort:8444 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-024652 NodeName:default-k8s-diff-port-024652 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.100"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.100 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 19:44:16.873929 1120587 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.100
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-024652"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.100
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.100"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0729 19:44:16.873990 1120587 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0729 19:44:16.884036 1120587 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 19:44:16.884126 1120587 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 19:44:16.893332 1120587 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0729 19:44:16.911950 1120587 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 19:44:16.930305 1120587 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0729 19:44:16.948353 1120587 ssh_runner.go:195] Run: grep 192.168.72.100	control-plane.minikube.internal$ /etc/hosts
	I0729 19:44:16.952431 1120587 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.100	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
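
The bash one-liner above rebuilds /etc/hosts non-interactively: drop any existing line for control-plane.minikube.internal, append the current IP mapping, and copy the result back over the original. The same idea expressed in Go; the IP and hostname come from the log, everything else is an illustrative sketch.

package main

import (
	"fmt"
	"os"
	"strings"
)

// upsertHostsEntry removes any line already ending in "\t"+host and appends
// a fresh "ip\thost" mapping, mirroring the grep/echo/cp pipeline above.
func upsertHostsEntry(contents, ip, host string) string {
	var kept []string
	for _, line := range strings.Split(contents, "\n") {
		if strings.HasSuffix(line, "\t"+host) {
			continue
		}
		kept = append(kept, line)
	}
	return strings.Join(kept, "\n") + fmt.Sprintf("\n%s\t%s\n", ip, host)
}

func main() {
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	fmt.Print(upsertHostsEntry(strings.TrimRight(string(data), "\n"), "192.168.72.100", "control-plane.minikube.internal"))
}
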
	I0729 19:44:16.964743 1120587 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 19:44:17.072244 1120587 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 19:44:17.088224 1120587 certs.go:68] Setting up /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/default-k8s-diff-port-024652 for IP: 192.168.72.100
	I0729 19:44:17.088256 1120587 certs.go:194] generating shared ca certs ...
	I0729 19:44:17.088280 1120587 certs.go:226] acquiring lock for ca certs: {Name:mkd1f0b3d7e82ac23e713dd6b75409e103935b02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 19:44:17.088482 1120587 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.key
	I0729 19:44:17.088563 1120587 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/proxy-client-ca.key
	I0729 19:44:17.088579 1120587 certs.go:256] generating profile certs ...
	I0729 19:44:17.088738 1120587 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/default-k8s-diff-port-024652/client.key
	I0729 19:44:17.088823 1120587 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/default-k8s-diff-port-024652/apiserver.key.4c9c937f
	I0729 19:44:17.088876 1120587 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/default-k8s-diff-port-024652/proxy-client.key
	I0729 19:44:17.089049 1120587 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/1062272.pem (1338 bytes)
	W0729 19:44:17.089093 1120587 certs.go:480] ignoring /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/1062272_empty.pem, impossibly tiny 0 bytes
	I0729 19:44:17.089109 1120587 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 19:44:17.089135 1120587 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem (1082 bytes)
	I0729 19:44:17.089156 1120587 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/cert.pem (1123 bytes)
	I0729 19:44:17.089180 1120587 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/key.pem (1679 bytes)
	I0729 19:44:17.089218 1120587 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/files/etc/ssl/certs/10622722.pem (1708 bytes)
	I0729 19:44:17.089954 1120587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 19:44:17.144094 1120587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0729 19:44:17.191515 1120587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 19:44:17.220210 1120587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0729 19:44:17.252381 1120587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/default-k8s-diff-port-024652/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0729 19:44:17.291881 1120587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/default-k8s-diff-port-024652/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0729 19:44:17.334114 1120587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/default-k8s-diff-port-024652/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 19:44:17.363726 1120587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/default-k8s-diff-port-024652/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0729 19:44:17.389190 1120587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 19:44:17.413683 1120587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/1062272.pem --> /usr/share/ca-certificates/1062272.pem (1338 bytes)
	I0729 19:44:17.441739 1120587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/files/etc/ssl/certs/10622722.pem --> /usr/share/ca-certificates/10622722.pem (1708 bytes)
	I0729 19:44:17.472609 1120587 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 19:44:17.489059 1120587 ssh_runner.go:195] Run: openssl version
	I0729 19:44:17.495020 1120587 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 19:44:17.507133 1120587 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 19:44:17.511759 1120587 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 18:17 /usr/share/ca-certificates/minikubeCA.pem
	I0729 19:44:17.511850 1120587 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 19:44:17.518120 1120587 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 19:44:17.528867 1120587 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1062272.pem && ln -fs /usr/share/ca-certificates/1062272.pem /etc/ssl/certs/1062272.pem"
	I0729 19:44:17.539695 1120587 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1062272.pem
	I0729 19:44:17.544063 1120587 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 18:30 /usr/share/ca-certificates/1062272.pem
	I0729 19:44:17.544113 1120587 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1062272.pem
	I0729 19:44:17.549785 1120587 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1062272.pem /etc/ssl/certs/51391683.0"
	I0729 19:44:17.560562 1120587 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10622722.pem && ln -fs /usr/share/ca-certificates/10622722.pem /etc/ssl/certs/10622722.pem"
	I0729 19:44:17.573597 1120587 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10622722.pem
	I0729 19:44:17.578089 1120587 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 18:30 /usr/share/ca-certificates/10622722.pem
	I0729 19:44:17.578137 1120587 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10622722.pem
	I0729 19:44:17.583614 1120587 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10622722.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 19:44:17.594903 1120587 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 19:44:17.599449 1120587 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 19:44:17.605325 1120587 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 19:44:17.611495 1120587 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 19:44:17.617663 1120587 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 19:44:17.623715 1120587 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 19:44:17.629845 1120587 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
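
Each `openssl x509 ... -checkend 86400` run above asks one question: does this certificate expire within the next 24 hours? The equivalent test with Go's crypto/x509 is sketched below; the path is one of the certs from the log, and minikube itself shells out to openssl rather than using this code.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM-encoded certificate at path expires
// before now+window, i.e. what `openssl x509 -checkend <seconds>` checks.
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	expiring, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	fmt.Println(expiring, err)
}
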
	I0729 19:44:17.637607 1120587 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-024652 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-024652 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.100 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 19:44:17.637725 1120587 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 19:44:17.637778 1120587 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 19:44:17.685777 1120587 cri.go:89] found id: ""
	I0729 19:44:17.685877 1120587 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 19:44:17.703296 1120587 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0729 19:44:17.703320 1120587 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0729 19:44:17.703387 1120587 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0729 19:44:17.715928 1120587 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0729 19:44:17.717371 1120587 kubeconfig.go:125] found "default-k8s-diff-port-024652" server: "https://192.168.72.100:8444"
	I0729 19:44:17.720536 1120587 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0729 19:44:17.732125 1120587 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.100
	I0729 19:44:17.732165 1120587 kubeadm.go:1160] stopping kube-system containers ...
	I0729 19:44:17.732207 1120587 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0729 19:44:17.732284 1120587 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 19:44:17.786419 1120587 cri.go:89] found id: ""
	I0729 19:44:17.786502 1120587 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0729 19:44:17.804866 1120587 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 19:44:17.815092 1120587 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 19:44:17.815113 1120587 kubeadm.go:157] found existing configuration files:
	
	I0729 19:44:17.815189 1120587 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0729 19:44:17.824963 1120587 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 19:44:17.825020 1120587 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 19:44:17.835349 1120587 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0729 19:44:17.846227 1120587 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 19:44:17.846290 1120587 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 19:44:17.859231 1120587 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0729 19:44:17.870794 1120587 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 19:44:17.870883 1120587 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 19:44:17.882317 1120587 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0729 19:44:17.891702 1120587 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 19:44:17.891757 1120587 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 19:44:17.901153 1120587 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 19:44:17.911253 1120587 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 19:44:18.040695 1120587 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 19:44:19.054689 1120587 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.013955991s)
	I0729 19:44:19.054724 1120587 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0729 19:44:19.255112 1120587 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 19:44:19.346186 1120587 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0729 19:44:19.462795 1120587 api_server.go:52] waiting for apiserver process to appear ...
	I0729 19:44:19.462938 1120587 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:19.963927 1120587 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:20.463691 1120587 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:20.504478 1120587 api_server.go:72] duration metric: took 1.041683096s to wait for apiserver process to appear ...
	I0729 19:44:20.504523 1120587 api_server.go:88] waiting for apiserver healthz status ...
	I0729 19:44:20.504552 1120587 api_server.go:253] Checking apiserver healthz at https://192.168.72.100:8444/healthz ...
	I0729 19:44:20.505202 1120587 api_server.go:269] stopped: https://192.168.72.100:8444/healthz: Get "https://192.168.72.100:8444/healthz": dial tcp 192.168.72.100:8444: connect: connection refused
	I0729 19:44:21.004771 1120587 api_server.go:253] Checking apiserver healthz at https://192.168.72.100:8444/healthz ...
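
The healthz wait above (and continuing below) keeps GETing https://192.168.72.100:8444/healthz until it answers 200, treating connection refusals, 403s (anonymous access while RBAC bootstraps), and 500s from the poststarthook checks as "not ready yet". A stripped-down sketch of that loop, with TLS verification skipped purely for illustration:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it returns 200
// or the deadline passes. Anything else (refused connection, 403, 500) just
// means "try again".
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The apiserver cert is signed by minikube's own CA; verification is
		// skipped here only to keep the sketch self-contained.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver at %s never became healthy", url)
}

func main() {
	fmt.Println(waitForHealthz("https://192.168.72.100:8444/healthz", 2*time.Minute))
}
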
	I0729 19:44:19.081196 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:19.081719 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | unable to find current IP address of domain old-k8s-version-021528 in network mk-old-k8s-version-021528
	I0729 19:44:19.081749 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | I0729 19:44:19.081668 1121841 retry.go:31] will retry after 2.079913941s: waiting for machine to come up
	I0729 19:44:21.163461 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:21.163980 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | unable to find current IP address of domain old-k8s-version-021528 in network mk-old-k8s-version-021528
	I0729 19:44:21.164066 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | I0729 19:44:21.163965 1121841 retry.go:31] will retry after 2.355802923s: waiting for machine to come up
	I0729 19:44:20.638745 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:44:22.638835 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:44:23.789983 1120587 api_server.go:279] https://192.168.72.100:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 19:44:23.790018 1120587 api_server.go:103] status: https://192.168.72.100:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 19:44:23.790033 1120587 api_server.go:253] Checking apiserver healthz at https://192.168.72.100:8444/healthz ...
	I0729 19:44:23.843047 1120587 api_server.go:279] https://192.168.72.100:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 19:44:23.843090 1120587 api_server.go:103] status: https://192.168.72.100:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 19:44:24.005370 1120587 api_server.go:253] Checking apiserver healthz at https://192.168.72.100:8444/healthz ...
	I0729 19:44:24.009941 1120587 api_server.go:279] https://192.168.72.100:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 19:44:24.009973 1120587 api_server.go:103] status: https://192.168.72.100:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 19:44:24.505118 1120587 api_server.go:253] Checking apiserver healthz at https://192.168.72.100:8444/healthz ...
	I0729 19:44:24.512838 1120587 api_server.go:279] https://192.168.72.100:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 19:44:24.512874 1120587 api_server.go:103] status: https://192.168.72.100:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 19:44:25.005014 1120587 api_server.go:253] Checking apiserver healthz at https://192.168.72.100:8444/healthz ...
	I0729 19:44:25.023222 1120587 api_server.go:279] https://192.168.72.100:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 19:44:25.023264 1120587 api_server.go:103] status: https://192.168.72.100:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 19:44:25.504748 1120587 api_server.go:253] Checking apiserver healthz at https://192.168.72.100:8444/healthz ...
	I0729 19:44:25.511449 1120587 api_server.go:279] https://192.168.72.100:8444/healthz returned 200:
	ok
	I0729 19:44:25.521987 1120587 api_server.go:141] control plane version: v1.30.3
	I0729 19:44:25.522018 1120587 api_server.go:131] duration metric: took 5.017487159s to wait for apiserver health ...
	I0729 19:44:25.522029 1120587 cni.go:84] Creating CNI manager for ""
	I0729 19:44:25.522038 1120587 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 19:44:25.523778 1120587 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 19:44:25.524925 1120587 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 19:44:25.541108 1120587 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0729 19:44:25.564225 1120587 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 19:44:25.574600 1120587 system_pods.go:59] 8 kube-system pods found
	I0729 19:44:25.574643 1120587 system_pods.go:61] "coredns-7db6d8ff4d-8mccr" [ce2eb102-1016-4a2d-8dee-561920c01b5a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0729 19:44:25.574664 1120587 system_pods.go:61] "etcd-default-k8s-diff-port-024652" [f3c68e2f-7cef-4afc-bd26-3705afd16f01] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0729 19:44:25.574676 1120587 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-024652" [656786e6-4ca6-45dc-9274-89ca8540c707] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0729 19:44:25.574697 1120587 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-024652" [10b805dd-238a-49a8-8c3f-1c31004d56dd] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0729 19:44:25.574710 1120587 system_pods.go:61] "kube-proxy-l4g78" [c24c5bc0-131b-4d02-a0f1-d398723292eb] Running
	I0729 19:44:25.574717 1120587 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-024652" [5bb2daf3-9a22-4f80-95b6-ded3c31e872e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0729 19:44:25.574725 1120587 system_pods.go:61] "metrics-server-569cc877fc-bvkv6" [247c5a96-5bb3-4174-9219-a96591f53cbb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 19:44:25.574734 1120587 system_pods.go:61] "storage-provisioner" [a4f216b0-055a-4305-a93f-910a9a10e725] Running
	I0729 19:44:25.574744 1120587 system_pods.go:74] duration metric: took 10.494475ms to wait for pod list to return data ...
	I0729 19:44:25.574757 1120587 node_conditions.go:102] verifying NodePressure condition ...
	I0729 19:44:25.577735 1120587 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 19:44:25.577757 1120587 node_conditions.go:123] node cpu capacity is 2
	I0729 19:44:25.577778 1120587 node_conditions.go:105] duration metric: took 3.012688ms to run NodePressure ...
	I0729 19:44:25.577795 1120587 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 19:44:25.851094 1120587 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0729 19:44:25.860023 1120587 kubeadm.go:739] kubelet initialised
	I0729 19:44:25.860050 1120587 kubeadm.go:740] duration metric: took 8.921765ms waiting for restarted kubelet to initialise ...
	I0729 19:44:25.860062 1120587 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 19:44:25.867130 1120587 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-8mccr" in "kube-system" namespace to be "Ready" ...
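	(The 500 responses recorded above are expected while two apiserver post-start hooks, rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes, finish; the wait in api_server.go simply re-polls /healthz until it returns 200, which happens at 19:44:25.511. A minimal stand-alone sketch of that kind of poll, written as hypothetical shell commands rather than minikube's actual code, assuming curl on the host and the endpoint shown in the log:

		# Re-check the apiserver health endpoint every 0.5s, giving up after ~60s.
		for i in $(seq 1 120); do
		  code=$(curl -sk -o /dev/null -w '%{http_code}' https://192.168.72.100:8444/healthz)
		  [ "$code" = "200" ] && echo "apiserver healthy" && break
		  sleep 0.5
		done)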
	I0729 19:44:23.523186 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:23.523741 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | unable to find current IP address of domain old-k8s-version-021528 in network mk-old-k8s-version-021528
	I0729 19:44:23.523783 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | I0729 19:44:23.523684 1121841 retry.go:31] will retry after 2.899059572s: waiting for machine to come up
	I0729 19:44:26.426805 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:26.427211 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | unable to find current IP address of domain old-k8s-version-021528 in network mk-old-k8s-version-021528
	I0729 19:44:26.427267 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | I0729 19:44:26.427152 1121841 retry.go:31] will retry after 3.723478189s: waiting for machine to come up
	I0729 19:44:25.138056 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:44:27.139419 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:44:29.638107 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:44:27.872221 1120587 pod_ready.go:102] pod "coredns-7db6d8ff4d-8mccr" in "kube-system" namespace has status "Ready":"False"
	I0729 19:44:29.873611 1120587 pod_ready.go:102] pod "coredns-7db6d8ff4d-8mccr" in "kube-system" namespace has status "Ready":"False"
	I0729 19:44:31.571895 1119948 start.go:364] duration metric: took 55.319517148s to acquireMachinesLock for "no-preload-843792"
	I0729 19:44:31.571969 1119948 start.go:96] Skipping create...Using existing machine configuration
	I0729 19:44:31.571988 1119948 fix.go:54] fixHost starting: 
	I0729 19:44:31.572421 1119948 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:44:31.572460 1119948 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:44:31.589868 1119948 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43017
	I0729 19:44:31.590253 1119948 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:44:31.590725 1119948 main.go:141] libmachine: Using API Version  1
	I0729 19:44:31.590752 1119948 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:44:31.591088 1119948 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:44:31.591274 1119948 main.go:141] libmachine: (no-preload-843792) Calling .DriverName
	I0729 19:44:31.591398 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetState
	I0729 19:44:31.592878 1119948 fix.go:112] recreateIfNeeded on no-preload-843792: state=Stopped err=<nil>
	I0729 19:44:31.592905 1119948 main.go:141] libmachine: (no-preload-843792) Calling .DriverName
	W0729 19:44:31.593054 1119948 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 19:44:31.594713 1119948 out.go:177] * Restarting existing kvm2 VM for "no-preload-843792" ...
	I0729 19:44:30.152545 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:30.153061 1120970 main.go:141] libmachine: (old-k8s-version-021528) Found IP for machine: 192.168.39.65
	I0729 19:44:30.153088 1120970 main.go:141] libmachine: (old-k8s-version-021528) Reserving static IP address...
	I0729 19:44:30.153101 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has current primary IP address 192.168.39.65 and MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:30.153518 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | found host DHCP lease matching {name: "old-k8s-version-021528", mac: "52:54:00:12:c7:d2", ip: "192.168.39.65"} in network mk-old-k8s-version-021528: {Iface:virbr1 ExpiryTime:2024-07-29 20:44:23 +0000 UTC Type:0 Mac:52:54:00:12:c7:d2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:old-k8s-version-021528 Clientid:01:52:54:00:12:c7:d2}
	I0729 19:44:30.153547 1120970 main.go:141] libmachine: (old-k8s-version-021528) Reserved static IP address: 192.168.39.65
	I0729 19:44:30.153567 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | skip adding static IP to network mk-old-k8s-version-021528 - found existing host DHCP lease matching {name: "old-k8s-version-021528", mac: "52:54:00:12:c7:d2", ip: "192.168.39.65"}
	I0729 19:44:30.153606 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | Getting to WaitForSSH function...
	I0729 19:44:30.153646 1120970 main.go:141] libmachine: (old-k8s-version-021528) Waiting for SSH to be available...
	I0729 19:44:30.155687 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:30.155938 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:c7:d2", ip: ""} in network mk-old-k8s-version-021528: {Iface:virbr1 ExpiryTime:2024-07-29 20:44:23 +0000 UTC Type:0 Mac:52:54:00:12:c7:d2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:old-k8s-version-021528 Clientid:01:52:54:00:12:c7:d2}
	I0729 19:44:30.155968 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined IP address 192.168.39.65 and MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:30.156104 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | Using SSH client type: external
	I0729 19:44:30.156126 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | Using SSH private key: /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/old-k8s-version-021528/id_rsa (-rw-------)
	I0729 19:44:30.156157 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.65 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/old-k8s-version-021528/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 19:44:30.156170 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | About to run SSH command:
	I0729 19:44:30.156179 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | exit 0
	I0729 19:44:30.286787 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | SSH cmd err, output: <nil>: 
	I0729 19:44:30.287161 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetConfigRaw
	I0729 19:44:30.287816 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetIP
	I0729 19:44:30.290268 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:30.290614 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:c7:d2", ip: ""} in network mk-old-k8s-version-021528: {Iface:virbr1 ExpiryTime:2024-07-29 20:44:23 +0000 UTC Type:0 Mac:52:54:00:12:c7:d2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:old-k8s-version-021528 Clientid:01:52:54:00:12:c7:d2}
	I0729 19:44:30.290645 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined IP address 192.168.39.65 and MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:30.290866 1120970 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/old-k8s-version-021528/config.json ...
	I0729 19:44:30.291054 1120970 machine.go:94] provisionDockerMachine start ...
	I0729 19:44:30.291074 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .DriverName
	I0729 19:44:30.291307 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHHostname
	I0729 19:44:30.293399 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:30.293729 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:c7:d2", ip: ""} in network mk-old-k8s-version-021528: {Iface:virbr1 ExpiryTime:2024-07-29 20:44:23 +0000 UTC Type:0 Mac:52:54:00:12:c7:d2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:old-k8s-version-021528 Clientid:01:52:54:00:12:c7:d2}
	I0729 19:44:30.293759 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined IP address 192.168.39.65 and MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:30.293872 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHPort
	I0729 19:44:30.294064 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHKeyPath
	I0729 19:44:30.294228 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHKeyPath
	I0729 19:44:30.294362 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHUsername
	I0729 19:44:30.294510 1120970 main.go:141] libmachine: Using SSH client type: native
	I0729 19:44:30.294729 1120970 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.65 22 <nil> <nil>}
	I0729 19:44:30.294741 1120970 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 19:44:30.406918 1120970 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0729 19:44:30.406947 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetMachineName
	I0729 19:44:30.407214 1120970 buildroot.go:166] provisioning hostname "old-k8s-version-021528"
	I0729 19:44:30.407256 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetMachineName
	I0729 19:44:30.407478 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHHostname
	I0729 19:44:30.410077 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:30.410396 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:c7:d2", ip: ""} in network mk-old-k8s-version-021528: {Iface:virbr1 ExpiryTime:2024-07-29 20:44:23 +0000 UTC Type:0 Mac:52:54:00:12:c7:d2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:old-k8s-version-021528 Clientid:01:52:54:00:12:c7:d2}
	I0729 19:44:30.410421 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined IP address 192.168.39.65 and MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:30.410586 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHPort
	I0729 19:44:30.410766 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHKeyPath
	I0729 19:44:30.410932 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHKeyPath
	I0729 19:44:30.411068 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHUsername
	I0729 19:44:30.411245 1120970 main.go:141] libmachine: Using SSH client type: native
	I0729 19:44:30.411488 1120970 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.65 22 <nil> <nil>}
	I0729 19:44:30.411503 1120970 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-021528 && echo "old-k8s-version-021528" | sudo tee /etc/hostname
	I0729 19:44:30.541004 1120970 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-021528
	
	I0729 19:44:30.541037 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHHostname
	I0729 19:44:30.543946 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:30.544343 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:c7:d2", ip: ""} in network mk-old-k8s-version-021528: {Iface:virbr1 ExpiryTime:2024-07-29 20:44:23 +0000 UTC Type:0 Mac:52:54:00:12:c7:d2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:old-k8s-version-021528 Clientid:01:52:54:00:12:c7:d2}
	I0729 19:44:30.544372 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined IP address 192.168.39.65 and MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:30.544503 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHPort
	I0729 19:44:30.544694 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHKeyPath
	I0729 19:44:30.544856 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHKeyPath
	I0729 19:44:30.545032 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHUsername
	I0729 19:44:30.545233 1120970 main.go:141] libmachine: Using SSH client type: native
	I0729 19:44:30.545409 1120970 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.65 22 <nil> <nil>}
	I0729 19:44:30.545424 1120970 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-021528' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-021528/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-021528' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 19:44:30.665246 1120970 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 19:44:30.665281 1120970 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19312-1055011/.minikube CaCertPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19312-1055011/.minikube}
	I0729 19:44:30.665317 1120970 buildroot.go:174] setting up certificates
	I0729 19:44:30.665328 1120970 provision.go:84] configureAuth start
	I0729 19:44:30.665339 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetMachineName
	I0729 19:44:30.665621 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetIP
	I0729 19:44:30.668162 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:30.668540 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:c7:d2", ip: ""} in network mk-old-k8s-version-021528: {Iface:virbr1 ExpiryTime:2024-07-29 20:44:23 +0000 UTC Type:0 Mac:52:54:00:12:c7:d2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:old-k8s-version-021528 Clientid:01:52:54:00:12:c7:d2}
	I0729 19:44:30.668568 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined IP address 192.168.39.65 and MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:30.668743 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHHostname
	I0729 19:44:30.670898 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:30.671447 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:c7:d2", ip: ""} in network mk-old-k8s-version-021528: {Iface:virbr1 ExpiryTime:2024-07-29 20:44:23 +0000 UTC Type:0 Mac:52:54:00:12:c7:d2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:old-k8s-version-021528 Clientid:01:52:54:00:12:c7:d2}
	I0729 19:44:30.671471 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined IP address 192.168.39.65 and MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:30.671618 1120970 provision.go:143] copyHostCerts
	I0729 19:44:30.671691 1120970 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-1055011/.minikube/cert.pem, removing ...
	I0729 19:44:30.671710 1120970 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-1055011/.minikube/cert.pem
	I0729 19:44:30.671790 1120970 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19312-1055011/.minikube/cert.pem (1123 bytes)
	I0729 19:44:30.671907 1120970 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-1055011/.minikube/key.pem, removing ...
	I0729 19:44:30.671917 1120970 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-1055011/.minikube/key.pem
	I0729 19:44:30.671953 1120970 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19312-1055011/.minikube/key.pem (1679 bytes)
	I0729 19:44:30.672043 1120970 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.pem, removing ...
	I0729 19:44:30.672052 1120970 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.pem
	I0729 19:44:30.672085 1120970 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.pem (1082 bytes)
	I0729 19:44:30.672166 1120970 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-021528 san=[127.0.0.1 192.168.39.65 localhost minikube old-k8s-version-021528]
	I0729 19:44:30.888016 1120970 provision.go:177] copyRemoteCerts
	I0729 19:44:30.888072 1120970 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 19:44:30.888115 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHHostname
	I0729 19:44:30.890739 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:30.891115 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:c7:d2", ip: ""} in network mk-old-k8s-version-021528: {Iface:virbr1 ExpiryTime:2024-07-29 20:44:23 +0000 UTC Type:0 Mac:52:54:00:12:c7:d2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:old-k8s-version-021528 Clientid:01:52:54:00:12:c7:d2}
	I0729 19:44:30.891148 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined IP address 192.168.39.65 and MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:30.891288 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHPort
	I0729 19:44:30.891499 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHKeyPath
	I0729 19:44:30.891689 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHUsername
	I0729 19:44:30.891862 1120970 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/old-k8s-version-021528/id_rsa Username:docker}
	I0729 19:44:30.976898 1120970 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0729 19:44:31.000793 1120970 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0729 19:44:31.024837 1120970 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0729 19:44:31.048325 1120970 provision.go:87] duration metric: took 382.981006ms to configureAuth
	I0729 19:44:31.048358 1120970 buildroot.go:189] setting minikube options for container-runtime
	I0729 19:44:31.048560 1120970 config.go:182] Loaded profile config "old-k8s-version-021528": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0729 19:44:31.048640 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHHostname
	I0729 19:44:31.051230 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:31.051576 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:c7:d2", ip: ""} in network mk-old-k8s-version-021528: {Iface:virbr1 ExpiryTime:2024-07-29 20:44:23 +0000 UTC Type:0 Mac:52:54:00:12:c7:d2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:old-k8s-version-021528 Clientid:01:52:54:00:12:c7:d2}
	I0729 19:44:31.051605 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined IP address 192.168.39.65 and MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:31.051754 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHPort
	I0729 19:44:31.051994 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHKeyPath
	I0729 19:44:31.052191 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHKeyPath
	I0729 19:44:31.052368 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHUsername
	I0729 19:44:31.052568 1120970 main.go:141] libmachine: Using SSH client type: native
	I0729 19:44:31.052828 1120970 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.65 22 <nil> <nil>}
	I0729 19:44:31.052853 1120970 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 19:44:31.320227 1120970 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 19:44:31.320259 1120970 machine.go:97] duration metric: took 1.0291903s to provisionDockerMachine
	I0729 19:44:31.320276 1120970 start.go:293] postStartSetup for "old-k8s-version-021528" (driver="kvm2")
	I0729 19:44:31.320297 1120970 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 19:44:31.320335 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .DriverName
	I0729 19:44:31.320669 1120970 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 19:44:31.320702 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHHostname
	I0729 19:44:31.323379 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:31.323774 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:c7:d2", ip: ""} in network mk-old-k8s-version-021528: {Iface:virbr1 ExpiryTime:2024-07-29 20:44:23 +0000 UTC Type:0 Mac:52:54:00:12:c7:d2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:old-k8s-version-021528 Clientid:01:52:54:00:12:c7:d2}
	I0729 19:44:31.323807 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined IP address 192.168.39.65 and MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:31.323903 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHPort
	I0729 19:44:31.324112 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHKeyPath
	I0729 19:44:31.324291 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHUsername
	I0729 19:44:31.324431 1120970 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/old-k8s-version-021528/id_rsa Username:docker}
	I0729 19:44:31.415208 1120970 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 19:44:31.419884 1120970 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 19:44:31.419911 1120970 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-1055011/.minikube/addons for local assets ...
	I0729 19:44:31.419981 1120970 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-1055011/.minikube/files for local assets ...
	I0729 19:44:31.420093 1120970 filesync.go:149] local asset: /home/jenkins/minikube-integration/19312-1055011/.minikube/files/etc/ssl/certs/10622722.pem -> 10622722.pem in /etc/ssl/certs
	I0729 19:44:31.420214 1120970 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 19:44:31.431055 1120970 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/files/etc/ssl/certs/10622722.pem --> /etc/ssl/certs/10622722.pem (1708 bytes)
	I0729 19:44:31.454082 1120970 start.go:296] duration metric: took 133.793908ms for postStartSetup
	I0729 19:44:31.454120 1120970 fix.go:56] duration metric: took 20.034560069s for fixHost
	I0729 19:44:31.454147 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHHostname
	I0729 19:44:31.456757 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:31.457097 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:c7:d2", ip: ""} in network mk-old-k8s-version-021528: {Iface:virbr1 ExpiryTime:2024-07-29 20:44:23 +0000 UTC Type:0 Mac:52:54:00:12:c7:d2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:old-k8s-version-021528 Clientid:01:52:54:00:12:c7:d2}
	I0729 19:44:31.457130 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined IP address 192.168.39.65 and MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:31.457284 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHPort
	I0729 19:44:31.457528 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHKeyPath
	I0729 19:44:31.457737 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHKeyPath
	I0729 19:44:31.457853 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHUsername
	I0729 19:44:31.458027 1120970 main.go:141] libmachine: Using SSH client type: native
	I0729 19:44:31.458189 1120970 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.65 22 <nil> <nil>}
	I0729 19:44:31.458199 1120970 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0729 19:44:31.571713 1120970 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722282271.544930204
	
	I0729 19:44:31.571744 1120970 fix.go:216] guest clock: 1722282271.544930204
	I0729 19:44:31.571758 1120970 fix.go:229] Guest: 2024-07-29 19:44:31.544930204 +0000 UTC Remote: 2024-07-29 19:44:31.454125155 +0000 UTC m=+213.509073295 (delta=90.805049ms)
	I0729 19:44:31.571785 1120970 fix.go:200] guest clock delta is within tolerance: 90.805049ms
	I0729 19:44:31.571791 1120970 start.go:83] releasing machines lock for "old-k8s-version-021528", held for 20.152267504s
	I0729 19:44:31.571817 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .DriverName
	I0729 19:44:31.572102 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetIP
	I0729 19:44:31.575385 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:31.575790 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:c7:d2", ip: ""} in network mk-old-k8s-version-021528: {Iface:virbr1 ExpiryTime:2024-07-29 20:44:23 +0000 UTC Type:0 Mac:52:54:00:12:c7:d2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:old-k8s-version-021528 Clientid:01:52:54:00:12:c7:d2}
	I0729 19:44:31.575815 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined IP address 192.168.39.65 and MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:31.576012 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .DriverName
	I0729 19:44:31.576508 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .DriverName
	I0729 19:44:31.576692 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .DriverName
	I0729 19:44:31.576786 1120970 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 19:44:31.576828 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHHostname
	I0729 19:44:31.576918 1120970 ssh_runner.go:195] Run: cat /version.json
	I0729 19:44:31.576940 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHHostname
	I0729 19:44:31.579737 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:31.579994 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:31.580091 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:c7:d2", ip: ""} in network mk-old-k8s-version-021528: {Iface:virbr1 ExpiryTime:2024-07-29 20:44:23 +0000 UTC Type:0 Mac:52:54:00:12:c7:d2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:old-k8s-version-021528 Clientid:01:52:54:00:12:c7:d2}
	I0729 19:44:31.580130 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined IP address 192.168.39.65 and MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:31.580379 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:c7:d2", ip: ""} in network mk-old-k8s-version-021528: {Iface:virbr1 ExpiryTime:2024-07-29 20:44:23 +0000 UTC Type:0 Mac:52:54:00:12:c7:d2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:old-k8s-version-021528 Clientid:01:52:54:00:12:c7:d2}
	I0729 19:44:31.580409 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined IP address 192.168.39.65 and MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:31.580418 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHPort
	I0729 19:44:31.580577 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHPort
	I0729 19:44:31.580661 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHKeyPath
	I0729 19:44:31.580838 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHKeyPath
	I0729 19:44:31.580879 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHUsername
	I0729 19:44:31.581025 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHUsername
	I0729 19:44:31.581021 1120970 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/old-k8s-version-021528/id_rsa Username:docker}
	I0729 19:44:31.581164 1120970 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/old-k8s-version-021528/id_rsa Username:docker}
	I0729 19:44:31.682902 1120970 ssh_runner.go:195] Run: systemctl --version
	I0729 19:44:31.688675 1120970 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 19:44:31.836374 1120970 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 19:44:31.844215 1120970 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 19:44:31.844275 1120970 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 19:44:31.864647 1120970 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 19:44:31.864671 1120970 start.go:495] detecting cgroup driver to use...
	I0729 19:44:31.864744 1120970 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 19:44:31.881197 1120970 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 19:44:31.895022 1120970 docker.go:217] disabling cri-docker service (if available) ...
	I0729 19:44:31.895085 1120970 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 19:44:31.908584 1120970 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 19:44:31.922321 1120970 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 19:44:32.039427 1120970 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 19:44:32.203236 1120970 docker.go:233] disabling docker service ...
	I0729 19:44:32.203335 1120970 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 19:44:32.217523 1120970 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 19:44:32.236065 1120970 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 19:44:32.355769 1120970 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 19:44:32.473160 1120970 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 19:44:32.486314 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 19:44:32.504270 1120970 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0729 19:44:32.504359 1120970 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:44:32.514928 1120970 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 19:44:32.514995 1120970 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:44:32.528822 1120970 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:44:32.543599 1120970 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:44:32.555853 1120970 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 19:44:32.568184 1120970 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 19:44:32.577443 1120970 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 19:44:32.577580 1120970 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 19:44:32.590636 1120970 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 19:44:32.600995 1120970 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 19:44:32.739544 1120970 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 19:44:32.886433 1120970 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 19:44:32.886507 1120970 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 19:44:32.892072 1120970 start.go:563] Will wait 60s for crictl version
	I0729 19:44:32.892137 1120970 ssh_runner.go:195] Run: which crictl
	I0729 19:44:32.896003 1120970 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 19:44:32.939843 1120970 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 19:44:32.939934 1120970 ssh_runner.go:195] Run: crio --version
	I0729 19:44:32.968301 1120970 ssh_runner.go:195] Run: crio --version
	I0729 19:44:32.995612 1120970 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
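	(The CRI-O preparation above rewrites /etc/crio/crio.conf.d/02-crio.conf so the runtime uses registry.k8s.io/pause:3.2 as the pause image and the cgroupfs cgroup manager with conmon_cgroup = "pod", then restarts crio. An illustrative way to confirm the drop-in on the node, assuming only the paths shown in the log and not quoting minikube itself:

		# Values the sed edits at 19:44:32 are expected to leave in the drop-in file.
		grep -E '^(pause_image|cgroup_manager|conmon_cgroup)' /etc/crio/crio.conf.d/02-crio.conf
		# Confirm the runtime came back after the restart.
		sudo systemctl is-active crio)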
	I0729 19:44:31.595855 1119948 main.go:141] libmachine: (no-preload-843792) Calling .Start
	I0729 19:44:31.596024 1119948 main.go:141] libmachine: (no-preload-843792) Ensuring networks are active...
	I0729 19:44:31.596802 1119948 main.go:141] libmachine: (no-preload-843792) Ensuring network default is active
	I0729 19:44:31.597159 1119948 main.go:141] libmachine: (no-preload-843792) Ensuring network mk-no-preload-843792 is active
	I0729 19:44:31.597570 1119948 main.go:141] libmachine: (no-preload-843792) Getting domain xml...
	I0729 19:44:31.598244 1119948 main.go:141] libmachine: (no-preload-843792) Creating domain...
	I0729 19:44:32.903649 1119948 main.go:141] libmachine: (no-preload-843792) Waiting to get IP...
	I0729 19:44:32.904535 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:32.905024 1119948 main.go:141] libmachine: (no-preload-843792) DBG | unable to find current IP address of domain no-preload-843792 in network mk-no-preload-843792
	I0729 19:44:32.905113 1119948 main.go:141] libmachine: (no-preload-843792) DBG | I0729 19:44:32.904992 1122027 retry.go:31] will retry after 213.578895ms: waiting for machine to come up
	I0729 19:44:33.120474 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:33.120922 1119948 main.go:141] libmachine: (no-preload-843792) DBG | unable to find current IP address of domain no-preload-843792 in network mk-no-preload-843792
	I0729 19:44:33.121007 1119948 main.go:141] libmachine: (no-preload-843792) DBG | I0729 19:44:33.120907 1122027 retry.go:31] will retry after 265.999253ms: waiting for machine to come up
	I0729 19:44:33.388577 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:33.389007 1119948 main.go:141] libmachine: (no-preload-843792) DBG | unable to find current IP address of domain no-preload-843792 in network mk-no-preload-843792
	I0729 19:44:33.389026 1119948 main.go:141] libmachine: (no-preload-843792) DBG | I0729 19:44:33.388967 1122027 retry.go:31] will retry after 393.491378ms: waiting for machine to come up
	I0729 19:44:31.639857 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:44:34.139327 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:44:31.874661 1120587 pod_ready.go:102] pod "coredns-7db6d8ff4d-8mccr" in "kube-system" namespace has status "Ready":"False"
	I0729 19:44:33.875758 1120587 pod_ready.go:102] pod "coredns-7db6d8ff4d-8mccr" in "kube-system" namespace has status "Ready":"False"
	I0729 19:44:35.875952 1120587 pod_ready.go:102] pod "coredns-7db6d8ff4d-8mccr" in "kube-system" namespace has status "Ready":"False"
	I0729 19:44:32.996971 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetIP
	I0729 19:44:33.000232 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:33.000668 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:c7:d2", ip: ""} in network mk-old-k8s-version-021528: {Iface:virbr1 ExpiryTime:2024-07-29 20:44:23 +0000 UTC Type:0 Mac:52:54:00:12:c7:d2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:old-k8s-version-021528 Clientid:01:52:54:00:12:c7:d2}
	I0729 19:44:33.000694 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined IP address 192.168.39.65 and MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:33.000856 1120970 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0729 19:44:33.005258 1120970 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 19:44:33.018698 1120970 kubeadm.go:883] updating cluster {Name:old-k8s-version-021528 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-021528 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.65 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 19:44:33.018840 1120970 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0729 19:44:33.018934 1120970 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 19:44:33.089122 1120970 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0729 19:44:33.089197 1120970 ssh_runner.go:195] Run: which lz4
	I0729 19:44:33.093346 1120970 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0729 19:44:33.097766 1120970 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0729 19:44:33.097802 1120970 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0729 19:44:34.739542 1120970 crio.go:462] duration metric: took 1.646235601s to copy over tarball
	I0729 19:44:34.739647 1120970 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0729 19:44:37.734665 1120970 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.994963407s)
	I0729 19:44:37.734702 1120970 crio.go:469] duration metric: took 2.995126134s to extract the tarball
	I0729 19:44:37.734712 1120970 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0729 19:44:37.781443 1120970 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 19:44:37.820392 1120970 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0729 19:44:37.820426 1120970 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0729 19:44:37.820531 1120970 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 19:44:37.820610 1120970 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0729 19:44:37.820708 1120970 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0729 19:44:37.820536 1120970 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 19:44:37.820560 1120970 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0729 19:44:37.820541 1120970 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0729 19:44:37.820573 1120970 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0729 19:44:37.820587 1120970 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0729 19:44:37.822301 1120970 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 19:44:37.822309 1120970 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0729 19:44:37.822313 1120970 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0729 19:44:37.822326 1120970 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0729 19:44:37.822397 1120970 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0729 19:44:37.822432 1120970 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0729 19:44:37.822438 1120970 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0729 19:44:37.822301 1120970 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 19:44:33.785078 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:33.785626 1119948 main.go:141] libmachine: (no-preload-843792) DBG | unable to find current IP address of domain no-preload-843792 in network mk-no-preload-843792
	I0729 19:44:33.785654 1119948 main.go:141] libmachine: (no-preload-843792) DBG | I0729 19:44:33.785530 1122027 retry.go:31] will retry after 411.274676ms: waiting for machine to come up
	I0729 19:44:34.198884 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:34.199471 1119948 main.go:141] libmachine: (no-preload-843792) DBG | unable to find current IP address of domain no-preload-843792 in network mk-no-preload-843792
	I0729 19:44:34.199512 1119948 main.go:141] libmachine: (no-preload-843792) DBG | I0729 19:44:34.199421 1122027 retry.go:31] will retry after 600.076128ms: waiting for machine to come up
	I0729 19:44:34.801378 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:34.801839 1119948 main.go:141] libmachine: (no-preload-843792) DBG | unable to find current IP address of domain no-preload-843792 in network mk-no-preload-843792
	I0729 19:44:34.801869 1119948 main.go:141] libmachine: (no-preload-843792) DBG | I0729 19:44:34.801792 1122027 retry.go:31] will retry after 948.350912ms: waiting for machine to come up
	I0729 19:44:35.751533 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:35.752085 1119948 main.go:141] libmachine: (no-preload-843792) DBG | unable to find current IP address of domain no-preload-843792 in network mk-no-preload-843792
	I0729 19:44:35.752110 1119948 main.go:141] libmachine: (no-preload-843792) DBG | I0729 19:44:35.752021 1122027 retry.go:31] will retry after 1.166250352s: waiting for machine to come up
	I0729 19:44:36.919771 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:36.920240 1119948 main.go:141] libmachine: (no-preload-843792) DBG | unable to find current IP address of domain no-preload-843792 in network mk-no-preload-843792
	I0729 19:44:36.920271 1119948 main.go:141] libmachine: (no-preload-843792) DBG | I0729 19:44:36.920184 1122027 retry.go:31] will retry after 1.061620812s: waiting for machine to come up
	I0729 19:44:37.983051 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:37.983501 1119948 main.go:141] libmachine: (no-preload-843792) DBG | unable to find current IP address of domain no-preload-843792 in network mk-no-preload-843792
	I0729 19:44:37.983528 1119948 main.go:141] libmachine: (no-preload-843792) DBG | I0729 19:44:37.983453 1122027 retry.go:31] will retry after 1.814167152s: waiting for machine to come up
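The "will retry after ..." lines above come from minikube's generic retry helper while it waits for the new VM to pick up an IP address. A rough sketch of that pattern; the probe, delays, and timeout here are illustrative placeholders, not the exact retry.go implementation:

package main

import (
	"errors"
	"fmt"
	"time"
)

// retry keeps calling probe with a growing delay until it succeeds or the
// deadline passes. probe stands in for "has the machine obtained an IP yet".
func retry(probe func() error, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	delay := 400 * time.Millisecond
	for time.Now().Before(deadline) {
		if err := probe(); err == nil {
			return nil
		}
		fmt.Printf("will retry after %s: waiting for machine to come up\n", delay)
		time.Sleep(delay)
		delay += delay / 2 // grow the wait, roughly as the log shows
	}
	return errors.New("timed out waiting for machine")
}

func main() {
	start := time.Now()
	_ = retry(func() error {
		if time.Since(start) > 3*time.Second { // placeholder condition
			return nil
		}
		return errors.New("no IP yet")
	}, 30*time.Second)
}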
	I0729 19:44:36.140059 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:44:38.642436 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:44:37.873768 1120587 pod_ready.go:92] pod "coredns-7db6d8ff4d-8mccr" in "kube-system" namespace has status "Ready":"True"
	I0729 19:44:37.873792 1120587 pod_ready.go:81] duration metric: took 12.006637701s for pod "coredns-7db6d8ff4d-8mccr" in "kube-system" namespace to be "Ready" ...
	I0729 19:44:37.873804 1120587 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-024652" in "kube-system" namespace to be "Ready" ...
	I0729 19:44:37.879758 1120587 pod_ready.go:92] pod "etcd-default-k8s-diff-port-024652" in "kube-system" namespace has status "Ready":"True"
	I0729 19:44:37.879787 1120587 pod_ready.go:81] duration metric: took 5.974837ms for pod "etcd-default-k8s-diff-port-024652" in "kube-system" namespace to be "Ready" ...
	I0729 19:44:37.879799 1120587 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-024652" in "kube-system" namespace to be "Ready" ...
	I0729 19:44:37.885027 1120587 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-024652" in "kube-system" namespace has status "Ready":"True"
	I0729 19:44:37.885051 1120587 pod_ready.go:81] duration metric: took 5.244169ms for pod "kube-apiserver-default-k8s-diff-port-024652" in "kube-system" namespace to be "Ready" ...
	I0729 19:44:37.885064 1120587 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-024652" in "kube-system" namespace to be "Ready" ...
	I0729 19:44:37.890208 1120587 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-024652" in "kube-system" namespace has status "Ready":"True"
	I0729 19:44:37.890224 1120587 pod_ready.go:81] duration metric: took 5.152571ms for pod "kube-controller-manager-default-k8s-diff-port-024652" in "kube-system" namespace to be "Ready" ...
	I0729 19:44:37.890232 1120587 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-l4g78" in "kube-system" namespace to be "Ready" ...
	I0729 19:44:37.894663 1120587 pod_ready.go:92] pod "kube-proxy-l4g78" in "kube-system" namespace has status "Ready":"True"
	I0729 19:44:37.894682 1120587 pod_ready.go:81] duration metric: took 4.444758ms for pod "kube-proxy-l4g78" in "kube-system" namespace to be "Ready" ...
	I0729 19:44:37.894691 1120587 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-024652" in "kube-system" namespace to be "Ready" ...
	I0729 19:44:38.272098 1120587 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-024652" in "kube-system" namespace has status "Ready":"True"
	I0729 19:44:38.272127 1120587 pod_ready.go:81] duration metric: took 377.428879ms for pod "kube-scheduler-default-k8s-diff-port-024652" in "kube-system" namespace to be "Ready" ...
	I0729 19:44:38.272141 1120587 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace to be "Ready" ...
	I0729 19:44:40.279623 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
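The pod_ready checks above poll each pod's Ready condition until it turns True or the per-pod timeout expires. A small client-go sketch of the same check; the kubeconfig path and pod name are placeholders:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's Ready condition is True, the same
// status the pod_ready.go lines above are waiting on.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	for {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "metrics-server-xxxxx", metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
}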
	I0729 19:44:37.982782 1120970 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 19:44:37.994565 1120970 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0729 19:44:37.997227 1120970 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0729 19:44:37.997536 1120970 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0729 19:44:38.011221 1120970 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0729 19:44:38.028869 1120970 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0729 19:44:38.031221 1120970 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0729 19:44:38.054537 1120970 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0729 19:44:38.054599 1120970 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 19:44:38.054660 1120970 ssh_runner.go:195] Run: which crictl
	I0729 19:44:38.104843 1120970 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 19:44:38.182008 1120970 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0729 19:44:38.182064 1120970 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0729 19:44:38.182063 1120970 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0729 19:44:38.182113 1120970 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0729 19:44:38.182118 1120970 ssh_runner.go:195] Run: which crictl
	I0729 19:44:38.182161 1120970 ssh_runner.go:195] Run: which crictl
	I0729 19:44:38.190604 1120970 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0729 19:44:38.190629 1120970 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0729 19:44:38.190652 1120970 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0729 19:44:38.190663 1120970 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0729 19:44:38.190703 1120970 ssh_runner.go:195] Run: which crictl
	I0729 19:44:38.190710 1120970 ssh_runner.go:195] Run: which crictl
	I0729 19:44:38.197293 1120970 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0729 19:44:38.197328 1120970 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0729 19:44:38.197364 1120970 ssh_runner.go:195] Run: which crictl
	I0729 19:44:38.226035 1120970 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 19:44:38.228343 1120970 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0729 19:44:38.228420 1120970 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0729 19:44:38.228467 1120970 ssh_runner.go:195] Run: which crictl
	I0729 19:44:38.335524 1120970 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0729 19:44:38.335607 1120970 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0729 19:44:38.335627 1120970 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0729 19:44:38.335696 1120970 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0729 19:44:38.335705 1120970 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0729 19:44:38.335790 1120970 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 19:44:38.335866 1120970 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0729 19:44:38.483885 1120970 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0729 19:44:38.483976 1120970 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0729 19:44:38.483926 1120970 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0729 19:44:38.484028 1120970 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0729 19:44:38.487155 1120970 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0729 19:44:38.487223 1120970 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 19:44:38.487241 1120970 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0729 19:44:38.635433 1120970 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0729 19:44:38.649661 1120970 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0729 19:44:38.649751 1120970 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0729 19:44:38.649769 1120970 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0729 19:44:38.649831 1120970 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0729 19:44:38.649921 1120970 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0729 19:44:38.649958 1120970 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0729 19:44:38.783607 1120970 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0729 19:44:38.783694 1120970 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0729 19:44:38.783605 1120970 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0729 19:44:38.791756 1120970 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0729 19:44:38.791863 1120970 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0729 19:44:38.791892 1120970 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0729 19:44:38.791939 1120970 cache_images.go:92] duration metric: took 971.499203ms to LoadCachedImages
	W0729 19:44:38.792037 1120970 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
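The cache_images flow above first asks the container runtime whether each image is already present, then removes any stale tag before trying to load the image from the on-disk cache. A sketch of that existence check and removal, assuming the same podman and crictl binaries the log invokes:

package main

import (
	"log"
	"os/exec"
)

// imagePresent mirrors the check logged above: `podman image inspect` exits
// non-zero when the image is absent from the runtime's store.
func imagePresent(ref string) bool {
	return exec.Command("sudo", "podman", "image", "inspect", "--format", "{{.Id}}", ref).Run() == nil
}

func main() {
	img := "registry.k8s.io/kube-apiserver:v1.20.0"
	if !imagePresent(img) {
		// minikube then removes the stale tag and loads the image from its
		// cache directory; only the removal step is shown here.
		if out, err := exec.Command("sudo", "/usr/bin/crictl", "rmi", img).CombinedOutput(); err != nil {
			log.Printf("crictl rmi: %v\n%s", err, out)
		}
	}
}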
	I0729 19:44:38.792054 1120970 kubeadm.go:934] updating node { 192.168.39.65 8443 v1.20.0 crio true true} ...
	I0729 19:44:38.792200 1120970 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-021528 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.65
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-021528 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 19:44:38.792313 1120970 ssh_runner.go:195] Run: crio config
	I0729 19:44:38.841459 1120970 cni.go:84] Creating CNI manager for ""
	I0729 19:44:38.841484 1120970 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 19:44:38.841496 1120970 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 19:44:38.841515 1120970 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.65 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-021528 NodeName:old-k8s-version-021528 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.65"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.65 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt Stati
cPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0729 19:44:38.841678 1120970 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.65
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-021528"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.65
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.65"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0729 19:44:38.841743 1120970 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0729 19:44:38.852338 1120970 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 19:44:38.852412 1120970 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 19:44:38.862150 1120970 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0729 19:44:38.881108 1120970 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 19:44:38.899034 1120970 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0729 19:44:38.917965 1120970 ssh_runner.go:195] Run: grep 192.168.39.65	control-plane.minikube.internal$ /etc/hosts
	I0729 19:44:38.922064 1120970 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.65	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 19:44:38.935009 1120970 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 19:44:39.058886 1120970 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 19:44:39.078830 1120970 certs.go:68] Setting up /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/old-k8s-version-021528 for IP: 192.168.39.65
	I0729 19:44:39.078902 1120970 certs.go:194] generating shared ca certs ...
	I0729 19:44:39.078943 1120970 certs.go:226] acquiring lock for ca certs: {Name:mkd1f0b3d7e82ac23e713dd6b75409e103935b02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 19:44:39.079139 1120970 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.key
	I0729 19:44:39.079228 1120970 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/proxy-client-ca.key
	I0729 19:44:39.079243 1120970 certs.go:256] generating profile certs ...
	I0729 19:44:39.079418 1120970 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/old-k8s-version-021528/client.key
	I0729 19:44:39.079517 1120970 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/old-k8s-version-021528/apiserver.key.1bfec4c5
	I0729 19:44:39.079603 1120970 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/old-k8s-version-021528/proxy-client.key
	I0729 19:44:39.079814 1120970 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/1062272.pem (1338 bytes)
	W0729 19:44:39.079899 1120970 certs.go:480] ignoring /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/1062272_empty.pem, impossibly tiny 0 bytes
	I0729 19:44:39.079924 1120970 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 19:44:39.079974 1120970 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem (1082 bytes)
	I0729 19:44:39.080079 1120970 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/cert.pem (1123 bytes)
	I0729 19:44:39.080137 1120970 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/key.pem (1679 bytes)
	I0729 19:44:39.080230 1120970 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/files/etc/ssl/certs/10622722.pem (1708 bytes)
	I0729 19:44:39.081417 1120970 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 19:44:39.117623 1120970 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0729 19:44:39.163823 1120970 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 19:44:39.198978 1120970 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0729 19:44:39.229583 1120970 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/old-k8s-version-021528/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0729 19:44:39.270285 1120970 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/old-k8s-version-021528/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0729 19:44:39.320906 1120970 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/old-k8s-version-021528/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 19:44:39.358597 1120970 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/old-k8s-version-021528/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0729 19:44:39.384152 1120970 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/1062272.pem --> /usr/share/ca-certificates/1062272.pem (1338 bytes)
	I0729 19:44:39.409176 1120970 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/files/etc/ssl/certs/10622722.pem --> /usr/share/ca-certificates/10622722.pem (1708 bytes)
	I0729 19:44:39.434095 1120970 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 19:44:39.473901 1120970 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 19:44:39.493117 1120970 ssh_runner.go:195] Run: openssl version
	I0729 19:44:39.499390 1120970 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1062272.pem && ln -fs /usr/share/ca-certificates/1062272.pem /etc/ssl/certs/1062272.pem"
	I0729 19:44:39.513884 1120970 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1062272.pem
	I0729 19:44:39.519775 1120970 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 18:30 /usr/share/ca-certificates/1062272.pem
	I0729 19:44:39.519841 1120970 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1062272.pem
	I0729 19:44:39.526146 1120970 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1062272.pem /etc/ssl/certs/51391683.0"
	I0729 19:44:39.538303 1120970 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10622722.pem && ln -fs /usr/share/ca-certificates/10622722.pem /etc/ssl/certs/10622722.pem"
	I0729 19:44:39.549569 1120970 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10622722.pem
	I0729 19:44:39.554063 1120970 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 18:30 /usr/share/ca-certificates/10622722.pem
	I0729 19:44:39.554125 1120970 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10622722.pem
	I0729 19:44:39.560167 1120970 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10622722.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 19:44:39.572332 1120970 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 19:44:39.583635 1120970 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 19:44:39.588045 1120970 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 18:17 /usr/share/ca-certificates/minikubeCA.pem
	I0729 19:44:39.588126 1120970 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 19:44:39.594105 1120970 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
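The hashing and ln -fs steps above follow OpenSSL's c_rehash convention: the system trusts a CA once /etc/ssl/certs/<subject-hash>.0 points at the certificate file. A sketch of those two steps for the minikubeCA certificate, using the paths and hash seen in the log:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// Compute the OpenSSL subject hash of the CA certificate.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout",
		"-in", "/usr/share/ca-certificates/minikubeCA.pem").Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. b5213941, as in the log
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	_ = os.Remove(link) // replace any stale link
	if err := os.Symlink("/usr/share/ca-certificates/minikubeCA.pem", link); err != nil {
		panic(err)
	}
}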
	I0729 19:44:39.605557 1120970 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 19:44:39.610321 1120970 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 19:44:39.616786 1120970 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 19:44:39.622941 1120970 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 19:44:39.629109 1120970 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 19:44:39.636558 1120970 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 19:44:39.643073 1120970 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
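The -checkend 86400 probes above ask OpenSSL whether each cluster certificate remains valid for at least another day; a non-zero exit is what prompts minikube to regenerate certs. A minimal sketch of one such check:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// `-checkend 86400` exits non-zero if the certificate expires within 24h.
	err := exec.Command("openssl", "x509", "-noout",
		"-in", "/var/lib/minikube/certs/apiserver-kubelet-client.crt",
		"-checkend", "86400").Run()
	fmt.Println("certificate valid for at least 24h:", err == nil)
}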
	I0729 19:44:39.648878 1120970 kubeadm.go:392] StartCluster: {Name:old-k8s-version-021528 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.20.0 ClusterName:old-k8s-version-021528 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.65 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false
MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 19:44:39.648982 1120970 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 19:44:39.649027 1120970 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 19:44:39.690983 1120970 cri.go:89] found id: ""
	I0729 19:44:39.691075 1120970 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 19:44:39.701985 1120970 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0729 19:44:39.702004 1120970 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0729 19:44:39.702052 1120970 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0729 19:44:39.712284 1120970 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0729 19:44:39.713416 1120970 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-021528" does not appear in /home/jenkins/minikube-integration/19312-1055011/kubeconfig
	I0729 19:44:39.714247 1120970 kubeconfig.go:62] /home/jenkins/minikube-integration/19312-1055011/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-021528" cluster setting kubeconfig missing "old-k8s-version-021528" context setting]
	I0729 19:44:39.715298 1120970 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-1055011/kubeconfig: {Name:mkf834b33d9b214f3561db5b8f8958d26700afbb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 19:44:39.762122 1120970 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0729 19:44:39.773851 1120970 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.65
	I0729 19:44:39.773894 1120970 kubeadm.go:1160] stopping kube-system containers ...
	I0729 19:44:39.773910 1120970 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0729 19:44:39.773968 1120970 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 19:44:39.820190 1120970 cri.go:89] found id: ""
	I0729 19:44:39.820273 1120970 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0729 19:44:39.838497 1120970 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 19:44:39.849060 1120970 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 19:44:39.849087 1120970 kubeadm.go:157] found existing configuration files:
	
	I0729 19:44:39.849142 1120970 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 19:44:39.858834 1120970 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 19:44:39.858920 1120970 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 19:44:39.869962 1120970 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 19:44:39.879690 1120970 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 19:44:39.879754 1120970 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 19:44:39.889334 1120970 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 19:44:39.900671 1120970 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 19:44:39.900789 1120970 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 19:44:39.910365 1120970 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 19:44:39.920056 1120970 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 19:44:39.920119 1120970 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 19:44:39.929792 1120970 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 19:44:39.939719 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 19:44:40.078003 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 19:44:40.827477 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0729 19:44:41.064614 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 19:44:41.168296 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0729 19:44:41.280875 1120970 api_server.go:52] waiting for apiserver process to appear ...
	I0729 19:44:41.280964 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:41.781878 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:42.281683 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:42.781105 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
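The repeated pgrep runs above are minikube's wait-for-apiserver loop: poll roughly every half second until a process whose full command line matches kube-apiserver.*minikube.* appears, or the timeout expires. A sketch of that loop; the timeout value is illustrative:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		// pgrep exits 0 once a matching kube-apiserver process exists.
		if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
			fmt.Println("kube-apiserver process is up")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for kube-apiserver")
}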
	I0729 19:44:39.799833 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:39.800226 1119948 main.go:141] libmachine: (no-preload-843792) DBG | unable to find current IP address of domain no-preload-843792 in network mk-no-preload-843792
	I0729 19:44:39.800256 1119948 main.go:141] libmachine: (no-preload-843792) DBG | I0729 19:44:39.800187 1122027 retry.go:31] will retry after 1.661406441s: waiting for machine to come up
	I0729 19:44:41.464164 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:41.464664 1119948 main.go:141] libmachine: (no-preload-843792) DBG | unable to find current IP address of domain no-preload-843792 in network mk-no-preload-843792
	I0729 19:44:41.464704 1119948 main.go:141] libmachine: (no-preload-843792) DBG | I0729 19:44:41.464586 1122027 retry.go:31] will retry after 2.292148862s: waiting for machine to come up
	I0729 19:44:41.139627 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:44:43.640525 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:44:42.780035 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:44:45.278957 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:44:43.281753 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:43.781580 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:44.281856 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:44.781202 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:45.281035 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:45.781637 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:46.281414 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:46.781327 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:47.281665 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:47.782033 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:43.759566 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:43.760021 1119948 main.go:141] libmachine: (no-preload-843792) DBG | unable to find current IP address of domain no-preload-843792 in network mk-no-preload-843792
	I0729 19:44:43.760080 1119948 main.go:141] libmachine: (no-preload-843792) DBG | I0729 19:44:43.759994 1122027 retry.go:31] will retry after 3.005985721s: waiting for machine to come up
	I0729 19:44:46.767337 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:46.767822 1119948 main.go:141] libmachine: (no-preload-843792) DBG | unable to find current IP address of domain no-preload-843792 in network mk-no-preload-843792
	I0729 19:44:46.767852 1119948 main.go:141] libmachine: (no-preload-843792) DBG | I0729 19:44:46.767767 1122027 retry.go:31] will retry after 3.516453969s: waiting for machine to come up
	I0729 19:44:46.138988 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:44:48.637828 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:44:47.778809 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:44:50.278817 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:44:48.281371 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:48.781991 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:49.281260 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:49.782025 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:50.281498 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:50.781863 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:51.281653 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:51.781015 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:52.281638 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:52.782023 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:50.287884 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:50.288381 1119948 main.go:141] libmachine: (no-preload-843792) Found IP for machine: 192.168.50.248
	I0729 19:44:50.288402 1119948 main.go:141] libmachine: (no-preload-843792) Reserving static IP address...
	I0729 19:44:50.288417 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has current primary IP address 192.168.50.248 and MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:50.288858 1119948 main.go:141] libmachine: (no-preload-843792) DBG | found host DHCP lease matching {name: "no-preload-843792", mac: "52:54:00:ae:0e:8c", ip: "192.168.50.248"} in network mk-no-preload-843792: {Iface:virbr2 ExpiryTime:2024-07-29 20:44:43 +0000 UTC Type:0 Mac:52:54:00:ae:0e:8c Iaid: IPaddr:192.168.50.248 Prefix:24 Hostname:no-preload-843792 Clientid:01:52:54:00:ae:0e:8c}
	I0729 19:44:50.288891 1119948 main.go:141] libmachine: (no-preload-843792) DBG | skip adding static IP to network mk-no-preload-843792 - found existing host DHCP lease matching {name: "no-preload-843792", mac: "52:54:00:ae:0e:8c", ip: "192.168.50.248"}
	I0729 19:44:50.288905 1119948 main.go:141] libmachine: (no-preload-843792) Reserved static IP address: 192.168.50.248
	I0729 19:44:50.288921 1119948 main.go:141] libmachine: (no-preload-843792) Waiting for SSH to be available...
	I0729 19:44:50.288937 1119948 main.go:141] libmachine: (no-preload-843792) DBG | Getting to WaitForSSH function...
	I0729 19:44:50.291447 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:50.291802 1119948 main.go:141] libmachine: (no-preload-843792) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:0e:8c", ip: ""} in network mk-no-preload-843792: {Iface:virbr2 ExpiryTime:2024-07-29 20:44:43 +0000 UTC Type:0 Mac:52:54:00:ae:0e:8c Iaid: IPaddr:192.168.50.248 Prefix:24 Hostname:no-preload-843792 Clientid:01:52:54:00:ae:0e:8c}
	I0729 19:44:50.291831 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined IP address 192.168.50.248 and MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:50.291992 1119948 main.go:141] libmachine: (no-preload-843792) DBG | Using SSH client type: external
	I0729 19:44:50.292026 1119948 main.go:141] libmachine: (no-preload-843792) DBG | Using SSH private key: /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/no-preload-843792/id_rsa (-rw-------)
	I0729 19:44:50.292056 1119948 main.go:141] libmachine: (no-preload-843792) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.248 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/no-preload-843792/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 19:44:50.292075 1119948 main.go:141] libmachine: (no-preload-843792) DBG | About to run SSH command:
	I0729 19:44:50.292089 1119948 main.go:141] libmachine: (no-preload-843792) DBG | exit 0
	I0729 19:44:50.419030 1119948 main.go:141] libmachine: (no-preload-843792) DBG | SSH cmd err, output: <nil>: 
	I0729 19:44:50.419420 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetConfigRaw
	I0729 19:44:50.420149 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetIP
	I0729 19:44:50.422461 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:50.422860 1119948 main.go:141] libmachine: (no-preload-843792) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:0e:8c", ip: ""} in network mk-no-preload-843792: {Iface:virbr2 ExpiryTime:2024-07-29 20:44:43 +0000 UTC Type:0 Mac:52:54:00:ae:0e:8c Iaid: IPaddr:192.168.50.248 Prefix:24 Hostname:no-preload-843792 Clientid:01:52:54:00:ae:0e:8c}
	I0729 19:44:50.422897 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined IP address 192.168.50.248 and MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:50.423068 1119948 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/no-preload-843792/config.json ...
	I0729 19:44:50.423254 1119948 machine.go:94] provisionDockerMachine start ...
	I0729 19:44:50.423273 1119948 main.go:141] libmachine: (no-preload-843792) Calling .DriverName
	I0729 19:44:50.423513 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHHostname
	I0729 19:44:50.425759 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:50.425996 1119948 main.go:141] libmachine: (no-preload-843792) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:0e:8c", ip: ""} in network mk-no-preload-843792: {Iface:virbr2 ExpiryTime:2024-07-29 20:44:43 +0000 UTC Type:0 Mac:52:54:00:ae:0e:8c Iaid: IPaddr:192.168.50.248 Prefix:24 Hostname:no-preload-843792 Clientid:01:52:54:00:ae:0e:8c}
	I0729 19:44:50.426033 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined IP address 192.168.50.248 and MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:50.426136 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHPort
	I0729 19:44:50.426323 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHKeyPath
	I0729 19:44:50.426493 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHKeyPath
	I0729 19:44:50.426682 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHUsername
	I0729 19:44:50.426889 1119948 main.go:141] libmachine: Using SSH client type: native
	I0729 19:44:50.427107 1119948 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.248 22 <nil> <nil>}
	I0729 19:44:50.427119 1119948 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 19:44:50.539215 1119948 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0729 19:44:50.539250 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetMachineName
	I0729 19:44:50.539523 1119948 buildroot.go:166] provisioning hostname "no-preload-843792"
	I0729 19:44:50.539553 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetMachineName
	I0729 19:44:50.539755 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHHostname
	I0729 19:44:50.542621 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:50.543007 1119948 main.go:141] libmachine: (no-preload-843792) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:0e:8c", ip: ""} in network mk-no-preload-843792: {Iface:virbr2 ExpiryTime:2024-07-29 20:44:43 +0000 UTC Type:0 Mac:52:54:00:ae:0e:8c Iaid: IPaddr:192.168.50.248 Prefix:24 Hostname:no-preload-843792 Clientid:01:52:54:00:ae:0e:8c}
	I0729 19:44:50.543036 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined IP address 192.168.50.248 and MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:50.543188 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHPort
	I0729 19:44:50.543365 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHKeyPath
	I0729 19:44:50.543574 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHKeyPath
	I0729 19:44:50.543751 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHUsername
	I0729 19:44:50.543900 1119948 main.go:141] libmachine: Using SSH client type: native
	I0729 19:44:50.544060 1119948 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.248 22 <nil> <nil>}
	I0729 19:44:50.544072 1119948 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-843792 && echo "no-preload-843792" | sudo tee /etc/hostname
	I0729 19:44:50.669012 1119948 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-843792
	
	I0729 19:44:50.669054 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHHostname
	I0729 19:44:50.671768 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:50.672075 1119948 main.go:141] libmachine: (no-preload-843792) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:0e:8c", ip: ""} in network mk-no-preload-843792: {Iface:virbr2 ExpiryTime:2024-07-29 20:44:43 +0000 UTC Type:0 Mac:52:54:00:ae:0e:8c Iaid: IPaddr:192.168.50.248 Prefix:24 Hostname:no-preload-843792 Clientid:01:52:54:00:ae:0e:8c}
	I0729 19:44:50.672105 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined IP address 192.168.50.248 and MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:50.672278 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHPort
	I0729 19:44:50.672481 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHKeyPath
	I0729 19:44:50.672647 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHKeyPath
	I0729 19:44:50.672734 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHUsername
	I0729 19:44:50.672904 1119948 main.go:141] libmachine: Using SSH client type: native
	I0729 19:44:50.673077 1119948 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.248 22 <nil> <nil>}
	I0729 19:44:50.673091 1119948 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-843792' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-843792/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-843792' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 19:44:50.796568 1119948 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 19:44:50.796605 1119948 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19312-1055011/.minikube CaCertPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19312-1055011/.minikube}
	I0729 19:44:50.796625 1119948 buildroot.go:174] setting up certificates
	I0729 19:44:50.796639 1119948 provision.go:84] configureAuth start
	I0729 19:44:50.796648 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetMachineName
	I0729 19:44:50.796934 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetIP
	I0729 19:44:50.799731 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:50.800044 1119948 main.go:141] libmachine: (no-preload-843792) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:0e:8c", ip: ""} in network mk-no-preload-843792: {Iface:virbr2 ExpiryTime:2024-07-29 20:44:43 +0000 UTC Type:0 Mac:52:54:00:ae:0e:8c Iaid: IPaddr:192.168.50.248 Prefix:24 Hostname:no-preload-843792 Clientid:01:52:54:00:ae:0e:8c}
	I0729 19:44:50.800071 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined IP address 192.168.50.248 and MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:50.800263 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHHostname
	I0729 19:44:50.802572 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:50.802922 1119948 main.go:141] libmachine: (no-preload-843792) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:0e:8c", ip: ""} in network mk-no-preload-843792: {Iface:virbr2 ExpiryTime:2024-07-29 20:44:43 +0000 UTC Type:0 Mac:52:54:00:ae:0e:8c Iaid: IPaddr:192.168.50.248 Prefix:24 Hostname:no-preload-843792 Clientid:01:52:54:00:ae:0e:8c}
	I0729 19:44:50.802955 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined IP address 192.168.50.248 and MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:50.803085 1119948 provision.go:143] copyHostCerts
	I0729 19:44:50.803156 1119948 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.pem, removing ...
	I0729 19:44:50.803170 1119948 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.pem
	I0729 19:44:50.803225 1119948 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.pem (1082 bytes)
	I0729 19:44:50.803347 1119948 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-1055011/.minikube/cert.pem, removing ...
	I0729 19:44:50.803355 1119948 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-1055011/.minikube/cert.pem
	I0729 19:44:50.803379 1119948 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19312-1055011/.minikube/cert.pem (1123 bytes)
	I0729 19:44:50.803438 1119948 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-1055011/.minikube/key.pem, removing ...
	I0729 19:44:50.803445 1119948 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-1055011/.minikube/key.pem
	I0729 19:44:50.803461 1119948 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19312-1055011/.minikube/key.pem (1679 bytes)
	I0729 19:44:50.803524 1119948 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca-key.pem org=jenkins.no-preload-843792 san=[127.0.0.1 192.168.50.248 localhost minikube no-preload-843792]
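configureAuth regenerates the docker-machine style server certificate with the SANs listed above. Below is a self-contained sketch of issuing such a cert with Go's standard library; the throwaway CA, key size, and three-year lifetime are assumptions for illustration, not minikube's exact parameters, and error handling is elided to keep the sketch short.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA so the example runs on its own.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{Organization: []string{"example CA"}},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(3, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate with the IP and DNS SANs named in the log line above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-843792"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.248")},
		DNSNames:     []string{"localhost", "minikube", "no-preload-843792"},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)

	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}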
	I0729 19:44:51.214202 1119948 provision.go:177] copyRemoteCerts
	I0729 19:44:51.214287 1119948 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 19:44:51.214320 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHHostname
	I0729 19:44:51.216944 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:51.217214 1119948 main.go:141] libmachine: (no-preload-843792) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:0e:8c", ip: ""} in network mk-no-preload-843792: {Iface:virbr2 ExpiryTime:2024-07-29 20:44:43 +0000 UTC Type:0 Mac:52:54:00:ae:0e:8c Iaid: IPaddr:192.168.50.248 Prefix:24 Hostname:no-preload-843792 Clientid:01:52:54:00:ae:0e:8c}
	I0729 19:44:51.217237 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined IP address 192.168.50.248 and MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:51.217360 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHPort
	I0729 19:44:51.217563 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHKeyPath
	I0729 19:44:51.217732 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHUsername
	I0729 19:44:51.217891 1119948 sshutil.go:53] new ssh client: &{IP:192.168.50.248 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/no-preload-843792/id_rsa Username:docker}
	I0729 19:44:51.301968 1119948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0729 19:44:51.328160 1119948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0729 19:44:51.353256 1119948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0729 19:44:51.378426 1119948 provision.go:87] duration metric: took 581.77356ms to configureAuth
	I0729 19:44:51.378457 1119948 buildroot.go:189] setting minikube options for container-runtime
	I0729 19:44:51.378660 1119948 config.go:182] Loaded profile config "no-preload-843792": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0729 19:44:51.378746 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHHostname
	I0729 19:44:51.381760 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:51.382286 1119948 main.go:141] libmachine: (no-preload-843792) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:0e:8c", ip: ""} in network mk-no-preload-843792: {Iface:virbr2 ExpiryTime:2024-07-29 20:44:43 +0000 UTC Type:0 Mac:52:54:00:ae:0e:8c Iaid: IPaddr:192.168.50.248 Prefix:24 Hostname:no-preload-843792 Clientid:01:52:54:00:ae:0e:8c}
	I0729 19:44:51.382308 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined IP address 192.168.50.248 and MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:51.382555 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHPort
	I0729 19:44:51.382787 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHKeyPath
	I0729 19:44:51.383071 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHKeyPath
	I0729 19:44:51.383230 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHUsername
	I0729 19:44:51.383438 1119948 main.go:141] libmachine: Using SSH client type: native
	I0729 19:44:51.383649 1119948 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.248 22 <nil> <nil>}
	I0729 19:44:51.383673 1119948 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 19:44:51.650635 1119948 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 19:44:51.650669 1119948 machine.go:97] duration metric: took 1.227400866s to provisionDockerMachine
	I0729 19:44:51.650686 1119948 start.go:293] postStartSetup for "no-preload-843792" (driver="kvm2")
	I0729 19:44:51.650704 1119948 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 19:44:51.650733 1119948 main.go:141] libmachine: (no-preload-843792) Calling .DriverName
	I0729 19:44:51.651068 1119948 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 19:44:51.651098 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHHostname
	I0729 19:44:51.653656 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:51.654044 1119948 main.go:141] libmachine: (no-preload-843792) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:0e:8c", ip: ""} in network mk-no-preload-843792: {Iface:virbr2 ExpiryTime:2024-07-29 20:44:43 +0000 UTC Type:0 Mac:52:54:00:ae:0e:8c Iaid: IPaddr:192.168.50.248 Prefix:24 Hostname:no-preload-843792 Clientid:01:52:54:00:ae:0e:8c}
	I0729 19:44:51.654075 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined IP address 192.168.50.248 and MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:51.654215 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHPort
	I0729 19:44:51.654414 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHKeyPath
	I0729 19:44:51.654603 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHUsername
	I0729 19:44:51.654783 1119948 sshutil.go:53] new ssh client: &{IP:192.168.50.248 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/no-preload-843792/id_rsa Username:docker}
	I0729 19:44:51.738250 1119948 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 19:44:51.742463 1119948 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 19:44:51.742494 1119948 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-1055011/.minikube/addons for local assets ...
	I0729 19:44:51.742575 1119948 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-1055011/.minikube/files for local assets ...
	I0729 19:44:51.742670 1119948 filesync.go:149] local asset: /home/jenkins/minikube-integration/19312-1055011/.minikube/files/etc/ssl/certs/10622722.pem -> 10622722.pem in /etc/ssl/certs
	I0729 19:44:51.742762 1119948 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 19:44:51.752428 1119948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/files/etc/ssl/certs/10622722.pem --> /etc/ssl/certs/10622722.pem (1708 bytes)
	I0729 19:44:51.778026 1119948 start.go:296] duration metric: took 127.323599ms for postStartSetup
	I0729 19:44:51.778070 1119948 fix.go:56] duration metric: took 20.206081869s for fixHost
	I0729 19:44:51.778101 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHHostname
	I0729 19:44:51.780831 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:51.781222 1119948 main.go:141] libmachine: (no-preload-843792) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:0e:8c", ip: ""} in network mk-no-preload-843792: {Iface:virbr2 ExpiryTime:2024-07-29 20:44:43 +0000 UTC Type:0 Mac:52:54:00:ae:0e:8c Iaid: IPaddr:192.168.50.248 Prefix:24 Hostname:no-preload-843792 Clientid:01:52:54:00:ae:0e:8c}
	I0729 19:44:51.781264 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined IP address 192.168.50.248 and MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:51.781433 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHPort
	I0729 19:44:51.781634 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHKeyPath
	I0729 19:44:51.781807 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHKeyPath
	I0729 19:44:51.781978 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHUsername
	I0729 19:44:51.782165 1119948 main.go:141] libmachine: Using SSH client type: native
	I0729 19:44:51.782343 1119948 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.248 22 <nil> <nil>}
	I0729 19:44:51.782354 1119948 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0729 19:44:51.891547 1119948 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722282291.842464810
	
	I0729 19:44:51.891577 1119948 fix.go:216] guest clock: 1722282291.842464810
	I0729 19:44:51.891585 1119948 fix.go:229] Guest: 2024-07-29 19:44:51.84246481 +0000 UTC Remote: 2024-07-29 19:44:51.778076789 +0000 UTC m=+358.114888914 (delta=64.388021ms)
	I0729 19:44:51.891637 1119948 fix.go:200] guest clock delta is within tolerance: 64.388021ms
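The fix.go lines above compare the guest clock (read with `date +%s.%N` over SSH) against the host's clock and only act when the drift exceeds a tolerance. A minimal sketch of that comparison follows; the one-second tolerance is an assumption for illustration, not the value minikube uses.

package main

import (
	"fmt"
	"time"
)

// clockDelta returns the absolute drift between guest and host clocks.
func clockDelta(guest, host time.Time) time.Duration {
	d := guest.Sub(host)
	if d < 0 {
		d = -d
	}
	return d
}

func main() {
	// Value parsed from `date +%s.%N` on the guest (taken from the log above).
	guest := time.Unix(1722282291, 842464810)
	host := time.Now()
	const tolerance = time.Second
	if d := clockDelta(guest, host); d > tolerance {
		fmt.Printf("guest clock off by %v, would resync\n", d)
	} else {
		fmt.Printf("guest clock delta %v is within tolerance\n", d)
	}
}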
	I0729 19:44:51.891648 1119948 start.go:83] releasing machines lock for "no-preload-843792", held for 20.319710656s
	I0729 19:44:51.891677 1119948 main.go:141] libmachine: (no-preload-843792) Calling .DriverName
	I0729 19:44:51.891952 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetIP
	I0729 19:44:51.894800 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:51.895181 1119948 main.go:141] libmachine: (no-preload-843792) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:0e:8c", ip: ""} in network mk-no-preload-843792: {Iface:virbr2 ExpiryTime:2024-07-29 20:44:43 +0000 UTC Type:0 Mac:52:54:00:ae:0e:8c Iaid: IPaddr:192.168.50.248 Prefix:24 Hostname:no-preload-843792 Clientid:01:52:54:00:ae:0e:8c}
	I0729 19:44:51.895216 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined IP address 192.168.50.248 and MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:51.895390 1119948 main.go:141] libmachine: (no-preload-843792) Calling .DriverName
	I0729 19:44:51.895840 1119948 main.go:141] libmachine: (no-preload-843792) Calling .DriverName
	I0729 19:44:51.896042 1119948 main.go:141] libmachine: (no-preload-843792) Calling .DriverName
	I0729 19:44:51.896139 1119948 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 19:44:51.896192 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHHostname
	I0729 19:44:51.896258 1119948 ssh_runner.go:195] Run: cat /version.json
	I0729 19:44:51.896287 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHHostname
	I0729 19:44:51.898856 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:51.899180 1119948 main.go:141] libmachine: (no-preload-843792) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:0e:8c", ip: ""} in network mk-no-preload-843792: {Iface:virbr2 ExpiryTime:2024-07-29 20:44:43 +0000 UTC Type:0 Mac:52:54:00:ae:0e:8c Iaid: IPaddr:192.168.50.248 Prefix:24 Hostname:no-preload-843792 Clientid:01:52:54:00:ae:0e:8c}
	I0729 19:44:51.899208 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined IP address 192.168.50.248 and MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:51.899261 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:51.899313 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHPort
	I0729 19:44:51.899474 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHKeyPath
	I0729 19:44:51.899638 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHUsername
	I0729 19:44:51.899716 1119948 main.go:141] libmachine: (no-preload-843792) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:0e:8c", ip: ""} in network mk-no-preload-843792: {Iface:virbr2 ExpiryTime:2024-07-29 20:44:43 +0000 UTC Type:0 Mac:52:54:00:ae:0e:8c Iaid: IPaddr:192.168.50.248 Prefix:24 Hostname:no-preload-843792 Clientid:01:52:54:00:ae:0e:8c}
	I0729 19:44:51.899742 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined IP address 192.168.50.248 and MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:51.899815 1119948 sshutil.go:53] new ssh client: &{IP:192.168.50.248 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/no-preload-843792/id_rsa Username:docker}
	I0729 19:44:51.899865 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHPort
	I0729 19:44:51.900009 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHKeyPath
	I0729 19:44:51.900149 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHUsername
	I0729 19:44:51.900317 1119948 sshutil.go:53] new ssh client: &{IP:192.168.50.248 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/no-preload-843792/id_rsa Username:docker}
	I0729 19:44:51.979915 1119948 ssh_runner.go:195] Run: systemctl --version
	I0729 19:44:52.002705 1119948 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 19:44:52.146695 1119948 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 19:44:52.152507 1119948 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 19:44:52.152566 1119948 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 19:44:52.169058 1119948 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 19:44:52.169085 1119948 start.go:495] detecting cgroup driver to use...
	I0729 19:44:52.169148 1119948 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 19:44:52.185675 1119948 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 19:44:52.204654 1119948 docker.go:217] disabling cri-docker service (if available) ...
	I0729 19:44:52.204719 1119948 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 19:44:52.221485 1119948 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 19:44:52.235452 1119948 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 19:44:52.353806 1119948 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 19:44:52.504237 1119948 docker.go:233] disabling docker service ...
	I0729 19:44:52.504314 1119948 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 19:44:52.520145 1119948 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 19:44:52.533007 1119948 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 19:44:52.662886 1119948 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 19:44:52.795773 1119948 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 19:44:52.810135 1119948 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 19:44:52.829290 1119948 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0729 19:44:52.829356 1119948 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:44:52.840657 1119948 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 19:44:52.840718 1119948 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:44:52.851174 1119948 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:44:52.861565 1119948 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:44:52.871901 1119948 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 19:44:52.882929 1119948 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:44:52.893517 1119948 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:44:52.910321 1119948 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:44:52.920773 1119948 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 19:44:52.930425 1119948 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 19:44:52.930467 1119948 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 19:44:52.943382 1119948 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
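The sequence above is a fallback: when the net.bridge.bridge-nf-call-iptables sysctl is missing, the br_netfilter module is loaded and IPv4 forwarding is switched on before CRI-O is restarted. A hedged sketch of the same steps (command names are taken from the log, error handling is condensed, and it must run as root):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Probe the sysctl; if it is absent, load the module that provides it.
	if err := exec.Command("sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
		if err := exec.Command("modprobe", "br_netfilter").Run(); err != nil {
			fmt.Fprintln(os.Stderr, "modprobe br_netfilter:", err)
		}
	}
	// Make sure the kernel forwards IPv4 traffic for the pod network.
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0644); err != nil {
		fmt.Fprintln(os.Stderr, "enable ip_forward:", err)
	}
}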
	I0729 19:44:52.953528 1119948 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 19:44:53.086573 1119948 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 19:44:53.222264 1119948 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 19:44:53.222358 1119948 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 19:44:53.227019 1119948 start.go:563] Will wait 60s for crictl version
	I0729 19:44:53.227079 1119948 ssh_runner.go:195] Run: which crictl
	I0729 19:44:53.230920 1119948 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 19:44:53.271242 1119948 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 19:44:53.271338 1119948 ssh_runner.go:195] Run: crio --version
	I0729 19:44:53.301110 1119948 ssh_runner.go:195] Run: crio --version
	I0729 19:44:53.333725 1119948 out.go:177] * Preparing Kubernetes v1.31.0-beta.0 on CRI-O 1.29.1 ...
	I0729 19:44:53.334659 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetIP
	I0729 19:44:53.337115 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:53.337559 1119948 main.go:141] libmachine: (no-preload-843792) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:0e:8c", ip: ""} in network mk-no-preload-843792: {Iface:virbr2 ExpiryTime:2024-07-29 20:44:43 +0000 UTC Type:0 Mac:52:54:00:ae:0e:8c Iaid: IPaddr:192.168.50.248 Prefix:24 Hostname:no-preload-843792 Clientid:01:52:54:00:ae:0e:8c}
	I0729 19:44:53.337593 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined IP address 192.168.50.248 and MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:53.337844 1119948 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0729 19:44:53.341989 1119948 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 19:44:53.355060 1119948 kubeadm.go:883] updating cluster {Name:no-preload-843792 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.0-beta.0 ClusterName:no-preload-843792 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.248 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2628
0h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 19:44:53.355229 1119948 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0729 19:44:53.355288 1119948 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 19:44:53.388980 1119948 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0-beta.0". assuming images are not preloaded.
	I0729 19:44:53.389006 1119948 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0-beta.0 registry.k8s.io/kube-controller-manager:v1.31.0-beta.0 registry.k8s.io/kube-scheduler:v1.31.0-beta.0 registry.k8s.io/kube-proxy:v1.31.0-beta.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.14-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0729 19:44:53.389048 1119948 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 19:44:53.389101 1119948 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0729 19:44:53.389112 1119948 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0729 19:44:53.389137 1119948 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.14-0
	I0729 19:44:53.389119 1119948 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0729 19:44:53.389271 1119948 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0729 19:44:53.389350 1119948 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0729 19:44:53.389605 1119948 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0729 19:44:53.390514 1119948 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0729 19:44:53.390570 1119948 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0729 19:44:53.390602 1119948 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0729 19:44:53.390527 1119948 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 19:44:53.390706 1119948 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0729 19:44:53.390732 1119948 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0729 19:44:53.390767 1119948 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.14-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.14-0
	I0729 19:44:53.391084 1119948 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0729 19:44:53.549235 1119948 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0729 19:44:53.572353 1119948 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0729 19:44:53.579226 1119948 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0729 19:44:53.596966 1119948 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0729 19:44:53.609083 1119948 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0729 19:44:53.616167 1119948 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.14-0
	I0729 19:44:53.618946 1119948 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" does not exist at hash "63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5" in container runtime
	I0729 19:44:53.618985 1119948 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0729 19:44:53.619029 1119948 ssh_runner.go:195] Run: which crictl
	I0729 19:44:53.635187 1119948 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0729 19:44:53.670750 1119948 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0729 19:44:53.670796 1119948 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0729 19:44:53.670859 1119948 ssh_runner.go:195] Run: which crictl
	I0729 19:44:53.672585 1119948 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" does not exist at hash "d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b" in container runtime
	I0729 19:44:53.672626 1119948 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0729 19:44:53.672669 1119948 ssh_runner.go:195] Run: which crictl
	I0729 19:44:53.695596 1119948 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" does not exist at hash "f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938" in container runtime
	I0729 19:44:53.695640 1119948 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0729 19:44:53.695685 1119948 ssh_runner.go:195] Run: which crictl
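Each "needs transfer" decision above comes from asking the runtime for an image's ID and comparing it against the ID recorded for the cached copy; when the image is absent or the IDs differ, it is removed and reloaded from the cache. A rough sketch of that presence check follows; the expected-ID table holds placeholder values, not the real digests.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// runtimeImageID asks the runtime for an image's ID, like the
// `sudo podman image inspect --format {{.Id}}` calls in the log.
func runtimeImageID(ref string) (string, error) {
	out, err := exec.Command("sudo", "podman", "image", "inspect", "--format", "{{.Id}}", ref).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	// Placeholder IDs standing in for the hashes recorded with the cache.
	expected := map[string]string{
		"registry.k8s.io/pause:3.10":    "<id-of-cached-pause-image>",
		"registry.k8s.io/etcd:3.5.14-0": "<id-of-cached-etcd-image>",
	}
	for ref, want := range expected {
		got, err := runtimeImageID(ref)
		if err != nil || got != want {
			fmt.Printf("%q needs transfer: not present at expected ID\n", ref)
			continue
		}
		fmt.Printf("%q already loaded\n", ref)
	}
}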
	I0729 19:44:51.138015 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:44:53.638298 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:44:52.279881 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:44:54.778657 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
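The interleaved pod_ready lines poll the Ready condition of the metrics-server pods until it flips to True. An equivalent poll via kubectl might look like the sketch below; the namespace and pod name are taken from the log, while the 10-second interval is illustrative rather than minikube's actual cadence.

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// podReady reports whether the pod's Ready condition is True.
func podReady(namespace, name string) bool {
	out, err := exec.Command("kubectl", "-n", namespace, "get", "pod", name,
		"-o", "jsonpath={.status.conditions[?(@.type==\"Ready\")].status}").Output()
	return err == nil && strings.TrimSpace(string(out)) == "True"
}

func main() {
	for {
		if podReady("kube-system", "metrics-server-569cc877fc-jsvnd") {
			fmt.Println("pod is Ready")
			return
		}
		fmt.Println(`pod has status "Ready":"False"`)
		time.Sleep(10 * time.Second)
	}
}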
	I0729 19:44:53.281345 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:53.781221 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:54.281939 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:54.781091 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:55.281282 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:55.781375 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:56.282072 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:56.781207 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:57.281436 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:57.781372 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
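The half-second `pgrep -xnf kube-apiserver.*minikube.*` loop above is how the restart path waits for the apiserver process to reappear. A stripped-down sketch of such a wait loop is below; the two-minute deadline is an assumption, not the timeout minikube actually applies.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		// Same probe as in the log: match the full command line of the newest process.
		if exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
			fmt.Println("kube-apiserver is running")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for kube-apiserver")
}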
	I0729 19:44:53.720675 1119948 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 19:44:53.840593 1119948 cache_images.go:116] "registry.k8s.io/etcd:3.5.14-0" needs transfer: "registry.k8s.io/etcd:3.5.14-0" does not exist at hash "cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa" in container runtime
	I0729 19:44:53.840643 1119948 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.14-0
	I0729 19:44:53.840672 1119948 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0729 19:44:53.840687 1119948 ssh_runner.go:195] Run: which crictl
	I0729 19:44:53.840775 1119948 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0-beta.0" does not exist at hash "c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899" in container runtime
	I0729 19:44:53.840812 1119948 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0729 19:44:53.840821 1119948 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0729 19:44:53.840857 1119948 ssh_runner.go:195] Run: which crictl
	I0729 19:44:53.840879 1119948 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0729 19:44:53.840923 1119948 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0729 19:44:53.840940 1119948 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 19:44:53.840957 1119948 ssh_runner.go:195] Run: which crictl
	I0729 19:44:53.840924 1119948 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0729 19:44:53.918733 1119948 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0729 19:44:53.918808 1119948 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.14-0
	I0729 19:44:53.918822 1119948 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0729 19:44:53.918738 1119948 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0729 19:44:53.918756 1119948 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 19:44:53.934123 1119948 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0729 19:44:53.934149 1119948 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0729 19:44:54.071240 1119948 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.14-0
	I0729 19:44:54.071240 1119948 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0729 19:44:54.071338 1119948 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0729 19:44:54.071326 1119948 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0729 19:44:54.071427 1119948 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 19:44:54.093839 1119948 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0729 19:44:54.093863 1119948 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0729 19:44:54.210655 1119948 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0
	I0729 19:44:54.210775 1119948 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0729 19:44:54.212134 1119948 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.14-0
	I0729 19:44:54.217809 1119948 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0
	I0729 19:44:54.217912 1119948 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 19:44:54.217935 1119948 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0729 19:44:54.218206 1119948 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0
	I0729 19:44:54.218301 1119948 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0729 19:44:54.260623 1119948 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0 (exists)
	I0729 19:44:54.260652 1119948 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0729 19:44:54.260652 1119948 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0729 19:44:54.260686 1119948 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0729 19:44:54.260778 1119948 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0729 19:44:54.260865 1119948 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0729 19:44:54.306379 1119948 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0729 19:44:54.306385 1119948 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0 (exists)
	I0729 19:44:54.306392 1119948 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0 (exists)
	I0729 19:44:54.306493 1119948 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0729 19:44:54.306689 1119948 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0
	I0729 19:44:54.306778 1119948 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.14-0
	I0729 19:44:56.574611 1119948 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0: (2.313899996s)
	I0729 19:44:56.574645 1119948 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 from cache
	I0729 19:44:56.574650 1119948 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1: (2.313771552s)
	I0729 19:44:56.574670 1119948 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0729 19:44:56.574611 1119948 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0-beta.0: (2.313935705s)
	I0729 19:44:56.574683 1119948 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0729 19:44:56.574705 1119948 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (2.268197753s)
	I0729 19:44:56.574716 1119948 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0
	I0729 19:44:56.574719 1119948 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0729 19:44:56.574722 1119948 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0729 19:44:56.574739 1119948 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.14-0: (2.267948475s)
	I0729 19:44:56.574750 1119948 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.14-0 (exists)
	I0729 19:44:56.574796 1119948 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0729 19:44:58.641782 1119948 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0: (2.067036887s)
	I0729 19:44:58.641818 1119948 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 from cache
	I0729 19:44:58.641845 1119948 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0729 19:44:58.641846 1119948 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0: (2.0670173s)
	I0729 19:44:58.641878 1119948 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0 (exists)
	I0729 19:44:58.641896 1119948 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0729 19:44:56.140488 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:44:58.637284 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:44:57.279852 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:44:59.777891 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:44:58.281852 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:58.781637 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:59.281892 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:59.781645 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:00.281405 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:00.782060 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:01.281396 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:01.781327 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:02.281709 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:02.781786 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:00.096431 1119948 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0: (1.454505335s)
	I0729 19:45:00.096482 1119948 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 from cache
	I0729 19:45:00.096522 1119948 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0729 19:45:00.096568 1119948 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0729 19:45:01.962972 1119948 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.866379068s)
	I0729 19:45:01.963000 1119948 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0729 19:45:01.963026 1119948 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0729 19:45:01.963078 1119948 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0729 19:45:02.916627 1119948 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0729 19:45:02.916678 1119948 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.14-0
	I0729 19:45:02.916735 1119948 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0
	I0729 19:45:00.638676 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:03.137885 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:01.779615 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:04.279431 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:03.281567 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:03.781335 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:04.281681 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:04.781803 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:05.281115 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:05.781161 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:06.281699 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:06.781869 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:07.281182 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:07.781016 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:06.397189 1119948 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0: (3.480421154s)
	I0729 19:45:06.397236 1119948 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0 from cache
	I0729 19:45:06.397280 1119948 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0729 19:45:06.397357 1119948 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0729 19:45:08.272053 1119948 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0: (1.874662469s)
	I0729 19:45:08.272086 1119948 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 from cache
	I0729 19:45:08.272116 1119948 cache_images.go:123] Successfully loaded all cached images
	I0729 19:45:08.272123 1119948 cache_images.go:92] duration metric: took 14.883104578s to LoadCachedImages
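LoadCachedImages, summarized above, stats each cached tarball on the guest, skips the copy when it is already present, and then streams it into CRI-O with `podman load`. A simplified local sketch of that skip-or-load loop follows; paths and image names are taken from the log, and error handling is condensed.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
)

// loadImage checks that the tarball exists and loads it into the runtime,
// like the `sudo podman load -i <tarball>` calls in the log.
func loadImage(tarball string) error {
	if _, err := os.Stat(tarball); err != nil {
		return fmt.Errorf("cached image missing: %w", err)
	}
	out, err := exec.Command("sudo", "podman", "load", "-i", tarball).CombinedOutput()
	if err != nil {
		return fmt.Errorf("podman load: %v: %s", err, out)
	}
	return nil
}

func main() {
	dir := "/var/lib/minikube/images" // target directory used in the log
	for _, name := range []string{"kube-apiserver_v1.31.0-beta.0", "etcd_3.5.14-0"} {
		if err := loadImage(filepath.Join(dir, name)); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}
}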
	I0729 19:45:08.272135 1119948 kubeadm.go:934] updating node { 192.168.50.248 8443 v1.31.0-beta.0 crio true true} ...
	I0729 19:45:08.272293 1119948 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-843792 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.248
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-843792 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 19:45:08.272378 1119948 ssh_runner.go:195] Run: crio config
	I0729 19:45:08.340838 1119948 cni.go:84] Creating CNI manager for ""
	I0729 19:45:08.340864 1119948 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 19:45:08.340876 1119948 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 19:45:08.340905 1119948 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.248 APIServerPort:8443 KubernetesVersion:v1.31.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-843792 NodeName:no-preload-843792 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.248"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.248 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt Stati
cPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 19:45:08.341094 1119948 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.248
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-843792"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.248
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.248"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0729 19:45:08.341175 1119948 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0-beta.0
	I0729 19:45:08.353738 1119948 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 19:45:08.353819 1119948 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 19:45:08.365340 1119948 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (324 bytes)
	I0729 19:45:08.383516 1119948 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I0729 19:45:08.401060 1119948 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2168 bytes)
	I0729 19:45:08.419420 1119948 ssh_runner.go:195] Run: grep 192.168.50.248	control-plane.minikube.internal$ /etc/hosts
	I0729 19:45:08.423355 1119948 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.248	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 19:45:08.437286 1119948 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 19:45:08.569176 1119948 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 19:45:08.586925 1119948 certs.go:68] Setting up /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/no-preload-843792 for IP: 192.168.50.248
	I0729 19:45:08.586949 1119948 certs.go:194] generating shared ca certs ...
	I0729 19:45:08.586969 1119948 certs.go:226] acquiring lock for ca certs: {Name:mkd1f0b3d7e82ac23e713dd6b75409e103935b02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 19:45:08.587196 1119948 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.key
	I0729 19:45:08.587277 1119948 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/proxy-client-ca.key
	I0729 19:45:08.587294 1119948 certs.go:256] generating profile certs ...
	I0729 19:45:08.587388 1119948 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/no-preload-843792/client.key
	I0729 19:45:08.587476 1119948 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/no-preload-843792/apiserver.key.f52ec7e5
	I0729 19:45:08.587520 1119948 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/no-preload-843792/proxy-client.key
	I0729 19:45:08.587686 1119948 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/1062272.pem (1338 bytes)
	W0729 19:45:08.587731 1119948 certs.go:480] ignoring /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/1062272_empty.pem, impossibly tiny 0 bytes
	I0729 19:45:08.587741 1119948 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 19:45:08.587764 1119948 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem (1082 bytes)
	I0729 19:45:08.587788 1119948 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/cert.pem (1123 bytes)
	I0729 19:45:08.587807 1119948 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/key.pem (1679 bytes)
	I0729 19:45:08.587842 1119948 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/files/etc/ssl/certs/10622722.pem (1708 bytes)
	I0729 19:45:08.588560 1119948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 19:45:08.618457 1119948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0729 19:45:08.664632 1119948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 19:45:08.696094 1119948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0729 19:45:05.639914 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:08.138498 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:06.779766 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:08.781373 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:10.782303 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:08.281476 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:08.781100 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:09.281248 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:09.781661 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:10.281141 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:10.781357 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:11.281922 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:11.781751 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:12.281024 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:12.781942 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:08.732476 1119948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/no-preload-843792/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0729 19:45:08.761190 1119948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/no-preload-843792/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0729 19:45:08.792866 1119948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/no-preload-843792/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 19:45:08.819753 1119948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/no-preload-843792/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0729 19:45:08.844891 1119948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/1062272.pem --> /usr/share/ca-certificates/1062272.pem (1338 bytes)
	I0729 19:45:08.868688 1119948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/files/etc/ssl/certs/10622722.pem --> /usr/share/ca-certificates/10622722.pem (1708 bytes)
	I0729 19:45:08.893523 1119948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 19:45:08.917663 1119948 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 19:45:08.935488 1119948 ssh_runner.go:195] Run: openssl version
	I0729 19:45:08.941415 1119948 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1062272.pem && ln -fs /usr/share/ca-certificates/1062272.pem /etc/ssl/certs/1062272.pem"
	I0729 19:45:08.952713 1119948 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1062272.pem
	I0729 19:45:08.957226 1119948 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 18:30 /usr/share/ca-certificates/1062272.pem
	I0729 19:45:08.957288 1119948 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1062272.pem
	I0729 19:45:08.963014 1119948 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1062272.pem /etc/ssl/certs/51391683.0"
	I0729 19:45:08.974542 1119948 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10622722.pem && ln -fs /usr/share/ca-certificates/10622722.pem /etc/ssl/certs/10622722.pem"
	I0729 19:45:08.985605 1119948 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10622722.pem
	I0729 19:45:08.990121 1119948 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 18:30 /usr/share/ca-certificates/10622722.pem
	I0729 19:45:08.990170 1119948 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10622722.pem
	I0729 19:45:08.995715 1119948 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10622722.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 19:45:09.006949 1119948 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 19:45:09.018222 1119948 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 19:45:09.023160 1119948 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 18:17 /usr/share/ca-certificates/minikubeCA.pem
	I0729 19:45:09.023225 1119948 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 19:45:09.028770 1119948 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
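
The openssl x509 -hash / ln -fs pairs above follow OpenSSL's CA directory convention: every trusted CA under /etc/ssl/certs needs a symlink named <subject-hash>.0 pointing at the PEM so TLS clients can locate it by hash. A short sketch of the same step for one certificate (the path is taken from this log and is only an example):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        cert := "/usr/share/ca-certificates/minikubeCA.pem" // example path from the log
        // Print the subject hash that names the symlink in /etc/ssl/certs.
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
        if err != nil {
            panic(err)
        }
        link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
        // Create the <hash>.0 symlink only if it does not exist yet (test -L || ln -fs).
        cmd := fmt.Sprintf("test -L %s || ln -fs %s %s", link, cert, link)
        fmt.Println(exec.Command("sudo", "/bin/bash", "-c", cmd).Run())
    }
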
	I0729 19:45:09.039653 1119948 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 19:45:09.044577 1119948 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 19:45:09.050692 1119948 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 19:45:09.057177 1119948 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 19:45:09.063464 1119948 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 19:45:09.069732 1119948 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 19:45:09.075998 1119948 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
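
Each of the -checkend 86400 runs above asks openssl whether the certificate will still be valid 86400 seconds (24 hours) from now; a non-zero exit status means it expires inside that window and would have to be regenerated before the restart. A minimal sketch over a few of the paths from this log:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        certs := []string{
            "/var/lib/minikube/certs/apiserver-kubelet-client.crt",
            "/var/lib/minikube/certs/etcd/server.crt",
            "/var/lib/minikube/certs/front-proxy-client.crt",
        }
        for _, c := range certs {
            // Exit 0: still valid in 24h. Exit 1: expires within 86400 seconds.
            err := exec.Command("openssl", "x509", "-noout", "-in", c, "-checkend", "86400").Run()
            fmt.Printf("%s expires within 24h: %v\n", c, err != nil)
        }
    }
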
	I0729 19:45:09.081759 1119948 kubeadm.go:392] StartCluster: {Name:no-preload-843792 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-843792 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.248 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 19:45:09.081855 1119948 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 19:45:09.081922 1119948 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 19:45:09.121153 1119948 cri.go:89] found id: ""
	I0729 19:45:09.121242 1119948 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 19:45:09.131866 1119948 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0729 19:45:09.131892 1119948 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0729 19:45:09.131951 1119948 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0729 19:45:09.142306 1119948 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0729 19:45:09.143769 1119948 kubeconfig.go:125] found "no-preload-843792" server: "https://192.168.50.248:8443"
	I0729 19:45:09.146733 1119948 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0729 19:45:09.156058 1119948 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.248
	I0729 19:45:09.156096 1119948 kubeadm.go:1160] stopping kube-system containers ...
	I0729 19:45:09.156113 1119948 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0729 19:45:09.156171 1119948 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 19:45:09.204791 1119948 cri.go:89] found id: ""
	I0729 19:45:09.204881 1119948 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0729 19:45:09.222988 1119948 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 19:45:09.234800 1119948 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 19:45:09.234825 1119948 kubeadm.go:157] found existing configuration files:
	
	I0729 19:45:09.234898 1119948 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 19:45:09.244868 1119948 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 19:45:09.244931 1119948 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 19:45:09.255368 1119948 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 19:45:09.265442 1119948 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 19:45:09.265515 1119948 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 19:45:09.276827 1119948 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 19:45:09.287989 1119948 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 19:45:09.288057 1119948 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 19:45:09.297736 1119948 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 19:45:09.307856 1119948 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 19:45:09.307923 1119948 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
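
The grep/rm pairs above are the stale-kubeconfig check: each file under /etc/kubernetes is kept only if it already points at https://control-plane.minikube.internal:8443, and is otherwise removed so the following kubeadm init phase kubeconfig regenerates it (here every file is simply absent, so each grep exits 2 and the rm is a no-op). A hedged sketch of that loop:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        endpoint := "https://control-plane.minikube.internal:8443"
        for _, name := range []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"} {
            path := "/etc/kubernetes/" + name
            // Missing file or wrong server URL: delete it so kubeadm writes a fresh one.
            if exec.Command("sudo", "grep", endpoint, path).Run() != nil {
                fmt.Println("removing stale", path)
                exec.Command("sudo", "rm", "-f", path).Run()
            }
        }
    }
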
	I0729 19:45:09.318101 1119948 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 19:45:09.328189 1119948 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 19:45:09.441974 1119948 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 19:45:10.593961 1119948 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.151939649s)
	I0729 19:45:10.594045 1119948 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0729 19:45:10.807397 1119948 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 19:45:10.880145 1119948 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0729 19:45:10.962104 1119948 api_server.go:52] waiting for apiserver process to appear ...
	I0729 19:45:10.962209 1119948 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:11.462937 1119948 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:11.962909 1119948 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:12.006882 1119948 api_server.go:72] duration metric: took 1.044780287s to wait for apiserver process to appear ...
	I0729 19:45:12.006918 1119948 api_server.go:88] waiting for apiserver healthz status ...
	I0729 19:45:12.006945 1119948 api_server.go:253] Checking apiserver healthz at https://192.168.50.248:8443/healthz ...
	I0729 19:45:12.007577 1119948 api_server.go:269] stopped: https://192.168.50.248:8443/healthz: Get "https://192.168.50.248:8443/healthz": dial tcp 192.168.50.248:8443: connect: connection refused
	I0729 19:45:12.507374 1119948 api_server.go:253] Checking apiserver healthz at https://192.168.50.248:8443/healthz ...
	I0729 19:45:10.637684 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:12.638011 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:14.638569 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:13.278494 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:15.778675 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:15.042675 1119948 api_server.go:279] https://192.168.50.248:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 19:45:15.042710 1119948 api_server.go:103] status: https://192.168.50.248:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 19:45:15.042731 1119948 api_server.go:253] Checking apiserver healthz at https://192.168.50.248:8443/healthz ...
	I0729 19:45:15.090118 1119948 api_server.go:279] https://192.168.50.248:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 19:45:15.090151 1119948 api_server.go:103] status: https://192.168.50.248:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 19:45:15.507702 1119948 api_server.go:253] Checking apiserver healthz at https://192.168.50.248:8443/healthz ...
	I0729 19:45:15.512794 1119948 api_server.go:279] https://192.168.50.248:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 19:45:15.512822 1119948 api_server.go:103] status: https://192.168.50.248:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 19:45:16.008064 1119948 api_server.go:253] Checking apiserver healthz at https://192.168.50.248:8443/healthz ...
	I0729 19:45:16.018543 1119948 api_server.go:279] https://192.168.50.248:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 19:45:16.018578 1119948 api_server.go:103] status: https://192.168.50.248:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 19:45:16.508055 1119948 api_server.go:253] Checking apiserver healthz at https://192.168.50.248:8443/healthz ...
	I0729 19:45:16.519925 1119948 api_server.go:279] https://192.168.50.248:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 19:45:16.519954 1119948 api_server.go:103] status: https://192.168.50.248:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 19:45:17.007959 1119948 api_server.go:253] Checking apiserver healthz at https://192.168.50.248:8443/healthz ...
	I0729 19:45:17.013159 1119948 api_server.go:279] https://192.168.50.248:8443/healthz returned 200:
	ok
	I0729 19:45:17.022691 1119948 api_server.go:141] control plane version: v1.31.0-beta.0
	I0729 19:45:17.022726 1119948 api_server.go:131] duration metric: took 5.015799715s to wait for apiserver health ...
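
The healthz exchange above is the usual restart pattern: connection refused while the apiserver comes back up, 403 for the anonymous probe until RBAC permits it, 500 while the rbac/bootstrap-roles and scheduling post-start hooks finish, then 200 "ok". A small sketch of polling the endpoint until it reports healthy (the address is the one in this log; certificate verification is skipped because the probe is anonymous):

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        url := "https://192.168.50.248:8443/healthz" // address from this log
        for i := 0; i < 120; i++ {
            resp, err := client.Get(url)
            if err != nil {
                fmt.Println(err) // e.g. connection refused right after the restart
            } else {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                fmt.Printf("%d %s\n", resp.StatusCode, body)
                if resp.StatusCode == http.StatusOK {
                    return
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
    }
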
	I0729 19:45:17.022737 1119948 cni.go:84] Creating CNI manager for ""
	I0729 19:45:17.022746 1119948 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 19:45:17.024618 1119948 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 19:45:13.281834 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:13.781128 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:14.281372 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:14.781037 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:15.281715 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:15.781353 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:16.281845 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:16.781224 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:17.281710 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:17.781353 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:17.025951 1119948 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 19:45:17.037020 1119948 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0729 19:45:17.075438 1119948 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 19:45:17.098501 1119948 system_pods.go:59] 8 kube-system pods found
	I0729 19:45:17.098541 1119948 system_pods.go:61] "coredns-5cfdc65f69-j6m2k" [1fb28c80-116d-46b7-a939-6ff4ffa80883] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0729 19:45:17.098549 1119948 system_pods.go:61] "etcd-no-preload-843792" [68470ab3-9513-4504-9d1e-dbb896b8ae6b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0729 19:45:17.098557 1119948 system_pods.go:61] "kube-apiserver-no-preload-843792" [6cc37d70-bc14-4a06-987d-320a2a11b533] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0729 19:45:17.098563 1119948 system_pods.go:61] "kube-controller-manager-no-preload-843792" [5c115624-c9e9-4019-9783-35cc825fb1df] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0729 19:45:17.098570 1119948 system_pods.go:61] "kube-proxy-6kzvz" [4f0006c3-1172-48b6-8631-643090032c58] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0729 19:45:17.098579 1119948 system_pods.go:61] "kube-scheduler-no-preload-843792" [5c2a4c59-a525-4246-9d11-50fddef53815] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0729 19:45:17.098584 1119948 system_pods.go:61] "metrics-server-78fcd8795b-pcx9w" [7d138038-71ad-4279-9562-f3864d5a0024] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 19:45:17.098591 1119948 system_pods.go:61] "storage-provisioner" [289822fa-8ed4-4abe-970e-8b6d9a9fa51e] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0729 19:45:17.098598 1119948 system_pods.go:74] duration metric: took 23.126612ms to wait for pod list to return data ...
	I0729 19:45:17.098610 1119948 node_conditions.go:102] verifying NodePressure condition ...
	I0729 19:45:17.125364 1119948 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 19:45:17.125395 1119948 node_conditions.go:123] node cpu capacity is 2
	I0729 19:45:17.125405 1119948 node_conditions.go:105] duration metric: took 26.790642ms to run NodePressure ...
	I0729 19:45:17.125425 1119948 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 19:45:17.467261 1119948 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0729 19:45:17.478831 1119948 kubeadm.go:739] kubelet initialised
	I0729 19:45:17.478871 1119948 kubeadm.go:740] duration metric: took 11.576985ms waiting for restarted kubelet to initialise ...
	I0729 19:45:17.478883 1119948 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 19:45:17.483948 1119948 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5cfdc65f69-j6m2k" in "kube-system" namespace to be "Ready" ...
	I0729 19:45:16.639536 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:18.641996 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:18.279857 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:20.779054 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:18.281504 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:18.781826 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:19.281901 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:19.782011 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:20.281384 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:20.781352 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:21.281834 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:21.781603 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:22.281152 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:22.781351 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:19.493011 1119948 pod_ready.go:102] pod "coredns-5cfdc65f69-j6m2k" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:21.992979 1119948 pod_ready.go:102] pod "coredns-5cfdc65f69-j6m2k" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:21.139438 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:23.636771 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:22.779640 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:24.780814 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:23.281111 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:23.781931 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:24.281455 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:24.781346 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:25.281633 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:25.781092 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:26.281145 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:26.781235 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:27.281327 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:27.781099 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:24.491231 1119948 pod_ready.go:102] pod "coredns-5cfdc65f69-j6m2k" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:26.991237 1119948 pod_ready.go:102] pod "coredns-5cfdc65f69-j6m2k" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:28.490384 1119948 pod_ready.go:92] pod "coredns-5cfdc65f69-j6m2k" in "kube-system" namespace has status "Ready":"True"
	I0729 19:45:28.490413 1119948 pod_ready.go:81] duration metric: took 11.006435855s for pod "coredns-5cfdc65f69-j6m2k" in "kube-system" namespace to be "Ready" ...
	I0729 19:45:28.490425 1119948 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-843792" in "kube-system" namespace to be "Ready" ...
	I0729 19:45:28.495144 1119948 pod_ready.go:92] pod "etcd-no-preload-843792" in "kube-system" namespace has status "Ready":"True"
	I0729 19:45:28.495168 1119948 pod_ready.go:81] duration metric: took 4.736893ms for pod "etcd-no-preload-843792" in "kube-system" namespace to be "Ready" ...
	I0729 19:45:28.495177 1119948 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-843792" in "kube-system" namespace to be "Ready" ...
	I0729 19:45:28.499249 1119948 pod_ready.go:92] pod "kube-apiserver-no-preload-843792" in "kube-system" namespace has status "Ready":"True"
	I0729 19:45:28.499272 1119948 pod_ready.go:81] duration metric: took 4.089379ms for pod "kube-apiserver-no-preload-843792" in "kube-system" namespace to be "Ready" ...
	I0729 19:45:28.499280 1119948 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-843792" in "kube-system" namespace to be "Ready" ...
	I0729 19:45:25.637886 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:28.138043 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:27.279850 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:29.778397 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:28.281600 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:28.781033 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:29.281086 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:29.781358 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:30.281478 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:30.781094 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:31.281816 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:31.781092 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:32.281012 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:32.781266 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:29.505726 1119948 pod_ready.go:92] pod "kube-controller-manager-no-preload-843792" in "kube-system" namespace has status "Ready":"True"
	I0729 19:45:29.505752 1119948 pod_ready.go:81] duration metric: took 1.0064644s for pod "kube-controller-manager-no-preload-843792" in "kube-system" namespace to be "Ready" ...
	I0729 19:45:29.505764 1119948 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-6kzvz" in "kube-system" namespace to be "Ready" ...
	I0729 19:45:29.510705 1119948 pod_ready.go:92] pod "kube-proxy-6kzvz" in "kube-system" namespace has status "Ready":"True"
	I0729 19:45:29.510725 1119948 pod_ready.go:81] duration metric: took 4.953497ms for pod "kube-proxy-6kzvz" in "kube-system" namespace to be "Ready" ...
	I0729 19:45:29.510735 1119948 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-843792" in "kube-system" namespace to be "Ready" ...
	I0729 19:45:29.688555 1119948 pod_ready.go:92] pod "kube-scheduler-no-preload-843792" in "kube-system" namespace has status "Ready":"True"
	I0729 19:45:29.688579 1119948 pod_ready.go:81] duration metric: took 177.837031ms for pod "kube-scheduler-no-preload-843792" in "kube-system" namespace to be "Ready" ...
	I0729 19:45:29.688593 1119948 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace to be "Ready" ...
	I0729 19:45:31.695505 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:30.637213 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:32.638747 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:31.778641 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:34.277964 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:33.281410 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:33.781923 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:34.281471 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:34.781303 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:35.281404 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:35.781727 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:36.281960 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:36.781632 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:37.281624 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:37.781232 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:34.196033 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:36.697003 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:35.137135 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:37.137857 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:39.138563 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:36.278607 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:38.278960 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:40.280428 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:38.281103 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:38.781134 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:39.281907 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:39.781863 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:40.281104 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:40.781928 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:41.281757 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:45:41.281864 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:45:41.322903 1120970 cri.go:89] found id: ""
	I0729 19:45:41.322929 1120970 logs.go:276] 0 containers: []
	W0729 19:45:41.322938 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:45:41.322945 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:45:41.323016 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:45:41.359651 1120970 cri.go:89] found id: ""
	I0729 19:45:41.359679 1120970 logs.go:276] 0 containers: []
	W0729 19:45:41.359687 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:45:41.359692 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:45:41.359744 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:45:41.402317 1120970 cri.go:89] found id: ""
	I0729 19:45:41.402358 1120970 logs.go:276] 0 containers: []
	W0729 19:45:41.402370 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:45:41.402380 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:45:41.402454 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:45:41.438796 1120970 cri.go:89] found id: ""
	I0729 19:45:41.438823 1120970 logs.go:276] 0 containers: []
	W0729 19:45:41.438833 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:45:41.438839 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:45:41.438931 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:45:41.477648 1120970 cri.go:89] found id: ""
	I0729 19:45:41.477677 1120970 logs.go:276] 0 containers: []
	W0729 19:45:41.477685 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:45:41.477692 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:45:41.477761 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:45:41.517603 1120970 cri.go:89] found id: ""
	I0729 19:45:41.517635 1120970 logs.go:276] 0 containers: []
	W0729 19:45:41.517646 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:45:41.517654 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:45:41.517727 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:45:41.553106 1120970 cri.go:89] found id: ""
	I0729 19:45:41.553140 1120970 logs.go:276] 0 containers: []
	W0729 19:45:41.553151 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:45:41.553158 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:45:41.553226 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:45:41.595007 1120970 cri.go:89] found id: ""
	I0729 19:45:41.595035 1120970 logs.go:276] 0 containers: []
	W0729 19:45:41.595044 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
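
The block above queries CRI-O for each expected control-plane container by name and finds none, which is why the run falls back to collecting kubelet, dmesg, and CRI-O logs next. A rough sketch of the same crictl query loop (assumes crictl is installed and sudo is available):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        names := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard"}
        for _, n := range names {
            // --quiet prints only container IDs; an empty result means the component is not running.
            out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+n).Output()
            ids := strings.Fields(string(out))
            fmt.Printf("%-24s containers found: %d err: %v\n", n, len(ids), err)
        }
    }
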
	I0729 19:45:41.595054 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:45:41.595069 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:45:41.634927 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:45:41.634966 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:45:41.685871 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:45:41.685906 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:45:41.700701 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:45:41.700735 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:45:41.816575 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:45:41.816598 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:45:41.816611 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:45:39.199863 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:41.200303 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:43.695592 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:41.637651 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:44.138141 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:42.778550 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:44.779186 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:44.396592 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:44.410567 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:45:44.410644 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:45:44.447450 1120970 cri.go:89] found id: ""
	I0729 19:45:44.447487 1120970 logs.go:276] 0 containers: []
	W0729 19:45:44.447499 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:45:44.447507 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:45:44.447579 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:45:44.487679 1120970 cri.go:89] found id: ""
	I0729 19:45:44.487714 1120970 logs.go:276] 0 containers: []
	W0729 19:45:44.487725 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:45:44.487732 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:45:44.487806 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:45:44.527170 1120970 cri.go:89] found id: ""
	I0729 19:45:44.527211 1120970 logs.go:276] 0 containers: []
	W0729 19:45:44.527219 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:45:44.527226 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:45:44.527282 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:45:44.567585 1120970 cri.go:89] found id: ""
	I0729 19:45:44.567613 1120970 logs.go:276] 0 containers: []
	W0729 19:45:44.567622 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:45:44.567629 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:45:44.567680 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:45:44.605003 1120970 cri.go:89] found id: ""
	I0729 19:45:44.605031 1120970 logs.go:276] 0 containers: []
	W0729 19:45:44.605041 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:45:44.605049 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:45:44.605121 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:45:44.643862 1120970 cri.go:89] found id: ""
	I0729 19:45:44.643887 1120970 logs.go:276] 0 containers: []
	W0729 19:45:44.643894 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:45:44.643901 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:45:44.643950 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:45:44.679814 1120970 cri.go:89] found id: ""
	I0729 19:45:44.679845 1120970 logs.go:276] 0 containers: []
	W0729 19:45:44.679855 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:45:44.679862 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:45:44.679926 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:45:44.714679 1120970 cri.go:89] found id: ""
	I0729 19:45:44.714709 1120970 logs.go:276] 0 containers: []
	W0729 19:45:44.714719 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:45:44.714729 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:45:44.714747 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:45:44.766381 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:45:44.766424 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:45:44.782337 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:45:44.782369 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:45:44.854487 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:45:44.854509 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:45:44.854522 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:45:44.935043 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:45:44.935082 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
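	The block above is one complete diagnostic pass: minikube asks CRI-O for each control-plane container by name, finds none (every query returns an empty id), and then falls back to gathering kubelet, dmesg, CRI-O and container-status logs, with the describe-nodes step failing because nothing answers on localhost:8443. As a minimal sketch for rerunning the same checks by hand on the node (run on the VM, e.g. via minikube ssh; only commands that already appear in the log are used, and the kubectl binary and kubeconfig paths are the ones the log shows for v1.20.0):

		# Is any kube-apiserver process running at all?
		sudo pgrep -xnf 'kube-apiserver.*minikube.*'

		# List control-plane containers known to CRI-O (the log found none).
		for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
		            kube-controller-manager kindnet kubernetes-dashboard; do
		  echo "== ${name} =="
		  sudo crictl ps -a --quiet --name="${name}"
		done

		# Gather the same logs minikube collects.
		sudo journalctl -u kubelet -n 400
		sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
		sudo journalctl -u crio -n 400
		sudo crictl ps -a

		# Ask the API server directly; this is the step that fails with
		# "connection to the server localhost:8443 was refused" in the log.
		sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes \
		  --kubeconfig=/var/lib/minikube/kubeconfig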
	I0729 19:45:47.481158 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:47.496559 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:45:47.496649 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:45:47.531949 1120970 cri.go:89] found id: ""
	I0729 19:45:47.531981 1120970 logs.go:276] 0 containers: []
	W0729 19:45:47.531990 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:45:47.531996 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:45:47.532050 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:45:47.571424 1120970 cri.go:89] found id: ""
	I0729 19:45:47.571451 1120970 logs.go:276] 0 containers: []
	W0729 19:45:47.571459 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:45:47.571465 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:45:47.571517 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:45:47.610439 1120970 cri.go:89] found id: ""
	I0729 19:45:47.610474 1120970 logs.go:276] 0 containers: []
	W0729 19:45:47.610485 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:45:47.610494 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:45:47.610561 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:45:47.648351 1120970 cri.go:89] found id: ""
	I0729 19:45:47.648380 1120970 logs.go:276] 0 containers: []
	W0729 19:45:47.648388 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:45:47.648395 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:45:47.648458 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:45:47.686610 1120970 cri.go:89] found id: ""
	I0729 19:45:47.686646 1120970 logs.go:276] 0 containers: []
	W0729 19:45:47.686658 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:45:47.686667 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:45:47.686739 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:45:47.722870 1120970 cri.go:89] found id: ""
	I0729 19:45:47.722901 1120970 logs.go:276] 0 containers: []
	W0729 19:45:47.722909 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:45:47.722916 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:45:47.722978 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:45:47.757651 1120970 cri.go:89] found id: ""
	I0729 19:45:47.757690 1120970 logs.go:276] 0 containers: []
	W0729 19:45:47.757700 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:45:47.757709 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:45:47.757787 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:45:47.792737 1120970 cri.go:89] found id: ""
	I0729 19:45:47.792767 1120970 logs.go:276] 0 containers: []
	W0729 19:45:47.792776 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:45:47.792786 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:45:47.792799 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:45:47.867707 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:45:47.867734 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:45:47.867751 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:45:47.949876 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:45:47.949918 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:45:45.696302 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:48.194324 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:46.637438 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:48.637749 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:47.279986 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:49.778293 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:47.991014 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:45:47.991053 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:45:48.041713 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:45:48.041752 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:45:50.557028 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:50.571918 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:45:50.572012 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:45:50.608752 1120970 cri.go:89] found id: ""
	I0729 19:45:50.608783 1120970 logs.go:276] 0 containers: []
	W0729 19:45:50.608791 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:45:50.608798 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:45:50.608851 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:45:50.644225 1120970 cri.go:89] found id: ""
	I0729 19:45:50.644251 1120970 logs.go:276] 0 containers: []
	W0729 19:45:50.644261 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:45:50.644269 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:45:50.644357 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:45:50.680364 1120970 cri.go:89] found id: ""
	I0729 19:45:50.680400 1120970 logs.go:276] 0 containers: []
	W0729 19:45:50.680412 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:45:50.680420 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:45:50.680487 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:45:50.724418 1120970 cri.go:89] found id: ""
	I0729 19:45:50.724443 1120970 logs.go:276] 0 containers: []
	W0729 19:45:50.724451 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:45:50.724457 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:45:50.724513 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:45:50.768891 1120970 cri.go:89] found id: ""
	I0729 19:45:50.768924 1120970 logs.go:276] 0 containers: []
	W0729 19:45:50.768935 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:45:50.768943 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:45:50.769011 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:45:50.815814 1120970 cri.go:89] found id: ""
	I0729 19:45:50.815847 1120970 logs.go:276] 0 containers: []
	W0729 19:45:50.815858 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:45:50.815866 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:45:50.815935 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:45:50.856823 1120970 cri.go:89] found id: ""
	I0729 19:45:50.856856 1120970 logs.go:276] 0 containers: []
	W0729 19:45:50.856865 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:45:50.856871 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:45:50.856935 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:45:50.890567 1120970 cri.go:89] found id: ""
	I0729 19:45:50.890618 1120970 logs.go:276] 0 containers: []
	W0729 19:45:50.890631 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:45:50.890646 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:45:50.890662 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:45:50.944060 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:45:50.944095 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:45:50.957881 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:45:50.957912 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:45:51.036005 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:45:51.036033 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:45:51.036051 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:45:51.117269 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:45:51.117311 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:45:50.195926 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:52.197099 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:50.639185 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:53.138398 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:52.278704 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:54.279094 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
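	The interleaved pod_ready.go lines come from other concurrently running tests (log prefixes 1119948, 1120280 and 1120587) that are polling metrics-server pods which never report Ready. A rough shell equivalent of that poll, using kubectl (the pod name and namespace are copied from the log; the loop itself is only an illustration, not minikube's own code), would be:

		POD=metrics-server-569cc877fc-jsvnd
		# Repeatedly check the PodReady condition, as pod_ready.go does.
		until [ "$(kubectl -n kube-system get pod "$POD" \
		      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}')" = "True" ]; do
		  echo "pod $POD in kube-system has status Ready: False"
		  sleep 2
		done
		# Or, as a one-shot check with a timeout:
		kubectl -n kube-system wait --for=condition=Ready pod/"$POD" --timeout=5m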
	I0729 19:45:53.657518 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:53.671405 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:45:53.671499 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:45:53.713703 1120970 cri.go:89] found id: ""
	I0729 19:45:53.713734 1120970 logs.go:276] 0 containers: []
	W0729 19:45:53.713747 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:45:53.713755 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:45:53.713820 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:45:53.752821 1120970 cri.go:89] found id: ""
	I0729 19:45:53.752856 1120970 logs.go:276] 0 containers: []
	W0729 19:45:53.752867 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:45:53.752875 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:45:53.752930 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:45:53.792144 1120970 cri.go:89] found id: ""
	I0729 19:45:53.792172 1120970 logs.go:276] 0 containers: []
	W0729 19:45:53.792198 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:45:53.792204 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:45:53.792264 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:45:53.831123 1120970 cri.go:89] found id: ""
	I0729 19:45:53.831151 1120970 logs.go:276] 0 containers: []
	W0729 19:45:53.831161 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:45:53.831168 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:45:53.831223 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:45:53.870716 1120970 cri.go:89] found id: ""
	I0729 19:45:53.870747 1120970 logs.go:276] 0 containers: []
	W0729 19:45:53.870758 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:45:53.870766 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:45:53.870831 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:45:53.909567 1120970 cri.go:89] found id: ""
	I0729 19:45:53.909602 1120970 logs.go:276] 0 containers: []
	W0729 19:45:53.909611 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:45:53.909619 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:45:53.909679 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:45:53.944134 1120970 cri.go:89] found id: ""
	I0729 19:45:53.944167 1120970 logs.go:276] 0 containers: []
	W0729 19:45:53.944179 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:45:53.944188 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:45:53.944249 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:45:53.979274 1120970 cri.go:89] found id: ""
	I0729 19:45:53.979307 1120970 logs.go:276] 0 containers: []
	W0729 19:45:53.979319 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:45:53.979330 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:45:53.979347 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:45:54.027783 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:45:54.027822 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:45:54.079319 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:45:54.079368 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:45:54.094387 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:45:54.094420 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:45:54.170700 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:45:54.170723 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:45:54.170737 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:45:56.756947 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:56.775456 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:45:56.775539 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:45:56.830999 1120970 cri.go:89] found id: ""
	I0729 19:45:56.831035 1120970 logs.go:276] 0 containers: []
	W0729 19:45:56.831046 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:45:56.831054 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:45:56.831144 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:45:56.868006 1120970 cri.go:89] found id: ""
	I0729 19:45:56.868039 1120970 logs.go:276] 0 containers: []
	W0729 19:45:56.868057 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:45:56.868065 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:45:56.868145 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:45:56.905275 1120970 cri.go:89] found id: ""
	I0729 19:45:56.905311 1120970 logs.go:276] 0 containers: []
	W0729 19:45:56.905322 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:45:56.905330 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:45:56.905401 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:45:56.938507 1120970 cri.go:89] found id: ""
	I0729 19:45:56.938537 1120970 logs.go:276] 0 containers: []
	W0729 19:45:56.938546 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:45:56.938553 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:45:56.938607 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:45:56.974424 1120970 cri.go:89] found id: ""
	I0729 19:45:56.974456 1120970 logs.go:276] 0 containers: []
	W0729 19:45:56.974468 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:45:56.974476 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:45:56.974543 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:45:57.008152 1120970 cri.go:89] found id: ""
	I0729 19:45:57.008191 1120970 logs.go:276] 0 containers: []
	W0729 19:45:57.008203 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:45:57.008211 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:45:57.008281 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:45:57.043904 1120970 cri.go:89] found id: ""
	I0729 19:45:57.043942 1120970 logs.go:276] 0 containers: []
	W0729 19:45:57.043953 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:45:57.043961 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:45:57.044038 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:45:57.078239 1120970 cri.go:89] found id: ""
	I0729 19:45:57.078268 1120970 logs.go:276] 0 containers: []
	W0729 19:45:57.078277 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:45:57.078286 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:45:57.078299 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:45:57.125135 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:45:57.125170 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:45:57.177926 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:45:57.177968 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:45:57.192316 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:45:57.192354 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:45:57.267034 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:45:57.267059 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:45:57.267078 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:45:54.213977 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:56.695532 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:55.637424 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:58.137534 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:56.780087 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:59.278164 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:59.849254 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:59.863328 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:45:59.863437 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:45:59.900024 1120970 cri.go:89] found id: ""
	I0729 19:45:59.900051 1120970 logs.go:276] 0 containers: []
	W0729 19:45:59.900060 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:45:59.900067 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:45:59.900128 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:45:59.935272 1120970 cri.go:89] found id: ""
	I0729 19:45:59.935308 1120970 logs.go:276] 0 containers: []
	W0729 19:45:59.935319 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:45:59.935328 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:45:59.935404 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:45:59.967684 1120970 cri.go:89] found id: ""
	I0729 19:45:59.967712 1120970 logs.go:276] 0 containers: []
	W0729 19:45:59.967725 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:45:59.967733 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:45:59.967791 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:46:00.003354 1120970 cri.go:89] found id: ""
	I0729 19:46:00.003386 1120970 logs.go:276] 0 containers: []
	W0729 19:46:00.003397 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:46:00.003404 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:46:00.003479 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:46:00.042266 1120970 cri.go:89] found id: ""
	I0729 19:46:00.042311 1120970 logs.go:276] 0 containers: []
	W0729 19:46:00.042330 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:46:00.042344 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:46:00.042419 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:46:00.081056 1120970 cri.go:89] found id: ""
	I0729 19:46:00.081085 1120970 logs.go:276] 0 containers: []
	W0729 19:46:00.081095 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:46:00.081102 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:46:00.081179 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:46:00.114102 1120970 cri.go:89] found id: ""
	I0729 19:46:00.114138 1120970 logs.go:276] 0 containers: []
	W0729 19:46:00.114153 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:46:00.114161 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:46:00.114229 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:46:00.152891 1120970 cri.go:89] found id: ""
	I0729 19:46:00.152919 1120970 logs.go:276] 0 containers: []
	W0729 19:46:00.152930 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:46:00.152942 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:46:00.152961 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:46:00.225895 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:46:00.225926 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:46:00.225944 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:46:00.306359 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:46:00.306397 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:46:00.348266 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:46:00.348305 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:46:00.401402 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:46:00.401452 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:46:02.917392 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:46:02.931221 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:46:02.931308 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:46:02.965808 1120970 cri.go:89] found id: ""
	I0729 19:46:02.965839 1120970 logs.go:276] 0 containers: []
	W0729 19:46:02.965850 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:46:02.965857 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:46:02.965924 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:45:59.195460 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:01.195742 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:03.196310 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:00.138417 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:02.637927 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:01.278771 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:03.279480 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:05.778549 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:03.003125 1120970 cri.go:89] found id: ""
	I0729 19:46:03.003152 1120970 logs.go:276] 0 containers: []
	W0729 19:46:03.003161 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:46:03.003168 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:46:03.003222 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:46:03.042782 1120970 cri.go:89] found id: ""
	I0729 19:46:03.042816 1120970 logs.go:276] 0 containers: []
	W0729 19:46:03.042827 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:46:03.042835 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:46:03.042922 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:46:03.082857 1120970 cri.go:89] found id: ""
	I0729 19:46:03.082891 1120970 logs.go:276] 0 containers: []
	W0729 19:46:03.082910 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:46:03.082918 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:46:03.082975 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:46:03.118096 1120970 cri.go:89] found id: ""
	I0729 19:46:03.118127 1120970 logs.go:276] 0 containers: []
	W0729 19:46:03.118147 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:46:03.118156 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:46:03.118228 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:46:03.155950 1120970 cri.go:89] found id: ""
	I0729 19:46:03.155983 1120970 logs.go:276] 0 containers: []
	W0729 19:46:03.155995 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:46:03.156003 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:46:03.156076 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:46:03.192698 1120970 cri.go:89] found id: ""
	I0729 19:46:03.192729 1120970 logs.go:276] 0 containers: []
	W0729 19:46:03.192741 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:46:03.192749 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:46:03.192822 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:46:03.230228 1120970 cri.go:89] found id: ""
	I0729 19:46:03.230261 1120970 logs.go:276] 0 containers: []
	W0729 19:46:03.230275 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:46:03.230292 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:46:03.230310 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:46:03.269169 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:46:03.269204 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:46:03.325724 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:46:03.325765 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:46:03.339955 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:46:03.339986 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:46:03.415795 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:46:03.415823 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:46:03.415839 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:46:06.002947 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:46:06.017334 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:46:06.017422 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:46:06.051132 1120970 cri.go:89] found id: ""
	I0729 19:46:06.051161 1120970 logs.go:276] 0 containers: []
	W0729 19:46:06.051169 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:46:06.051182 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:46:06.051248 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:46:06.085156 1120970 cri.go:89] found id: ""
	I0729 19:46:06.085185 1120970 logs.go:276] 0 containers: []
	W0729 19:46:06.085194 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:46:06.085200 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:46:06.085252 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:46:06.122263 1120970 cri.go:89] found id: ""
	I0729 19:46:06.122296 1120970 logs.go:276] 0 containers: []
	W0729 19:46:06.122303 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:46:06.122309 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:46:06.122377 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:46:06.158066 1120970 cri.go:89] found id: ""
	I0729 19:46:06.158093 1120970 logs.go:276] 0 containers: []
	W0729 19:46:06.158102 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:46:06.158109 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:46:06.158161 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:46:06.193082 1120970 cri.go:89] found id: ""
	I0729 19:46:06.193109 1120970 logs.go:276] 0 containers: []
	W0729 19:46:06.193117 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:46:06.193125 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:46:06.193188 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:46:06.226239 1120970 cri.go:89] found id: ""
	I0729 19:46:06.226276 1120970 logs.go:276] 0 containers: []
	W0729 19:46:06.226285 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:46:06.226292 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:46:06.226346 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:46:06.262648 1120970 cri.go:89] found id: ""
	I0729 19:46:06.262686 1120970 logs.go:276] 0 containers: []
	W0729 19:46:06.262697 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:46:06.262703 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:46:06.262769 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:46:06.304018 1120970 cri.go:89] found id: ""
	I0729 19:46:06.304047 1120970 logs.go:276] 0 containers: []
	W0729 19:46:06.304056 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:46:06.304066 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:46:06.304078 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:46:06.345240 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:46:06.345269 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:46:06.399728 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:46:06.399768 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:46:06.415271 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:46:06.415312 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:46:06.492320 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:46:06.492342 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:46:06.492361 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:46:05.695149 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:08.196040 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:05.136979 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:07.137588 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:09.140728 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:08.278537 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:10.278751 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:09.070966 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:46:09.084876 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:46:09.084957 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:46:09.123177 1120970 cri.go:89] found id: ""
	I0729 19:46:09.123209 1120970 logs.go:276] 0 containers: []
	W0729 19:46:09.123220 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:46:09.123227 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:46:09.123300 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:46:09.162546 1120970 cri.go:89] found id: ""
	I0729 19:46:09.162593 1120970 logs.go:276] 0 containers: []
	W0729 19:46:09.162605 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:46:09.162614 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:46:09.162682 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:46:09.198047 1120970 cri.go:89] found id: ""
	I0729 19:46:09.198075 1120970 logs.go:276] 0 containers: []
	W0729 19:46:09.198084 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:46:09.198091 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:46:09.198165 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:46:09.231929 1120970 cri.go:89] found id: ""
	I0729 19:46:09.231962 1120970 logs.go:276] 0 containers: []
	W0729 19:46:09.231973 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:46:09.231982 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:46:09.232051 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:46:09.269543 1120970 cri.go:89] found id: ""
	I0729 19:46:09.269574 1120970 logs.go:276] 0 containers: []
	W0729 19:46:09.269585 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:46:09.269593 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:46:09.269665 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:46:09.304012 1120970 cri.go:89] found id: ""
	I0729 19:46:09.304042 1120970 logs.go:276] 0 containers: []
	W0729 19:46:09.304051 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:46:09.304057 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:46:09.304110 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:46:09.340266 1120970 cri.go:89] found id: ""
	I0729 19:46:09.340302 1120970 logs.go:276] 0 containers: []
	W0729 19:46:09.340315 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:46:09.340323 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:46:09.340402 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:46:09.373855 1120970 cri.go:89] found id: ""
	I0729 19:46:09.373884 1120970 logs.go:276] 0 containers: []
	W0729 19:46:09.373892 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:46:09.373902 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:46:09.373916 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:46:09.434007 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:46:09.434047 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:46:09.448138 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:46:09.448168 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:46:09.523836 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:46:09.523866 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:46:09.523884 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:46:09.605562 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:46:09.605602 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:46:12.147573 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:46:12.162219 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:46:12.162307 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:46:12.197420 1120970 cri.go:89] found id: ""
	I0729 19:46:12.197446 1120970 logs.go:276] 0 containers: []
	W0729 19:46:12.197454 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:46:12.197460 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:46:12.197511 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:46:12.236008 1120970 cri.go:89] found id: ""
	I0729 19:46:12.236042 1120970 logs.go:276] 0 containers: []
	W0729 19:46:12.236052 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:46:12.236058 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:46:12.236125 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:46:12.279184 1120970 cri.go:89] found id: ""
	I0729 19:46:12.279208 1120970 logs.go:276] 0 containers: []
	W0729 19:46:12.279216 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:46:12.279222 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:46:12.279273 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:46:12.319020 1120970 cri.go:89] found id: ""
	I0729 19:46:12.319061 1120970 logs.go:276] 0 containers: []
	W0729 19:46:12.319072 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:46:12.319083 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:46:12.319140 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:46:12.354552 1120970 cri.go:89] found id: ""
	I0729 19:46:12.354591 1120970 logs.go:276] 0 containers: []
	W0729 19:46:12.354600 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:46:12.354606 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:46:12.354664 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:46:12.389196 1120970 cri.go:89] found id: ""
	I0729 19:46:12.389232 1120970 logs.go:276] 0 containers: []
	W0729 19:46:12.389242 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:46:12.389251 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:46:12.389351 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:46:12.425713 1120970 cri.go:89] found id: ""
	I0729 19:46:12.425751 1120970 logs.go:276] 0 containers: []
	W0729 19:46:12.425767 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:46:12.425776 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:46:12.425851 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:46:12.461092 1120970 cri.go:89] found id: ""
	I0729 19:46:12.461123 1120970 logs.go:276] 0 containers: []
	W0729 19:46:12.461132 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:46:12.461142 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:46:12.461162 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:46:12.537550 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:46:12.537594 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:46:12.578558 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:46:12.578597 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:46:12.629269 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:46:12.629310 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:46:12.644176 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:46:12.644202 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:46:12.717070 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:46:10.695776 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:12.696260 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:11.637812 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:14.137356 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:12.778309 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:15.278853 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:15.218239 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:46:15.232163 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:46:15.232236 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:46:15.268490 1120970 cri.go:89] found id: ""
	I0729 19:46:15.268520 1120970 logs.go:276] 0 containers: []
	W0729 19:46:15.268532 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:46:15.268539 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:46:15.268621 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:46:15.303437 1120970 cri.go:89] found id: ""
	I0729 19:46:15.303473 1120970 logs.go:276] 0 containers: []
	W0729 19:46:15.303485 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:46:15.303493 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:46:15.303557 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:46:15.340676 1120970 cri.go:89] found id: ""
	I0729 19:46:15.340706 1120970 logs.go:276] 0 containers: []
	W0729 19:46:15.340717 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:46:15.340725 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:46:15.340798 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:46:15.376731 1120970 cri.go:89] found id: ""
	I0729 19:46:15.376764 1120970 logs.go:276] 0 containers: []
	W0729 19:46:15.376775 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:46:15.376783 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:46:15.376854 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:46:15.412493 1120970 cri.go:89] found id: ""
	I0729 19:46:15.412524 1120970 logs.go:276] 0 containers: []
	W0729 19:46:15.412533 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:46:15.412541 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:46:15.412614 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:46:15.448795 1120970 cri.go:89] found id: ""
	I0729 19:46:15.448830 1120970 logs.go:276] 0 containers: []
	W0729 19:46:15.448842 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:46:15.448850 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:46:15.448923 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:46:15.484048 1120970 cri.go:89] found id: ""
	I0729 19:46:15.484082 1120970 logs.go:276] 0 containers: []
	W0729 19:46:15.484100 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:46:15.484108 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:46:15.484172 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:46:15.520340 1120970 cri.go:89] found id: ""
	I0729 19:46:15.520370 1120970 logs.go:276] 0 containers: []
	W0729 19:46:15.520380 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:46:15.520389 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:46:15.520408 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:46:15.568837 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:46:15.568877 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:46:15.582958 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:46:15.582993 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:46:15.653880 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:46:15.653901 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:46:15.653920 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:46:15.732652 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:46:15.732691 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
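
Note on the block above: each cycle in this log probes the CRI runtime for every expected control-plane container (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet, kubernetes-dashboard), finds none, and then falls back to gathering kubelet, dmesg, CRI-O, and container-status logs. The sketch below is a hypothetical, stand-alone illustration of that same probe, run directly on the node rather than over SSH as minikube does; it assumes crictl is on PATH and passwordless sudo, and the file name probe.go and function foundContainers are invented for this example, not minikube code.

// probe.go: minimal sketch of the container probe the log above repeats.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// foundContainers runs `sudo crictl ps -a --quiet --name=<name>` (the exact
// command shown in the log) and returns any non-empty container IDs printed.
func foundContainers(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	var ids []string
	for _, line := range strings.Split(string(out), "\n") {
		if line = strings.TrimSpace(line); line != "" {
			ids = append(ids, line)
		}
	}
	return ids, nil
}

func main() {
	for _, name := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
		ids, err := foundContainers(name)
		if err != nil {
			fmt.Printf("probe %q failed: %v\n", name, err)
			continue
		}
		if len(ids) == 0 {
			// Corresponds to the repeated `0 containers: []` /
			// `No container was found matching ...` lines above.
			fmt.Printf("no container found matching %q\n", name)
		}
	}
}
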
	I0729 19:46:15.194855 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:17.196069 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:16.137961 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:18.139896 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:17.779000 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:19.779635 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:18.273795 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:46:18.288991 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:46:18.289066 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:46:18.327583 1120970 cri.go:89] found id: ""
	I0729 19:46:18.327619 1120970 logs.go:276] 0 containers: []
	W0729 19:46:18.327631 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:46:18.327639 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:46:18.327716 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:46:18.361476 1120970 cri.go:89] found id: ""
	I0729 19:46:18.361504 1120970 logs.go:276] 0 containers: []
	W0729 19:46:18.361515 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:46:18.361523 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:46:18.361590 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:46:18.401842 1120970 cri.go:89] found id: ""
	I0729 19:46:18.401873 1120970 logs.go:276] 0 containers: []
	W0729 19:46:18.401884 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:46:18.401892 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:46:18.401965 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:46:18.439870 1120970 cri.go:89] found id: ""
	I0729 19:46:18.439905 1120970 logs.go:276] 0 containers: []
	W0729 19:46:18.439920 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:46:18.439929 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:46:18.440015 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:46:18.474916 1120970 cri.go:89] found id: ""
	I0729 19:46:18.474944 1120970 logs.go:276] 0 containers: []
	W0729 19:46:18.474953 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:46:18.474960 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:46:18.475033 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:46:18.509957 1120970 cri.go:89] found id: ""
	I0729 19:46:18.509984 1120970 logs.go:276] 0 containers: []
	W0729 19:46:18.509993 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:46:18.509999 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:46:18.510064 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:46:18.545521 1120970 cri.go:89] found id: ""
	I0729 19:46:18.545551 1120970 logs.go:276] 0 containers: []
	W0729 19:46:18.545564 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:46:18.545573 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:46:18.545646 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:46:18.579041 1120970 cri.go:89] found id: ""
	I0729 19:46:18.579072 1120970 logs.go:276] 0 containers: []
	W0729 19:46:18.579080 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:46:18.579091 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:46:18.579104 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:46:18.653041 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:46:18.653063 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:46:18.653077 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:46:18.732969 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:46:18.733035 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:46:18.773700 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:46:18.773735 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:46:18.826511 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:46:18.826553 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:46:21.340974 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:46:21.354608 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:46:21.354671 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:46:21.388765 1120970 cri.go:89] found id: ""
	I0729 19:46:21.388795 1120970 logs.go:276] 0 containers: []
	W0729 19:46:21.388806 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:46:21.388814 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:46:21.388909 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:46:21.426734 1120970 cri.go:89] found id: ""
	I0729 19:46:21.426764 1120970 logs.go:276] 0 containers: []
	W0729 19:46:21.426776 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:46:21.426784 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:46:21.426861 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:46:21.462965 1120970 cri.go:89] found id: ""
	I0729 19:46:21.462999 1120970 logs.go:276] 0 containers: []
	W0729 19:46:21.463010 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:46:21.463018 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:46:21.463087 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:46:21.496933 1120970 cri.go:89] found id: ""
	I0729 19:46:21.496961 1120970 logs.go:276] 0 containers: []
	W0729 19:46:21.496972 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:46:21.496980 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:46:21.497043 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:46:21.532648 1120970 cri.go:89] found id: ""
	I0729 19:46:21.532682 1120970 logs.go:276] 0 containers: []
	W0729 19:46:21.532695 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:46:21.532703 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:46:21.532777 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:46:21.566507 1120970 cri.go:89] found id: ""
	I0729 19:46:21.566545 1120970 logs.go:276] 0 containers: []
	W0729 19:46:21.566556 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:46:21.566567 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:46:21.566652 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:46:21.605591 1120970 cri.go:89] found id: ""
	I0729 19:46:21.605624 1120970 logs.go:276] 0 containers: []
	W0729 19:46:21.605635 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:46:21.605644 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:46:21.605711 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:46:21.639979 1120970 cri.go:89] found id: ""
	I0729 19:46:21.640004 1120970 logs.go:276] 0 containers: []
	W0729 19:46:21.640012 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:46:21.640020 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:46:21.640035 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:46:21.694405 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:46:21.694450 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:46:21.708616 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:46:21.708647 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:46:21.778528 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:46:21.778567 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:46:21.778583 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:46:21.859626 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:46:21.859661 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:46:19.696385 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:22.195265 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:20.638331 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:23.138907 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:21.779848 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:24.278815 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:24.397520 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:46:24.412579 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:46:24.412673 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:46:24.452586 1120970 cri.go:89] found id: ""
	I0729 19:46:24.452621 1120970 logs.go:276] 0 containers: []
	W0729 19:46:24.452633 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:46:24.452640 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:46:24.452856 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:46:24.487706 1120970 cri.go:89] found id: ""
	I0729 19:46:24.487739 1120970 logs.go:276] 0 containers: []
	W0729 19:46:24.487750 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:46:24.487758 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:46:24.487828 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:46:24.528798 1120970 cri.go:89] found id: ""
	I0729 19:46:24.528832 1120970 logs.go:276] 0 containers: []
	W0729 19:46:24.528844 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:46:24.528852 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:46:24.528926 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:46:24.566429 1120970 cri.go:89] found id: ""
	I0729 19:46:24.566464 1120970 logs.go:276] 0 containers: []
	W0729 19:46:24.566484 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:46:24.566497 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:46:24.566561 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:46:24.601216 1120970 cri.go:89] found id: ""
	I0729 19:46:24.601242 1120970 logs.go:276] 0 containers: []
	W0729 19:46:24.601249 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:46:24.601255 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:46:24.601318 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:46:24.635591 1120970 cri.go:89] found id: ""
	I0729 19:46:24.635636 1120970 logs.go:276] 0 containers: []
	W0729 19:46:24.635648 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:46:24.635655 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:46:24.635723 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:46:24.670674 1120970 cri.go:89] found id: ""
	I0729 19:46:24.670705 1120970 logs.go:276] 0 containers: []
	W0729 19:46:24.670717 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:46:24.670724 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:46:24.670795 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:46:24.704820 1120970 cri.go:89] found id: ""
	I0729 19:46:24.704850 1120970 logs.go:276] 0 containers: []
	W0729 19:46:24.704861 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:46:24.704873 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:46:24.704889 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:46:24.787954 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:46:24.787989 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:46:24.849396 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:46:24.849433 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:46:24.900920 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:46:24.900956 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:46:24.915540 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:46:24.915572 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:46:24.986146 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:46:27.487069 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:46:27.500718 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:46:27.500802 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:46:27.535156 1120970 cri.go:89] found id: ""
	I0729 19:46:27.535188 1120970 logs.go:276] 0 containers: []
	W0729 19:46:27.535199 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:46:27.535206 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:46:27.535272 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:46:27.570613 1120970 cri.go:89] found id: ""
	I0729 19:46:27.570647 1120970 logs.go:276] 0 containers: []
	W0729 19:46:27.570658 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:46:27.570666 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:46:27.570726 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:46:27.605503 1120970 cri.go:89] found id: ""
	I0729 19:46:27.605540 1120970 logs.go:276] 0 containers: []
	W0729 19:46:27.605552 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:46:27.605560 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:46:27.605628 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:46:27.638179 1120970 cri.go:89] found id: ""
	I0729 19:46:27.638202 1120970 logs.go:276] 0 containers: []
	W0729 19:46:27.638209 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:46:27.638215 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:46:27.638265 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:46:27.671019 1120970 cri.go:89] found id: ""
	I0729 19:46:27.671049 1120970 logs.go:276] 0 containers: []
	W0729 19:46:27.671059 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:46:27.671067 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:46:27.671133 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:46:27.704126 1120970 cri.go:89] found id: ""
	I0729 19:46:27.704148 1120970 logs.go:276] 0 containers: []
	W0729 19:46:27.704155 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:46:27.704161 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:46:27.704217 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:46:27.736106 1120970 cri.go:89] found id: ""
	I0729 19:46:27.736137 1120970 logs.go:276] 0 containers: []
	W0729 19:46:27.736148 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:46:27.736162 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:46:27.736234 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:46:27.775615 1120970 cri.go:89] found id: ""
	I0729 19:46:27.775644 1120970 logs.go:276] 0 containers: []
	W0729 19:46:27.775655 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:46:27.775666 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:46:27.775681 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:46:27.817852 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:46:27.817882 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:46:27.867280 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:46:27.867319 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:46:27.880533 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:46:27.880558 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:46:27.952098 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:46:27.952120 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:46:27.952138 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:46:24.195374 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:26.696327 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:25.637615 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:28.138222 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:26.779021 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:29.279227 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:30.534052 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:46:30.560617 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:46:30.560704 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:46:30.594317 1120970 cri.go:89] found id: ""
	I0729 19:46:30.594354 1120970 logs.go:276] 0 containers: []
	W0729 19:46:30.594365 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:46:30.594372 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:46:30.594438 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:46:30.629175 1120970 cri.go:89] found id: ""
	I0729 19:46:30.629202 1120970 logs.go:276] 0 containers: []
	W0729 19:46:30.629213 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:46:30.629278 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:46:30.629358 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:46:30.663173 1120970 cri.go:89] found id: ""
	I0729 19:46:30.663199 1120970 logs.go:276] 0 containers: []
	W0729 19:46:30.663207 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:46:30.663212 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:46:30.663271 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:46:30.695709 1120970 cri.go:89] found id: ""
	I0729 19:46:30.695729 1120970 logs.go:276] 0 containers: []
	W0729 19:46:30.695738 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:46:30.695745 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:46:30.695808 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:46:30.726555 1120970 cri.go:89] found id: ""
	I0729 19:46:30.726582 1120970 logs.go:276] 0 containers: []
	W0729 19:46:30.726589 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:46:30.726597 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:46:30.726658 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:46:30.759818 1120970 cri.go:89] found id: ""
	I0729 19:46:30.759847 1120970 logs.go:276] 0 containers: []
	W0729 19:46:30.759859 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:46:30.759865 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:46:30.759928 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:46:30.794006 1120970 cri.go:89] found id: ""
	I0729 19:46:30.794038 1120970 logs.go:276] 0 containers: []
	W0729 19:46:30.794051 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:46:30.794058 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:46:30.794127 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:46:30.825707 1120970 cri.go:89] found id: ""
	I0729 19:46:30.825735 1120970 logs.go:276] 0 containers: []
	W0729 19:46:30.825744 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:46:30.825753 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:46:30.825767 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:46:30.877517 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:46:30.877553 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:46:30.890777 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:46:30.890811 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:46:30.956702 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:46:30.956732 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:46:30.956747 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:46:31.039080 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:46:31.039118 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:46:29.195305 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:31.694814 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:33.696603 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:30.638472 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:33.138085 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:31.279889 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:33.779333 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:33.580120 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:46:33.595087 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:46:33.595152 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:46:33.636347 1120970 cri.go:89] found id: ""
	I0729 19:46:33.636374 1120970 logs.go:276] 0 containers: []
	W0729 19:46:33.636385 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:46:33.636392 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:46:33.636451 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:46:33.674180 1120970 cri.go:89] found id: ""
	I0729 19:46:33.674207 1120970 logs.go:276] 0 containers: []
	W0729 19:46:33.674215 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:46:33.674222 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:46:33.674281 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:46:33.709549 1120970 cri.go:89] found id: ""
	I0729 19:46:33.709572 1120970 logs.go:276] 0 containers: []
	W0729 19:46:33.709581 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:46:33.709593 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:46:33.709651 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:46:33.742803 1120970 cri.go:89] found id: ""
	I0729 19:46:33.742833 1120970 logs.go:276] 0 containers: []
	W0729 19:46:33.742854 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:46:33.742863 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:46:33.742931 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:46:33.776301 1120970 cri.go:89] found id: ""
	I0729 19:46:33.776329 1120970 logs.go:276] 0 containers: []
	W0729 19:46:33.776336 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:46:33.776342 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:46:33.776412 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:46:33.818972 1120970 cri.go:89] found id: ""
	I0729 19:46:33.819001 1120970 logs.go:276] 0 containers: []
	W0729 19:46:33.819009 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:46:33.819016 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:46:33.819084 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:46:33.857970 1120970 cri.go:89] found id: ""
	I0729 19:46:33.858002 1120970 logs.go:276] 0 containers: []
	W0729 19:46:33.858022 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:46:33.858028 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:46:33.858113 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:46:33.896207 1120970 cri.go:89] found id: ""
	I0729 19:46:33.896237 1120970 logs.go:276] 0 containers: []
	W0729 19:46:33.896248 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:46:33.896261 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:46:33.896276 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:46:33.976843 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:46:33.976879 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:46:34.015642 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:46:34.015671 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:46:34.066095 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:46:34.066133 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:46:34.079616 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:46:34.079649 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:46:34.150666 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
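
Note on the recurring "describe nodes" failure above: with no kube-apiserver container found by any of the crictl probes, nothing is listening on localhost:8443, so every `kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig` attempt is refused. The sketch below is a hypothetical check (file name apicheck.go is invented, assumes it runs on the node itself) that reproduces the same symptom by dialing the apiserver port directly.

// apicheck.go: minimal sketch reproducing the "connection refused" symptom.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		// With no kube-apiserver container running (see the empty crictl
		// listings above), the dial fails, so `kubectl describe nodes`
		// can only fail the same way.
		fmt.Println("apiserver unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver port is accepting connections")
}
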
	I0729 19:46:36.651722 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:46:36.665599 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:46:36.665673 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:46:36.702807 1120970 cri.go:89] found id: ""
	I0729 19:46:36.702872 1120970 logs.go:276] 0 containers: []
	W0729 19:46:36.702897 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:46:36.702907 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:46:36.702978 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:46:36.739552 1120970 cri.go:89] found id: ""
	I0729 19:46:36.739576 1120970 logs.go:276] 0 containers: []
	W0729 19:46:36.739585 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:46:36.739591 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:46:36.739643 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:46:36.774989 1120970 cri.go:89] found id: ""
	I0729 19:46:36.775017 1120970 logs.go:276] 0 containers: []
	W0729 19:46:36.775028 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:46:36.775036 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:46:36.775108 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:46:36.814984 1120970 cri.go:89] found id: ""
	I0729 19:46:36.815017 1120970 logs.go:276] 0 containers: []
	W0729 19:46:36.815034 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:46:36.815044 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:46:36.815113 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:46:36.848075 1120970 cri.go:89] found id: ""
	I0729 19:46:36.848116 1120970 logs.go:276] 0 containers: []
	W0729 19:46:36.848127 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:46:36.848136 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:46:36.848206 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:46:36.880504 1120970 cri.go:89] found id: ""
	I0729 19:46:36.880535 1120970 logs.go:276] 0 containers: []
	W0729 19:46:36.880544 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:46:36.880557 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:46:36.880615 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:46:36.914716 1120970 cri.go:89] found id: ""
	I0729 19:46:36.914744 1120970 logs.go:276] 0 containers: []
	W0729 19:46:36.914755 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:46:36.914763 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:46:36.914831 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:46:36.958975 1120970 cri.go:89] found id: ""
	I0729 19:46:36.959005 1120970 logs.go:276] 0 containers: []
	W0729 19:46:36.959016 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:46:36.959029 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:46:36.959046 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:46:37.018208 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:46:37.018244 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:46:37.042496 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:46:37.042537 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:46:37.112833 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:46:37.112861 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:46:37.112877 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:46:37.191572 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:46:37.191616 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:46:36.195356 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:38.694730 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:35.637513 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:38.137458 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:36.278153 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:38.778586 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:39.736044 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:46:39.749645 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:46:39.749719 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:46:39.786131 1120970 cri.go:89] found id: ""
	I0729 19:46:39.786155 1120970 logs.go:276] 0 containers: []
	W0729 19:46:39.786166 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:46:39.786174 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:46:39.786237 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:46:39.820470 1120970 cri.go:89] found id: ""
	I0729 19:46:39.820499 1120970 logs.go:276] 0 containers: []
	W0729 19:46:39.820509 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:46:39.820516 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:46:39.820583 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:46:39.854119 1120970 cri.go:89] found id: ""
	I0729 19:46:39.854148 1120970 logs.go:276] 0 containers: []
	W0729 19:46:39.854157 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:46:39.854163 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:46:39.854218 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:46:39.894676 1120970 cri.go:89] found id: ""
	I0729 19:46:39.894707 1120970 logs.go:276] 0 containers: []
	W0729 19:46:39.894714 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:46:39.894721 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:46:39.894789 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:46:39.932651 1120970 cri.go:89] found id: ""
	I0729 19:46:39.932685 1120970 logs.go:276] 0 containers: []
	W0729 19:46:39.932697 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:46:39.932705 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:46:39.932776 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:46:39.968119 1120970 cri.go:89] found id: ""
	I0729 19:46:39.968153 1120970 logs.go:276] 0 containers: []
	W0729 19:46:39.968165 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:46:39.968174 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:46:39.968242 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:46:40.004137 1120970 cri.go:89] found id: ""
	I0729 19:46:40.004167 1120970 logs.go:276] 0 containers: []
	W0729 19:46:40.004175 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:46:40.004181 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:46:40.004252 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:46:40.042519 1120970 cri.go:89] found id: ""
	I0729 19:46:40.042552 1120970 logs.go:276] 0 containers: []
	W0729 19:46:40.042563 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:46:40.042577 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:46:40.042601 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:46:40.118691 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:46:40.118720 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:46:40.118733 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:46:40.198249 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:46:40.198279 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:46:40.236828 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:46:40.236861 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:46:40.290890 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:46:40.290920 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:46:42.804834 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:46:42.818516 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:46:42.818608 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:46:42.855519 1120970 cri.go:89] found id: ""
	I0729 19:46:42.855553 1120970 logs.go:276] 0 containers: []
	W0729 19:46:42.855565 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:46:42.855573 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:46:42.855634 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:46:42.891795 1120970 cri.go:89] found id: ""
	I0729 19:46:42.891827 1120970 logs.go:276] 0 containers: []
	W0729 19:46:42.891837 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:46:42.891845 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:46:42.891912 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:46:42.925308 1120970 cri.go:89] found id: ""
	I0729 19:46:42.925341 1120970 logs.go:276] 0 containers: []
	W0729 19:46:42.925352 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:46:42.925359 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:46:42.925428 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:46:42.961943 1120970 cri.go:89] found id: ""
	I0729 19:46:42.961968 1120970 logs.go:276] 0 containers: []
	W0729 19:46:42.961976 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:46:42.961984 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:46:42.962034 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:46:41.194992 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:43.195814 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:40.138881 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:42.637095 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:44.637746 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:41.278451 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:43.279669 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:45.778954 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:42.994246 1120970 cri.go:89] found id: ""
	I0729 19:46:42.994276 1120970 logs.go:276] 0 containers: []
	W0729 19:46:42.994284 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:46:42.994290 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:46:42.994406 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:46:43.027914 1120970 cri.go:89] found id: ""
	I0729 19:46:43.027943 1120970 logs.go:276] 0 containers: []
	W0729 19:46:43.027953 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:46:43.027962 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:46:43.028029 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:46:43.064274 1120970 cri.go:89] found id: ""
	I0729 19:46:43.064308 1120970 logs.go:276] 0 containers: []
	W0729 19:46:43.064319 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:46:43.064328 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:46:43.064402 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:46:43.104273 1120970 cri.go:89] found id: ""
	I0729 19:46:43.104303 1120970 logs.go:276] 0 containers: []
	W0729 19:46:43.104313 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:46:43.104324 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:46:43.104342 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:46:43.175951 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:46:43.175978 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:46:43.175995 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:46:43.253386 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:46:43.253421 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:46:43.293276 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:46:43.293304 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:46:43.345865 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:46:43.345896 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:46:45.861099 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:46:45.875854 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:46:45.875925 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:46:45.914780 1120970 cri.go:89] found id: ""
	I0729 19:46:45.914815 1120970 logs.go:276] 0 containers: []
	W0729 19:46:45.914827 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:46:45.914837 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:46:45.914925 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:46:45.952575 1120970 cri.go:89] found id: ""
	I0729 19:46:45.952607 1120970 logs.go:276] 0 containers: []
	W0729 19:46:45.952616 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:46:45.952622 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:46:45.952676 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:46:45.993298 1120970 cri.go:89] found id: ""
	I0729 19:46:45.993331 1120970 logs.go:276] 0 containers: []
	W0729 19:46:45.993338 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:46:45.993344 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:46:45.993400 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:46:46.033190 1120970 cri.go:89] found id: ""
	I0729 19:46:46.033216 1120970 logs.go:276] 0 containers: []
	W0729 19:46:46.033225 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:46:46.033230 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:46:46.033283 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:46:46.068694 1120970 cri.go:89] found id: ""
	I0729 19:46:46.068728 1120970 logs.go:276] 0 containers: []
	W0729 19:46:46.068737 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:46:46.068743 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:46:46.068796 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:46:46.101678 1120970 cri.go:89] found id: ""
	I0729 19:46:46.101716 1120970 logs.go:276] 0 containers: []
	W0729 19:46:46.101726 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:46:46.101733 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:46:46.101788 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:46:46.141669 1120970 cri.go:89] found id: ""
	I0729 19:46:46.141702 1120970 logs.go:276] 0 containers: []
	W0729 19:46:46.141713 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:46:46.141721 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:46:46.141780 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:46:46.173182 1120970 cri.go:89] found id: ""
	I0729 19:46:46.173213 1120970 logs.go:276] 0 containers: []
	W0729 19:46:46.173224 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:46:46.173235 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:46:46.173252 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:46:46.224615 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:46:46.224660 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:46:46.237889 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:46:46.237915 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:46:46.312446 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:46:46.312473 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:46:46.312489 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:46:46.389168 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:46:46.389206 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
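	[editor's note] The cycle above repeats for the rest of the wait: minikube probes the node over SSH for each expected control-plane container, finds none, and falls back to sweeping kubelet, dmesg, CRI-O and container-status logs. A minimal sketch for replaying the same probes by hand, using only commands that appear in the log ("<profile>" is a placeholder for the minikube profile under test, not a value taken from this report):
	    minikube ssh -p <profile>                                   # shell into the node under test
	    sudo crictl ps -a --quiet --name=kube-apiserver             # per-component probe; empty while the container is missing
	    sudo crictl ps -a --quiet --name=etcd
	    sudo journalctl -u kubelet -n 400                           # fallback log sweep
	    sudo journalctl -u crio -n 400
	    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig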
	I0729 19:46:45.196687 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:47.694428 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:46.638398 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:48.639437 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:48.277740 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:50.278638 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
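	[editor's note] The interleaved pod_ready lines appear to come from the other profiles running in parallel (process IDs 1119948, 1120280 and 1120587), each polling its metrics-server pod until the Ready condition turns True. A minimal sketch of the same check with plain kubectl; the context name is a placeholder, while the namespace and pod name are taken from the log:
	    kubectl --context <profile> -n kube-system get pod metrics-server-569cc877fc-bvkv6 \
	      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
	    # prints "False" while the pod is not Ready, matching the pod_ready.go:102 lines in this log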
	I0729 19:46:48.930620 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:46:48.944038 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:46:48.944101 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:46:48.979672 1120970 cri.go:89] found id: ""
	I0729 19:46:48.979710 1120970 logs.go:276] 0 containers: []
	W0729 19:46:48.979722 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:46:48.979730 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:46:48.979804 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:46:49.014931 1120970 cri.go:89] found id: ""
	I0729 19:46:49.014967 1120970 logs.go:276] 0 containers: []
	W0729 19:46:49.014980 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:46:49.015006 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:46:49.015078 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:46:49.050867 1120970 cri.go:89] found id: ""
	I0729 19:46:49.050903 1120970 logs.go:276] 0 containers: []
	W0729 19:46:49.050916 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:46:49.050924 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:46:49.050992 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:46:49.085479 1120970 cri.go:89] found id: ""
	I0729 19:46:49.085514 1120970 logs.go:276] 0 containers: []
	W0729 19:46:49.085521 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:46:49.085529 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:46:49.085604 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:46:49.118570 1120970 cri.go:89] found id: ""
	I0729 19:46:49.118597 1120970 logs.go:276] 0 containers: []
	W0729 19:46:49.118605 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:46:49.118611 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:46:49.118664 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:46:49.153581 1120970 cri.go:89] found id: ""
	I0729 19:46:49.153612 1120970 logs.go:276] 0 containers: []
	W0729 19:46:49.153624 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:46:49.153632 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:46:49.153702 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:46:49.187178 1120970 cri.go:89] found id: ""
	I0729 19:46:49.187207 1120970 logs.go:276] 0 containers: []
	W0729 19:46:49.187215 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:46:49.187221 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:46:49.187280 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:46:49.223132 1120970 cri.go:89] found id: ""
	I0729 19:46:49.223163 1120970 logs.go:276] 0 containers: []
	W0729 19:46:49.223173 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:46:49.223185 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:46:49.223200 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:46:49.274160 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:46:49.274189 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:46:49.288399 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:46:49.288431 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:46:49.358452 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:46:49.358478 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:46:49.358496 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:46:49.436711 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:46:49.436745 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:46:51.977377 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:46:51.991042 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:46:51.991102 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:46:52.031425 1120970 cri.go:89] found id: ""
	I0729 19:46:52.031467 1120970 logs.go:276] 0 containers: []
	W0729 19:46:52.031477 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:46:52.031482 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:46:52.031557 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:46:52.069014 1120970 cri.go:89] found id: ""
	I0729 19:46:52.069045 1120970 logs.go:276] 0 containers: []
	W0729 19:46:52.069056 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:46:52.069064 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:46:52.069137 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:46:52.101974 1120970 cri.go:89] found id: ""
	I0729 19:46:52.102000 1120970 logs.go:276] 0 containers: []
	W0729 19:46:52.102008 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:46:52.102014 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:46:52.102079 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:46:52.136232 1120970 cri.go:89] found id: ""
	I0729 19:46:52.136261 1120970 logs.go:276] 0 containers: []
	W0729 19:46:52.136271 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:46:52.136280 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:46:52.136344 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:46:52.173555 1120970 cri.go:89] found id: ""
	I0729 19:46:52.173585 1120970 logs.go:276] 0 containers: []
	W0729 19:46:52.173602 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:46:52.173611 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:46:52.173675 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:46:52.208764 1120970 cri.go:89] found id: ""
	I0729 19:46:52.208791 1120970 logs.go:276] 0 containers: []
	W0729 19:46:52.208799 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:46:52.208805 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:46:52.208863 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:46:52.241514 1120970 cri.go:89] found id: ""
	I0729 19:46:52.241541 1120970 logs.go:276] 0 containers: []
	W0729 19:46:52.241557 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:46:52.241564 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:46:52.241639 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:46:52.277726 1120970 cri.go:89] found id: ""
	I0729 19:46:52.277753 1120970 logs.go:276] 0 containers: []
	W0729 19:46:52.277764 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:46:52.277775 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:46:52.277789 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:46:52.344894 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:46:52.344916 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:46:52.344931 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:46:52.421492 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:46:52.421527 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:46:52.460896 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:46:52.460934 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:46:52.509921 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:46:52.509960 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:46:49.695616 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:51.696510 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:51.138012 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:53.138676 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:52.280019 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:54.778157 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:55.024046 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:46:55.037609 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:46:55.037681 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:46:55.071059 1120970 cri.go:89] found id: ""
	I0729 19:46:55.071086 1120970 logs.go:276] 0 containers: []
	W0729 19:46:55.071094 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:46:55.071102 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:46:55.071162 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:46:55.106634 1120970 cri.go:89] found id: ""
	I0729 19:46:55.106660 1120970 logs.go:276] 0 containers: []
	W0729 19:46:55.106669 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:46:55.106675 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:46:55.106737 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:46:55.138821 1120970 cri.go:89] found id: ""
	I0729 19:46:55.138858 1120970 logs.go:276] 0 containers: []
	W0729 19:46:55.138870 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:46:55.138878 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:46:55.138941 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:46:55.173846 1120970 cri.go:89] found id: ""
	I0729 19:46:55.173893 1120970 logs.go:276] 0 containers: []
	W0729 19:46:55.173904 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:46:55.173913 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:46:55.173978 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:46:55.211853 1120970 cri.go:89] found id: ""
	I0729 19:46:55.211878 1120970 logs.go:276] 0 containers: []
	W0729 19:46:55.211885 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:46:55.211891 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:46:55.211941 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:46:55.245432 1120970 cri.go:89] found id: ""
	I0729 19:46:55.245470 1120970 logs.go:276] 0 containers: []
	W0729 19:46:55.245481 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:46:55.245489 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:46:55.245557 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:46:55.286752 1120970 cri.go:89] found id: ""
	I0729 19:46:55.286777 1120970 logs.go:276] 0 containers: []
	W0729 19:46:55.286785 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:46:55.286791 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:46:55.286841 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:46:55.328070 1120970 cri.go:89] found id: ""
	I0729 19:46:55.328100 1120970 logs.go:276] 0 containers: []
	W0729 19:46:55.328119 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:46:55.328133 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:46:55.328151 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:46:55.341257 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:46:55.341285 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:46:55.410966 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:46:55.410989 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:46:55.411008 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:46:55.486615 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:46:55.486653 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:46:55.523615 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:46:55.523653 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:46:54.195887 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:56.703055 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:55.138951 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:57.638887 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:56.778215 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:59.278483 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:58.074596 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:46:58.088302 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:46:58.088396 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:46:58.124557 1120970 cri.go:89] found id: ""
	I0729 19:46:58.124589 1120970 logs.go:276] 0 containers: []
	W0729 19:46:58.124600 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:46:58.124608 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:46:58.124680 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:46:58.160107 1120970 cri.go:89] found id: ""
	I0729 19:46:58.160142 1120970 logs.go:276] 0 containers: []
	W0729 19:46:58.160151 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:46:58.160157 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:46:58.160214 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:46:58.195522 1120970 cri.go:89] found id: ""
	I0729 19:46:58.195553 1120970 logs.go:276] 0 containers: []
	W0729 19:46:58.195564 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:46:58.195572 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:46:58.195637 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:46:58.232307 1120970 cri.go:89] found id: ""
	I0729 19:46:58.232338 1120970 logs.go:276] 0 containers: []
	W0729 19:46:58.232348 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:46:58.232355 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:46:58.232419 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:46:58.271551 1120970 cri.go:89] found id: ""
	I0729 19:46:58.271602 1120970 logs.go:276] 0 containers: []
	W0729 19:46:58.271614 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:46:58.271622 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:46:58.271701 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:46:58.307833 1120970 cri.go:89] found id: ""
	I0729 19:46:58.307864 1120970 logs.go:276] 0 containers: []
	W0729 19:46:58.307875 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:46:58.307884 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:46:58.307951 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:46:58.341961 1120970 cri.go:89] found id: ""
	I0729 19:46:58.341989 1120970 logs.go:276] 0 containers: []
	W0729 19:46:58.341998 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:46:58.342004 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:46:58.342058 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:46:58.379923 1120970 cri.go:89] found id: ""
	I0729 19:46:58.379962 1120970 logs.go:276] 0 containers: []
	W0729 19:46:58.379972 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:46:58.379982 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:46:58.379997 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:46:58.423276 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:46:58.423310 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:46:58.479021 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:46:58.479063 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:46:58.493544 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:46:58.493578 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:46:58.562634 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:46:58.562663 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:46:58.562684 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
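	[editor's note] Every describe-nodes attempt fails the same way because nothing is listening on the apiserver port while kube-apiserver is down. A quick confirmation from inside the node, assuming ss and curl are available there (8443 is the port shown in the refused-connection message):
	    sudo ss -ltnp | grep 8443               # no listener expected while the apiserver is down
	    curl -k https://localhost:8443/healthz  # "connection refused", matching the describe-nodes stderr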
	I0729 19:47:01.145327 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:47:01.158997 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:47:01.159060 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:47:01.196272 1120970 cri.go:89] found id: ""
	I0729 19:47:01.196298 1120970 logs.go:276] 0 containers: []
	W0729 19:47:01.196306 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:47:01.196312 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:47:01.196364 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:47:01.238138 1120970 cri.go:89] found id: ""
	I0729 19:47:01.238167 1120970 logs.go:276] 0 containers: []
	W0729 19:47:01.238177 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:47:01.238185 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:47:01.238249 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:47:01.276497 1120970 cri.go:89] found id: ""
	I0729 19:47:01.276525 1120970 logs.go:276] 0 containers: []
	W0729 19:47:01.276535 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:47:01.276543 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:47:01.276607 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:47:01.309092 1120970 cri.go:89] found id: ""
	I0729 19:47:01.309121 1120970 logs.go:276] 0 containers: []
	W0729 19:47:01.309130 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:47:01.309135 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:47:01.309189 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:47:01.340172 1120970 cri.go:89] found id: ""
	I0729 19:47:01.340202 1120970 logs.go:276] 0 containers: []
	W0729 19:47:01.340211 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:47:01.340217 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:47:01.340277 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:47:01.377905 1120970 cri.go:89] found id: ""
	I0729 19:47:01.377941 1120970 logs.go:276] 0 containers: []
	W0729 19:47:01.377953 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:47:01.377961 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:47:01.378034 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:47:01.414735 1120970 cri.go:89] found id: ""
	I0729 19:47:01.414767 1120970 logs.go:276] 0 containers: []
	W0729 19:47:01.414780 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:47:01.414789 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:47:01.414880 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:47:01.455743 1120970 cri.go:89] found id: ""
	I0729 19:47:01.455768 1120970 logs.go:276] 0 containers: []
	W0729 19:47:01.455776 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:47:01.455786 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:47:01.455799 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:47:01.507105 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:47:01.507141 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:47:01.520437 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:47:01.520465 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:47:01.590724 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:47:01.590746 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:47:01.590763 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:47:01.675343 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:47:01.675378 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:46:59.195744 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:01.695905 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:00.138760 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:02.139418 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:04.637243 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:01.278715 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:03.279321 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:05.778276 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:04.219800 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:47:04.234604 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:47:04.234684 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:47:04.267782 1120970 cri.go:89] found id: ""
	I0729 19:47:04.267810 1120970 logs.go:276] 0 containers: []
	W0729 19:47:04.267822 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:47:04.267830 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:47:04.267897 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:47:04.302373 1120970 cri.go:89] found id: ""
	I0729 19:47:04.302402 1120970 logs.go:276] 0 containers: []
	W0729 19:47:04.302413 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:47:04.302420 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:47:04.302484 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:47:04.334998 1120970 cri.go:89] found id: ""
	I0729 19:47:04.335030 1120970 logs.go:276] 0 containers: []
	W0729 19:47:04.335041 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:47:04.335049 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:47:04.335105 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:47:04.370596 1120970 cri.go:89] found id: ""
	I0729 19:47:04.370624 1120970 logs.go:276] 0 containers: []
	W0729 19:47:04.370631 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:47:04.370638 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:47:04.370695 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:47:04.405912 1120970 cri.go:89] found id: ""
	I0729 19:47:04.405945 1120970 logs.go:276] 0 containers: []
	W0729 19:47:04.405957 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:47:04.405966 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:47:04.406044 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:47:04.439856 1120970 cri.go:89] found id: ""
	I0729 19:47:04.439881 1120970 logs.go:276] 0 containers: []
	W0729 19:47:04.439898 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:47:04.439905 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:47:04.439976 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:47:04.473561 1120970 cri.go:89] found id: ""
	I0729 19:47:04.473587 1120970 logs.go:276] 0 containers: []
	W0729 19:47:04.473595 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:47:04.473601 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:47:04.473662 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:47:04.510181 1120970 cri.go:89] found id: ""
	I0729 19:47:04.510207 1120970 logs.go:276] 0 containers: []
	W0729 19:47:04.510217 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:47:04.510226 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:47:04.510239 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:47:04.559448 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:47:04.559485 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:47:04.573752 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:47:04.573782 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:47:04.641008 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:47:04.641030 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:47:04.641046 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:47:04.725252 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:47:04.725293 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:47:07.266379 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:47:07.280725 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:47:07.280801 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:47:07.321856 1120970 cri.go:89] found id: ""
	I0729 19:47:07.321886 1120970 logs.go:276] 0 containers: []
	W0729 19:47:07.321894 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:47:07.321900 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:47:07.321986 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:47:07.355102 1120970 cri.go:89] found id: ""
	I0729 19:47:07.355130 1120970 logs.go:276] 0 containers: []
	W0729 19:47:07.355138 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:47:07.355144 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:47:07.355203 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:47:07.394720 1120970 cri.go:89] found id: ""
	I0729 19:47:07.394749 1120970 logs.go:276] 0 containers: []
	W0729 19:47:07.394762 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:47:07.394771 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:47:07.394829 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:47:07.431002 1120970 cri.go:89] found id: ""
	I0729 19:47:07.431042 1120970 logs.go:276] 0 containers: []
	W0729 19:47:07.431055 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:47:07.431063 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:47:07.431132 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:47:07.467818 1120970 cri.go:89] found id: ""
	I0729 19:47:07.467855 1120970 logs.go:276] 0 containers: []
	W0729 19:47:07.467864 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:47:07.467873 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:47:07.467942 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:47:07.504285 1120970 cri.go:89] found id: ""
	I0729 19:47:07.504316 1120970 logs.go:276] 0 containers: []
	W0729 19:47:07.504327 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:47:07.504335 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:47:07.504411 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:47:07.538246 1120970 cri.go:89] found id: ""
	I0729 19:47:07.538276 1120970 logs.go:276] 0 containers: []
	W0729 19:47:07.538284 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:47:07.538291 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:47:07.538351 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:47:07.573911 1120970 cri.go:89] found id: ""
	I0729 19:47:07.573939 1120970 logs.go:276] 0 containers: []
	W0729 19:47:07.573948 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:47:07.573957 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:47:07.573970 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:47:07.588083 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:47:07.588129 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:47:07.656169 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:47:07.656198 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:47:07.656216 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:47:07.740230 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:47:07.740264 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:47:07.780822 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:47:07.780856 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:47:04.195230 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:06.695090 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:06.637479 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:08.638410 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:08.278522 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:10.782193 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:10.336208 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:47:10.350233 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:47:10.350307 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:47:10.389155 1120970 cri.go:89] found id: ""
	I0729 19:47:10.389190 1120970 logs.go:276] 0 containers: []
	W0729 19:47:10.389202 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:47:10.389210 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:47:10.389277 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:47:10.421432 1120970 cri.go:89] found id: ""
	I0729 19:47:10.421466 1120970 logs.go:276] 0 containers: []
	W0729 19:47:10.421476 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:47:10.421482 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:47:10.421552 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:47:10.462530 1120970 cri.go:89] found id: ""
	I0729 19:47:10.462563 1120970 logs.go:276] 0 containers: []
	W0729 19:47:10.462572 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:47:10.462577 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:47:10.462640 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:47:10.499899 1120970 cri.go:89] found id: ""
	I0729 19:47:10.499927 1120970 logs.go:276] 0 containers: []
	W0729 19:47:10.499935 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:47:10.499945 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:47:10.500007 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:47:10.534022 1120970 cri.go:89] found id: ""
	I0729 19:47:10.534051 1120970 logs.go:276] 0 containers: []
	W0729 19:47:10.534060 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:47:10.534066 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:47:10.534119 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:47:10.568136 1120970 cri.go:89] found id: ""
	I0729 19:47:10.568166 1120970 logs.go:276] 0 containers: []
	W0729 19:47:10.568174 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:47:10.568181 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:47:10.568246 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:47:10.603887 1120970 cri.go:89] found id: ""
	I0729 19:47:10.603919 1120970 logs.go:276] 0 containers: []
	W0729 19:47:10.603930 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:47:10.603938 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:47:10.604005 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:47:10.639947 1120970 cri.go:89] found id: ""
	I0729 19:47:10.639974 1120970 logs.go:276] 0 containers: []
	W0729 19:47:10.639981 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:47:10.639989 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:47:10.640001 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:47:10.693113 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:47:10.693146 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:47:10.708099 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:47:10.708138 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:47:10.777587 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:47:10.777618 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:47:10.777634 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:47:10.872453 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:47:10.872499 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:47:09.195301 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:11.695021 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:13.697025 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:11.137420 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:13.137553 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:13.278601 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:15.779974 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:13.412398 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:47:13.426246 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:47:13.426308 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:47:13.463170 1120970 cri.go:89] found id: ""
	I0729 19:47:13.463202 1120970 logs.go:276] 0 containers: []
	W0729 19:47:13.463213 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:47:13.463220 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:47:13.463287 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:47:13.499102 1120970 cri.go:89] found id: ""
	I0729 19:47:13.499137 1120970 logs.go:276] 0 containers: []
	W0729 19:47:13.499146 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:47:13.499151 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:47:13.499235 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:47:13.531462 1120970 cri.go:89] found id: ""
	I0729 19:47:13.531514 1120970 logs.go:276] 0 containers: []
	W0729 19:47:13.531526 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:47:13.531534 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:47:13.531606 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:47:13.564632 1120970 cri.go:89] found id: ""
	I0729 19:47:13.564670 1120970 logs.go:276] 0 containers: []
	W0729 19:47:13.564681 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:47:13.564689 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:47:13.564745 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:47:13.596564 1120970 cri.go:89] found id: ""
	I0729 19:47:13.596591 1120970 logs.go:276] 0 containers: []
	W0729 19:47:13.596602 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:47:13.596610 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:47:13.596686 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:47:13.629682 1120970 cri.go:89] found id: ""
	I0729 19:47:13.629711 1120970 logs.go:276] 0 containers: []
	W0729 19:47:13.629721 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:47:13.629729 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:47:13.629791 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:47:13.664666 1120970 cri.go:89] found id: ""
	I0729 19:47:13.664693 1120970 logs.go:276] 0 containers: []
	W0729 19:47:13.664701 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:47:13.664708 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:47:13.664777 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:47:13.699238 1120970 cri.go:89] found id: ""
	I0729 19:47:13.699267 1120970 logs.go:276] 0 containers: []
	W0729 19:47:13.699277 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:47:13.699289 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:47:13.699304 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:47:13.751555 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:47:13.751588 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:47:13.766769 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:47:13.766801 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:47:13.834898 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:47:13.834918 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:47:13.834932 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:47:13.913907 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:47:13.913944 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:47:16.457229 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:47:16.470138 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:47:16.470222 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:47:16.504643 1120970 cri.go:89] found id: ""
	I0729 19:47:16.504679 1120970 logs.go:276] 0 containers: []
	W0729 19:47:16.504688 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:47:16.504693 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:47:16.504763 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:47:16.539328 1120970 cri.go:89] found id: ""
	I0729 19:47:16.539368 1120970 logs.go:276] 0 containers: []
	W0729 19:47:16.539379 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:47:16.539385 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:47:16.539446 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:47:16.597867 1120970 cri.go:89] found id: ""
	I0729 19:47:16.597893 1120970 logs.go:276] 0 containers: []
	W0729 19:47:16.597904 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:47:16.597911 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:47:16.597976 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:47:16.631728 1120970 cri.go:89] found id: ""
	I0729 19:47:16.631755 1120970 logs.go:276] 0 containers: []
	W0729 19:47:16.631768 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:47:16.631780 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:47:16.631842 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:47:16.668337 1120970 cri.go:89] found id: ""
	I0729 19:47:16.668377 1120970 logs.go:276] 0 containers: []
	W0729 19:47:16.668387 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:47:16.668395 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:47:16.668461 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:47:16.704808 1120970 cri.go:89] found id: ""
	I0729 19:47:16.704834 1120970 logs.go:276] 0 containers: []
	W0729 19:47:16.704844 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:47:16.704851 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:47:16.704911 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:47:16.743919 1120970 cri.go:89] found id: ""
	I0729 19:47:16.743948 1120970 logs.go:276] 0 containers: []
	W0729 19:47:16.743955 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:47:16.743961 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:47:16.744022 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:47:16.785240 1120970 cri.go:89] found id: ""
	I0729 19:47:16.785268 1120970 logs.go:276] 0 containers: []
	W0729 19:47:16.785279 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:47:16.785290 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:47:16.785306 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:47:16.838247 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:47:16.838288 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:47:16.851766 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:47:16.851797 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:47:16.928960 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:47:16.928986 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:47:16.929002 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:47:17.008260 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:47:17.008296 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
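	(Each pass of the loop above lists CRI containers per control-plane component with `crictl ps -a --quiet --name=...` and finds none. The following is a minimal local sketch of that per-component check, run with os/exec on the node itself rather than through minikube's ssh_runner; the helper name listContainerIDs is illustrative and assumes crictl is on PATH, it is not minikube's cri.go API.)

	```go
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// listContainerIDs mirrors the "listing CRI containers" step in the log:
	// it asks crictl for all containers (any state) whose name matches `name`
	// and returns their IDs. In the run above every component returns an
	// empty list, hence the repeated `found id: ""` lines.
	func listContainerIDs(name string) ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(strings.TrimSpace(string(out))), nil
	}

	func main() {
		for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler", "kube-proxy", "kube-controller-manager"} {
			ids, err := listContainerIDs(c)
			if err != nil {
				fmt.Printf("%s: crictl failed: %v\n", c, err)
				continue
			}
			if len(ids) == 0 {
				fmt.Printf("no container was found matching %q\n", c)
			} else {
				fmt.Printf("%s: %v\n", c, ids)
			}
		}
	}
	```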
	I0729 19:47:16.194957 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:18.196333 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:15.138916 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:17.637392 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:19.638484 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:17.781105 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:20.279439 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
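	(The interleaved pod_ready lines come from three separate clusters, each polling a metrics-server pod until its Ready condition turns True or the test times out. A minimal client-go sketch of that readiness poll is below; the helper isPodReady and the bounded retry loop are illustrative, not minikube's pod_ready.go, and the pod name and default kubeconfig path are assumptions taken from the log for the example.)

	```go
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// isPodReady reports whether the pod's PodReady condition is True.
	func isPodReady(pod *corev1.Pod) bool {
		for _, cond := range pod.Status.Conditions {
			if cond.Type == corev1.PodReady {
				return cond.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// Poll roughly every 2s, as the log does, for up to 5 minutes.
		for i := 0; i < 150; i++ {
			pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "metrics-server-569cc877fc-jsvnd", metav1.GetOptions{})
			if err == nil && isPodReady(pod) {
				fmt.Println("pod is Ready")
				return
			}
			fmt.Println(`pod has status "Ready":"False"`)
			time.Sleep(2 * time.Second)
		}
		fmt.Println("timed out waiting for pod to become Ready")
	}
	```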
	I0729 19:47:19.555108 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:47:19.569838 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:47:19.569917 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:47:19.608358 1120970 cri.go:89] found id: ""
	I0729 19:47:19.608393 1120970 logs.go:276] 0 containers: []
	W0729 19:47:19.608405 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:47:19.608414 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:47:19.608475 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:47:19.644144 1120970 cri.go:89] found id: ""
	I0729 19:47:19.644173 1120970 logs.go:276] 0 containers: []
	W0729 19:47:19.644183 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:47:19.644191 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:47:19.644259 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:47:19.686316 1120970 cri.go:89] found id: ""
	I0729 19:47:19.686342 1120970 logs.go:276] 0 containers: []
	W0729 19:47:19.686353 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:47:19.686359 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:47:19.686419 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:47:19.722006 1120970 cri.go:89] found id: ""
	I0729 19:47:19.722034 1120970 logs.go:276] 0 containers: []
	W0729 19:47:19.722044 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:47:19.722052 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:47:19.722127 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:47:19.762767 1120970 cri.go:89] found id: ""
	I0729 19:47:19.762799 1120970 logs.go:276] 0 containers: []
	W0729 19:47:19.762811 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:47:19.762818 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:47:19.762904 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:47:19.802185 1120970 cri.go:89] found id: ""
	I0729 19:47:19.802217 1120970 logs.go:276] 0 containers: []
	W0729 19:47:19.802228 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:47:19.802238 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:47:19.802311 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:47:19.840001 1120970 cri.go:89] found id: ""
	I0729 19:47:19.840036 1120970 logs.go:276] 0 containers: []
	W0729 19:47:19.840048 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:47:19.840056 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:47:19.840117 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:47:19.877627 1120970 cri.go:89] found id: ""
	I0729 19:47:19.877657 1120970 logs.go:276] 0 containers: []
	W0729 19:47:19.877668 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:47:19.877681 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:47:19.877698 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:47:19.920673 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:47:19.920708 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:47:19.980004 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:47:19.980045 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:47:19.994679 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:47:19.994714 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:47:20.064864 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:47:20.064892 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:47:20.064910 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:47:22.650763 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:47:22.664998 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:47:22.665079 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:47:22.701576 1120970 cri.go:89] found id: ""
	I0729 19:47:22.701611 1120970 logs.go:276] 0 containers: []
	W0729 19:47:22.701620 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:47:22.701630 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:47:22.701689 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:47:22.744238 1120970 cri.go:89] found id: ""
	I0729 19:47:22.744268 1120970 logs.go:276] 0 containers: []
	W0729 19:47:22.744275 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:47:22.744287 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:47:22.744358 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:47:22.785947 1120970 cri.go:89] found id: ""
	I0729 19:47:22.785974 1120970 logs.go:276] 0 containers: []
	W0729 19:47:22.785982 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:47:22.785988 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:47:22.786047 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:47:22.823352 1120970 cri.go:89] found id: ""
	I0729 19:47:22.823379 1120970 logs.go:276] 0 containers: []
	W0729 19:47:22.823387 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:47:22.823394 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:47:22.823462 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:47:22.855676 1120970 cri.go:89] found id: ""
	I0729 19:47:22.855704 1120970 logs.go:276] 0 containers: []
	W0729 19:47:22.855710 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:47:22.855716 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:47:22.855773 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:47:22.891910 1120970 cri.go:89] found id: ""
	I0729 19:47:22.891943 1120970 logs.go:276] 0 containers: []
	W0729 19:47:22.891956 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:47:22.891964 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:47:22.892025 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:47:22.928605 1120970 cri.go:89] found id: ""
	I0729 19:47:22.928638 1120970 logs.go:276] 0 containers: []
	W0729 19:47:22.928648 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:47:22.928658 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:47:22.928728 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:47:20.196567 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:22.694908 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:22.137177 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:24.137629 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:22.778638 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:25.279261 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:22.985022 1120970 cri.go:89] found id: ""
	I0729 19:47:22.985059 1120970 logs.go:276] 0 containers: []
	W0729 19:47:22.985068 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:47:22.985088 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:47:22.985101 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:47:23.073062 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:47:23.073098 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:47:23.114995 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:47:23.115024 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:47:23.171536 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:47:23.171570 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:47:23.185192 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:47:23.185219 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:47:23.259355 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
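	(The recurring "describe nodes" failure above happens because nothing is listening on the apiserver endpoint the kubeconfig points at. A quick way to confirm that from Go is a plain TCP probe of that address; probeAPIServer is an illustrative helper, and localhost:8443 is taken from the error text in the log.)

	```go
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	// probeAPIServer attempts a TCP connection to the apiserver address used
	// by the kubeconfig in the log. A refused connection corresponds to the
	// "connection to the server localhost:8443 was refused" errors above.
	func probeAPIServer(addr string) error {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err != nil {
			return err
		}
		return conn.Close()
	}

	func main() {
		if err := probeAPIServer("localhost:8443"); err != nil {
			fmt.Println("apiserver not reachable:", err)
			return
		}
		fmt.Println("apiserver port is accepting connections")
	}
	```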
	I0729 19:47:25.760046 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:47:25.774159 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:47:25.774245 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:47:25.808374 1120970 cri.go:89] found id: ""
	I0729 19:47:25.808406 1120970 logs.go:276] 0 containers: []
	W0729 19:47:25.808417 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:47:25.808424 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:47:25.808486 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:47:25.843623 1120970 cri.go:89] found id: ""
	I0729 19:47:25.843655 1120970 logs.go:276] 0 containers: []
	W0729 19:47:25.843666 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:47:25.843673 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:47:25.843774 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:47:25.880200 1120970 cri.go:89] found id: ""
	I0729 19:47:25.880233 1120970 logs.go:276] 0 containers: []
	W0729 19:47:25.880243 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:47:25.880250 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:47:25.880323 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:47:25.915349 1120970 cri.go:89] found id: ""
	I0729 19:47:25.915374 1120970 logs.go:276] 0 containers: []
	W0729 19:47:25.915381 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:47:25.915391 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:47:25.915444 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:47:25.948092 1120970 cri.go:89] found id: ""
	I0729 19:47:25.948134 1120970 logs.go:276] 0 containers: []
	W0729 19:47:25.948145 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:47:25.948153 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:47:25.948220 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:47:25.981836 1120970 cri.go:89] found id: ""
	I0729 19:47:25.981864 1120970 logs.go:276] 0 containers: []
	W0729 19:47:25.981874 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:47:25.981882 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:47:25.981967 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:47:26.014464 1120970 cri.go:89] found id: ""
	I0729 19:47:26.014494 1120970 logs.go:276] 0 containers: []
	W0729 19:47:26.014502 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:47:26.014515 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:47:26.014580 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:47:26.048607 1120970 cri.go:89] found id: ""
	I0729 19:47:26.048635 1120970 logs.go:276] 0 containers: []
	W0729 19:47:26.048646 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:47:26.048667 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:47:26.048683 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:47:26.100962 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:47:26.101002 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:47:26.116404 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:47:26.116434 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:47:26.183714 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:47:26.183734 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:47:26.183747 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:47:26.260308 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:47:26.260347 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:47:24.695393 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:27.195561 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:26.137714 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:28.637781 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:27.778603 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:30.278476 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:28.802593 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:47:28.815317 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:47:28.815380 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:47:28.849448 1120970 cri.go:89] found id: ""
	I0729 19:47:28.849473 1120970 logs.go:276] 0 containers: []
	W0729 19:47:28.849480 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:47:28.849486 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:47:28.849544 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:47:28.888305 1120970 cri.go:89] found id: ""
	I0729 19:47:28.888342 1120970 logs.go:276] 0 containers: []
	W0729 19:47:28.888353 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:47:28.888360 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:47:28.888421 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:47:28.921000 1120970 cri.go:89] found id: ""
	I0729 19:47:28.921034 1120970 logs.go:276] 0 containers: []
	W0729 19:47:28.921045 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:47:28.921054 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:47:28.921116 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:47:28.953546 1120970 cri.go:89] found id: ""
	I0729 19:47:28.953574 1120970 logs.go:276] 0 containers: []
	W0729 19:47:28.953583 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:47:28.953589 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:47:28.953652 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:47:28.991203 1120970 cri.go:89] found id: ""
	I0729 19:47:28.991236 1120970 logs.go:276] 0 containers: []
	W0729 19:47:28.991248 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:47:28.991256 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:47:28.991329 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:47:29.026151 1120970 cri.go:89] found id: ""
	I0729 19:47:29.026183 1120970 logs.go:276] 0 containers: []
	W0729 19:47:29.026195 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:47:29.026203 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:47:29.026271 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:47:29.059654 1120970 cri.go:89] found id: ""
	I0729 19:47:29.059687 1120970 logs.go:276] 0 containers: []
	W0729 19:47:29.059695 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:47:29.059702 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:47:29.059756 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:47:29.091952 1120970 cri.go:89] found id: ""
	I0729 19:47:29.092001 1120970 logs.go:276] 0 containers: []
	W0729 19:47:29.092012 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:47:29.092024 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:47:29.092043 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:47:29.143511 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:47:29.143543 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:47:29.157752 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:47:29.157781 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:47:29.225599 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:47:29.225621 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:47:29.225634 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:47:29.311329 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:47:29.311370 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:47:31.850921 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:47:31.864594 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:47:31.864675 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:47:31.898580 1120970 cri.go:89] found id: ""
	I0729 19:47:31.898622 1120970 logs.go:276] 0 containers: []
	W0729 19:47:31.898631 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:47:31.898638 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:47:31.898693 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:47:31.932481 1120970 cri.go:89] found id: ""
	I0729 19:47:31.932514 1120970 logs.go:276] 0 containers: []
	W0729 19:47:31.932525 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:47:31.932533 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:47:31.932595 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:47:31.964820 1120970 cri.go:89] found id: ""
	I0729 19:47:31.964857 1120970 logs.go:276] 0 containers: []
	W0729 19:47:31.964868 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:47:31.964876 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:47:31.964957 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:47:31.996854 1120970 cri.go:89] found id: ""
	I0729 19:47:31.996889 1120970 logs.go:276] 0 containers: []
	W0729 19:47:31.996900 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:47:31.996908 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:47:31.996975 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:47:32.031808 1120970 cri.go:89] found id: ""
	I0729 19:47:32.031843 1120970 logs.go:276] 0 containers: []
	W0729 19:47:32.031854 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:47:32.031864 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:47:32.031934 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:47:32.064563 1120970 cri.go:89] found id: ""
	I0729 19:47:32.064593 1120970 logs.go:276] 0 containers: []
	W0729 19:47:32.064608 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:47:32.064615 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:47:32.064677 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:47:32.102811 1120970 cri.go:89] found id: ""
	I0729 19:47:32.102859 1120970 logs.go:276] 0 containers: []
	W0729 19:47:32.102871 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:47:32.102879 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:47:32.102952 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:47:32.136770 1120970 cri.go:89] found id: ""
	I0729 19:47:32.136798 1120970 logs.go:276] 0 containers: []
	W0729 19:47:32.136808 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:47:32.136819 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:47:32.136838 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:47:32.189334 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:47:32.189371 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:47:32.204039 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:47:32.204076 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:47:32.274139 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:47:32.274172 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:47:32.274187 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:47:32.350191 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:47:32.350228 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:47:29.196922 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:31.200725 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:33.695374 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:30.637898 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:32.638342 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:34.639225 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:32.279116 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:34.780505 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:34.889718 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:47:34.903796 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:47:34.903877 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:47:34.938860 1120970 cri.go:89] found id: ""
	I0729 19:47:34.938893 1120970 logs.go:276] 0 containers: []
	W0729 19:47:34.938904 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:47:34.938912 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:47:34.938980 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:47:34.970501 1120970 cri.go:89] found id: ""
	I0729 19:47:34.970544 1120970 logs.go:276] 0 containers: []
	W0729 19:47:34.970553 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:47:34.970559 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:47:34.970626 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:47:35.006915 1120970 cri.go:89] found id: ""
	I0729 19:47:35.006943 1120970 logs.go:276] 0 containers: []
	W0729 19:47:35.006950 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:47:35.006957 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:47:35.007020 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:47:35.040827 1120970 cri.go:89] found id: ""
	I0729 19:47:35.040855 1120970 logs.go:276] 0 containers: []
	W0729 19:47:35.040862 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:47:35.040869 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:47:35.040918 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:47:35.075497 1120970 cri.go:89] found id: ""
	I0729 19:47:35.075527 1120970 logs.go:276] 0 containers: []
	W0729 19:47:35.075537 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:47:35.075544 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:47:35.075598 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:47:35.111265 1120970 cri.go:89] found id: ""
	I0729 19:47:35.111293 1120970 logs.go:276] 0 containers: []
	W0729 19:47:35.111302 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:47:35.111308 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:47:35.111363 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:47:35.145728 1120970 cri.go:89] found id: ""
	I0729 19:47:35.145756 1120970 logs.go:276] 0 containers: []
	W0729 19:47:35.145763 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:47:35.145769 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:47:35.145821 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:47:35.185050 1120970 cri.go:89] found id: ""
	I0729 19:47:35.185078 1120970 logs.go:276] 0 containers: []
	W0729 19:47:35.185088 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:47:35.185100 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:47:35.185117 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:47:35.236835 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:47:35.236867 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:47:35.251263 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:47:35.251290 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:47:35.325888 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:47:35.325912 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:47:35.325925 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:47:35.404779 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:47:35.404819 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:47:37.944941 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:47:37.960885 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:47:37.960954 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:47:35.695786 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:37.696015 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:37.136815 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:39.137763 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:37.278790 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:39.779285 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:38.007612 1120970 cri.go:89] found id: ""
	I0729 19:47:38.007639 1120970 logs.go:276] 0 containers: []
	W0729 19:47:38.007648 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:47:38.007655 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:47:38.007721 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:47:38.044568 1120970 cri.go:89] found id: ""
	I0729 19:47:38.044610 1120970 logs.go:276] 0 containers: []
	W0729 19:47:38.044621 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:47:38.044628 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:47:38.044698 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:47:38.085186 1120970 cri.go:89] found id: ""
	I0729 19:47:38.085217 1120970 logs.go:276] 0 containers: []
	W0729 19:47:38.085227 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:47:38.085235 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:47:38.085303 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:47:38.123039 1120970 cri.go:89] found id: ""
	I0729 19:47:38.123070 1120970 logs.go:276] 0 containers: []
	W0729 19:47:38.123082 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:47:38.123090 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:47:38.123158 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:47:38.166191 1120970 cri.go:89] found id: ""
	I0729 19:47:38.166220 1120970 logs.go:276] 0 containers: []
	W0729 19:47:38.166229 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:47:38.166237 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:47:38.166301 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:47:38.204138 1120970 cri.go:89] found id: ""
	I0729 19:47:38.204170 1120970 logs.go:276] 0 containers: []
	W0729 19:47:38.204179 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:47:38.204186 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:47:38.204286 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:47:38.241599 1120970 cri.go:89] found id: ""
	I0729 19:47:38.241629 1120970 logs.go:276] 0 containers: []
	W0729 19:47:38.241638 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:47:38.241643 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:47:38.241695 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:47:38.276986 1120970 cri.go:89] found id: ""
	I0729 19:47:38.277013 1120970 logs.go:276] 0 containers: []
	W0729 19:47:38.277021 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:47:38.277030 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:47:38.277042 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:47:38.330925 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:47:38.330971 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:47:38.345416 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:47:38.345455 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:47:38.420010 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:47:38.420041 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:47:38.420059 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:47:38.506198 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:47:38.506243 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:47:41.048957 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:47:41.062950 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:47:41.063027 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:47:41.108956 1120970 cri.go:89] found id: ""
	I0729 19:47:41.108987 1120970 logs.go:276] 0 containers: []
	W0729 19:47:41.108995 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:47:41.109002 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:47:41.109068 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:47:41.146952 1120970 cri.go:89] found id: ""
	I0729 19:47:41.146984 1120970 logs.go:276] 0 containers: []
	W0729 19:47:41.146994 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:47:41.147002 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:47:41.147068 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:47:41.190277 1120970 cri.go:89] found id: ""
	I0729 19:47:41.190310 1120970 logs.go:276] 0 containers: []
	W0729 19:47:41.190321 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:47:41.190329 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:47:41.190410 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:47:41.226733 1120970 cri.go:89] found id: ""
	I0729 19:47:41.226762 1120970 logs.go:276] 0 containers: []
	W0729 19:47:41.226770 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:47:41.226777 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:47:41.226862 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:47:41.260761 1120970 cri.go:89] found id: ""
	I0729 19:47:41.260790 1120970 logs.go:276] 0 containers: []
	W0729 19:47:41.260798 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:47:41.260804 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:47:41.260871 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:47:41.296325 1120970 cri.go:89] found id: ""
	I0729 19:47:41.296356 1120970 logs.go:276] 0 containers: []
	W0729 19:47:41.296367 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:47:41.296376 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:47:41.296435 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:47:41.329613 1120970 cri.go:89] found id: ""
	I0729 19:47:41.329642 1120970 logs.go:276] 0 containers: []
	W0729 19:47:41.329651 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:47:41.329657 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:47:41.329717 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:47:41.365182 1120970 cri.go:89] found id: ""
	I0729 19:47:41.365212 1120970 logs.go:276] 0 containers: []
	W0729 19:47:41.365220 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:47:41.365229 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:47:41.365243 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:47:41.416107 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:47:41.416143 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:47:41.429529 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:47:41.429562 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:47:41.499546 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:47:41.499568 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:47:41.499582 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:47:41.582010 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:47:41.582049 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:47:40.195271 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:42.698072 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:41.142911 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:43.637826 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:42.278481 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:44.278595 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:44.122162 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:47:44.136767 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:47:44.136850 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:47:44.171574 1120970 cri.go:89] found id: ""
	I0729 19:47:44.171610 1120970 logs.go:276] 0 containers: []
	W0729 19:47:44.171621 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:47:44.171629 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:47:44.171699 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:47:44.206974 1120970 cri.go:89] found id: ""
	I0729 19:47:44.207004 1120970 logs.go:276] 0 containers: []
	W0729 19:47:44.207013 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:47:44.207019 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:47:44.207068 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:47:44.240412 1120970 cri.go:89] found id: ""
	I0729 19:47:44.240438 1120970 logs.go:276] 0 containers: []
	W0729 19:47:44.240449 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:47:44.240457 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:47:44.240521 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:47:44.274434 1120970 cri.go:89] found id: ""
	I0729 19:47:44.274464 1120970 logs.go:276] 0 containers: []
	W0729 19:47:44.274475 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:47:44.274482 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:47:44.274553 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:47:44.313302 1120970 cri.go:89] found id: ""
	I0729 19:47:44.313330 1120970 logs.go:276] 0 containers: []
	W0729 19:47:44.313339 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:47:44.313354 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:47:44.313426 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:47:44.344853 1120970 cri.go:89] found id: ""
	I0729 19:47:44.344885 1120970 logs.go:276] 0 containers: []
	W0729 19:47:44.344895 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:47:44.344903 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:47:44.344970 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:47:44.378055 1120970 cri.go:89] found id: ""
	I0729 19:47:44.378089 1120970 logs.go:276] 0 containers: []
	W0729 19:47:44.378101 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:47:44.378109 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:47:44.378176 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:47:44.412734 1120970 cri.go:89] found id: ""
	I0729 19:47:44.412762 1120970 logs.go:276] 0 containers: []
	W0729 19:47:44.412772 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:47:44.412782 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:47:44.412795 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:47:44.468125 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:47:44.468157 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:47:44.482896 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:47:44.482923 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:47:44.551222 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:47:44.551249 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:47:44.551270 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:47:44.630413 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:47:44.630455 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:47:47.172322 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:47:47.186383 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:47:47.186463 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:47:47.221577 1120970 cri.go:89] found id: ""
	I0729 19:47:47.221610 1120970 logs.go:276] 0 containers: []
	W0729 19:47:47.221617 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:47:47.221623 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:47:47.221686 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:47:47.260164 1120970 cri.go:89] found id: ""
	I0729 19:47:47.260207 1120970 logs.go:276] 0 containers: []
	W0729 19:47:47.260227 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:47:47.260235 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:47:47.260303 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:47:47.297101 1120970 cri.go:89] found id: ""
	I0729 19:47:47.297130 1120970 logs.go:276] 0 containers: []
	W0729 19:47:47.297139 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:47:47.297148 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:47:47.297211 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:47:47.332429 1120970 cri.go:89] found id: ""
	I0729 19:47:47.332464 1120970 logs.go:276] 0 containers: []
	W0729 19:47:47.332474 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:47:47.332484 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:47:47.332538 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:47:47.366021 1120970 cri.go:89] found id: ""
	I0729 19:47:47.366055 1120970 logs.go:276] 0 containers: []
	W0729 19:47:47.366065 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:47:47.366074 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:47:47.366144 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:47:47.401278 1120970 cri.go:89] found id: ""
	I0729 19:47:47.401307 1120970 logs.go:276] 0 containers: []
	W0729 19:47:47.401315 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:47:47.401321 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:47:47.401395 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:47:47.435717 1120970 cri.go:89] found id: ""
	I0729 19:47:47.435748 1120970 logs.go:276] 0 containers: []
	W0729 19:47:47.435756 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:47:47.435770 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:47:47.435835 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:47:47.472120 1120970 cri.go:89] found id: ""
	I0729 19:47:47.472149 1120970 logs.go:276] 0 containers: []
	W0729 19:47:47.472157 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:47:47.472167 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:47:47.472181 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:47:47.529466 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:47:47.529503 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:47:47.544072 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:47:47.544102 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:47:47.614456 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:47:47.614478 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:47:47.614499 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:47:47.693271 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:47:47.693305 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:47:45.195129 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:47.196302 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:45.638102 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:47.639278 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:46.778610 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:48.778746 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:50.232417 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:47:50.246080 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:47:50.246154 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:47:50.285256 1120970 cri.go:89] found id: ""
	I0729 19:47:50.285284 1120970 logs.go:276] 0 containers: []
	W0729 19:47:50.285294 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:47:50.285302 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:47:50.285364 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:47:50.319443 1120970 cri.go:89] found id: ""
	I0729 19:47:50.319469 1120970 logs.go:276] 0 containers: []
	W0729 19:47:50.319476 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:47:50.319482 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:47:50.319555 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:47:50.356465 1120970 cri.go:89] found id: ""
	I0729 19:47:50.356495 1120970 logs.go:276] 0 containers: []
	W0729 19:47:50.356505 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:47:50.356512 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:47:50.356578 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:47:50.393920 1120970 cri.go:89] found id: ""
	I0729 19:47:50.393954 1120970 logs.go:276] 0 containers: []
	W0729 19:47:50.393965 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:47:50.393973 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:47:50.394052 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:47:50.430287 1120970 cri.go:89] found id: ""
	I0729 19:47:50.430320 1120970 logs.go:276] 0 containers: []
	W0729 19:47:50.430333 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:47:50.430341 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:47:50.430411 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:47:50.465501 1120970 cri.go:89] found id: ""
	I0729 19:47:50.465528 1120970 logs.go:276] 0 containers: []
	W0729 19:47:50.465536 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:47:50.465542 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:47:50.465595 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:47:50.504012 1120970 cri.go:89] found id: ""
	I0729 19:47:50.504042 1120970 logs.go:276] 0 containers: []
	W0729 19:47:50.504051 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:47:50.504063 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:47:50.504122 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:47:50.545117 1120970 cri.go:89] found id: ""
	I0729 19:47:50.545151 1120970 logs.go:276] 0 containers: []
	W0729 19:47:50.545163 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:47:50.545175 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:47:50.545198 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:47:50.618183 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:47:50.618213 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:47:50.618232 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:47:50.697577 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:47:50.697611 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:47:50.745910 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:47:50.745949 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:47:50.797458 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:47:50.797501 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:47:49.694395 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:51.697714 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:50.138539 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:52.143316 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:54.637975 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:51.279127 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:53.779610 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:53.311907 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:47:53.326666 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:47:53.326734 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:47:53.361564 1120970 cri.go:89] found id: ""
	I0729 19:47:53.361596 1120970 logs.go:276] 0 containers: []
	W0729 19:47:53.361614 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:47:53.361621 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:47:53.361685 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:47:53.397867 1120970 cri.go:89] found id: ""
	I0729 19:47:53.397899 1120970 logs.go:276] 0 containers: []
	W0729 19:47:53.397910 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:47:53.397918 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:47:53.398023 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:47:53.438721 1120970 cri.go:89] found id: ""
	I0729 19:47:53.438752 1120970 logs.go:276] 0 containers: []
	W0729 19:47:53.438764 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:47:53.438771 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:47:53.438840 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:47:53.477746 1120970 cri.go:89] found id: ""
	I0729 19:47:53.477776 1120970 logs.go:276] 0 containers: []
	W0729 19:47:53.477787 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:47:53.477794 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:47:53.477863 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:47:53.510899 1120970 cri.go:89] found id: ""
	I0729 19:47:53.510928 1120970 logs.go:276] 0 containers: []
	W0729 19:47:53.510936 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:47:53.510941 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:47:53.510994 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:47:53.545749 1120970 cri.go:89] found id: ""
	I0729 19:47:53.545786 1120970 logs.go:276] 0 containers: []
	W0729 19:47:53.545799 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:47:53.545807 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:47:53.545883 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:47:53.585542 1120970 cri.go:89] found id: ""
	I0729 19:47:53.585575 1120970 logs.go:276] 0 containers: []
	W0729 19:47:53.585586 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:47:53.585593 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:47:53.585666 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:47:53.617974 1120970 cri.go:89] found id: ""
	I0729 19:47:53.618006 1120970 logs.go:276] 0 containers: []
	W0729 19:47:53.618014 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:47:53.618024 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:47:53.618036 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:47:53.670860 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:47:53.670897 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:47:53.685089 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:47:53.685120 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:47:53.760570 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:47:53.760598 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:47:53.760611 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:47:53.848973 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:47:53.849018 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:47:56.394206 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:47:56.409087 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:47:56.409167 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:47:56.447553 1120970 cri.go:89] found id: ""
	I0729 19:47:56.447589 1120970 logs.go:276] 0 containers: []
	W0729 19:47:56.447607 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:47:56.447615 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:47:56.447694 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:47:56.485948 1120970 cri.go:89] found id: ""
	I0729 19:47:56.485978 1120970 logs.go:276] 0 containers: []
	W0729 19:47:56.485986 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:47:56.485992 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:47:56.486061 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:47:56.521722 1120970 cri.go:89] found id: ""
	I0729 19:47:56.521762 1120970 logs.go:276] 0 containers: []
	W0729 19:47:56.521784 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:47:56.521792 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:47:56.521855 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:47:56.557379 1120970 cri.go:89] found id: ""
	I0729 19:47:56.557414 1120970 logs.go:276] 0 containers: []
	W0729 19:47:56.557425 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:47:56.557433 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:47:56.557488 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:47:56.595198 1120970 cri.go:89] found id: ""
	I0729 19:47:56.595225 1120970 logs.go:276] 0 containers: []
	W0729 19:47:56.595233 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:47:56.595240 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:47:56.595306 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:47:56.629298 1120970 cri.go:89] found id: ""
	I0729 19:47:56.629330 1120970 logs.go:276] 0 containers: []
	W0729 19:47:56.629337 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:47:56.629344 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:47:56.629410 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:47:56.663401 1120970 cri.go:89] found id: ""
	I0729 19:47:56.663434 1120970 logs.go:276] 0 containers: []
	W0729 19:47:56.663445 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:47:56.663453 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:47:56.663519 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:47:56.699622 1120970 cri.go:89] found id: ""
	I0729 19:47:56.699651 1120970 logs.go:276] 0 containers: []
	W0729 19:47:56.699661 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:47:56.699672 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:47:56.699688 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:47:56.739680 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:47:56.739713 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:47:56.794605 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:47:56.794647 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:47:56.824479 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:47:56.824510 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:47:56.889186 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:47:56.889209 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:47:56.889224 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:47:54.196350 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:56.696572 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:57.137366 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:59.638403 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:56.278603 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:58.280193 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:00.778204 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:59.472943 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:47:59.488574 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:47:59.488657 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:47:59.528870 1120970 cri.go:89] found id: ""
	I0729 19:47:59.528910 1120970 logs.go:276] 0 containers: []
	W0729 19:47:59.528921 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:47:59.528930 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:47:59.529001 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:47:59.565299 1120970 cri.go:89] found id: ""
	I0729 19:47:59.565331 1120970 logs.go:276] 0 containers: []
	W0729 19:47:59.565343 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:47:59.565351 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:47:59.565419 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:47:59.604951 1120970 cri.go:89] found id: ""
	I0729 19:47:59.604985 1120970 logs.go:276] 0 containers: []
	W0729 19:47:59.604996 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:47:59.605005 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:47:59.605076 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:47:59.639094 1120970 cri.go:89] found id: ""
	I0729 19:47:59.639121 1120970 logs.go:276] 0 containers: []
	W0729 19:47:59.639130 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:47:59.639138 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:47:59.639205 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:47:59.674360 1120970 cri.go:89] found id: ""
	I0729 19:47:59.674392 1120970 logs.go:276] 0 containers: []
	W0729 19:47:59.674401 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:47:59.674407 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:47:59.674462 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:47:59.712926 1120970 cri.go:89] found id: ""
	I0729 19:47:59.712950 1120970 logs.go:276] 0 containers: []
	W0729 19:47:59.712959 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:47:59.712965 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:47:59.713026 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:47:59.750493 1120970 cri.go:89] found id: ""
	I0729 19:47:59.750524 1120970 logs.go:276] 0 containers: []
	W0729 19:47:59.750532 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:47:59.750539 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:47:59.750603 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:47:59.790635 1120970 cri.go:89] found id: ""
	I0729 19:47:59.790663 1120970 logs.go:276] 0 containers: []
	W0729 19:47:59.790672 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:47:59.790687 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:47:59.790703 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:47:59.844160 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:47:59.844194 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:47:59.858123 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:47:59.858152 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:47:59.931561 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:47:59.931592 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:47:59.931609 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:48:00.014902 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:48:00.014947 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:48:02.555856 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:48:02.572781 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:48:02.572852 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:48:02.611005 1120970 cri.go:89] found id: ""
	I0729 19:48:02.611033 1120970 logs.go:276] 0 containers: []
	W0729 19:48:02.611043 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:48:02.611049 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:48:02.611101 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:48:02.652844 1120970 cri.go:89] found id: ""
	I0729 19:48:02.652870 1120970 logs.go:276] 0 containers: []
	W0729 19:48:02.652876 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:48:02.652883 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:48:02.652937 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:48:02.694690 1120970 cri.go:89] found id: ""
	I0729 19:48:02.694719 1120970 logs.go:276] 0 containers: []
	W0729 19:48:02.694729 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:48:02.694738 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:48:02.694799 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:48:02.729527 1120970 cri.go:89] found id: ""
	I0729 19:48:02.729558 1120970 logs.go:276] 0 containers: []
	W0729 19:48:02.729569 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:48:02.729576 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:48:02.729649 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:48:02.763460 1120970 cri.go:89] found id: ""
	I0729 19:48:02.763488 1120970 logs.go:276] 0 containers: []
	W0729 19:48:02.763497 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:48:02.763503 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:48:02.763556 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:48:02.798268 1120970 cri.go:89] found id: ""
	I0729 19:48:02.798294 1120970 logs.go:276] 0 containers: []
	W0729 19:48:02.798302 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:48:02.798309 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:48:02.798360 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:48:02.837540 1120970 cri.go:89] found id: ""
	I0729 19:48:02.837579 1120970 logs.go:276] 0 containers: []
	W0729 19:48:02.837591 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:48:02.837605 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:48:02.837672 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:48:02.873574 1120970 cri.go:89] found id: ""
	I0729 19:48:02.873612 1120970 logs.go:276] 0 containers: []
	W0729 19:48:02.873624 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:48:02.873646 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:48:02.873663 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:48:02.926260 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:48:02.926296 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:48:02.940593 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:48:02.940618 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 19:47:59.195148 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:01.195230 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:03.196163 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:02.139034 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:04.637691 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:02.778540 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:04.781529 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	W0729 19:48:03.015778 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:48:03.015800 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:48:03.015818 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:48:03.099824 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:48:03.099859 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:48:05.639291 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:48:05.652370 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:48:05.652431 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:48:05.686594 1120970 cri.go:89] found id: ""
	I0729 19:48:05.686624 1120970 logs.go:276] 0 containers: []
	W0729 19:48:05.686633 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:48:05.686640 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:48:05.686701 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:48:05.722162 1120970 cri.go:89] found id: ""
	I0729 19:48:05.722192 1120970 logs.go:276] 0 containers: []
	W0729 19:48:05.722209 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:48:05.722216 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:48:05.722284 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:48:05.754309 1120970 cri.go:89] found id: ""
	I0729 19:48:05.754338 1120970 logs.go:276] 0 containers: []
	W0729 19:48:05.754349 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:48:05.754357 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:48:05.754449 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:48:05.786934 1120970 cri.go:89] found id: ""
	I0729 19:48:05.786962 1120970 logs.go:276] 0 containers: []
	W0729 19:48:05.786968 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:48:05.786974 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:48:05.787032 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:48:05.821454 1120970 cri.go:89] found id: ""
	I0729 19:48:05.821487 1120970 logs.go:276] 0 containers: []
	W0729 19:48:05.821498 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:48:05.821506 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:48:05.821575 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:48:05.855436 1120970 cri.go:89] found id: ""
	I0729 19:48:05.855467 1120970 logs.go:276] 0 containers: []
	W0729 19:48:05.855478 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:48:05.855486 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:48:05.855551 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:48:05.887414 1120970 cri.go:89] found id: ""
	I0729 19:48:05.887447 1120970 logs.go:276] 0 containers: []
	W0729 19:48:05.887466 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:48:05.887477 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:48:05.887549 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:48:05.924173 1120970 cri.go:89] found id: ""
	I0729 19:48:05.924200 1120970 logs.go:276] 0 containers: []
	W0729 19:48:05.924208 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:48:05.924218 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:48:05.924231 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:48:05.977839 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:48:05.977872 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:48:05.991324 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:48:05.991359 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:48:06.065904 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:48:06.065931 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:48:06.065949 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:48:06.149225 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:48:06.149258 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:48:05.196530 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:07.695302 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:06.640464 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:09.137577 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:07.277286 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:09.278994 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:08.689901 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:48:08.705008 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:48:08.705073 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:48:08.746191 1120970 cri.go:89] found id: ""
	I0729 19:48:08.746222 1120970 logs.go:276] 0 containers: []
	W0729 19:48:08.746232 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:48:08.746240 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:48:08.746306 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:48:08.792092 1120970 cri.go:89] found id: ""
	I0729 19:48:08.792120 1120970 logs.go:276] 0 containers: []
	W0729 19:48:08.792130 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:48:08.792137 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:48:08.792196 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:48:08.831535 1120970 cri.go:89] found id: ""
	I0729 19:48:08.831567 1120970 logs.go:276] 0 containers: []
	W0729 19:48:08.831577 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:48:08.831585 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:48:08.831650 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:48:08.871544 1120970 cri.go:89] found id: ""
	I0729 19:48:08.871576 1120970 logs.go:276] 0 containers: []
	W0729 19:48:08.871587 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:48:08.871594 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:48:08.871661 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:48:08.909562 1120970 cri.go:89] found id: ""
	I0729 19:48:08.909594 1120970 logs.go:276] 0 containers: []
	W0729 19:48:08.909611 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:48:08.909621 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:48:08.909698 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:48:08.953074 1120970 cri.go:89] found id: ""
	I0729 19:48:08.953109 1120970 logs.go:276] 0 containers: []
	W0729 19:48:08.953121 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:48:08.953130 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:48:08.953202 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:48:08.992361 1120970 cri.go:89] found id: ""
	I0729 19:48:08.992400 1120970 logs.go:276] 0 containers: []
	W0729 19:48:08.992412 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:48:08.992421 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:48:08.992488 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:48:09.046065 1120970 cri.go:89] found id: ""
	I0729 19:48:09.046093 1120970 logs.go:276] 0 containers: []
	W0729 19:48:09.046101 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:48:09.046113 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:48:09.046134 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:48:09.103453 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:48:09.103494 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:48:09.117220 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:48:09.117245 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:48:09.188222 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:48:09.188252 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:48:09.188270 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:48:09.271640 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:48:09.271677 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:48:11.812430 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:48:11.827291 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:48:11.827387 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:48:11.865062 1120970 cri.go:89] found id: ""
	I0729 19:48:11.865099 1120970 logs.go:276] 0 containers: []
	W0729 19:48:11.865111 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:48:11.865120 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:48:11.865212 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:48:11.899431 1120970 cri.go:89] found id: ""
	I0729 19:48:11.899465 1120970 logs.go:276] 0 containers: []
	W0729 19:48:11.899475 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:48:11.899483 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:48:11.899547 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:48:11.933796 1120970 cri.go:89] found id: ""
	I0729 19:48:11.933831 1120970 logs.go:276] 0 containers: []
	W0729 19:48:11.933843 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:48:11.933851 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:48:11.933920 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:48:11.976911 1120970 cri.go:89] found id: ""
	I0729 19:48:11.976941 1120970 logs.go:276] 0 containers: []
	W0729 19:48:11.976951 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:48:11.976958 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:48:11.977020 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:48:12.012692 1120970 cri.go:89] found id: ""
	I0729 19:48:12.012723 1120970 logs.go:276] 0 containers: []
	W0729 19:48:12.012732 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:48:12.012738 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:48:12.012801 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:48:12.049648 1120970 cri.go:89] found id: ""
	I0729 19:48:12.049684 1120970 logs.go:276] 0 containers: []
	W0729 19:48:12.049695 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:48:12.049704 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:48:12.049771 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:48:12.093629 1120970 cri.go:89] found id: ""
	I0729 19:48:12.093662 1120970 logs.go:276] 0 containers: []
	W0729 19:48:12.093673 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:48:12.093682 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:48:12.093752 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:48:12.130835 1120970 cri.go:89] found id: ""
	I0729 19:48:12.130887 1120970 logs.go:276] 0 containers: []
	W0729 19:48:12.130899 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:48:12.130912 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:48:12.130930 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:48:12.168464 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:48:12.168494 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:48:12.224722 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:48:12.224767 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:48:12.238454 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:48:12.238491 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:48:12.309122 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:48:12.309156 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:48:12.309171 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:48:10.195555 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:12.196093 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:11.638217 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:14.137267 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:11.778922 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:13.779268 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:14.892160 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:48:14.906036 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:48:14.906105 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:48:14.939106 1120970 cri.go:89] found id: ""
	I0729 19:48:14.939136 1120970 logs.go:276] 0 containers: []
	W0729 19:48:14.939144 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:48:14.939151 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:48:14.939218 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:48:14.973776 1120970 cri.go:89] found id: ""
	I0729 19:48:14.973806 1120970 logs.go:276] 0 containers: []
	W0729 19:48:14.973817 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:48:14.973825 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:48:14.973887 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:48:15.004448 1120970 cri.go:89] found id: ""
	I0729 19:48:15.004475 1120970 logs.go:276] 0 containers: []
	W0729 19:48:15.004483 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:48:15.004489 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:48:15.004556 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:48:15.038066 1120970 cri.go:89] found id: ""
	I0729 19:48:15.038093 1120970 logs.go:276] 0 containers: []
	W0729 19:48:15.038101 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:48:15.038110 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:48:15.038174 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:48:15.070539 1120970 cri.go:89] found id: ""
	I0729 19:48:15.070568 1120970 logs.go:276] 0 containers: []
	W0729 19:48:15.070577 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:48:15.070585 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:48:15.070646 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:48:15.103880 1120970 cri.go:89] found id: ""
	I0729 19:48:15.103922 1120970 logs.go:276] 0 containers: []
	W0729 19:48:15.103934 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:48:15.103943 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:48:15.104013 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:48:15.140762 1120970 cri.go:89] found id: ""
	I0729 19:48:15.140785 1120970 logs.go:276] 0 containers: []
	W0729 19:48:15.140792 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:48:15.140798 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:48:15.140850 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:48:15.174376 1120970 cri.go:89] found id: ""
	I0729 19:48:15.174411 1120970 logs.go:276] 0 containers: []
	W0729 19:48:15.174422 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:48:15.174434 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:48:15.174457 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:48:15.231283 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:48:15.231319 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:48:15.245103 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:48:15.245131 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:48:15.317664 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:48:15.317685 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:48:15.317701 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:48:15.404545 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:48:15.404600 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:48:17.949406 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:48:17.963001 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:48:17.963084 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:48:14.697767 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:17.194300 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:16.137773 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:16.632390 1120280 pod_ready.go:81] duration metric: took 4m0.001130574s for pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace to be "Ready" ...
	E0729 19:48:16.632416 1120280 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace to be "Ready" (will not retry!)
	I0729 19:48:16.632439 1120280 pod_ready.go:38] duration metric: took 4m10.712020611s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 19:48:16.632469 1120280 kubeadm.go:597] duration metric: took 4m18.568642855s to restartPrimaryControlPlane
	W0729 19:48:16.632566 1120280 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0729 19:48:16.632597 1120280 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0729 19:48:16.279567 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:18.280676 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:20.779399 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:18.003227 1120970 cri.go:89] found id: ""
	I0729 19:48:18.003263 1120970 logs.go:276] 0 containers: []
	W0729 19:48:18.003274 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:48:18.003284 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:48:18.003363 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:48:18.037680 1120970 cri.go:89] found id: ""
	I0729 19:48:18.037716 1120970 logs.go:276] 0 containers: []
	W0729 19:48:18.037727 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:48:18.037736 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:48:18.037804 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:48:18.081360 1120970 cri.go:89] found id: ""
	I0729 19:48:18.081393 1120970 logs.go:276] 0 containers: []
	W0729 19:48:18.081403 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:48:18.081412 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:48:18.081479 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:48:18.115582 1120970 cri.go:89] found id: ""
	I0729 19:48:18.115619 1120970 logs.go:276] 0 containers: []
	W0729 19:48:18.115630 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:48:18.115639 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:48:18.115708 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:48:18.159771 1120970 cri.go:89] found id: ""
	I0729 19:48:18.159807 1120970 logs.go:276] 0 containers: []
	W0729 19:48:18.159818 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:48:18.159826 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:48:18.159899 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:48:18.206073 1120970 cri.go:89] found id: ""
	I0729 19:48:18.206100 1120970 logs.go:276] 0 containers: []
	W0729 19:48:18.206107 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:48:18.206113 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:48:18.206173 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:48:18.241841 1120970 cri.go:89] found id: ""
	I0729 19:48:18.241880 1120970 logs.go:276] 0 containers: []
	W0729 19:48:18.241892 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:48:18.241900 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:48:18.241969 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:48:18.280068 1120970 cri.go:89] found id: ""
	I0729 19:48:18.280099 1120970 logs.go:276] 0 containers: []
	W0729 19:48:18.280110 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:48:18.280123 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:48:18.280143 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:48:18.360236 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:48:18.360268 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:48:18.360285 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:48:18.447648 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:48:18.447693 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:48:18.489625 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:48:18.489663 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:48:18.543428 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:48:18.543476 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:48:21.058220 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:48:21.073079 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:48:21.073168 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:48:21.111334 1120970 cri.go:89] found id: ""
	I0729 19:48:21.111377 1120970 logs.go:276] 0 containers: []
	W0729 19:48:21.111389 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:48:21.111398 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:48:21.111462 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:48:21.144757 1120970 cri.go:89] found id: ""
	I0729 19:48:21.144788 1120970 logs.go:276] 0 containers: []
	W0729 19:48:21.144798 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:48:21.144806 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:48:21.144872 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:48:21.178887 1120970 cri.go:89] found id: ""
	I0729 19:48:21.178919 1120970 logs.go:276] 0 containers: []
	W0729 19:48:21.178927 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:48:21.178934 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:48:21.179000 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:48:21.216561 1120970 cri.go:89] found id: ""
	I0729 19:48:21.216589 1120970 logs.go:276] 0 containers: []
	W0729 19:48:21.216605 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:48:21.216612 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:48:21.216679 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:48:21.252564 1120970 cri.go:89] found id: ""
	I0729 19:48:21.252601 1120970 logs.go:276] 0 containers: []
	W0729 19:48:21.252612 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:48:21.252621 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:48:21.252692 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:48:21.287372 1120970 cri.go:89] found id: ""
	I0729 19:48:21.287399 1120970 logs.go:276] 0 containers: []
	W0729 19:48:21.287410 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:48:21.287418 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:48:21.287482 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:48:21.325121 1120970 cri.go:89] found id: ""
	I0729 19:48:21.325159 1120970 logs.go:276] 0 containers: []
	W0729 19:48:21.325169 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:48:21.325177 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:48:21.325248 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:48:21.359113 1120970 cri.go:89] found id: ""
	I0729 19:48:21.359145 1120970 logs.go:276] 0 containers: []
	W0729 19:48:21.359156 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:48:21.359169 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:48:21.359185 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:48:21.416196 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:48:21.416233 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:48:21.430635 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:48:21.430668 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:48:21.498436 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:48:21.498461 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:48:21.498478 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:48:21.578602 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:48:21.578643 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
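The block above is one full iteration of the log-gathering loop that repeats while the v1.20.0 control plane is down: probe for a kube-apiserver process, list CRI containers for each control-plane component, then dump kubelet, dmesg, describe-nodes, CRI-O and container-status output. A minimal bash sketch of the same probes, assuming shell access to the node (for example via minikube ssh); each command is taken from the log lines above, only the loop form is added for brevity:

    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet kubernetes-dashboard; do
      # empty output here is what the log reports as `found id: ""` /
      # `No container was found matching "<name>"`
      sudo crictl ps -a --quiet --name="${name}"
    done
    sudo journalctl -u kubelet -n 400
    sudo journalctl -u crio -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes \
        --kubeconfig=/var/lib/minikube/kubeconfig   # refused until the apiserver is back up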
	I0729 19:48:19.195857 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:21.202391 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:23.696778 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:23.278313 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:25.279270 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:24.117802 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:48:24.132716 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:48:24.132796 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:48:24.168658 1120970 cri.go:89] found id: ""
	I0729 19:48:24.168689 1120970 logs.go:276] 0 containers: []
	W0729 19:48:24.168698 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:48:24.168703 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:48:24.168763 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:48:24.211499 1120970 cri.go:89] found id: ""
	I0729 19:48:24.211533 1120970 logs.go:276] 0 containers: []
	W0729 19:48:24.211543 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:48:24.211551 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:48:24.211622 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:48:24.244579 1120970 cri.go:89] found id: ""
	I0729 19:48:24.244607 1120970 logs.go:276] 0 containers: []
	W0729 19:48:24.244616 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:48:24.244622 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:48:24.244680 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:48:24.278356 1120970 cri.go:89] found id: ""
	I0729 19:48:24.278386 1120970 logs.go:276] 0 containers: []
	W0729 19:48:24.278396 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:48:24.278404 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:48:24.278469 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:48:24.314725 1120970 cri.go:89] found id: ""
	I0729 19:48:24.314760 1120970 logs.go:276] 0 containers: []
	W0729 19:48:24.314771 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:48:24.314779 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:48:24.314870 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:48:24.349743 1120970 cri.go:89] found id: ""
	I0729 19:48:24.349772 1120970 logs.go:276] 0 containers: []
	W0729 19:48:24.349781 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:48:24.349788 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:48:24.349863 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:48:24.382484 1120970 cri.go:89] found id: ""
	I0729 19:48:24.382511 1120970 logs.go:276] 0 containers: []
	W0729 19:48:24.382521 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:48:24.382529 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:48:24.382606 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:48:24.418986 1120970 cri.go:89] found id: ""
	I0729 19:48:24.419013 1120970 logs.go:276] 0 containers: []
	W0729 19:48:24.419020 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:48:24.419030 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:48:24.419052 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:48:24.456725 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:48:24.456762 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:48:24.508592 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:48:24.508628 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:48:24.521610 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:48:24.521642 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:48:24.591015 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:48:24.591041 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:48:24.591058 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:48:27.170099 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:48:27.183543 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:48:27.183619 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:48:27.218044 1120970 cri.go:89] found id: ""
	I0729 19:48:27.218075 1120970 logs.go:276] 0 containers: []
	W0729 19:48:27.218083 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:48:27.218090 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:48:27.218154 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:48:27.251613 1120970 cri.go:89] found id: ""
	I0729 19:48:27.251638 1120970 logs.go:276] 0 containers: []
	W0729 19:48:27.251646 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:48:27.251651 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:48:27.251707 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:48:27.291540 1120970 cri.go:89] found id: ""
	I0729 19:48:27.291569 1120970 logs.go:276] 0 containers: []
	W0729 19:48:27.291578 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:48:27.291586 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:48:27.291650 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:48:27.322921 1120970 cri.go:89] found id: ""
	I0729 19:48:27.322956 1120970 logs.go:276] 0 containers: []
	W0729 19:48:27.322965 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:48:27.322973 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:48:27.323042 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:48:27.360337 1120970 cri.go:89] found id: ""
	I0729 19:48:27.360370 1120970 logs.go:276] 0 containers: []
	W0729 19:48:27.360381 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:48:27.360389 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:48:27.360448 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:48:27.398445 1120970 cri.go:89] found id: ""
	I0729 19:48:27.398490 1120970 logs.go:276] 0 containers: []
	W0729 19:48:27.398502 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:48:27.398510 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:48:27.398577 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:48:27.432147 1120970 cri.go:89] found id: ""
	I0729 19:48:27.432176 1120970 logs.go:276] 0 containers: []
	W0729 19:48:27.432184 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:48:27.432191 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:48:27.432260 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:48:27.471347 1120970 cri.go:89] found id: ""
	I0729 19:48:27.471380 1120970 logs.go:276] 0 containers: []
	W0729 19:48:27.471392 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:48:27.471404 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:48:27.471421 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:48:27.526997 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:48:27.527032 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:48:27.541189 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:48:27.541219 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:48:27.612270 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:48:27.612293 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:48:27.612310 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:48:27.688940 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:48:27.688979 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:48:26.195903 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:28.696936 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:27.778151 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:30.278900 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:30.228578 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:48:30.241827 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:48:30.241896 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:48:30.275201 1120970 cri.go:89] found id: ""
	I0729 19:48:30.275230 1120970 logs.go:276] 0 containers: []
	W0729 19:48:30.275241 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:48:30.275249 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:48:30.275305 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:48:30.313499 1120970 cri.go:89] found id: ""
	I0729 19:48:30.313526 1120970 logs.go:276] 0 containers: []
	W0729 19:48:30.313534 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:48:30.313540 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:48:30.313593 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:48:30.348036 1120970 cri.go:89] found id: ""
	I0729 19:48:30.348063 1120970 logs.go:276] 0 containers: []
	W0729 19:48:30.348072 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:48:30.348078 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:48:30.348148 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:48:30.383104 1120970 cri.go:89] found id: ""
	I0729 19:48:30.383135 1120970 logs.go:276] 0 containers: []
	W0729 19:48:30.383147 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:48:30.383155 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:48:30.383244 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:48:30.421367 1120970 cri.go:89] found id: ""
	I0729 19:48:30.421395 1120970 logs.go:276] 0 containers: []
	W0729 19:48:30.421404 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:48:30.421418 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:48:30.421484 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:48:30.460712 1120970 cri.go:89] found id: ""
	I0729 19:48:30.460746 1120970 logs.go:276] 0 containers: []
	W0729 19:48:30.460758 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:48:30.460767 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:48:30.460832 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:48:30.503728 1120970 cri.go:89] found id: ""
	I0729 19:48:30.503757 1120970 logs.go:276] 0 containers: []
	W0729 19:48:30.503769 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:48:30.503777 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:48:30.503842 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:48:30.544605 1120970 cri.go:89] found id: ""
	I0729 19:48:30.544639 1120970 logs.go:276] 0 containers: []
	W0729 19:48:30.544651 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:48:30.544663 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:48:30.544680 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:48:30.559616 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:48:30.559652 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:48:30.634554 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:48:30.634578 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:48:30.634599 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:48:30.717930 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:48:30.717968 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:48:30.759109 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:48:30.759140 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:48:31.194967 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:33.195033 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:32.777218 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:34.777917 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:33.313550 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:48:33.327425 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:48:33.327483 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:48:33.369009 1120970 cri.go:89] found id: ""
	I0729 19:48:33.369037 1120970 logs.go:276] 0 containers: []
	W0729 19:48:33.369047 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:48:33.369054 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:48:33.369121 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:48:33.406459 1120970 cri.go:89] found id: ""
	I0729 19:48:33.406491 1120970 logs.go:276] 0 containers: []
	W0729 19:48:33.406501 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:48:33.406509 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:48:33.406579 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:48:33.444176 1120970 cri.go:89] found id: ""
	I0729 19:48:33.444210 1120970 logs.go:276] 0 containers: []
	W0729 19:48:33.444222 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:48:33.444230 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:48:33.444297 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:48:33.482882 1120970 cri.go:89] found id: ""
	I0729 19:48:33.482977 1120970 logs.go:276] 0 containers: []
	W0729 19:48:33.482994 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:48:33.483002 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:48:33.483070 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:48:33.516972 1120970 cri.go:89] found id: ""
	I0729 19:48:33.516999 1120970 logs.go:276] 0 containers: []
	W0729 19:48:33.517009 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:48:33.517015 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:48:33.517077 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:48:33.557559 1120970 cri.go:89] found id: ""
	I0729 19:48:33.557598 1120970 logs.go:276] 0 containers: []
	W0729 19:48:33.557620 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:48:33.557629 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:48:33.557699 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:48:33.592756 1120970 cri.go:89] found id: ""
	I0729 19:48:33.592786 1120970 logs.go:276] 0 containers: []
	W0729 19:48:33.592793 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:48:33.592799 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:48:33.592858 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:48:33.626104 1120970 cri.go:89] found id: ""
	I0729 19:48:33.626136 1120970 logs.go:276] 0 containers: []
	W0729 19:48:33.626147 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:48:33.626158 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:48:33.626175 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:48:33.680456 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:48:33.680498 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:48:33.694700 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:48:33.694732 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:48:33.770833 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:48:33.770863 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:48:33.770881 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:48:33.847537 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:48:33.847571 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:48:36.390251 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:48:36.403265 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:48:36.403377 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:48:36.437189 1120970 cri.go:89] found id: ""
	I0729 19:48:36.437216 1120970 logs.go:276] 0 containers: []
	W0729 19:48:36.437227 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:48:36.437235 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:48:36.437296 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:48:36.471025 1120970 cri.go:89] found id: ""
	I0729 19:48:36.471056 1120970 logs.go:276] 0 containers: []
	W0729 19:48:36.471067 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:48:36.471083 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:48:36.471143 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:48:36.504736 1120970 cri.go:89] found id: ""
	I0729 19:48:36.504767 1120970 logs.go:276] 0 containers: []
	W0729 19:48:36.504779 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:48:36.504787 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:48:36.504852 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:48:36.537866 1120970 cri.go:89] found id: ""
	I0729 19:48:36.537893 1120970 logs.go:276] 0 containers: []
	W0729 19:48:36.537903 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:48:36.537911 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:48:36.537974 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:48:36.574083 1120970 cri.go:89] found id: ""
	I0729 19:48:36.574116 1120970 logs.go:276] 0 containers: []
	W0729 19:48:36.574127 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:48:36.574136 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:48:36.574199 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:48:36.613130 1120970 cri.go:89] found id: ""
	I0729 19:48:36.613160 1120970 logs.go:276] 0 containers: []
	W0729 19:48:36.613172 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:48:36.613179 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:48:36.613244 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:48:36.649617 1120970 cri.go:89] found id: ""
	I0729 19:48:36.649644 1120970 logs.go:276] 0 containers: []
	W0729 19:48:36.649655 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:48:36.649663 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:48:36.649731 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:48:36.688729 1120970 cri.go:89] found id: ""
	I0729 19:48:36.688765 1120970 logs.go:276] 0 containers: []
	W0729 19:48:36.688777 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:48:36.688790 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:48:36.688807 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:48:36.741483 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:48:36.741524 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:48:36.759730 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:48:36.759777 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:48:36.847102 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:48:36.847129 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:48:36.847148 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:48:36.928364 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:48:36.928403 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:48:35.695788 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:38.195691 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:36.780250 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:38.272543 1120587 pod_ready.go:81] duration metric: took 4m0.000382733s for pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace to be "Ready" ...
	E0729 19:48:38.272574 1120587 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace to be "Ready" (will not retry!)
	I0729 19:48:38.272595 1120587 pod_ready.go:38] duration metric: took 4m12.412522427s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 19:48:38.272622 1120587 kubeadm.go:597] duration metric: took 4m20.569295588s to restartPrimaryControlPlane
	W0729 19:48:38.272693 1120587 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0729 19:48:38.272722 1120587 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
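The two pod_ready timeouts above are the 4m0s WaitExtra budget expiring for the metrics-server pod, after which the existing control plane is abandoned and the run falls back to the `kubeadm reset` on the last line. A rough hand-run equivalent of that readiness wait, as a sketch only: the label selector is an assumption based on the pod name, and the helper in pod_ready.go polls the API directly rather than shelling out to kubectl:

    kubectl -n kube-system wait pod \
        -l k8s-app=metrics-server \
        --for=condition=Ready --timeout=4m0s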
	I0729 19:48:39.468501 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:48:39.482102 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:48:39.482180 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:48:39.522722 1120970 cri.go:89] found id: ""
	I0729 19:48:39.522754 1120970 logs.go:276] 0 containers: []
	W0729 19:48:39.522763 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:48:39.522769 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:48:39.522824 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:48:39.561057 1120970 cri.go:89] found id: ""
	I0729 19:48:39.561088 1120970 logs.go:276] 0 containers: []
	W0729 19:48:39.561098 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:48:39.561106 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:48:39.561185 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:48:39.599802 1120970 cri.go:89] found id: ""
	I0729 19:48:39.599831 1120970 logs.go:276] 0 containers: []
	W0729 19:48:39.599840 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:48:39.599848 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:48:39.599920 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:48:39.634935 1120970 cri.go:89] found id: ""
	I0729 19:48:39.634966 1120970 logs.go:276] 0 containers: []
	W0729 19:48:39.634978 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:48:39.634986 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:48:39.635054 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:48:39.670682 1120970 cri.go:89] found id: ""
	I0729 19:48:39.670713 1120970 logs.go:276] 0 containers: []
	W0729 19:48:39.670721 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:48:39.670728 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:48:39.670798 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:48:39.705988 1120970 cri.go:89] found id: ""
	I0729 19:48:39.706024 1120970 logs.go:276] 0 containers: []
	W0729 19:48:39.706034 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:48:39.706042 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:48:39.706112 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:48:39.743886 1120970 cri.go:89] found id: ""
	I0729 19:48:39.743919 1120970 logs.go:276] 0 containers: []
	W0729 19:48:39.743931 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:48:39.743938 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:48:39.744007 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:48:39.781966 1120970 cri.go:89] found id: ""
	I0729 19:48:39.782000 1120970 logs.go:276] 0 containers: []
	W0729 19:48:39.782011 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:48:39.782023 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:48:39.782040 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:48:39.836034 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:48:39.836074 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:48:39.849330 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:48:39.849365 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:48:39.922803 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:48:39.922832 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:48:39.922860 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:48:40.006015 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:48:40.006061 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:48:42.556277 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:48:42.569657 1120970 kubeadm.go:597] duration metric: took 4m2.867642237s to restartPrimaryControlPlane
	W0729 19:48:42.569742 1120970 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0729 19:48:42.569773 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0729 19:48:40.695917 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:43.195442 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:43.033878 1120970 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 19:48:43.048499 1120970 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 19:48:43.058936 1120970 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 19:48:43.070746 1120970 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 19:48:43.070766 1120970 kubeadm.go:157] found existing configuration files:
	
	I0729 19:48:43.070814 1120970 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 19:48:43.079568 1120970 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 19:48:43.079631 1120970 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 19:48:43.088576 1120970 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 19:48:43.097654 1120970 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 19:48:43.097723 1120970 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 19:48:43.107155 1120970 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 19:48:43.117105 1120970 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 19:48:43.117152 1120970 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 19:48:43.126933 1120970 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 19:48:43.136114 1120970 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 19:48:43.136162 1120970 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 19:48:43.145196 1120970 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 19:48:43.365894 1120970 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
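Before the `kubeadm init` above, the grep/rm sequence starting at the `config check failed` line clears stale kubeconfigs: any /etc/kubernetes/*.conf that is missing or does not point at https://control-plane.minikube.internal:8443 is removed so init can regenerate it. A condensed bash sketch of that check; the loop form is only for brevity, while the endpoint, file names, and commands come from the log lines above:

    endpoint="https://control-plane.minikube.internal:8443"
    for f in admin kubelet controller-manager scheduler; do
      conf="/etc/kubernetes/${f}.conf"
      # missing file or wrong endpoint -> remove it so `kubeadm init` rewrites it
      sudo grep -q "${endpoint}" "${conf}" 2>/dev/null || sudo rm -f "${conf}"
    done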
	I0729 19:48:45.695643 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:47.696055 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:48.051556 1120280 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.418935975s)
	I0729 19:48:48.051634 1120280 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 19:48:48.066832 1120280 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 19:48:48.076768 1120280 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 19:48:48.086203 1120280 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 19:48:48.086224 1120280 kubeadm.go:157] found existing configuration files:
	
	I0729 19:48:48.086269 1120280 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 19:48:48.095286 1120280 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 19:48:48.095344 1120280 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 19:48:48.104238 1120280 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 19:48:48.113232 1120280 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 19:48:48.113287 1120280 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 19:48:48.122679 1120280 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 19:48:48.131511 1120280 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 19:48:48.131565 1120280 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 19:48:48.140110 1120280 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 19:48:48.148601 1120280 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 19:48:48.148650 1120280 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 19:48:48.157410 1120280 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 19:48:48.352715 1120280 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 19:48:50.195418 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:52.696285 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:56.332520 1120280 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0729 19:48:56.332571 1120280 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 19:48:56.332675 1120280 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 19:48:56.332770 1120280 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 19:48:56.332853 1120280 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0729 19:48:56.332967 1120280 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 19:48:56.334322 1120280 out.go:204]   - Generating certificates and keys ...
	I0729 19:48:56.334409 1120280 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 19:48:56.334490 1120280 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 19:48:56.334605 1120280 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0729 19:48:56.334688 1120280 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0729 19:48:56.334798 1120280 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0729 19:48:56.334897 1120280 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0729 19:48:56.334984 1120280 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0729 19:48:56.335060 1120280 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0729 19:48:56.335161 1120280 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0729 19:48:56.335270 1120280 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0729 19:48:56.335324 1120280 kubeadm.go:310] [certs] Using the existing "sa" key
	I0729 19:48:56.335374 1120280 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 19:48:56.335423 1120280 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 19:48:56.335473 1120280 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0729 19:48:56.335532 1120280 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 19:48:56.335614 1120280 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 19:48:56.335675 1120280 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 19:48:56.335785 1120280 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 19:48:56.335884 1120280 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 19:48:56.336979 1120280 out.go:204]   - Booting up control plane ...
	I0729 19:48:56.337065 1120280 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 19:48:56.337133 1120280 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 19:48:56.337201 1120280 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 19:48:56.337326 1120280 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 19:48:56.337427 1120280 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 19:48:56.337498 1120280 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 19:48:56.337647 1120280 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0729 19:48:56.337714 1120280 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0729 19:48:56.337762 1120280 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.952649ms
	I0729 19:48:56.337821 1120280 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0729 19:48:56.337868 1120280 kubeadm.go:310] [api-check] The API server is healthy after 5.002178003s
	I0729 19:48:56.337955 1120280 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0729 19:48:56.338084 1120280 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0729 19:48:56.338139 1120280 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0729 19:48:56.338289 1120280 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-358053 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0729 19:48:56.338342 1120280 kubeadm.go:310] [bootstrap-token] Using token: 4fomec.1511vtef88eg64ao
	I0729 19:48:56.339522 1120280 out.go:204]   - Configuring RBAC rules ...
	I0729 19:48:56.339612 1120280 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0729 19:48:56.339681 1120280 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0729 19:48:56.339857 1120280 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0729 19:48:56.339995 1120280 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0729 19:48:56.340156 1120280 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0729 19:48:56.340283 1120280 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0729 19:48:56.340438 1120280 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0729 19:48:56.340511 1120280 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0729 19:48:56.340575 1120280 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0729 19:48:56.340585 1120280 kubeadm.go:310] 
	I0729 19:48:56.340671 1120280 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0729 19:48:56.340681 1120280 kubeadm.go:310] 
	I0729 19:48:56.340762 1120280 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0729 19:48:56.340781 1120280 kubeadm.go:310] 
	I0729 19:48:56.340812 1120280 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0729 19:48:56.340861 1120280 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0729 19:48:56.340904 1120280 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0729 19:48:56.340907 1120280 kubeadm.go:310] 
	I0729 19:48:56.340972 1120280 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0729 19:48:56.340978 1120280 kubeadm.go:310] 
	I0729 19:48:56.341034 1120280 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0729 19:48:56.341038 1120280 kubeadm.go:310] 
	I0729 19:48:56.341083 1120280 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0729 19:48:56.341151 1120280 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0729 19:48:56.341209 1120280 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0729 19:48:56.341219 1120280 kubeadm.go:310] 
	I0729 19:48:56.341285 1120280 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0729 19:48:56.341369 1120280 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0729 19:48:56.341376 1120280 kubeadm.go:310] 
	I0729 19:48:56.341454 1120280 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 4fomec.1511vtef88eg64ao \
	I0729 19:48:56.341602 1120280 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:53e118302d1159bf0acd8cfe005edb508199276288ff17d6630ff7f7307f10b1 \
	I0729 19:48:56.341636 1120280 kubeadm.go:310] 	--control-plane 
	I0729 19:48:56.341642 1120280 kubeadm.go:310] 
	I0729 19:48:56.341752 1120280 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0729 19:48:56.341769 1120280 kubeadm.go:310] 
	I0729 19:48:56.341886 1120280 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 4fomec.1511vtef88eg64ao \
	I0729 19:48:56.342018 1120280 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:53e118302d1159bf0acd8cfe005edb508199276288ff17d6630ff7f7307f10b1 
	I0729 19:48:56.342034 1120280 cni.go:84] Creating CNI manager for ""
	I0729 19:48:56.342044 1120280 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 19:48:56.343241 1120280 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 19:48:55.195151 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:57.195200 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:56.344247 1120280 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 19:48:56.355941 1120280 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
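	(Editor's note, for readers unfamiliar with this step: the two commands above create /etc/cni/net.d and copy a bridge CNI conflist into it. A minimal sketch of what such a write can look like follows; the JSON fields and the 10.244.0.0/16 subnet are illustrative assumptions, not the exact 496-byte payload minikube transferred here.)

	# Illustrative only -- assumed shape of a bridge CNI conflist, not minikube's exact file.
	sudo mkdir -p /etc/cni/net.d
	sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
	{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    { "type": "bridge", "bridge": "bridge", "isDefaultGateway": true,
	      "ipMasq": true, "hairpinMode": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" } },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}
	EOF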
	I0729 19:48:56.377835 1120280 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0729 19:48:56.377932 1120280 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:48:56.377958 1120280 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-358053 minikube.k8s.io/updated_at=2024_07_29T19_48_56_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=ff26f50543a99741e8a988a34136ffea2b41dbf0 minikube.k8s.io/name=embed-certs-358053 minikube.k8s.io/primary=true
	I0729 19:48:56.394308 1120280 ops.go:34] apiserver oom_adj: -16
	I0729 19:48:56.575183 1120280 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:48:57.076094 1120280 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:48:57.575985 1120280 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:48:58.075805 1120280 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:48:58.576183 1120280 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:48:59.075390 1120280 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:48:59.576159 1120280 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:48:59.195343 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:49:01.696180 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:49:00.075628 1120280 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:00.575675 1120280 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:01.075529 1120280 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:01.576070 1120280 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:02.076065 1120280 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:02.575283 1120280 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:03.076139 1120280 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:03.575717 1120280 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:04.076142 1120280 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:04.575998 1120280 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:04.194697 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:49:06.195094 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:49:08.695788 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:49:05.075222 1120280 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:05.575723 1120280 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:06.075652 1120280 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:06.575680 1120280 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:07.075645 1120280 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:07.575900 1120280 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:08.075951 1120280 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:08.576178 1120280 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:09.076094 1120280 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:09.575480 1120280 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:10.075954 1120280 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:10.185328 1120280 kubeadm.go:1113] duration metric: took 13.807462033s to wait for elevateKubeSystemPrivileges
	I0729 19:49:10.185372 1120280 kubeadm.go:394] duration metric: took 5m12.173830361s to StartCluster
	I0729 19:49:10.185408 1120280 settings.go:142] acquiring lock: {Name:mk8657322241b3b1f65443d6cee1b2ccb99f315e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 19:49:10.185614 1120280 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19312-1055011/kubeconfig
	I0729 19:49:10.188419 1120280 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-1055011/kubeconfig: {Name:mkf834b33d9b214f3561db5b8f8958d26700afbb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 19:49:10.188761 1120280 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.201 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 19:49:10.188839 1120280 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0729 19:49:10.188929 1120280 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-358053"
	I0729 19:49:10.188939 1120280 config.go:182] Loaded profile config "embed-certs-358053": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 19:49:10.188968 1120280 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-358053"
	I0729 19:49:10.188957 1120280 addons.go:69] Setting default-storageclass=true in profile "embed-certs-358053"
	W0729 19:49:10.188978 1120280 addons.go:243] addon storage-provisioner should already be in state true
	I0729 19:49:10.188967 1120280 addons.go:69] Setting metrics-server=true in profile "embed-certs-358053"
	I0729 19:49:10.189017 1120280 addons.go:234] Setting addon metrics-server=true in "embed-certs-358053"
	I0729 19:49:10.189016 1120280 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-358053"
	I0729 19:49:10.189023 1120280 host.go:66] Checking if "embed-certs-358053" exists ...
	W0729 19:49:10.189026 1120280 addons.go:243] addon metrics-server should already be in state true
	I0729 19:49:10.189059 1120280 host.go:66] Checking if "embed-certs-358053" exists ...
	I0729 19:49:10.189460 1120280 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:49:10.189461 1120280 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:49:10.189493 1120280 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:49:10.189464 1120280 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:49:10.189513 1120280 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:49:10.189539 1120280 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:49:10.192359 1120280 out.go:177] * Verifying Kubernetes components...
	I0729 19:49:10.193480 1120280 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 19:49:10.210772 1120280 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43059
	I0729 19:49:10.210789 1120280 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37187
	I0729 19:49:10.210777 1120280 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43007
	I0729 19:49:10.211410 1120280 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:49:10.211444 1120280 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:49:10.211415 1120280 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:49:10.211943 1120280 main.go:141] libmachine: Using API Version  1
	I0729 19:49:10.211961 1120280 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:49:10.212067 1120280 main.go:141] libmachine: Using API Version  1
	I0729 19:49:10.212082 1120280 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:49:10.212104 1120280 main.go:141] libmachine: Using API Version  1
	I0729 19:49:10.212129 1120280 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:49:10.212485 1120280 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:49:10.212490 1120280 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:49:10.212517 1120280 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:49:10.213028 1120280 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:49:10.213061 1120280 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:49:10.213275 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetState
	I0729 19:49:10.213666 1120280 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:49:10.213693 1120280 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:49:10.217668 1120280 addons.go:234] Setting addon default-storageclass=true in "embed-certs-358053"
	W0729 19:49:10.217694 1120280 addons.go:243] addon default-storageclass should already be in state true
	I0729 19:49:10.217729 1120280 host.go:66] Checking if "embed-certs-358053" exists ...
	I0729 19:49:10.218106 1120280 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:49:10.218134 1120280 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:49:10.233308 1120280 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34717
	I0729 19:49:10.233515 1120280 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45983
	I0729 19:49:10.233923 1120280 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:49:10.234065 1120280 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:49:10.234486 1120280 main.go:141] libmachine: Using API Version  1
	I0729 19:49:10.234511 1120280 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:49:10.234622 1120280 main.go:141] libmachine: Using API Version  1
	I0729 19:49:10.234646 1120280 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:49:10.234881 1120280 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:49:10.235095 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetState
	I0729 19:49:10.235124 1120280 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:49:10.236407 1120280 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37239
	I0729 19:49:10.236417 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetState
	I0729 19:49:10.236976 1120280 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:49:10.237510 1120280 main.go:141] libmachine: Using API Version  1
	I0729 19:49:10.237529 1120280 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:49:10.237603 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .DriverName
	I0729 19:49:10.238068 1120280 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:49:10.238462 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .DriverName
	I0729 19:49:10.238685 1120280 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:49:10.238717 1120280 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:49:10.239583 1120280 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0729 19:49:10.240247 1120280 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 19:49:09.758990 1120587 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.486239671s)
	I0729 19:49:09.759083 1120587 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 19:49:09.774752 1120587 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 19:49:09.785968 1120587 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 19:49:09.796242 1120587 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 19:49:09.796267 1120587 kubeadm.go:157] found existing configuration files:
	
	I0729 19:49:09.796320 1120587 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0729 19:49:09.805373 1120587 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 19:49:09.805446 1120587 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 19:49:09.814418 1120587 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0729 19:49:09.822923 1120587 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 19:49:09.822977 1120587 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 19:49:09.831784 1120587 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0729 19:49:09.840631 1120587 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 19:49:09.840670 1120587 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 19:49:09.850149 1120587 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0729 19:49:09.858648 1120587 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 19:49:09.858685 1120587 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 19:49:09.868191 1120587 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 19:49:09.918324 1120587 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0729 19:49:09.918439 1120587 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 19:49:10.082807 1120587 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 19:49:10.082977 1120587 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 19:49:10.083133 1120587 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0729 19:49:10.346327 1120587 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 19:49:10.347784 1120587 out.go:204]   - Generating certificates and keys ...
	I0729 19:49:10.347895 1120587 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 19:49:10.347974 1120587 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 19:49:10.348065 1120587 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0729 19:49:10.348152 1120587 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0729 19:49:10.348236 1120587 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0729 19:49:10.348312 1120587 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0729 19:49:10.348395 1120587 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0729 19:49:10.348479 1120587 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0729 19:49:10.348573 1120587 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0729 19:49:10.348672 1120587 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0729 19:49:10.348726 1120587 kubeadm.go:310] [certs] Using the existing "sa" key
	I0729 19:49:10.348806 1120587 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 19:49:10.558934 1120587 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 19:49:10.733434 1120587 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0729 19:49:11.026079 1120587 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 19:49:11.159826 1120587 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 19:49:11.277696 1120587 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 19:49:11.278383 1120587 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 19:49:11.281036 1120587 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 19:49:10.240921 1120280 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0729 19:49:10.240936 1120280 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0729 19:49:10.240952 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHHostname
	I0729 19:49:10.241651 1120280 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 19:49:10.241674 1120280 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0729 19:49:10.241693 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHHostname
	I0729 19:49:10.245407 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:49:10.245440 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:49:10.245923 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:9e:78", ip: ""} in network mk-embed-certs-358053: {Iface:virbr3 ExpiryTime:2024-07-29 20:43:42 +0000 UTC Type:0 Mac:52:54:00:b7:9e:78 Iaid: IPaddr:192.168.61.201 Prefix:24 Hostname:embed-certs-358053 Clientid:01:52:54:00:b7:9e:78}
	I0729 19:49:10.245922 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:9e:78", ip: ""} in network mk-embed-certs-358053: {Iface:virbr3 ExpiryTime:2024-07-29 20:43:42 +0000 UTC Type:0 Mac:52:54:00:b7:9e:78 Iaid: IPaddr:192.168.61.201 Prefix:24 Hostname:embed-certs-358053 Clientid:01:52:54:00:b7:9e:78}
	I0729 19:49:10.245947 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined IP address 192.168.61.201 and MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:49:10.245967 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined IP address 192.168.61.201 and MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:49:10.246145 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHPort
	I0729 19:49:10.246329 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHKeyPath
	I0729 19:49:10.246372 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHPort
	I0729 19:49:10.246511 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHUsername
	I0729 19:49:10.246672 1120280 sshutil.go:53] new ssh client: &{IP:192.168.61.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/embed-certs-358053/id_rsa Username:docker}
	I0729 19:49:10.246688 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHKeyPath
	I0729 19:49:10.246866 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHUsername
	I0729 19:49:10.246988 1120280 sshutil.go:53] new ssh client: &{IP:192.168.61.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/embed-certs-358053/id_rsa Username:docker}
	I0729 19:49:10.256682 1120280 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43353
	I0729 19:49:10.257146 1120280 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:49:10.257747 1120280 main.go:141] libmachine: Using API Version  1
	I0729 19:49:10.257760 1120280 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:49:10.258021 1120280 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:49:10.258264 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetState
	I0729 19:49:10.260096 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .DriverName
	I0729 19:49:10.260305 1120280 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0729 19:49:10.260322 1120280 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0729 19:49:10.260341 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHHostname
	I0729 19:49:10.263479 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:49:10.263914 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:9e:78", ip: ""} in network mk-embed-certs-358053: {Iface:virbr3 ExpiryTime:2024-07-29 20:43:42 +0000 UTC Type:0 Mac:52:54:00:b7:9e:78 Iaid: IPaddr:192.168.61.201 Prefix:24 Hostname:embed-certs-358053 Clientid:01:52:54:00:b7:9e:78}
	I0729 19:49:10.263942 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined IP address 192.168.61.201 and MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:49:10.264099 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHPort
	I0729 19:49:10.264270 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHKeyPath
	I0729 19:49:10.264457 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHUsername
	I0729 19:49:10.264566 1120280 sshutil.go:53] new ssh client: &{IP:192.168.61.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/embed-certs-358053/id_rsa Username:docker}
	I0729 19:49:10.461598 1120280 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 19:49:10.483007 1120280 node_ready.go:35] waiting up to 6m0s for node "embed-certs-358053" to be "Ready" ...
	I0729 19:49:10.492573 1120280 node_ready.go:49] node "embed-certs-358053" has status "Ready":"True"
	I0729 19:49:10.492601 1120280 node_ready.go:38] duration metric: took 9.562848ms for node "embed-certs-358053" to be "Ready" ...
	I0729 19:49:10.492611 1120280 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 19:49:10.498908 1120280 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-62wzl" in "kube-system" namespace to be "Ready" ...
	I0729 19:49:10.574473 1120280 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0729 19:49:10.574500 1120280 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0729 19:49:10.596936 1120280 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 19:49:10.598355 1120280 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0729 19:49:10.598373 1120280 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0729 19:49:10.618403 1120280 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 19:49:10.618430 1120280 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0729 19:49:10.642761 1120280 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 19:49:10.717699 1120280 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0729 19:49:11.218300 1120280 main.go:141] libmachine: Making call to close driver server
	I0729 19:49:11.218321 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .Close
	I0729 19:49:11.218615 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | Closing plugin on server side
	I0729 19:49:11.218664 1120280 main.go:141] libmachine: Successfully made call to close driver server
	I0729 19:49:11.218676 1120280 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 19:49:11.218687 1120280 main.go:141] libmachine: Making call to close driver server
	I0729 19:49:11.218695 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .Close
	I0729 19:49:11.219043 1120280 main.go:141] libmachine: Successfully made call to close driver server
	I0729 19:49:11.219060 1120280 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 19:49:11.758222 1120280 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.115410935s)
	I0729 19:49:11.758294 1120280 main.go:141] libmachine: Making call to close driver server
	I0729 19:49:11.758311 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .Close
	I0729 19:49:11.758416 1120280 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.040630579s)
	I0729 19:49:11.758489 1120280 main.go:141] libmachine: Making call to close driver server
	I0729 19:49:11.758534 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .Close
	I0729 19:49:11.758645 1120280 main.go:141] libmachine: Successfully made call to close driver server
	I0729 19:49:11.758666 1120280 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 19:49:11.758677 1120280 main.go:141] libmachine: Making call to close driver server
	I0729 19:49:11.758684 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .Close
	I0729 19:49:11.759085 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | Closing plugin on server side
	I0729 19:49:11.759123 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | Closing plugin on server side
	I0729 19:49:11.759133 1120280 main.go:141] libmachine: Successfully made call to close driver server
	I0729 19:49:11.759140 1120280 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 19:49:11.759151 1120280 addons.go:475] Verifying addon metrics-server=true in "embed-certs-358053"
	I0729 19:49:11.759242 1120280 main.go:141] libmachine: Successfully made call to close driver server
	I0729 19:49:11.759251 1120280 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 19:49:11.759265 1120280 main.go:141] libmachine: Making call to close driver server
	I0729 19:49:11.759273 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .Close
	I0729 19:49:11.759556 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | Closing plugin on server side
	I0729 19:49:11.759551 1120280 main.go:141] libmachine: Successfully made call to close driver server
	I0729 19:49:11.759576 1120280 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 19:49:11.821869 1120280 main.go:141] libmachine: Making call to close driver server
	I0729 19:49:11.821904 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .Close
	I0729 19:49:11.822218 1120280 main.go:141] libmachine: Successfully made call to close driver server
	I0729 19:49:11.822239 1120280 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 19:49:11.822278 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | Closing plugin on server side
	I0729 19:49:11.825097 1120280 out.go:177] * Enabled addons: storage-provisioner, metrics-server, default-storageclass
	I0729 19:49:10.696468 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:49:12.696754 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:49:11.826501 1120280 addons.go:510] duration metric: took 1.63766283s for enable addons: enabled=[storage-provisioner metrics-server default-storageclass]
	I0729 19:49:12.505464 1120280 pod_ready.go:102] pod "coredns-7db6d8ff4d-62wzl" in "kube-system" namespace has status "Ready":"False"
	I0729 19:49:13.005934 1120280 pod_ready.go:92] pod "coredns-7db6d8ff4d-62wzl" in "kube-system" namespace has status "Ready":"True"
	I0729 19:49:13.005962 1120280 pod_ready.go:81] duration metric: took 2.507029118s for pod "coredns-7db6d8ff4d-62wzl" in "kube-system" namespace to be "Ready" ...
	I0729 19:49:13.005972 1120280 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-rnpqh" in "kube-system" namespace to be "Ready" ...
	I0729 19:49:13.010162 1120280 pod_ready.go:92] pod "coredns-7db6d8ff4d-rnpqh" in "kube-system" namespace has status "Ready":"True"
	I0729 19:49:13.010183 1120280 pod_ready.go:81] duration metric: took 4.204506ms for pod "coredns-7db6d8ff4d-rnpqh" in "kube-system" namespace to be "Ready" ...
	I0729 19:49:13.010191 1120280 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-358053" in "kube-system" namespace to be "Ready" ...
	I0729 19:49:13.013871 1120280 pod_ready.go:92] pod "etcd-embed-certs-358053" in "kube-system" namespace has status "Ready":"True"
	I0729 19:49:13.013888 1120280 pod_ready.go:81] duration metric: took 3.691352ms for pod "etcd-embed-certs-358053" in "kube-system" namespace to be "Ready" ...
	I0729 19:49:13.013895 1120280 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-358053" in "kube-system" namespace to be "Ready" ...
	I0729 19:49:13.017787 1120280 pod_ready.go:92] pod "kube-apiserver-embed-certs-358053" in "kube-system" namespace has status "Ready":"True"
	I0729 19:49:13.017804 1120280 pod_ready.go:81] duration metric: took 3.903153ms for pod "kube-apiserver-embed-certs-358053" in "kube-system" namespace to be "Ready" ...
	I0729 19:49:13.017812 1120280 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-358053" in "kube-system" namespace to be "Ready" ...
	I0729 19:49:13.021807 1120280 pod_ready.go:92] pod "kube-controller-manager-embed-certs-358053" in "kube-system" namespace has status "Ready":"True"
	I0729 19:49:13.021826 1120280 pod_ready.go:81] duration metric: took 4.00839ms for pod "kube-controller-manager-embed-certs-358053" in "kube-system" namespace to be "Ready" ...
	I0729 19:49:13.021834 1120280 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-phmxr" in "kube-system" namespace to be "Ready" ...
	I0729 19:49:13.404663 1120280 pod_ready.go:92] pod "kube-proxy-phmxr" in "kube-system" namespace has status "Ready":"True"
	I0729 19:49:13.404691 1120280 pod_ready.go:81] duration metric: took 382.850052ms for pod "kube-proxy-phmxr" in "kube-system" namespace to be "Ready" ...
	I0729 19:49:13.404703 1120280 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-358053" in "kube-system" namespace to be "Ready" ...
	I0729 19:49:13.803883 1120280 pod_ready.go:92] pod "kube-scheduler-embed-certs-358053" in "kube-system" namespace has status "Ready":"True"
	I0729 19:49:13.803913 1120280 pod_ready.go:81] duration metric: took 399.201369ms for pod "kube-scheduler-embed-certs-358053" in "kube-system" namespace to be "Ready" ...
	I0729 19:49:13.803924 1120280 pod_ready.go:38] duration metric: took 3.31130157s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 19:49:13.803944 1120280 api_server.go:52] waiting for apiserver process to appear ...
	I0729 19:49:13.804012 1120280 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:49:13.819097 1120280 api_server.go:72] duration metric: took 3.63029481s to wait for apiserver process to appear ...
	I0729 19:49:13.819127 1120280 api_server.go:88] waiting for apiserver healthz status ...
	I0729 19:49:13.819158 1120280 api_server.go:253] Checking apiserver healthz at https://192.168.61.201:8443/healthz ...
	I0729 19:49:13.825125 1120280 api_server.go:279] https://192.168.61.201:8443/healthz returned 200:
	ok
	I0729 19:49:13.826172 1120280 api_server.go:141] control plane version: v1.30.3
	I0729 19:49:13.826197 1120280 api_server.go:131] duration metric: took 7.062144ms to wait for apiserver health ...
	I0729 19:49:13.826206 1120280 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 19:49:14.006726 1120280 system_pods.go:59] 9 kube-system pods found
	I0729 19:49:14.006762 1120280 system_pods.go:61] "coredns-7db6d8ff4d-62wzl" [c0cf63a3-98a8-4107-8b51-3b9a39695a6c] Running
	I0729 19:49:14.006769 1120280 system_pods.go:61] "coredns-7db6d8ff4d-rnpqh" [fd0f6d7f-a55a-4556-b5e3-8ed4e555aaea] Running
	I0729 19:49:14.006774 1120280 system_pods.go:61] "etcd-embed-certs-358053" [b4e6558f-195a-449e-83fb-3ad49f1f80b0] Running
	I0729 19:49:14.006780 1120280 system_pods.go:61] "kube-apiserver-embed-certs-358053" [8ce54a21-879a-44f6-9209-699b22fe60a3] Running
	I0729 19:49:14.006786 1120280 system_pods.go:61] "kube-controller-manager-embed-certs-358053" [658a8652-2864-4825-8239-cfbe96e604ab] Running
	I0729 19:49:14.006790 1120280 system_pods.go:61] "kube-proxy-phmxr" [73020161-bb80-445c-ae4f-d1486e18a32e] Running
	I0729 19:49:14.006795 1120280 system_pods.go:61] "kube-scheduler-embed-certs-358053" [f7734e37-b41d-495a-8098-c721b9d56d7c] Running
	I0729 19:49:14.006805 1120280 system_pods.go:61] "metrics-server-569cc877fc-gpz72" [cb992ca6-11f3-4826-b701-6789d3e3e9c0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 19:49:14.006810 1120280 system_pods.go:61] "storage-provisioner" [7c484501-fa8b-4d2d-b7c7-faea3b6b0891] Running
	I0729 19:49:14.006823 1120280 system_pods.go:74] duration metric: took 180.607932ms to wait for pod list to return data ...
	I0729 19:49:14.006836 1120280 default_sa.go:34] waiting for default service account to be created ...
	I0729 19:49:14.203009 1120280 default_sa.go:45] found service account: "default"
	I0729 19:49:14.203034 1120280 default_sa.go:55] duration metric: took 196.19138ms for default service account to be created ...
	I0729 19:49:14.203043 1120280 system_pods.go:116] waiting for k8s-apps to be running ...
	I0729 19:49:14.407217 1120280 system_pods.go:86] 9 kube-system pods found
	I0729 19:49:14.407253 1120280 system_pods.go:89] "coredns-7db6d8ff4d-62wzl" [c0cf63a3-98a8-4107-8b51-3b9a39695a6c] Running
	I0729 19:49:14.407261 1120280 system_pods.go:89] "coredns-7db6d8ff4d-rnpqh" [fd0f6d7f-a55a-4556-b5e3-8ed4e555aaea] Running
	I0729 19:49:14.407267 1120280 system_pods.go:89] "etcd-embed-certs-358053" [b4e6558f-195a-449e-83fb-3ad49f1f80b0] Running
	I0729 19:49:14.407273 1120280 system_pods.go:89] "kube-apiserver-embed-certs-358053" [8ce54a21-879a-44f6-9209-699b22fe60a3] Running
	I0729 19:49:14.407279 1120280 system_pods.go:89] "kube-controller-manager-embed-certs-358053" [658a8652-2864-4825-8239-cfbe96e604ab] Running
	I0729 19:49:14.407285 1120280 system_pods.go:89] "kube-proxy-phmxr" [73020161-bb80-445c-ae4f-d1486e18a32e] Running
	I0729 19:49:14.407291 1120280 system_pods.go:89] "kube-scheduler-embed-certs-358053" [f7734e37-b41d-495a-8098-c721b9d56d7c] Running
	I0729 19:49:14.407305 1120280 system_pods.go:89] "metrics-server-569cc877fc-gpz72" [cb992ca6-11f3-4826-b701-6789d3e3e9c0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 19:49:14.407316 1120280 system_pods.go:89] "storage-provisioner" [7c484501-fa8b-4d2d-b7c7-faea3b6b0891] Running
	I0729 19:49:14.407327 1120280 system_pods.go:126] duration metric: took 204.276761ms to wait for k8s-apps to be running ...
	I0729 19:49:14.407338 1120280 system_svc.go:44] waiting for kubelet service to be running ....
	I0729 19:49:14.407396 1120280 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 19:49:14.422219 1120280 system_svc.go:56] duration metric: took 14.869175ms WaitForService to wait for kubelet
	I0729 19:49:14.422258 1120280 kubeadm.go:582] duration metric: took 4.233462765s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 19:49:14.422285 1120280 node_conditions.go:102] verifying NodePressure condition ...
	I0729 19:49:14.603042 1120280 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 19:49:14.603067 1120280 node_conditions.go:123] node cpu capacity is 2
	I0729 19:49:14.603079 1120280 node_conditions.go:105] duration metric: took 180.789494ms to run NodePressure ...
	I0729 19:49:14.603091 1120280 start.go:241] waiting for startup goroutines ...
	I0729 19:49:14.603098 1120280 start.go:246] waiting for cluster config update ...
	I0729 19:49:14.603108 1120280 start.go:255] writing updated cluster config ...
	I0729 19:49:14.603448 1120280 ssh_runner.go:195] Run: rm -f paused
	I0729 19:49:14.669359 1120280 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0729 19:49:14.671285 1120280 out.go:177] * Done! kubectl is now configured to use "embed-certs-358053" cluster and "default" namespace by default
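	(Editor's note: the embed-certs-358053 start finishes here. If one wanted to hand-check the resulting cluster at this point, standard kubectl commands such as the following would do; the context name matches the profile above, everything else is an assumption rather than output taken from this log.)

	# Illustrative verification commands -- not part of the test output above.
	kubectl --context embed-certs-358053 get nodes
	kubectl --context embed-certs-358053 -n kube-system get pods -l k8s-app=metrics-server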
	I0729 19:49:11.282743 1120587 out.go:204]   - Booting up control plane ...
	I0729 19:49:11.282887 1120587 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 19:49:11.283393 1120587 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 19:49:11.285899 1120587 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 19:49:11.306343 1120587 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 19:49:11.308692 1120587 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 19:49:11.308776 1120587 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 19:49:11.454703 1120587 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0729 19:49:11.454809 1120587 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0729 19:49:11.957070 1120587 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.339287ms
	I0729 19:49:11.957173 1120587 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0729 19:49:16.958829 1120587 kubeadm.go:310] [api-check] The API server is healthy after 5.001114911s
	I0729 19:49:16.975545 1120587 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0729 19:49:16.992433 1120587 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0729 19:49:17.029655 1120587 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0729 19:49:17.029911 1120587 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-024652 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0729 19:49:17.039761 1120587 kubeadm.go:310] [bootstrap-token] Using token: wivqw5.o681p65fyob7uctp
	I0729 19:49:17.040967 1120587 out.go:204]   - Configuring RBAC rules ...
	I0729 19:49:17.041098 1120587 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0729 19:49:17.047095 1120587 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0729 19:49:17.054741 1120587 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0729 19:49:17.057791 1120587 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0729 19:49:17.064906 1120587 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0729 19:49:17.068354 1120587 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0729 19:49:17.365660 1120587 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0729 19:49:17.803646 1120587 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0729 19:49:18.365942 1120587 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0729 19:49:18.367149 1120587 kubeadm.go:310] 
	I0729 19:49:18.367230 1120587 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0729 19:49:18.367239 1120587 kubeadm.go:310] 
	I0729 19:49:18.367301 1120587 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0729 19:49:18.367308 1120587 kubeadm.go:310] 
	I0729 19:49:18.367356 1120587 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0729 19:49:18.367435 1120587 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0729 19:49:18.367484 1120587 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0729 19:49:18.367490 1120587 kubeadm.go:310] 
	I0729 19:49:18.367564 1120587 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0729 19:49:18.367580 1120587 kubeadm.go:310] 
	I0729 19:49:18.367670 1120587 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0729 19:49:18.367689 1120587 kubeadm.go:310] 
	I0729 19:49:18.367767 1120587 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0729 19:49:18.367886 1120587 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0729 19:49:18.367990 1120587 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0729 19:49:18.368004 1120587 kubeadm.go:310] 
	I0729 19:49:18.368134 1120587 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0729 19:49:18.368245 1120587 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0729 19:49:18.368255 1120587 kubeadm.go:310] 
	I0729 19:49:18.368374 1120587 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token wivqw5.o681p65fyob7uctp \
	I0729 19:49:18.368509 1120587 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:53e118302d1159bf0acd8cfe005edb508199276288ff17d6630ff7f7307f10b1 \
	I0729 19:49:18.368547 1120587 kubeadm.go:310] 	--control-plane 
	I0729 19:49:18.368555 1120587 kubeadm.go:310] 
	I0729 19:49:18.368665 1120587 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0729 19:49:18.368675 1120587 kubeadm.go:310] 
	I0729 19:49:18.368786 1120587 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token wivqw5.o681p65fyob7uctp \
	I0729 19:49:18.368926 1120587 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:53e118302d1159bf0acd8cfe005edb508199276288ff17d6630ff7f7307f10b1 
	I0729 19:49:18.369333 1120587 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 19:49:18.369382 1120587 cni.go:84] Creating CNI manager for ""
	I0729 19:49:18.369398 1120587 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 19:49:18.371718 1120587 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 19:49:15.194685 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:49:17.195094 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:49:18.372851 1120587 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 19:49:18.385204 1120587 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0729 19:49:18.404504 1120587 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0729 19:49:18.404610 1120587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:18.404616 1120587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-024652 minikube.k8s.io/updated_at=2024_07_29T19_49_18_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=ff26f50543a99741e8a988a34136ffea2b41dbf0 minikube.k8s.io/name=default-k8s-diff-port-024652 minikube.k8s.io/primary=true
	I0729 19:49:18.442539 1120587 ops.go:34] apiserver oom_adj: -16
	I0729 19:49:18.580986 1120587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:19.081106 1120587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:19.581681 1120587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:20.081254 1120587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:20.581320 1120587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:21.081977 1120587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:19.195234 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:49:21.694987 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:49:23.695591 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:49:21.581543 1120587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:22.081511 1120587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:22.581732 1120587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:23.081975 1120587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:23.581374 1120587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:24.081970 1120587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:24.581928 1120587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:25.081446 1120587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:25.581218 1120587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:26.081680 1120587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:25.695771 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:49:27.698874 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:49:26.581008 1120587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:27.081974 1120587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:27.581500 1120587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:28.082002 1120587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:28.581979 1120587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:29.081223 1120587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:29.581078 1120587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:30.081834 1120587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:30.581191 1120587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:31.081737 1120587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:31.581832 1120587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:31.661893 1120587 kubeadm.go:1113] duration metric: took 13.257342088s to wait for elevateKubeSystemPrivileges
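The repeated "get sa default" calls above are a poll: after applying the minikube-rbac cluster-admin binding at 19:49:18, the bootstrapper keeps asking for the default service account until it exists, which is what the 13.2s elevateKubeSystemPrivileges metric measures. Condensed into plain shell, the sequence is roughly this sketch (the sleep interval is an assumption; minikube's Go retry loop differs in detail):

    # Bind cluster-admin to kube-system's default service account (as run at 19:49:18)
    sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac \
        --clusterrole=cluster-admin --serviceaccount=kube-system:default \
        --kubeconfig=/var/lib/minikube/kubeconfig
    # Then poll until the default service account has been created by the controller manager
    until sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default \
        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done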
	I0729 19:49:31.661933 1120587 kubeadm.go:394] duration metric: took 5m14.024337116s to StartCluster
	I0729 19:49:31.661952 1120587 settings.go:142] acquiring lock: {Name:mk8657322241b3b1f65443d6cee1b2ccb99f315e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 19:49:31.662031 1120587 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19312-1055011/kubeconfig
	I0729 19:49:31.663828 1120587 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-1055011/kubeconfig: {Name:mkf834b33d9b214f3561db5b8f8958d26700afbb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 19:49:31.664068 1120587 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.100 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 19:49:31.664116 1120587 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
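The toEnable map above is the flattened view of the profile's addon selection; only default-storageclass, storage-provisioner and metrics-server are enabled for this run. The same state can be inspected or changed from the CLI with the profile flag (profile name as used in this run):

    # List addon status for the profile, then (re)enable metrics-server
    minikube -p default-k8s-diff-port-024652 addons list
    minikube -p default-k8s-diff-port-024652 addons enable metrics-server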
	I0729 19:49:31.664229 1120587 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-024652"
	I0729 19:49:31.664249 1120587 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-024652"
	I0729 19:49:31.664265 1120587 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-024652"
	W0729 19:49:31.664274 1120587 addons.go:243] addon storage-provisioner should already be in state true
	I0729 19:49:31.664265 1120587 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-024652"
	I0729 19:49:31.664286 1120587 config.go:182] Loaded profile config "default-k8s-diff-port-024652": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 19:49:31.664293 1120587 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-024652"
	I0729 19:49:31.664313 1120587 host.go:66] Checking if "default-k8s-diff-port-024652" exists ...
	I0729 19:49:31.664318 1120587 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-024652"
	W0729 19:49:31.664330 1120587 addons.go:243] addon metrics-server should already be in state true
	I0729 19:49:31.664370 1120587 host.go:66] Checking if "default-k8s-diff-port-024652" exists ...
	I0729 19:49:31.664689 1120587 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:49:31.664724 1120587 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:49:31.664775 1120587 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:49:31.664778 1120587 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:49:31.664817 1120587 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:49:31.664827 1120587 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:49:31.665472 1120587 out.go:177] * Verifying Kubernetes components...
	I0729 19:49:31.666773 1120587 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 19:49:31.684886 1120587 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36885
	I0729 19:49:31.684948 1120587 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40365
	I0729 19:49:31.685049 1120587 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46525
	I0729 19:49:31.685394 1120587 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:49:31.685443 1120587 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:49:31.685506 1120587 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:49:31.685916 1120587 main.go:141] libmachine: Using API Version  1
	I0729 19:49:31.685936 1120587 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:49:31.685961 1120587 main.go:141] libmachine: Using API Version  1
	I0729 19:49:31.685982 1120587 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:49:31.686343 1120587 main.go:141] libmachine: Using API Version  1
	I0729 19:49:31.686363 1120587 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:49:31.686378 1120587 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:49:31.686367 1120587 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:49:31.686564 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetState
	I0729 19:49:31.686713 1120587 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:49:31.687028 1120587 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:49:31.687071 1120587 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:49:31.687291 1120587 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:49:31.687340 1120587 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:49:31.690159 1120587 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-024652"
	W0729 19:49:31.690177 1120587 addons.go:243] addon default-storageclass should already be in state true
	I0729 19:49:31.690208 1120587 host.go:66] Checking if "default-k8s-diff-port-024652" exists ...
	I0729 19:49:31.690543 1120587 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:49:31.690586 1120587 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:49:31.705387 1120587 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41375
	I0729 19:49:31.705778 1120587 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34099
	I0729 19:49:31.706027 1120587 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:49:31.706144 1120587 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:49:31.706207 1120587 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33381
	I0729 19:49:31.706633 1120587 main.go:141] libmachine: Using API Version  1
	I0729 19:49:31.706652 1120587 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:49:31.706730 1120587 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:49:31.706990 1120587 main.go:141] libmachine: Using API Version  1
	I0729 19:49:31.707009 1120587 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:49:31.707198 1120587 main.go:141] libmachine: Using API Version  1
	I0729 19:49:31.707218 1120587 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:49:31.707376 1120587 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:49:31.707429 1120587 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:49:31.707627 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetState
	I0729 19:49:31.707689 1120587 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:49:31.707861 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetState
	I0729 19:49:31.708016 1120587 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:49:31.708065 1120587 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:49:31.710254 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .DriverName
	I0729 19:49:31.710315 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .DriverName
	I0729 19:49:31.711981 1120587 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 19:49:31.711996 1120587 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0729 19:49:31.713155 1120587 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0729 19:49:31.713179 1120587 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0729 19:49:31.713201 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHHostname
	I0729 19:49:31.713255 1120587 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 19:49:31.713270 1120587 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0729 19:49:31.713289 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHHostname
	I0729 19:49:31.717458 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:49:31.718017 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:73:cb", ip: ""} in network mk-default-k8s-diff-port-024652: {Iface:virbr4 ExpiryTime:2024-07-29 20:44:03 +0000 UTC Type:0 Mac:52:54:00:4c:73:cb Iaid: IPaddr:192.168.72.100 Prefix:24 Hostname:default-k8s-diff-port-024652 Clientid:01:52:54:00:4c:73:cb}
	I0729 19:49:31.718042 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined IP address 192.168.72.100 and MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:49:31.718355 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHPort
	I0729 19:49:31.718503 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHKeyPath
	I0729 19:49:31.718555 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:49:31.718750 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHUsername
	I0729 19:49:31.718888 1120587 sshutil.go:53] new ssh client: &{IP:192.168.72.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/default-k8s-diff-port-024652/id_rsa Username:docker}
	I0729 19:49:31.719190 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHPort
	I0729 19:49:31.719242 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:73:cb", ip: ""} in network mk-default-k8s-diff-port-024652: {Iface:virbr4 ExpiryTime:2024-07-29 20:44:03 +0000 UTC Type:0 Mac:52:54:00:4c:73:cb Iaid: IPaddr:192.168.72.100 Prefix:24 Hostname:default-k8s-diff-port-024652 Clientid:01:52:54:00:4c:73:cb}
	I0729 19:49:31.719255 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined IP address 192.168.72.100 and MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:49:31.719400 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHKeyPath
	I0729 19:49:31.719536 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHUsername
	I0729 19:49:31.719630 1120587 sshutil.go:53] new ssh client: &{IP:192.168.72.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/default-k8s-diff-port-024652/id_rsa Username:docker}
	I0729 19:49:31.726052 1120587 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42897
	I0729 19:49:31.726530 1120587 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:49:31.727089 1120587 main.go:141] libmachine: Using API Version  1
	I0729 19:49:31.727106 1120587 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:49:31.727404 1120587 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:49:31.727585 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetState
	I0729 19:49:31.729111 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .DriverName
	I0729 19:49:31.729730 1120587 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0729 19:49:31.729832 1120587 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0729 19:49:31.729853 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHHostname
	I0729 19:49:31.733855 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:49:31.734290 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:73:cb", ip: ""} in network mk-default-k8s-diff-port-024652: {Iface:virbr4 ExpiryTime:2024-07-29 20:44:03 +0000 UTC Type:0 Mac:52:54:00:4c:73:cb Iaid: IPaddr:192.168.72.100 Prefix:24 Hostname:default-k8s-diff-port-024652 Clientid:01:52:54:00:4c:73:cb}
	I0729 19:49:31.734307 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined IP address 192.168.72.100 and MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:49:31.734528 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHPort
	I0729 19:49:31.734735 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHKeyPath
	I0729 19:49:31.734923 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHUsername
	I0729 19:49:31.735104 1120587 sshutil.go:53] new ssh client: &{IP:192.168.72.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/default-k8s-diff-port-024652/id_rsa Username:docker}
	I0729 19:49:31.896299 1120587 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 19:49:31.916363 1120587 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-024652" to be "Ready" ...
	I0729 19:49:31.946258 1120587 node_ready.go:49] node "default-k8s-diff-port-024652" has status "Ready":"True"
	I0729 19:49:31.946286 1120587 node_ready.go:38] duration metric: took 29.887552ms for node "default-k8s-diff-port-024652" to be "Ready" ...
	I0729 19:49:31.946297 1120587 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
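The extra wait above covers CoreDNS plus the etcd, kube-apiserver, kube-controller-manager, kube-proxy and kube-scheduler pods. Outside the test harness, an equivalent readiness check can be expressed with kubectl wait; one selector is shown, the other components follow the same pattern:

    # Wait up to 6 minutes for CoreDNS pods to report Ready
    kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=6m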
	I0729 19:49:31.986320 1120587 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0729 19:49:31.986901 1120587 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-wqbpm" in "kube-system" namespace to be "Ready" ...
	I0729 19:49:32.008401 1120587 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0729 19:49:32.008420 1120587 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0729 19:49:32.033950 1120587 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 19:49:32.060771 1120587 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0729 19:49:32.060808 1120587 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0729 19:49:32.108557 1120587 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 19:49:32.108587 1120587 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0729 19:49:32.153081 1120587 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 19:49:32.234814 1120587 main.go:141] libmachine: Making call to close driver server
	I0729 19:49:32.234854 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .Close
	I0729 19:49:32.235187 1120587 main.go:141] libmachine: Successfully made call to close driver server
	I0729 19:49:32.235247 1120587 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 19:49:32.235260 1120587 main.go:141] libmachine: Making call to close driver server
	I0729 19:49:32.235259 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | Closing plugin on server side
	I0729 19:49:32.235270 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .Close
	I0729 19:49:32.235530 1120587 main.go:141] libmachine: Successfully made call to close driver server
	I0729 19:49:32.235546 1120587 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 19:49:32.240556 1120587 main.go:141] libmachine: Making call to close driver server
	I0729 19:49:32.240572 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .Close
	I0729 19:49:32.240859 1120587 main.go:141] libmachine: Successfully made call to close driver server
	I0729 19:49:32.240880 1120587 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 19:49:32.240887 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | Closing plugin on server side
	I0729 19:49:32.510172 1120587 main.go:141] libmachine: Making call to close driver server
	I0729 19:49:32.510201 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .Close
	I0729 19:49:32.510518 1120587 main.go:141] libmachine: Successfully made call to close driver server
	I0729 19:49:32.510535 1120587 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 19:49:32.510558 1120587 main.go:141] libmachine: Making call to close driver server
	I0729 19:49:32.510566 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .Close
	I0729 19:49:32.511002 1120587 main.go:141] libmachine: Successfully made call to close driver server
	I0729 19:49:32.511031 1120587 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 19:49:32.511053 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | Closing plugin on server side
	I0729 19:49:32.755803 1120587 main.go:141] libmachine: Making call to close driver server
	I0729 19:49:32.755828 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .Close
	I0729 19:49:32.756119 1120587 main.go:141] libmachine: Successfully made call to close driver server
	I0729 19:49:32.756135 1120587 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 19:49:32.756144 1120587 main.go:141] libmachine: Making call to close driver server
	I0729 19:49:32.756151 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .Close
	I0729 19:49:32.756432 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | Closing plugin on server side
	I0729 19:49:32.756476 1120587 main.go:141] libmachine: Successfully made call to close driver server
	I0729 19:49:32.756488 1120587 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 19:49:32.756502 1120587 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-024652"
	I0729 19:49:32.758693 1120587 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
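With the addons applied, the metrics-server rollout can be followed directly; in this run its pod is still Pending (see the metrics-server-569cc877fc-rp2fk entries below). Typical checks, assuming the usual k8s-app=metrics-server label and the v1beta1.metrics.k8s.io APIService registration, are:

    # Deployment and pod state of the addon
    kubectl -n kube-system get deploy,pod -l k8s-app=metrics-server
    # The aggregated API that must become Available before kubectl top works
    kubectl get apiservice v1beta1.metrics.k8s.io
    kubectl top nodes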
	I0729 19:49:29.689616 1119948 pod_ready.go:81] duration metric: took 4m0.001003902s for pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace to be "Ready" ...
	E0729 19:49:29.689644 1119948 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace to be "Ready" (will not retry!)
	I0729 19:49:29.689670 1119948 pod_ready.go:38] duration metric: took 4m12.210774413s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 19:49:29.689724 1119948 kubeadm.go:597] duration metric: took 4m20.557808792s to restartPrimaryControlPlane
	W0729 19:49:29.689815 1119948 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0729 19:49:29.689855 1119948 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0729 19:49:32.759744 1120587 addons.go:510] duration metric: took 1.095628452s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0729 19:49:33.998542 1120587 pod_ready.go:102] pod "coredns-7db6d8ff4d-wqbpm" in "kube-system" namespace has status "Ready":"False"
	I0729 19:49:34.993504 1120587 pod_ready.go:92] pod "coredns-7db6d8ff4d-wqbpm" in "kube-system" namespace has status "Ready":"True"
	I0729 19:49:34.993529 1120587 pod_ready.go:81] duration metric: took 3.006601304s for pod "coredns-7db6d8ff4d-wqbpm" in "kube-system" namespace to be "Ready" ...
	I0729 19:49:34.993538 1120587 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-z8mxw" in "kube-system" namespace to be "Ready" ...
	I0729 19:49:34.999514 1120587 pod_ready.go:92] pod "coredns-7db6d8ff4d-z8mxw" in "kube-system" namespace has status "Ready":"True"
	I0729 19:49:34.999543 1120587 pod_ready.go:81] duration metric: took 5.998397ms for pod "coredns-7db6d8ff4d-z8mxw" in "kube-system" namespace to be "Ready" ...
	I0729 19:49:34.999556 1120587 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-024652" in "kube-system" namespace to be "Ready" ...
	I0729 19:49:35.004591 1120587 pod_ready.go:92] pod "etcd-default-k8s-diff-port-024652" in "kube-system" namespace has status "Ready":"True"
	I0729 19:49:35.004615 1120587 pod_ready.go:81] duration metric: took 5.050736ms for pod "etcd-default-k8s-diff-port-024652" in "kube-system" namespace to be "Ready" ...
	I0729 19:49:35.004626 1120587 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-024652" in "kube-system" namespace to be "Ready" ...
	I0729 19:49:35.009617 1120587 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-024652" in "kube-system" namespace has status "Ready":"True"
	I0729 19:49:35.009639 1120587 pod_ready.go:81] duration metric: took 5.004922ms for pod "kube-apiserver-default-k8s-diff-port-024652" in "kube-system" namespace to be "Ready" ...
	I0729 19:49:35.009649 1120587 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-024652" in "kube-system" namespace to be "Ready" ...
	I0729 19:49:35.015860 1120587 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-024652" in "kube-system" namespace has status "Ready":"True"
	I0729 19:49:35.015879 1120587 pod_ready.go:81] duration metric: took 6.221932ms for pod "kube-controller-manager-default-k8s-diff-port-024652" in "kube-system" namespace to be "Ready" ...
	I0729 19:49:35.015887 1120587 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-wfr8f" in "kube-system" namespace to be "Ready" ...
	I0729 19:49:35.392558 1120587 pod_ready.go:92] pod "kube-proxy-wfr8f" in "kube-system" namespace has status "Ready":"True"
	I0729 19:49:35.392595 1120587 pod_ready.go:81] duration metric: took 376.701757ms for pod "kube-proxy-wfr8f" in "kube-system" namespace to be "Ready" ...
	I0729 19:49:35.392604 1120587 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-024652" in "kube-system" namespace to be "Ready" ...
	I0729 19:49:35.791324 1120587 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-024652" in "kube-system" namespace has status "Ready":"True"
	I0729 19:49:35.791357 1120587 pod_ready.go:81] duration metric: took 398.744718ms for pod "kube-scheduler-default-k8s-diff-port-024652" in "kube-system" namespace to be "Ready" ...
	I0729 19:49:35.791368 1120587 pod_ready.go:38] duration metric: took 3.84505744s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 19:49:35.791389 1120587 api_server.go:52] waiting for apiserver process to appear ...
	I0729 19:49:35.791451 1120587 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:49:35.808765 1120587 api_server.go:72] duration metric: took 4.144664884s to wait for apiserver process to appear ...
	I0729 19:49:35.808795 1120587 api_server.go:88] waiting for apiserver healthz status ...
	I0729 19:49:35.808816 1120587 api_server.go:253] Checking apiserver healthz at https://192.168.72.100:8444/healthz ...
	I0729 19:49:35.813053 1120587 api_server.go:279] https://192.168.72.100:8444/healthz returned 200:
	ok
	I0729 19:49:35.814108 1120587 api_server.go:141] control plane version: v1.30.3
	I0729 19:49:35.814129 1120587 api_server.go:131] duration metric: took 5.326691ms to wait for apiserver health ...
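The probe above hits the API server's healthz endpoint directly on this profile's non-default port, 8444. The same check can be reproduced by hand against the endpoint, or through kubectl's raw API access once the kubeconfig points at the cluster:

    # Direct probe (self-signed serving cert, hence -k)
    curl -k https://192.168.72.100:8444/healthz
    # Same check via the configured kubeconfig
    kubectl get --raw /healthz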
	I0729 19:49:35.814135 1120587 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 19:49:35.994230 1120587 system_pods.go:59] 9 kube-system pods found
	I0729 19:49:35.994267 1120587 system_pods.go:61] "coredns-7db6d8ff4d-wqbpm" [96db74e9-67ca-4065-8758-a27a14b6d3d5] Running
	I0729 19:49:35.994274 1120587 system_pods.go:61] "coredns-7db6d8ff4d-z8mxw" [12aa4a13-f4af-4cda-b099-5e0e44836300] Running
	I0729 19:49:35.994280 1120587 system_pods.go:61] "etcd-default-k8s-diff-port-024652" [6c733608-bc36-40a8-a6d1-2fa10ee45ef7] Running
	I0729 19:49:35.994285 1120587 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-024652" [755ccaaa-70fc-4d21-bf24-55638ea6778a] Running
	I0729 19:49:35.994293 1120587 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-024652" [1ed4cda3-7de9-4562-be52-b2a5f3490979] Running
	I0729 19:49:35.994300 1120587 system_pods.go:61] "kube-proxy-wfr8f" [86699d3a-0843-4b82-b772-23c8f5b7c88a] Running
	I0729 19:49:35.994305 1120587 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-024652" [d51619f9-c388-4ca5-a3e7-2028f0f76d9a] Running
	I0729 19:49:35.994314 1120587 system_pods.go:61] "metrics-server-569cc877fc-rp2fk" [826ffadd-1c1c-4666-8c09-f43a82262912] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 19:49:35.994318 1120587 system_pods.go:61] "storage-provisioner" [ce612854-895f-44d4-8c33-30c3a7eff802] Running
	I0729 19:49:35.994329 1120587 system_pods.go:74] duration metric: took 180.186983ms to wait for pod list to return data ...
	I0729 19:49:35.994339 1120587 default_sa.go:34] waiting for default service account to be created ...
	I0729 19:49:36.191025 1120587 default_sa.go:45] found service account: "default"
	I0729 19:49:36.191057 1120587 default_sa.go:55] duration metric: took 196.710231ms for default service account to be created ...
	I0729 19:49:36.191066 1120587 system_pods.go:116] waiting for k8s-apps to be running ...
	I0729 19:49:36.395188 1120587 system_pods.go:86] 9 kube-system pods found
	I0729 19:49:36.395218 1120587 system_pods.go:89] "coredns-7db6d8ff4d-wqbpm" [96db74e9-67ca-4065-8758-a27a14b6d3d5] Running
	I0729 19:49:36.395224 1120587 system_pods.go:89] "coredns-7db6d8ff4d-z8mxw" [12aa4a13-f4af-4cda-b099-5e0e44836300] Running
	I0729 19:49:36.395229 1120587 system_pods.go:89] "etcd-default-k8s-diff-port-024652" [6c733608-bc36-40a8-a6d1-2fa10ee45ef7] Running
	I0729 19:49:36.395233 1120587 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-024652" [755ccaaa-70fc-4d21-bf24-55638ea6778a] Running
	I0729 19:49:36.395237 1120587 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-024652" [1ed4cda3-7de9-4562-be52-b2a5f3490979] Running
	I0729 19:49:36.395241 1120587 system_pods.go:89] "kube-proxy-wfr8f" [86699d3a-0843-4b82-b772-23c8f5b7c88a] Running
	I0729 19:49:36.395245 1120587 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-024652" [d51619f9-c388-4ca5-a3e7-2028f0f76d9a] Running
	I0729 19:49:36.395257 1120587 system_pods.go:89] "metrics-server-569cc877fc-rp2fk" [826ffadd-1c1c-4666-8c09-f43a82262912] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 19:49:36.395262 1120587 system_pods.go:89] "storage-provisioner" [ce612854-895f-44d4-8c33-30c3a7eff802] Running
	I0729 19:49:36.395272 1120587 system_pods.go:126] duration metric: took 204.199685ms to wait for k8s-apps to be running ...
	I0729 19:49:36.395280 1120587 system_svc.go:44] waiting for kubelet service to be running ....
	I0729 19:49:36.395327 1120587 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 19:49:36.414410 1120587 system_svc.go:56] duration metric: took 19.116999ms WaitForService to wait for kubelet
	I0729 19:49:36.414442 1120587 kubeadm.go:582] duration metric: took 4.750347675s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 19:49:36.414470 1120587 node_conditions.go:102] verifying NodePressure condition ...
	I0729 19:49:36.591019 1120587 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 19:49:36.591045 1120587 node_conditions.go:123] node cpu capacity is 2
	I0729 19:49:36.591058 1120587 node_conditions.go:105] duration metric: took 176.580075ms to run NodePressure ...
	I0729 19:49:36.591069 1120587 start.go:241] waiting for startup goroutines ...
	I0729 19:49:36.591076 1120587 start.go:246] waiting for cluster config update ...
	I0729 19:49:36.591086 1120587 start.go:255] writing updated cluster config ...
	I0729 19:49:36.591330 1120587 ssh_runner.go:195] Run: rm -f paused
	I0729 19:49:36.641571 1120587 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0729 19:49:36.643324 1120587 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-024652" cluster and "default" namespace by default
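From here the host kubeconfig already targets the new cluster; minikube names the context after the profile, so switching back to it later (assuming the default context naming) is just:

    kubectl config use-context default-k8s-diff-port-024652
    kubectl get nodes -o wide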
	I0729 19:49:55.819640 1119948 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.129754186s)
	I0729 19:49:55.819736 1119948 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 19:49:55.857245 1119948 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 19:49:55.874823 1119948 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 19:49:55.887767 1119948 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 19:49:55.887786 1119948 kubeadm.go:157] found existing configuration files:
	
	I0729 19:49:55.887826 1119948 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 19:49:55.898598 1119948 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 19:49:55.898659 1119948 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 19:49:55.919811 1119948 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 19:49:55.929490 1119948 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 19:49:55.929557 1119948 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 19:49:55.938832 1119948 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 19:49:55.952638 1119948 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 19:49:55.952698 1119948 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 19:49:55.965512 1119948 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 19:49:55.975116 1119948 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 19:49:55.975180 1119948 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
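After the kubeadm reset a few lines earlier, none of the kubeconfig files under /etc/kubernetes exist, so every grep above exits with status 2 and the corresponding rm runs as a defensive cleanup before re-init. The whole check-and-clean pass condenses to roughly this shell loop (a sketch of the behaviour, not minikube's actual code):

    # Drop any kubeconfig that does not point at the expected control-plane endpoint
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
        || sudo rm -f "/etc/kubernetes/$f"
    done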
	I0729 19:49:55.984448 1119948 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 19:49:56.040488 1119948 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0-beta.0
	I0729 19:49:56.040619 1119948 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 19:49:56.161648 1119948 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 19:49:56.161792 1119948 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 19:49:56.161913 1119948 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0729 19:49:56.171626 1119948 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 19:49:56.173709 1119948 out.go:204]   - Generating certificates and keys ...
	I0729 19:49:56.173830 1119948 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 19:49:56.173928 1119948 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 19:49:56.174047 1119948 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0729 19:49:56.174143 1119948 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0729 19:49:56.174232 1119948 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0729 19:49:56.174302 1119948 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0729 19:49:56.174382 1119948 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0729 19:49:56.174453 1119948 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0729 19:49:56.174572 1119948 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0729 19:49:56.174694 1119948 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0729 19:49:56.174750 1119948 kubeadm.go:310] [certs] Using the existing "sa" key
	I0729 19:49:56.174830 1119948 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 19:49:56.246122 1119948 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 19:49:56.355960 1119948 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0729 19:49:56.420777 1119948 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 19:49:56.496969 1119948 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 19:49:56.583932 1119948 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 19:49:56.584470 1119948 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 19:49:56.587115 1119948 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 19:49:56.588779 1119948 out.go:204]   - Booting up control plane ...
	I0729 19:49:56.588912 1119948 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 19:49:56.588986 1119948 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 19:49:56.589041 1119948 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 19:49:56.608126 1119948 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 19:49:56.614632 1119948 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 19:49:56.614696 1119948 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 19:49:56.754879 1119948 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0729 19:49:56.754999 1119948 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0729 19:49:57.257324 1119948 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.327954ms
	I0729 19:49:57.257465 1119948 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0729 19:50:02.762738 1119948 kubeadm.go:310] [api-check] The API server is healthy after 5.503528666s
	I0729 19:50:02.774459 1119948 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0729 19:50:02.788865 1119948 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0729 19:50:02.826192 1119948 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0729 19:50:02.826457 1119948 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-843792 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0729 19:50:02.839359 1119948 kubeadm.go:310] [bootstrap-token] Using token: yaj2k6.6nijnxczu3nl8yfv
	I0729 19:50:02.840952 1119948 out.go:204]   - Configuring RBAC rules ...
	I0729 19:50:02.841087 1119948 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0729 19:50:02.846969 1119948 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0729 19:50:02.861696 1119948 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0729 19:50:02.866680 1119948 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0729 19:50:02.871113 1119948 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0729 19:50:02.875148 1119948 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0729 19:50:03.170084 1119948 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0729 19:50:03.622188 1119948 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0729 19:50:04.170979 1119948 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0729 19:50:04.171916 1119948 kubeadm.go:310] 
	I0729 19:50:04.172017 1119948 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0729 19:50:04.172027 1119948 kubeadm.go:310] 
	I0729 19:50:04.172139 1119948 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0729 19:50:04.172149 1119948 kubeadm.go:310] 
	I0729 19:50:04.172183 1119948 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0729 19:50:04.172258 1119948 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0729 19:50:04.172337 1119948 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0729 19:50:04.172356 1119948 kubeadm.go:310] 
	I0729 19:50:04.172451 1119948 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0729 19:50:04.172480 1119948 kubeadm.go:310] 
	I0729 19:50:04.172570 1119948 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0729 19:50:04.172581 1119948 kubeadm.go:310] 
	I0729 19:50:04.172652 1119948 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0729 19:50:04.172755 1119948 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0729 19:50:04.172861 1119948 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0729 19:50:04.172876 1119948 kubeadm.go:310] 
	I0729 19:50:04.172944 1119948 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0729 19:50:04.173046 1119948 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0729 19:50:04.173056 1119948 kubeadm.go:310] 
	I0729 19:50:04.173171 1119948 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token yaj2k6.6nijnxczu3nl8yfv \
	I0729 19:50:04.173307 1119948 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:53e118302d1159bf0acd8cfe005edb508199276288ff17d6630ff7f7307f10b1 \
	I0729 19:50:04.173330 1119948 kubeadm.go:310] 	--control-plane 
	I0729 19:50:04.173334 1119948 kubeadm.go:310] 
	I0729 19:50:04.173405 1119948 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0729 19:50:04.173411 1119948 kubeadm.go:310] 
	I0729 19:50:04.173493 1119948 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token yaj2k6.6nijnxczu3nl8yfv \
	I0729 19:50:04.173666 1119948 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:53e118302d1159bf0acd8cfe005edb508199276288ff17d6630ff7f7307f10b1 
	I0729 19:50:04.175016 1119948 kubeadm.go:310] W0729 19:49:56.020841    2986 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0729 19:50:04.175395 1119948 kubeadm.go:310] W0729 19:49:56.021779    2986 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0729 19:50:04.175537 1119948 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
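The two deprecated-API warnings above come from kubeadm v1.31.0-beta.0 noticing that the generated config at /var/tmp/minikube/kubeadm.yaml still uses the v1beta3 spec; init proceeds anyway. The migration kubeadm suggests would be run as below (the output path is an arbitrary choice for illustration):

    # Rewrite the deprecated v1beta3 config with the newer API version
    kubeadm config migrate --old-config /var/tmp/minikube/kubeadm.yaml --new-config /tmp/kubeadm-migrated.yaml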
	I0729 19:50:04.175567 1119948 cni.go:84] Creating CNI manager for ""
	I0729 19:50:04.175577 1119948 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 19:50:04.177050 1119948 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 19:50:04.178074 1119948 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 19:50:04.189753 1119948 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0729 19:50:04.212891 1119948 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0729 19:50:04.213003 1119948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:50:04.213014 1119948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-843792 minikube.k8s.io/updated_at=2024_07_29T19_50_04_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=ff26f50543a99741e8a988a34136ffea2b41dbf0 minikube.k8s.io/name=no-preload-843792 minikube.k8s.io/primary=true
	I0729 19:50:04.241948 1119948 ops.go:34] apiserver oom_adj: -16
	I0729 19:50:04.470011 1119948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:50:04.970139 1119948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:50:05.470618 1119948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:50:05.970968 1119948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:50:06.471036 1119948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:50:06.970260 1119948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:50:07.470060 1119948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:50:07.970455 1119948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:50:08.091380 1119948 kubeadm.go:1113] duration metric: took 3.878454801s to wait for elevateKubeSystemPrivileges
	I0729 19:50:08.091420 1119948 kubeadm.go:394] duration metric: took 4m59.009669918s to StartCluster
	I0729 19:50:08.091442 1119948 settings.go:142] acquiring lock: {Name:mk8657322241b3b1f65443d6cee1b2ccb99f315e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 19:50:08.091531 1119948 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19312-1055011/kubeconfig
	I0729 19:50:08.093926 1119948 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-1055011/kubeconfig: {Name:mkf834b33d9b214f3561db5b8f8958d26700afbb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 19:50:08.094254 1119948 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.248 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 19:50:08.094349 1119948 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0729 19:50:08.094445 1119948 addons.go:69] Setting storage-provisioner=true in profile "no-preload-843792"
	I0729 19:50:08.094490 1119948 addons.go:234] Setting addon storage-provisioner=true in "no-preload-843792"
	I0729 19:50:08.094489 1119948 config.go:182] Loaded profile config "no-preload-843792": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	W0729 19:50:08.094502 1119948 addons.go:243] addon storage-provisioner should already be in state true
	I0729 19:50:08.094506 1119948 addons.go:69] Setting default-storageclass=true in profile "no-preload-843792"
	I0729 19:50:08.094537 1119948 host.go:66] Checking if "no-preload-843792" exists ...
	I0729 19:50:08.094545 1119948 addons.go:69] Setting metrics-server=true in profile "no-preload-843792"
	I0729 19:50:08.094555 1119948 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-843792"
	I0729 19:50:08.094567 1119948 addons.go:234] Setting addon metrics-server=true in "no-preload-843792"
	W0729 19:50:08.094576 1119948 addons.go:243] addon metrics-server should already be in state true
	I0729 19:50:08.094606 1119948 host.go:66] Checking if "no-preload-843792" exists ...
	I0729 19:50:08.094992 1119948 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:50:08.095014 1119948 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:50:08.094991 1119948 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:50:08.095032 1119948 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:50:08.095032 1119948 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:50:08.095053 1119948 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:50:08.095990 1119948 out.go:177] * Verifying Kubernetes components...
	I0729 19:50:08.097297 1119948 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 19:50:08.111086 1119948 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39031
	I0729 19:50:08.111172 1119948 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35951
	I0729 19:50:08.111530 1119948 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:50:08.111611 1119948 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:50:08.112076 1119948 main.go:141] libmachine: Using API Version  1
	I0729 19:50:08.112096 1119948 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:50:08.112212 1119948 main.go:141] libmachine: Using API Version  1
	I0729 19:50:08.112236 1119948 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:50:08.112601 1119948 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:50:08.112598 1119948 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:50:08.113192 1119948 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:50:08.113222 1119948 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:50:08.113195 1119948 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:50:08.113331 1119948 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:50:08.113688 1119948 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43039
	I0729 19:50:08.114065 1119948 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:50:08.114550 1119948 main.go:141] libmachine: Using API Version  1
	I0729 19:50:08.114573 1119948 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:50:08.115130 1119948 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:50:08.115340 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetState
	I0729 19:50:08.118967 1119948 addons.go:234] Setting addon default-storageclass=true in "no-preload-843792"
	W0729 19:50:08.118988 1119948 addons.go:243] addon default-storageclass should already be in state true
	I0729 19:50:08.119018 1119948 host.go:66] Checking if "no-preload-843792" exists ...
	I0729 19:50:08.119367 1119948 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:50:08.119391 1119948 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:50:08.131330 1119948 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34509
	I0729 19:50:08.131868 1119948 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:50:08.132155 1119948 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44961
	I0729 19:50:08.132404 1119948 main.go:141] libmachine: Using API Version  1
	I0729 19:50:08.132427 1119948 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:50:08.132485 1119948 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:50:08.132795 1119948 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:50:08.133148 1119948 main.go:141] libmachine: Using API Version  1
	I0729 19:50:08.133167 1119948 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:50:08.133169 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetState
	I0729 19:50:08.133541 1119948 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:50:08.133802 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetState
	I0729 19:50:08.135456 1119948 main.go:141] libmachine: (no-preload-843792) Calling .DriverName
	I0729 19:50:08.135939 1119948 main.go:141] libmachine: (no-preload-843792) Calling .DriverName
	I0729 19:50:08.137341 1119948 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 19:50:08.137345 1119948 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0729 19:50:08.139247 1119948 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0729 19:50:08.139281 1119948 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0729 19:50:08.139303 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHHostname
	I0729 19:50:08.139373 1119948 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 19:50:08.139393 1119948 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0729 19:50:08.139411 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHHostname
	I0729 19:50:08.143427 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:50:08.143462 1119948 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40183
	I0729 19:50:08.143636 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:50:08.143916 1119948 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:50:08.143982 1119948 main.go:141] libmachine: (no-preload-843792) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:0e:8c", ip: ""} in network mk-no-preload-843792: {Iface:virbr2 ExpiryTime:2024-07-29 20:44:43 +0000 UTC Type:0 Mac:52:54:00:ae:0e:8c Iaid: IPaddr:192.168.50.248 Prefix:24 Hostname:no-preload-843792 Clientid:01:52:54:00:ae:0e:8c}
	I0729 19:50:08.143994 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined IP address 192.168.50.248 and MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:50:08.144028 1119948 main.go:141] libmachine: (no-preload-843792) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:0e:8c", ip: ""} in network mk-no-preload-843792: {Iface:virbr2 ExpiryTime:2024-07-29 20:44:43 +0000 UTC Type:0 Mac:52:54:00:ae:0e:8c Iaid: IPaddr:192.168.50.248 Prefix:24 Hostname:no-preload-843792 Clientid:01:52:54:00:ae:0e:8c}
	I0729 19:50:08.144061 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined IP address 192.168.50.248 and MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:50:08.144375 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHPort
	I0729 19:50:08.144420 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHPort
	I0729 19:50:08.144425 1119948 main.go:141] libmachine: Using API Version  1
	I0729 19:50:08.144437 1119948 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:50:08.144564 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHKeyPath
	I0729 19:50:08.144608 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHKeyPath
	I0729 19:50:08.144771 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHUsername
	I0729 19:50:08.144802 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHUsername
	I0729 19:50:08.144836 1119948 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:50:08.144947 1119948 sshutil.go:53] new ssh client: &{IP:192.168.50.248 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/no-preload-843792/id_rsa Username:docker}
	I0729 19:50:08.144951 1119948 sshutil.go:53] new ssh client: &{IP:192.168.50.248 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/no-preload-843792/id_rsa Username:docker}
	I0729 19:50:08.145438 1119948 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:50:08.145468 1119948 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:50:08.162100 1119948 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46055
	I0729 19:50:08.162705 1119948 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:50:08.163290 1119948 main.go:141] libmachine: Using API Version  1
	I0729 19:50:08.163312 1119948 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:50:08.163700 1119948 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:50:08.163887 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetState
	I0729 19:50:08.165757 1119948 main.go:141] libmachine: (no-preload-843792) Calling .DriverName
	I0729 19:50:08.165967 1119948 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0729 19:50:08.165983 1119948 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0729 19:50:08.166000 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHHostname
	I0729 19:50:08.169065 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:50:08.169515 1119948 main.go:141] libmachine: (no-preload-843792) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:0e:8c", ip: ""} in network mk-no-preload-843792: {Iface:virbr2 ExpiryTime:2024-07-29 20:44:43 +0000 UTC Type:0 Mac:52:54:00:ae:0e:8c Iaid: IPaddr:192.168.50.248 Prefix:24 Hostname:no-preload-843792 Clientid:01:52:54:00:ae:0e:8c}
	I0729 19:50:08.169535 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined IP address 192.168.50.248 and MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:50:08.169694 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHPort
	I0729 19:50:08.169850 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHKeyPath
	I0729 19:50:08.170030 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHUsername
	I0729 19:50:08.170144 1119948 sshutil.go:53] new ssh client: &{IP:192.168.50.248 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/no-preload-843792/id_rsa Username:docker}
	I0729 19:50:08.279563 1119948 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 19:50:08.297004 1119948 node_ready.go:35] waiting up to 6m0s for node "no-preload-843792" to be "Ready" ...
	I0729 19:50:08.308403 1119948 node_ready.go:49] node "no-preload-843792" has status "Ready":"True"
	I0729 19:50:08.308428 1119948 node_ready.go:38] duration metric: took 11.381814ms for node "no-preload-843792" to be "Ready" ...
	I0729 19:50:08.308437 1119948 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 19:50:08.326920 1119948 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5cfdc65f69-ck5zf" in "kube-system" namespace to be "Ready" ...
	I0729 19:50:08.394482 1119948 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0729 19:50:08.394511 1119948 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0729 19:50:08.431819 1119948 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0729 19:50:08.431850 1119948 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0729 19:50:08.432280 1119948 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 19:50:08.452951 1119948 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0729 19:50:08.512078 1119948 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 19:50:08.512110 1119948 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0729 19:50:08.636490 1119948 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 19:50:09.357187 1119948 main.go:141] libmachine: Making call to close driver server
	I0729 19:50:09.357212 1119948 main.go:141] libmachine: (no-preload-843792) Calling .Close
	I0729 19:50:09.357248 1119948 main.go:141] libmachine: Making call to close driver server
	I0729 19:50:09.357274 1119948 main.go:141] libmachine: (no-preload-843792) Calling .Close
	I0729 19:50:09.357564 1119948 main.go:141] libmachine: (no-preload-843792) DBG | Closing plugin on server side
	I0729 19:50:09.357633 1119948 main.go:141] libmachine: (no-preload-843792) DBG | Closing plugin on server side
	I0729 19:50:09.357646 1119948 main.go:141] libmachine: Successfully made call to close driver server
	I0729 19:50:09.357646 1119948 main.go:141] libmachine: Successfully made call to close driver server
	I0729 19:50:09.357659 1119948 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 19:50:09.357662 1119948 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 19:50:09.357671 1119948 main.go:141] libmachine: Making call to close driver server
	I0729 19:50:09.357679 1119948 main.go:141] libmachine: (no-preload-843792) Calling .Close
	I0729 19:50:09.357682 1119948 main.go:141] libmachine: Making call to close driver server
	I0729 19:50:09.357690 1119948 main.go:141] libmachine: (no-preload-843792) Calling .Close
	I0729 19:50:09.358945 1119948 main.go:141] libmachine: (no-preload-843792) DBG | Closing plugin on server side
	I0729 19:50:09.358969 1119948 main.go:141] libmachine: Successfully made call to close driver server
	I0729 19:50:09.359019 1119948 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 19:50:09.359042 1119948 main.go:141] libmachine: (no-preload-843792) DBG | Closing plugin on server side
	I0729 19:50:09.358989 1119948 main.go:141] libmachine: Successfully made call to close driver server
	I0729 19:50:09.359074 1119948 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 19:50:09.419421 1119948 main.go:141] libmachine: Making call to close driver server
	I0729 19:50:09.419445 1119948 main.go:141] libmachine: (no-preload-843792) Calling .Close
	I0729 19:50:09.419864 1119948 main.go:141] libmachine: (no-preload-843792) DBG | Closing plugin on server side
	I0729 19:50:09.419868 1119948 main.go:141] libmachine: Successfully made call to close driver server
	I0729 19:50:09.419905 1119948 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 19:50:09.938758 1119948 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.302197805s)
	I0729 19:50:09.938827 1119948 main.go:141] libmachine: Making call to close driver server
	I0729 19:50:09.938854 1119948 main.go:141] libmachine: (no-preload-843792) Calling .Close
	I0729 19:50:09.939241 1119948 main.go:141] libmachine: Successfully made call to close driver server
	I0729 19:50:09.939260 1119948 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 19:50:09.939270 1119948 main.go:141] libmachine: (no-preload-843792) DBG | Closing plugin on server side
	I0729 19:50:09.939273 1119948 main.go:141] libmachine: Making call to close driver server
	I0729 19:50:09.939284 1119948 main.go:141] libmachine: (no-preload-843792) Calling .Close
	I0729 19:50:09.939509 1119948 main.go:141] libmachine: Successfully made call to close driver server
	I0729 19:50:09.939526 1119948 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 19:50:09.939540 1119948 addons.go:475] Verifying addon metrics-server=true in "no-preload-843792"
	I0729 19:50:09.939558 1119948 main.go:141] libmachine: (no-preload-843792) DBG | Closing plugin on server side
	I0729 19:50:09.941050 1119948 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0729 19:50:09.942006 1119948 addons.go:510] duration metric: took 1.847661826s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0729 19:50:10.334878 1119948 pod_ready.go:102] pod "coredns-5cfdc65f69-ck5zf" in "kube-system" namespace has status "Ready":"False"
	I0729 19:50:12.834554 1119948 pod_ready.go:102] pod "coredns-5cfdc65f69-ck5zf" in "kube-system" namespace has status "Ready":"False"
	I0729 19:50:15.334388 1119948 pod_ready.go:102] pod "coredns-5cfdc65f69-ck5zf" in "kube-system" namespace has status "Ready":"False"
	I0729 19:50:16.843448 1119948 pod_ready.go:92] pod "coredns-5cfdc65f69-ck5zf" in "kube-system" namespace has status "Ready":"True"
	I0729 19:50:16.843480 1119948 pod_ready.go:81] duration metric: took 8.516527239s for pod "coredns-5cfdc65f69-ck5zf" in "kube-system" namespace to be "Ready" ...
	I0729 19:50:16.843494 1119948 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-843792" in "kube-system" namespace to be "Ready" ...
	I0729 19:50:16.847567 1119948 pod_ready.go:92] pod "etcd-no-preload-843792" in "kube-system" namespace has status "Ready":"True"
	I0729 19:50:16.847588 1119948 pod_ready.go:81] duration metric: took 4.086961ms for pod "etcd-no-preload-843792" in "kube-system" namespace to be "Ready" ...
	I0729 19:50:16.847597 1119948 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-843792" in "kube-system" namespace to be "Ready" ...
	I0729 19:50:16.857374 1119948 pod_ready.go:92] pod "kube-apiserver-no-preload-843792" in "kube-system" namespace has status "Ready":"True"
	I0729 19:50:16.857395 1119948 pod_ready.go:81] duration metric: took 9.790628ms for pod "kube-apiserver-no-preload-843792" in "kube-system" namespace to be "Ready" ...
	I0729 19:50:16.857403 1119948 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-843792" in "kube-system" namespace to be "Ready" ...
	I0729 19:50:16.861971 1119948 pod_ready.go:92] pod "kube-controller-manager-no-preload-843792" in "kube-system" namespace has status "Ready":"True"
	I0729 19:50:16.861990 1119948 pod_ready.go:81] duration metric: took 4.580287ms for pod "kube-controller-manager-no-preload-843792" in "kube-system" namespace to be "Ready" ...
	I0729 19:50:16.861998 1119948 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-843792" in "kube-system" namespace to be "Ready" ...
	I0729 19:50:16.865992 1119948 pod_ready.go:92] pod "kube-scheduler-no-preload-843792" in "kube-system" namespace has status "Ready":"True"
	I0729 19:50:16.866006 1119948 pod_ready.go:81] duration metric: took 4.002585ms for pod "kube-scheduler-no-preload-843792" in "kube-system" namespace to be "Ready" ...
	I0729 19:50:16.866012 1119948 pod_ready.go:38] duration metric: took 8.557565808s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 19:50:16.866026 1119948 api_server.go:52] waiting for apiserver process to appear ...
	I0729 19:50:16.866069 1119948 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:50:16.881797 1119948 api_server.go:72] duration metric: took 8.787509233s to wait for apiserver process to appear ...
	I0729 19:50:16.881817 1119948 api_server.go:88] waiting for apiserver healthz status ...
	I0729 19:50:16.881835 1119948 api_server.go:253] Checking apiserver healthz at https://192.168.50.248:8443/healthz ...
	I0729 19:50:16.886007 1119948 api_server.go:279] https://192.168.50.248:8443/healthz returned 200:
	ok
	I0729 19:50:16.886862 1119948 api_server.go:141] control plane version: v1.31.0-beta.0
	I0729 19:50:16.886882 1119948 api_server.go:131] duration metric: took 5.057536ms to wait for apiserver health ...
	I0729 19:50:16.886891 1119948 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 19:50:17.034651 1119948 system_pods.go:59] 9 kube-system pods found
	I0729 19:50:17.034684 1119948 system_pods.go:61] "coredns-5cfdc65f69-bk2nx" [662b0879-7c15-4ec3-a6b6-e49fd9597dcf] Running
	I0729 19:50:17.034689 1119948 system_pods.go:61] "coredns-5cfdc65f69-ck5zf" [ad6c9c9b-740c-464d-85c2-a9ae44663f63] Running
	I0729 19:50:17.034693 1119948 system_pods.go:61] "etcd-no-preload-843792" [e4cba264-21e2-499e-9768-417b316f6a04] Running
	I0729 19:50:17.034696 1119948 system_pods.go:61] "kube-apiserver-no-preload-843792" [24c2bd0e-2029-4985-836a-599ad2a2a7ab] Running
	I0729 19:50:17.034700 1119948 system_pods.go:61] "kube-controller-manager-no-preload-843792" [fb7ec8d7-5d48-428a-af99-f031d747fe2b] Running
	I0729 19:50:17.034704 1119948 system_pods.go:61] "kube-proxy-8hbrf" [3b64c7b2-cbed-4c0e-bc1b-2cef107b115c] Running
	I0729 19:50:17.034706 1119948 system_pods.go:61] "kube-scheduler-no-preload-843792" [fc166fdd-59e8-41f0-909c-71044da69f34] Running
	I0729 19:50:17.034712 1119948 system_pods.go:61] "metrics-server-78fcd8795b-fzt2k" [180acfb0-ec43-4f2e-b04a-048253d4b79e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 19:50:17.034716 1119948 system_pods.go:61] "storage-provisioner" [ee09516d-7ef7-4d66-9acf-7fd4cde3c673] Running
	I0729 19:50:17.034723 1119948 system_pods.go:74] duration metric: took 147.826766ms to wait for pod list to return data ...
	I0729 19:50:17.034731 1119948 default_sa.go:34] waiting for default service account to be created ...
	I0729 19:50:17.231811 1119948 default_sa.go:45] found service account: "default"
	I0729 19:50:17.231841 1119948 default_sa.go:55] duration metric: took 197.103306ms for default service account to be created ...
	I0729 19:50:17.231852 1119948 system_pods.go:116] waiting for k8s-apps to be running ...
	I0729 19:50:17.435766 1119948 system_pods.go:86] 9 kube-system pods found
	I0729 19:50:17.435801 1119948 system_pods.go:89] "coredns-5cfdc65f69-bk2nx" [662b0879-7c15-4ec3-a6b6-e49fd9597dcf] Running
	I0729 19:50:17.435809 1119948 system_pods.go:89] "coredns-5cfdc65f69-ck5zf" [ad6c9c9b-740c-464d-85c2-a9ae44663f63] Running
	I0729 19:50:17.435816 1119948 system_pods.go:89] "etcd-no-preload-843792" [e4cba264-21e2-499e-9768-417b316f6a04] Running
	I0729 19:50:17.435822 1119948 system_pods.go:89] "kube-apiserver-no-preload-843792" [24c2bd0e-2029-4985-836a-599ad2a2a7ab] Running
	I0729 19:50:17.435828 1119948 system_pods.go:89] "kube-controller-manager-no-preload-843792" [fb7ec8d7-5d48-428a-af99-f031d747fe2b] Running
	I0729 19:50:17.435835 1119948 system_pods.go:89] "kube-proxy-8hbrf" [3b64c7b2-cbed-4c0e-bc1b-2cef107b115c] Running
	I0729 19:50:17.435841 1119948 system_pods.go:89] "kube-scheduler-no-preload-843792" [fc166fdd-59e8-41f0-909c-71044da69f34] Running
	I0729 19:50:17.435849 1119948 system_pods.go:89] "metrics-server-78fcd8795b-fzt2k" [180acfb0-ec43-4f2e-b04a-048253d4b79e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 19:50:17.435856 1119948 system_pods.go:89] "storage-provisioner" [ee09516d-7ef7-4d66-9acf-7fd4cde3c673] Running
	I0729 19:50:17.435867 1119948 system_pods.go:126] duration metric: took 204.008054ms to wait for k8s-apps to be running ...
	I0729 19:50:17.435875 1119948 system_svc.go:44] waiting for kubelet service to be running ....
	I0729 19:50:17.435926 1119948 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 19:50:17.451816 1119948 system_svc.go:56] duration metric: took 15.929502ms WaitForService to wait for kubelet
	I0729 19:50:17.451848 1119948 kubeadm.go:582] duration metric: took 9.357563402s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 19:50:17.451872 1119948 node_conditions.go:102] verifying NodePressure condition ...
	I0729 19:50:17.632427 1119948 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 19:50:17.632465 1119948 node_conditions.go:123] node cpu capacity is 2
	I0729 19:50:17.632481 1119948 node_conditions.go:105] duration metric: took 180.602976ms to run NodePressure ...
	I0729 19:50:17.632497 1119948 start.go:241] waiting for startup goroutines ...
	I0729 19:50:17.632506 1119948 start.go:246] waiting for cluster config update ...
	I0729 19:50:17.632525 1119948 start.go:255] writing updated cluster config ...
	I0729 19:50:17.632908 1119948 ssh_runner.go:195] Run: rm -f paused
	I0729 19:50:17.687540 1119948 start.go:600] kubectl: 1.30.3, cluster: 1.31.0-beta.0 (minor skew: 1)
	I0729 19:50:17.689409 1119948 out.go:177] * Done! kubectl is now configured to use "no-preload-843792" cluster and "default" namespace by default
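	(Editor's note, not part of the captured log: the lines above trace how the addons are enabled on this cluster — the manifest files are copied to /etc/kubernetes/addons/ on the node and then applied with the node's bundled kubectl under KUBECONFIG=/var/lib/minikube/kubeconfig, executed over SSH via ssh_runner. The short Go sketch below only reproduces the shape of that apply command locally; it is an illustrative, hypothetical snippet, not minikube source, and the helper name applyAddons and the use of a local exec instead of the SSH runner are assumptions.)

	// Illustrative sketch only: mirrors the addon-apply command seen in the log above.
	package main

	import (
		"fmt"
		"os/exec"
	)

	// applyAddons builds and runs a command of the same shape as the logged one:
	// sudo KUBECONFIG=/var/lib/minikube/kubeconfig <kubectl> apply -f <m1> -f <m2> ...
	// (in the real run this happens over SSH on the node, not locally).
	func applyAddons(kubectl string, manifests []string) error {
		args := []string{"KUBECONFIG=/var/lib/minikube/kubeconfig", kubectl, "apply"}
		for _, m := range manifests {
			args = append(args, "-f", m)
		}
		out, err := exec.Command("sudo", args...).CombinedOutput()
		if err != nil {
			return fmt.Errorf("kubectl apply failed: %v\n%s", err, out)
		}
		return nil
	}

	func main() {
		// Paths below are taken verbatim from the log; they are placeholders here.
		kubectl := "/var/lib/minikube/binaries/v1.31.0-beta.0/kubectl"
		metricsServer := []string{
			"/etc/kubernetes/addons/metrics-apiservice.yaml",
			"/etc/kubernetes/addons/metrics-server-deployment.yaml",
			"/etc/kubernetes/addons/metrics-server-rbac.yaml",
			"/etc/kubernetes/addons/metrics-server-service.yaml",
		}
		if err := applyAddons(kubectl, metricsServer); err != nil {
			fmt.Println(err)
		}
	}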
	I0729 19:50:40.036000 1120970 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0729 19:50:40.036324 1120970 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0729 19:50:40.038447 1120970 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0729 19:50:40.038603 1120970 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 19:50:40.038790 1120970 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 19:50:40.039225 1120970 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 19:50:40.039617 1120970 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0729 19:50:40.039731 1120970 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 19:50:40.041420 1120970 out.go:204]   - Generating certificates and keys ...
	I0729 19:50:40.041522 1120970 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 19:50:40.041589 1120970 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 19:50:40.041712 1120970 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0729 19:50:40.041810 1120970 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0729 19:50:40.041935 1120970 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0729 19:50:40.042019 1120970 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0729 19:50:40.042111 1120970 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0729 19:50:40.042190 1120970 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0729 19:50:40.042285 1120970 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0729 19:50:40.042401 1120970 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0729 19:50:40.042465 1120970 kubeadm.go:310] [certs] Using the existing "sa" key
	I0729 19:50:40.042535 1120970 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 19:50:40.042581 1120970 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 19:50:40.042628 1120970 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 19:50:40.042698 1120970 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 19:50:40.042781 1120970 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 19:50:40.042934 1120970 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 19:50:40.043061 1120970 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 19:50:40.043128 1120970 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 19:50:40.043208 1120970 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 19:50:40.044637 1120970 out.go:204]   - Booting up control plane ...
	I0729 19:50:40.044750 1120970 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 19:50:40.044847 1120970 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 19:50:40.044908 1120970 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 19:50:40.044976 1120970 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 19:50:40.045145 1120970 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0729 19:50:40.045212 1120970 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0729 19:50:40.045276 1120970 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 19:50:40.045442 1120970 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 19:50:40.045511 1120970 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 19:50:40.045697 1120970 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 19:50:40.045797 1120970 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 19:50:40.046043 1120970 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 19:50:40.046153 1120970 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 19:50:40.046441 1120970 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 19:50:40.046567 1120970 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 19:50:40.046878 1120970 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 19:50:40.046894 1120970 kubeadm.go:310] 
	I0729 19:50:40.046945 1120970 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0729 19:50:40.047019 1120970 kubeadm.go:310] 		timed out waiting for the condition
	I0729 19:50:40.047039 1120970 kubeadm.go:310] 
	I0729 19:50:40.047104 1120970 kubeadm.go:310] 	This error is likely caused by:
	I0729 19:50:40.047158 1120970 kubeadm.go:310] 		- The kubelet is not running
	I0729 19:50:40.047301 1120970 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0729 19:50:40.047312 1120970 kubeadm.go:310] 
	I0729 19:50:40.047465 1120970 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0729 19:50:40.047513 1120970 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0729 19:50:40.047558 1120970 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0729 19:50:40.047567 1120970 kubeadm.go:310] 
	I0729 19:50:40.047728 1120970 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0729 19:50:40.047859 1120970 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0729 19:50:40.047870 1120970 kubeadm.go:310] 
	I0729 19:50:40.048028 1120970 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0729 19:50:40.048161 1120970 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0729 19:50:40.048274 1120970 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0729 19:50:40.048387 1120970 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0729 19:50:40.048422 1120970 kubeadm.go:310] 
	W0729 19:50:40.048546 1120970 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0729 19:50:40.048632 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0729 19:50:40.512123 1120970 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 19:50:40.526973 1120970 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 19:50:40.540285 1120970 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 19:50:40.540322 1120970 kubeadm.go:157] found existing configuration files:
	
	I0729 19:50:40.540390 1120970 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 19:50:40.550130 1120970 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 19:50:40.550188 1120970 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 19:50:40.560312 1120970 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 19:50:40.570460 1120970 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 19:50:40.570513 1120970 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 19:50:40.579979 1120970 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 19:50:40.589806 1120970 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 19:50:40.589848 1120970 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 19:50:40.599351 1120970 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 19:50:40.609134 1120970 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 19:50:40.609190 1120970 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 19:50:40.618767 1120970 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 19:50:40.686644 1120970 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0729 19:50:40.686775 1120970 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 19:50:40.844131 1120970 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 19:50:40.844252 1120970 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 19:50:40.844357 1120970 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0729 19:50:41.018497 1120970 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 19:50:41.020295 1120970 out.go:204]   - Generating certificates and keys ...
	I0729 19:50:41.020404 1120970 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 19:50:41.020471 1120970 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 19:50:41.020559 1120970 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0729 19:50:41.020614 1120970 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0729 19:50:41.020675 1120970 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0729 19:50:41.020720 1120970 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0729 19:50:41.021041 1120970 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0729 19:50:41.021463 1120970 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0729 19:50:41.021868 1120970 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0729 19:50:41.022329 1120970 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0729 19:50:41.022411 1120970 kubeadm.go:310] [certs] Using the existing "sa" key
	I0729 19:50:41.022503 1120970 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 19:50:41.204952 1120970 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 19:50:41.438572 1120970 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 19:50:41.878587 1120970 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 19:50:42.428806 1120970 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 19:50:42.447931 1120970 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 19:50:42.448990 1120970 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 19:50:42.449131 1120970 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 19:50:42.580942 1120970 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 19:50:42.582493 1120970 out.go:204]   - Booting up control plane ...
	I0729 19:50:42.582600 1120970 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 19:50:42.589862 1120970 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 19:50:42.590833 1120970 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 19:50:42.591685 1120970 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 19:50:42.594079 1120970 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0729 19:51:22.596326 1120970 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0729 19:51:22.596639 1120970 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 19:51:22.596846 1120970 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 19:51:27.597439 1120970 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 19:51:27.597671 1120970 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 19:51:37.598638 1120970 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 19:51:37.598811 1120970 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 19:51:57.599401 1120970 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 19:51:57.599704 1120970 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 19:52:37.597710 1120970 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 19:52:37.597992 1120970 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 19:52:37.598034 1120970 kubeadm.go:310] 
	I0729 19:52:37.598090 1120970 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0729 19:52:37.598166 1120970 kubeadm.go:310] 		timed out waiting for the condition
	I0729 19:52:37.598179 1120970 kubeadm.go:310] 
	I0729 19:52:37.598228 1120970 kubeadm.go:310] 	This error is likely caused by:
	I0729 19:52:37.598326 1120970 kubeadm.go:310] 		- The kubelet is not running
	I0729 19:52:37.598515 1120970 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0729 19:52:37.598528 1120970 kubeadm.go:310] 
	I0729 19:52:37.598671 1120970 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0729 19:52:37.598715 1120970 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0729 19:52:37.598777 1120970 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0729 19:52:37.598806 1120970 kubeadm.go:310] 
	I0729 19:52:37.598984 1120970 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0729 19:52:37.599100 1120970 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0729 19:52:37.599114 1120970 kubeadm.go:310] 
	I0729 19:52:37.599266 1120970 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0729 19:52:37.599393 1120970 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0729 19:52:37.599499 1120970 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0729 19:52:37.599617 1120970 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0729 19:52:37.599637 1120970 kubeadm.go:310] 
	I0729 19:52:37.600349 1120970 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 19:52:37.600471 1120970 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0729 19:52:37.600641 1120970 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0729 19:52:37.600707 1120970 kubeadm.go:394] duration metric: took 7m57.951835284s to StartCluster
	I0729 19:52:37.600799 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:52:37.600929 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:52:37.643870 1120970 cri.go:89] found id: ""
	I0729 19:52:37.643913 1120970 logs.go:276] 0 containers: []
	W0729 19:52:37.643921 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:52:37.643928 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:52:37.643993 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:52:37.679484 1120970 cri.go:89] found id: ""
	I0729 19:52:37.679519 1120970 logs.go:276] 0 containers: []
	W0729 19:52:37.679529 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:52:37.679535 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:52:37.679602 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:52:37.716326 1120970 cri.go:89] found id: ""
	I0729 19:52:37.716358 1120970 logs.go:276] 0 containers: []
	W0729 19:52:37.716366 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:52:37.716372 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:52:37.716427 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:52:37.751441 1120970 cri.go:89] found id: ""
	I0729 19:52:37.751468 1120970 logs.go:276] 0 containers: []
	W0729 19:52:37.751477 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:52:37.751483 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:52:37.751555 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:52:37.791309 1120970 cri.go:89] found id: ""
	I0729 19:52:37.791334 1120970 logs.go:276] 0 containers: []
	W0729 19:52:37.791343 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:52:37.791354 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:52:37.791409 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:52:37.824637 1120970 cri.go:89] found id: ""
	I0729 19:52:37.824664 1120970 logs.go:276] 0 containers: []
	W0729 19:52:37.824674 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:52:37.824682 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:52:37.824749 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:52:37.863031 1120970 cri.go:89] found id: ""
	I0729 19:52:37.863060 1120970 logs.go:276] 0 containers: []
	W0729 19:52:37.863068 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:52:37.863074 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:52:37.863134 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:52:37.905864 1120970 cri.go:89] found id: ""
	I0729 19:52:37.905918 1120970 logs.go:276] 0 containers: []
	W0729 19:52:37.905931 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:52:37.905945 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:52:37.905965 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:52:37.958561 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:52:37.958601 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:52:37.983602 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:52:37.983635 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:52:38.080775 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:52:38.080810 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:52:38.080827 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:52:38.185475 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:52:38.185512 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0729 19:52:38.227581 1120970 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0729 19:52:38.227653 1120970 out.go:239] * 
	W0729 19:52:38.227722 1120970 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0729 19:52:38.227748 1120970 out.go:239] * 
	W0729 19:52:38.228777 1120970 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 19:52:38.231684 1120970 out.go:177] 
	W0729 19:52:38.232618 1120970 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0729 19:52:38.232683 1120970 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0729 19:52:38.232710 1120970 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0729 19:52:38.234472 1120970 out.go:177] 
	
	
	==> CRI-O <==
	Jul 29 20:01:43 old-k8s-version-021528 crio[648]: time="2024-07-29 20:01:43.343305849Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722283303343265519,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9eadc7ac-24a0-4518-89ec-786aef1a66a2 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 20:01:43 old-k8s-version-021528 crio[648]: time="2024-07-29 20:01:43.344012028Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fdde18ab-7c97-49ce-ba7f-4abc7a85f935 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 20:01:43 old-k8s-version-021528 crio[648]: time="2024-07-29 20:01:43.344080512Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fdde18ab-7c97-49ce-ba7f-4abc7a85f935 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 20:01:43 old-k8s-version-021528 crio[648]: time="2024-07-29 20:01:43.344119787Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=fdde18ab-7c97-49ce-ba7f-4abc7a85f935 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 20:01:43 old-k8s-version-021528 crio[648]: time="2024-07-29 20:01:43.379434411Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7fcef1b0-0c5a-4cba-be66-89760c448784 name=/runtime.v1.RuntimeService/Version
	Jul 29 20:01:43 old-k8s-version-021528 crio[648]: time="2024-07-29 20:01:43.379534948Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7fcef1b0-0c5a-4cba-be66-89760c448784 name=/runtime.v1.RuntimeService/Version
	Jul 29 20:01:43 old-k8s-version-021528 crio[648]: time="2024-07-29 20:01:43.380834203Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=235ec948-bad7-4836-8be4-b43270ec60d0 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 20:01:43 old-k8s-version-021528 crio[648]: time="2024-07-29 20:01:43.381252528Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722283303381224478,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=235ec948-bad7-4836-8be4-b43270ec60d0 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 20:01:43 old-k8s-version-021528 crio[648]: time="2024-07-29 20:01:43.381699660Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=70ad0b39-7053-43ca-b238-640d9643a73c name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 20:01:43 old-k8s-version-021528 crio[648]: time="2024-07-29 20:01:43.381815983Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=70ad0b39-7053-43ca-b238-640d9643a73c name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 20:01:43 old-k8s-version-021528 crio[648]: time="2024-07-29 20:01:43.381848406Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=70ad0b39-7053-43ca-b238-640d9643a73c name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 20:01:43 old-k8s-version-021528 crio[648]: time="2024-07-29 20:01:43.416409448Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ba7f2c3f-e7e0-4968-a7fa-92cced430682 name=/runtime.v1.RuntimeService/Version
	Jul 29 20:01:43 old-k8s-version-021528 crio[648]: time="2024-07-29 20:01:43.416507426Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ba7f2c3f-e7e0-4968-a7fa-92cced430682 name=/runtime.v1.RuntimeService/Version
	Jul 29 20:01:43 old-k8s-version-021528 crio[648]: time="2024-07-29 20:01:43.417611447Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a0bf7915-b897-4a70-882d-7c86cc88264d name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 20:01:43 old-k8s-version-021528 crio[648]: time="2024-07-29 20:01:43.418059696Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722283303418036514,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a0bf7915-b897-4a70-882d-7c86cc88264d name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 20:01:43 old-k8s-version-021528 crio[648]: time="2024-07-29 20:01:43.418641459Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2349f7d1-4ae7-4378-9584-1317dd368089 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 20:01:43 old-k8s-version-021528 crio[648]: time="2024-07-29 20:01:43.418699240Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2349f7d1-4ae7-4378-9584-1317dd368089 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 20:01:43 old-k8s-version-021528 crio[648]: time="2024-07-29 20:01:43.418732567Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=2349f7d1-4ae7-4378-9584-1317dd368089 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 20:01:43 old-k8s-version-021528 crio[648]: time="2024-07-29 20:01:43.457291828Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=10ec64d5-99b7-45de-9ec1-1a4720aa6cad name=/runtime.v1.RuntimeService/Version
	Jul 29 20:01:43 old-k8s-version-021528 crio[648]: time="2024-07-29 20:01:43.457384753Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=10ec64d5-99b7-45de-9ec1-1a4720aa6cad name=/runtime.v1.RuntimeService/Version
	Jul 29 20:01:43 old-k8s-version-021528 crio[648]: time="2024-07-29 20:01:43.458956323Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=414d9e6a-1ea9-49a5-a158-26c2225457b1 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 20:01:43 old-k8s-version-021528 crio[648]: time="2024-07-29 20:01:43.459327905Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722283303459295155,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=414d9e6a-1ea9-49a5-a158-26c2225457b1 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 20:01:43 old-k8s-version-021528 crio[648]: time="2024-07-29 20:01:43.460137424Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9a11521a-c72d-44e8-96ab-03aa03dc2940 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 20:01:43 old-k8s-version-021528 crio[648]: time="2024-07-29 20:01:43.460195641Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9a11521a-c72d-44e8-96ab-03aa03dc2940 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 20:01:43 old-k8s-version-021528 crio[648]: time="2024-07-29 20:01:43.460232342Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=9a11521a-c72d-44e8-96ab-03aa03dc2940 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Jul29 19:44] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.055089] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.042985] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.117270] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.505686] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.586022] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.594595] systemd-fstab-generator[569]: Ignoring "noauto" option for root device
	[  +0.059829] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.057895] systemd-fstab-generator[581]: Ignoring "noauto" option for root device
	[  +0.197592] systemd-fstab-generator[595]: Ignoring "noauto" option for root device
	[  +0.124559] systemd-fstab-generator[607]: Ignoring "noauto" option for root device
	[  +0.248534] systemd-fstab-generator[633]: Ignoring "noauto" option for root device
	[  +6.328570] systemd-fstab-generator[896]: Ignoring "noauto" option for root device
	[  +0.064370] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.920147] systemd-fstab-generator[1022]: Ignoring "noauto" option for root device
	[ +12.715960] kauditd_printk_skb: 46 callbacks suppressed
	[Jul29 19:48] systemd-fstab-generator[5089]: Ignoring "noauto" option for root device
	[Jul29 19:50] systemd-fstab-generator[5365]: Ignoring "noauto" option for root device
	[  +0.071408] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 20:01:43 up 17 min,  0 users,  load average: 0.03, 0.05, 0.04
	Linux old-k8s-version-021528 5.10.207 #1 SMP Tue Jul 23 04:25:44 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Jul 29 20:01:43 old-k8s-version-021528 kubelet[6545]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/shared_informer.go:628 +0x53
	Jul 29 20:01:43 old-k8s-version-021528 kubelet[6545]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel.func1()
	Jul 29 20:01:43 old-k8s-version-021528 kubelet[6545]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:56 +0x2e
	Jul 29 20:01:43 old-k8s-version-021528 kubelet[6545]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc000b7d1e0, 0xc000b73ea0)
	Jul 29 20:01:43 old-k8s-version-021528 kubelet[6545]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x51
	Jul 29 20:01:43 old-k8s-version-021528 kubelet[6545]: created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	Jul 29 20:01:43 old-k8s-version-021528 kubelet[6545]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65
	Jul 29 20:01:43 old-k8s-version-021528 kubelet[6545]: goroutine 164 [chan receive]:
	Jul 29 20:01:43 old-k8s-version-021528 kubelet[6545]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).Run.func1(0xc00009e0c0, 0xc00070b9e0)
	Jul 29 20:01:43 old-k8s-version-021528 kubelet[6545]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:130 +0x34
	Jul 29 20:01:43 old-k8s-version-021528 kubelet[6545]: created by k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).Run
	Jul 29 20:01:43 old-k8s-version-021528 kubelet[6545]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:129 +0xa5
	Jul 29 20:01:43 old-k8s-version-021528 kubelet[6545]: goroutine 165 [select]:
	Jul 29 20:01:43 old-k8s-version-021528 kubelet[6545]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc00097def0, 0x4f0ac20, 0xc0009641e0, 0x1, 0xc00009e0c0)
	Jul 29 20:01:43 old-k8s-version-021528 kubelet[6545]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:167 +0x149
	Jul 29 20:01:43 old-k8s-version-021528 kubelet[6545]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run(0xc000264460, 0xc00009e0c0)
	Jul 29 20:01:43 old-k8s-version-021528 kubelet[6545]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:220 +0x1c5
	Jul 29 20:01:43 old-k8s-version-021528 kubelet[6545]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel.func1()
	Jul 29 20:01:43 old-k8s-version-021528 kubelet[6545]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:56 +0x2e
	Jul 29 20:01:43 old-k8s-version-021528 kubelet[6545]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc000b7d210, 0xc000b73f60)
	Jul 29 20:01:43 old-k8s-version-021528 kubelet[6545]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x51
	Jul 29 20:01:43 old-k8s-version-021528 kubelet[6545]: created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	Jul 29 20:01:43 old-k8s-version-021528 kubelet[6545]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65
	Jul 29 20:01:43 old-k8s-version-021528 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Jul 29 20:01:43 old-k8s-version-021528 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-021528 -n old-k8s-version-021528
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-021528 -n old-k8s-version-021528: exit status 2 (231.475953ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-021528" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.37s)
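The kubeadm output captured above shows the kubelet on old-k8s-version-021528 never answering on localhost:10248, and minikube's own suggestion is to retry with the systemd cgroup driver. A minimal follow-up sketch, assuming shell access to the node via 'minikube ssh'; the profile name, file paths, and the --extra-config value are taken from the log above, while the exact command sequence is illustrative rather than part of the test:

    # Commands suggested in the kubeadm output for inspecting the failing kubelet
    minikube -p old-k8s-version-021528 ssh -- sudo systemctl status kubelet
    minikube -p old-k8s-version-021528 ssh -- sudo journalctl -xeu kubelet | tail -n 100

    # Compare the cgroup driver CRI-O is configured with against the kubelet flags
    # (crio.conf location can vary between images, hence the permissive grep)
    minikube -p old-k8s-version-021528 ssh -- sudo grep -ri cgroup_manager /etc/crio/ 2>/dev/null
    minikube -p old-k8s-version-021528 ssh -- sudo cat /var/lib/kubelet/kubeadm-flags.env

    # Retry the start with the flag minikube suggests in the error output above
    minikube start -p old-k8s-version-021528 --driver=kvm2 --container-runtime=crio \
      --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd

If the kubelet then stays up, the crictl invocation quoted in the kubeadm hints should start listing the kube-apiserver and etcd containers instead of the empty list shown in the container status section above.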

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (440.21s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-358053 -n embed-certs-358053
start_stop_delete_test.go:287: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-07-29 20:05:37.555510268 +0000 UTC m=+6509.790353669
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-358053 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-358053 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.167µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-358053 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
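The assertion at start_stop_delete_test.go:297 expects the dashboard-metrics-scraper deployment to reference registry.k8s.io/echoserver:1.4, the image substituted via the --images=MetricsScraper flag recorded in the audit table below. A minimal sketch of reproducing that check by hand, assuming the embed-certs-358053 context is still reachable from the same kubeconfig; the selector, namespace, and deployment name come from the test output, and the jsonpath query is only illustrative:

    # Pods the test was polling for (selector from the waiting message above)
    kubectl --context embed-certs-358053 -n kubernetes-dashboard \
      get pods -l k8s-app=kubernetes-dashboard -o wide

    # Image actually referenced by the deployment; the test expects it to
    # contain registry.k8s.io/echoserver:1.4
    kubectl --context embed-certs-358053 -n kubernetes-dashboard \
      get deploy dashboard-metrics-scraper \
      -o jsonpath='{.spec.template.spec.containers[*].image}{"\n"}'

    # The describe call the test itself runs, useful for events and rollout state
    kubectl --context embed-certs-358053 -n kubernetes-dashboard \
      describe deploy/dashboard-metrics-scraper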
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-358053 -n embed-certs-358053
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-358053 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-358053 logs -n 25: (1.121260105s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p                                                     | disable-driver-mounts-251895 | jenkins | v1.33.1 | 29 Jul 24 19:36 UTC | 29 Jul 24 19:36 UTC |
	|         | disable-driver-mounts-251895                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-024652 | jenkins | v1.33.1 | 29 Jul 24 19:36 UTC | 29 Jul 24 19:37 UTC |
	|         | default-k8s-diff-port-024652                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-843792             | no-preload-843792            | jenkins | v1.33.1 | 29 Jul 24 19:36 UTC | 29 Jul 24 19:36 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-843792                                   | no-preload-843792            | jenkins | v1.33.1 | 29 Jul 24 19:36 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-358053            | embed-certs-358053           | jenkins | v1.33.1 | 29 Jul 24 19:36 UTC | 29 Jul 24 19:36 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-358053                                  | embed-certs-358053           | jenkins | v1.33.1 | 29 Jul 24 19:36 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-024652  | default-k8s-diff-port-024652 | jenkins | v1.33.1 | 29 Jul 24 19:37 UTC | 29 Jul 24 19:37 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-024652 | jenkins | v1.33.1 | 29 Jul 24 19:37 UTC |                     |
	|         | default-k8s-diff-port-024652                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-843792                  | no-preload-843792            | jenkins | v1.33.1 | 29 Jul 24 19:38 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-843792 --memory=2200                     | no-preload-843792            | jenkins | v1.33.1 | 29 Jul 24 19:38 UTC | 29 Jul 24 19:50 UTC |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-021528        | old-k8s-version-021528       | jenkins | v1.33.1 | 29 Jul 24 19:39 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-358053                 | embed-certs-358053           | jenkins | v1.33.1 | 29 Jul 24 19:39 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-358053                                  | embed-certs-358053           | jenkins | v1.33.1 | 29 Jul 24 19:39 UTC | 29 Jul 24 19:49 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-024652       | default-k8s-diff-port-024652 | jenkins | v1.33.1 | 29 Jul 24 19:39 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-024652 | jenkins | v1.33.1 | 29 Jul 24 19:39 UTC | 29 Jul 24 19:49 UTC |
	|         | default-k8s-diff-port-024652                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-021528                              | old-k8s-version-021528       | jenkins | v1.33.1 | 29 Jul 24 19:40 UTC | 29 Jul 24 19:40 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-021528             | old-k8s-version-021528       | jenkins | v1.33.1 | 29 Jul 24 19:40 UTC | 29 Jul 24 19:40 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-021528                              | old-k8s-version-021528       | jenkins | v1.33.1 | 29 Jul 24 19:40 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-021528                              | old-k8s-version-021528       | jenkins | v1.33.1 | 29 Jul 24 20:04 UTC | 29 Jul 24 20:04 UTC |
	| start   | -p newest-cni-584186 --memory=2200 --alsologtostderr   | newest-cni-584186            | jenkins | v1.33.1 | 29 Jul 24 20:04 UTC | 29 Jul 24 20:05 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| delete  | -p no-preload-843792                                   | no-preload-843792            | jenkins | v1.33.1 | 29 Jul 24 20:04 UTC | 29 Jul 24 20:04 UTC |
	| addons  | enable metrics-server -p newest-cni-584186             | newest-cni-584186            | jenkins | v1.33.1 | 29 Jul 24 20:05 UTC | 29 Jul 24 20:05 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-584186                                   | newest-cni-584186            | jenkins | v1.33.1 | 29 Jul 24 20:05 UTC | 29 Jul 24 20:05 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-584186                  | newest-cni-584186            | jenkins | v1.33.1 | 29 Jul 24 20:05 UTC | 29 Jul 24 20:05 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-584186 --memory=2200 --alsologtostderr   | newest-cni-584186            | jenkins | v1.33.1 | 29 Jul 24 20:05 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 20:05:20
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 20:05:20.623953 1127876 out.go:291] Setting OutFile to fd 1 ...
	I0729 20:05:20.624238 1127876 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 20:05:20.624249 1127876 out.go:304] Setting ErrFile to fd 2...
	I0729 20:05:20.624253 1127876 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 20:05:20.624492 1127876 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19312-1055011/.minikube/bin
	I0729 20:05:20.625081 1127876 out.go:298] Setting JSON to false
	I0729 20:05:20.626117 1127876 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":13673,"bootTime":1722269848,"procs":194,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 20:05:20.626181 1127876 start.go:139] virtualization: kvm guest
	I0729 20:05:20.628286 1127876 out.go:177] * [newest-cni-584186] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 20:05:20.629369 1127876 out.go:177]   - MINIKUBE_LOCATION=19312
	I0729 20:05:20.629404 1127876 notify.go:220] Checking for updates...
	I0729 20:05:20.631415 1127876 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 20:05:20.632463 1127876 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19312-1055011/kubeconfig
	I0729 20:05:20.633417 1127876 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19312-1055011/.minikube
	I0729 20:05:20.634330 1127876 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 20:05:20.635298 1127876 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 20:05:20.636833 1127876 config.go:182] Loaded profile config "newest-cni-584186": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0729 20:05:20.637460 1127876 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 20:05:20.637537 1127876 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 20:05:20.652799 1127876 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42985
	I0729 20:05:20.653166 1127876 main.go:141] libmachine: () Calling .GetVersion
	I0729 20:05:20.653708 1127876 main.go:141] libmachine: Using API Version  1
	I0729 20:05:20.653731 1127876 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 20:05:20.654094 1127876 main.go:141] libmachine: () Calling .GetMachineName
	I0729 20:05:20.654309 1127876 main.go:141] libmachine: (newest-cni-584186) Calling .DriverName
	I0729 20:05:20.654568 1127876 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 20:05:20.654910 1127876 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 20:05:20.654958 1127876 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 20:05:20.670607 1127876 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35557
	I0729 20:05:20.671022 1127876 main.go:141] libmachine: () Calling .GetVersion
	I0729 20:05:20.671507 1127876 main.go:141] libmachine: Using API Version  1
	I0729 20:05:20.671529 1127876 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 20:05:20.671829 1127876 main.go:141] libmachine: () Calling .GetMachineName
	I0729 20:05:20.672023 1127876 main.go:141] libmachine: (newest-cni-584186) Calling .DriverName
	I0729 20:05:20.707619 1127876 out.go:177] * Using the kvm2 driver based on existing profile
	I0729 20:05:20.708669 1127876 start.go:297] selected driver: kvm2
	I0729 20:05:20.708683 1127876 start.go:901] validating driver "kvm2" against &{Name:newest-cni-584186 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.0-beta.0 ClusterName:newest-cni-584186 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.170 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system
_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 20:05:20.708857 1127876 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 20:05:20.709817 1127876 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 20:05:20.709923 1127876 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19312-1055011/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 20:05:20.724888 1127876 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0729 20:05:20.725300 1127876 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0729 20:05:20.725380 1127876 cni.go:84] Creating CNI manager for ""
	I0729 20:05:20.725396 1127876 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 20:05:20.725458 1127876 start.go:340] cluster config:
	{Name:newest-cni-584186 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-584186 Namespace:default AP
IServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.170 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAd
dress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 20:05:20.725616 1127876 iso.go:125] acquiring lock: {Name:mk0af61c0fec1fd47930e548d03010a532c687b7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 20:05:20.727340 1127876 out.go:177] * Starting "newest-cni-584186" primary control-plane node in "newest-cni-584186" cluster
	I0729 20:05:20.728432 1127876 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0729 20:05:20.728469 1127876 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4
	I0729 20:05:20.728484 1127876 cache.go:56] Caching tarball of preloaded images
	I0729 20:05:20.728581 1127876 preload.go:172] Found /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 20:05:20.728594 1127876 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-beta.0 on crio
	I0729 20:05:20.728692 1127876 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/newest-cni-584186/config.json ...
	I0729 20:05:20.728876 1127876 start.go:360] acquireMachinesLock for newest-cni-584186: {Name:mk0d8d947666df844b5fc2c0e0eebbfed69b4140 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 20:05:20.728928 1127876 start.go:364] duration metric: took 31.787µs to acquireMachinesLock for "newest-cni-584186"
	I0729 20:05:20.728947 1127876 start.go:96] Skipping create...Using existing machine configuration
	I0729 20:05:20.728957 1127876 fix.go:54] fixHost starting: 
	I0729 20:05:20.729220 1127876 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 20:05:20.729260 1127876 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 20:05:20.743619 1127876 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37697
	I0729 20:05:20.744195 1127876 main.go:141] libmachine: () Calling .GetVersion
	I0729 20:05:20.744730 1127876 main.go:141] libmachine: Using API Version  1
	I0729 20:05:20.744758 1127876 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 20:05:20.745090 1127876 main.go:141] libmachine: () Calling .GetMachineName
	I0729 20:05:20.745315 1127876 main.go:141] libmachine: (newest-cni-584186) Calling .DriverName
	I0729 20:05:20.745476 1127876 main.go:141] libmachine: (newest-cni-584186) Calling .GetState
	I0729 20:05:20.747127 1127876 fix.go:112] recreateIfNeeded on newest-cni-584186: state=Stopped err=<nil>
	I0729 20:05:20.747159 1127876 main.go:141] libmachine: (newest-cni-584186) Calling .DriverName
	W0729 20:05:20.747348 1127876 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 20:05:20.749000 1127876 out.go:177] * Restarting existing kvm2 VM for "newest-cni-584186" ...
	I0729 20:05:20.750002 1127876 main.go:141] libmachine: (newest-cni-584186) Calling .Start
	I0729 20:05:20.750165 1127876 main.go:141] libmachine: (newest-cni-584186) Ensuring networks are active...
	I0729 20:05:20.751039 1127876 main.go:141] libmachine: (newest-cni-584186) Ensuring network default is active
	I0729 20:05:20.751420 1127876 main.go:141] libmachine: (newest-cni-584186) Ensuring network mk-newest-cni-584186 is active
	I0729 20:05:20.751760 1127876 main.go:141] libmachine: (newest-cni-584186) Getting domain xml...
	I0729 20:05:20.752414 1127876 main.go:141] libmachine: (newest-cni-584186) Creating domain...
	I0729 20:05:21.970284 1127876 main.go:141] libmachine: (newest-cni-584186) Waiting to get IP...
	I0729 20:05:21.971329 1127876 main.go:141] libmachine: (newest-cni-584186) DBG | domain newest-cni-584186 has defined MAC address 52:54:00:60:e1:97 in network mk-newest-cni-584186
	I0729 20:05:21.971749 1127876 main.go:141] libmachine: (newest-cni-584186) DBG | unable to find current IP address of domain newest-cni-584186 in network mk-newest-cni-584186
	I0729 20:05:21.971842 1127876 main.go:141] libmachine: (newest-cni-584186) DBG | I0729 20:05:21.971733 1127911 retry.go:31] will retry after 282.590845ms: waiting for machine to come up
	I0729 20:05:22.256459 1127876 main.go:141] libmachine: (newest-cni-584186) DBG | domain newest-cni-584186 has defined MAC address 52:54:00:60:e1:97 in network mk-newest-cni-584186
	I0729 20:05:22.256932 1127876 main.go:141] libmachine: (newest-cni-584186) DBG | unable to find current IP address of domain newest-cni-584186 in network mk-newest-cni-584186
	I0729 20:05:22.256963 1127876 main.go:141] libmachine: (newest-cni-584186) DBG | I0729 20:05:22.256880 1127911 retry.go:31] will retry after 313.47593ms: waiting for machine to come up
	I0729 20:05:22.572405 1127876 main.go:141] libmachine: (newest-cni-584186) DBG | domain newest-cni-584186 has defined MAC address 52:54:00:60:e1:97 in network mk-newest-cni-584186
	I0729 20:05:22.572861 1127876 main.go:141] libmachine: (newest-cni-584186) DBG | unable to find current IP address of domain newest-cni-584186 in network mk-newest-cni-584186
	I0729 20:05:22.572894 1127876 main.go:141] libmachine: (newest-cni-584186) DBG | I0729 20:05:22.572812 1127911 retry.go:31] will retry after 473.465375ms: waiting for machine to come up
	I0729 20:05:23.047395 1127876 main.go:141] libmachine: (newest-cni-584186) DBG | domain newest-cni-584186 has defined MAC address 52:54:00:60:e1:97 in network mk-newest-cni-584186
	I0729 20:05:23.047811 1127876 main.go:141] libmachine: (newest-cni-584186) DBG | unable to find current IP address of domain newest-cni-584186 in network mk-newest-cni-584186
	I0729 20:05:23.047877 1127876 main.go:141] libmachine: (newest-cni-584186) DBG | I0729 20:05:23.047796 1127911 retry.go:31] will retry after 578.411567ms: waiting for machine to come up
	I0729 20:05:23.627694 1127876 main.go:141] libmachine: (newest-cni-584186) DBG | domain newest-cni-584186 has defined MAC address 52:54:00:60:e1:97 in network mk-newest-cni-584186
	I0729 20:05:23.628228 1127876 main.go:141] libmachine: (newest-cni-584186) DBG | unable to find current IP address of domain newest-cni-584186 in network mk-newest-cni-584186
	I0729 20:05:23.628267 1127876 main.go:141] libmachine: (newest-cni-584186) DBG | I0729 20:05:23.628167 1127911 retry.go:31] will retry after 477.787564ms: waiting for machine to come up
	I0729 20:05:24.107803 1127876 main.go:141] libmachine: (newest-cni-584186) DBG | domain newest-cni-584186 has defined MAC address 52:54:00:60:e1:97 in network mk-newest-cni-584186
	I0729 20:05:24.108240 1127876 main.go:141] libmachine: (newest-cni-584186) DBG | unable to find current IP address of domain newest-cni-584186 in network mk-newest-cni-584186
	I0729 20:05:24.108271 1127876 main.go:141] libmachine: (newest-cni-584186) DBG | I0729 20:05:24.108182 1127911 retry.go:31] will retry after 837.951524ms: waiting for machine to come up
	I0729 20:05:24.948197 1127876 main.go:141] libmachine: (newest-cni-584186) DBG | domain newest-cni-584186 has defined MAC address 52:54:00:60:e1:97 in network mk-newest-cni-584186
	I0729 20:05:24.948673 1127876 main.go:141] libmachine: (newest-cni-584186) DBG | unable to find current IP address of domain newest-cni-584186 in network mk-newest-cni-584186
	I0729 20:05:24.948703 1127876 main.go:141] libmachine: (newest-cni-584186) DBG | I0729 20:05:24.948616 1127911 retry.go:31] will retry after 934.783435ms: waiting for machine to come up
	I0729 20:05:25.885131 1127876 main.go:141] libmachine: (newest-cni-584186) DBG | domain newest-cni-584186 has defined MAC address 52:54:00:60:e1:97 in network mk-newest-cni-584186
	I0729 20:05:25.885639 1127876 main.go:141] libmachine: (newest-cni-584186) DBG | unable to find current IP address of domain newest-cni-584186 in network mk-newest-cni-584186
	I0729 20:05:25.885668 1127876 main.go:141] libmachine: (newest-cni-584186) DBG | I0729 20:05:25.885587 1127911 retry.go:31] will retry after 1.434878685s: waiting for machine to come up
	I0729 20:05:27.322077 1127876 main.go:141] libmachine: (newest-cni-584186) DBG | domain newest-cni-584186 has defined MAC address 52:54:00:60:e1:97 in network mk-newest-cni-584186
	I0729 20:05:27.322554 1127876 main.go:141] libmachine: (newest-cni-584186) DBG | unable to find current IP address of domain newest-cni-584186 in network mk-newest-cni-584186
	I0729 20:05:27.322583 1127876 main.go:141] libmachine: (newest-cni-584186) DBG | I0729 20:05:27.322504 1127911 retry.go:31] will retry after 1.203743247s: waiting for machine to come up
	I0729 20:05:28.527698 1127876 main.go:141] libmachine: (newest-cni-584186) DBG | domain newest-cni-584186 has defined MAC address 52:54:00:60:e1:97 in network mk-newest-cni-584186
	I0729 20:05:28.528258 1127876 main.go:141] libmachine: (newest-cni-584186) DBG | unable to find current IP address of domain newest-cni-584186 in network mk-newest-cni-584186
	I0729 20:05:28.528288 1127876 main.go:141] libmachine: (newest-cni-584186) DBG | I0729 20:05:28.528214 1127911 retry.go:31] will retry after 1.552796062s: waiting for machine to come up
	I0729 20:05:30.083002 1127876 main.go:141] libmachine: (newest-cni-584186) DBG | domain newest-cni-584186 has defined MAC address 52:54:00:60:e1:97 in network mk-newest-cni-584186
	I0729 20:05:30.083461 1127876 main.go:141] libmachine: (newest-cni-584186) DBG | unable to find current IP address of domain newest-cni-584186 in network mk-newest-cni-584186
	I0729 20:05:30.083490 1127876 main.go:141] libmachine: (newest-cni-584186) DBG | I0729 20:05:30.083421 1127911 retry.go:31] will retry after 1.847859847s: waiting for machine to come up
	I0729 20:05:31.933545 1127876 main.go:141] libmachine: (newest-cni-584186) DBG | domain newest-cni-584186 has defined MAC address 52:54:00:60:e1:97 in network mk-newest-cni-584186
	I0729 20:05:31.934029 1127876 main.go:141] libmachine: (newest-cni-584186) DBG | unable to find current IP address of domain newest-cni-584186 in network mk-newest-cni-584186
	I0729 20:05:31.934094 1127876 main.go:141] libmachine: (newest-cni-584186) DBG | I0729 20:05:31.933995 1127911 retry.go:31] will retry after 3.312803809s: waiting for machine to come up
	I0729 20:05:35.247993 1127876 main.go:141] libmachine: (newest-cni-584186) DBG | domain newest-cni-584186 has defined MAC address 52:54:00:60:e1:97 in network mk-newest-cni-584186
	I0729 20:05:35.248485 1127876 main.go:141] libmachine: (newest-cni-584186) DBG | unable to find current IP address of domain newest-cni-584186 in network mk-newest-cni-584186
	I0729 20:05:35.248525 1127876 main.go:141] libmachine: (newest-cni-584186) DBG | I0729 20:05:35.248451 1127911 retry.go:31] will retry after 4.141517222s: waiting for machine to come up
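	The "will retry after ...: waiting for machine to come up" lines above show the driver polling libvirt for the restarted domain's IP with growing, jittered delays. The following is a minimal illustrative sketch of that backoff pattern only; getIP and the specific delay growth are hypothetical stand-ins, not minikube's actual retry.go implementation.

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// getIP stands in for querying libvirt for the domain's current IP address.
	// Here it simply fails for the first few attempts to mimic a VM booting.
	func getIP(attempt int) (string, error) {
		if attempt < 5 {
			return "", errors.New("unable to find current IP address of domain")
		}
		return "192.168.39.170", nil
	}

	func main() {
		delay := 250 * time.Millisecond
		for attempt := 1; ; attempt++ {
			ip, err := getIP(attempt)
			if err == nil {
				fmt.Printf("machine is up at %s after %d attempts\n", ip, attempt)
				return
			}
			// Grow the delay and add jitter, roughly matching the increasing
			// 282ms, 313ms, 473ms, ... intervals seen in the log above.
			wait := delay + time.Duration(rand.Int63n(int64(delay)))
			fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
			time.Sleep(wait)
			delay = delay * 3 / 2
		}
	}

	Each failed probe sleeps a little longer than the last, which is why the logged intervals climb from a few hundred milliseconds toward several seconds before the machine reports an address.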
	
	
	==> CRI-O <==
	Jul 29 20:05:38 embed-certs-358053 crio[729]: time="2024-07-29 20:05:38.130265291Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:44a921ee0c2d664f1e9e95884be87a5447982a25b7a8266cc5c7ffacd694f1f8,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:7c484501-fa8b-4d2d-b7c7-faea3b6b0891,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722282551857493997,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c484501-fa8b-4d2d-b7c7-faea3b6b0891,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube
-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-07-29T19:49:11.203798406Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:96ec92ef0676c199661078574e367f3f28ac007799ad384e57dedbf8a951bffa,Metadata:&PodSandboxMetadata{Name:metrics-server-569cc877fc-gpz72,Uid:cb992ca6-11f3-4826-b701-6789d3e3e9c0,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722282551810194001,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-569cc877fc-gpz72,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb992ca6-11f3-4826-b701-6789d3e3e9c
0,k8s-app: metrics-server,pod-template-hash: 569cc877fc,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-29T19:49:11.503809484Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:c1048e65290aafc14295729559229fa4e00f73c0d8217e3fe3152ed74a19924c,Metadata:&PodSandboxMetadata{Name:kube-proxy-phmxr,Uid:73020161-bb80-445c-ae4f-d1486e18a32e,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722282551551186986,Labels:map[string]string{controller-revision-hash: 5bbc78d4f8,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-phmxr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 73020161-bb80-445c-ae4f-d1486e18a32e,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-29T19:49:09.732764019Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:ee62d1e0dc3720347c0a27e9a4d9cf9e058fa3479b27e101aea673444eb02029,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-rnpqh,Ui
d:fd0f6d7f-a55a-4556-b5e3-8ed4e555aaea,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722282550850639665,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-rnpqh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd0f6d7f-a55a-4556-b5e3-8ed4e555aaea,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-29T19:49:10.241112871Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:1f78dc9468bafb44fe97894af39996605511981bf3804da23b64673d3288dc92,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-62wzl,Uid:c0cf63a3-98a8-4107-8b51-3b9a39695a6c,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722282550846214928,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-62wzl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0cf63a3-98a8-4107-8b51-3b9a39695a6c,k8s-app: kube-dns,pod-templa
te-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-29T19:49:10.216887178Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:e4c720b5d85637c05297d94da15f125c948adf03da5d47f457a92a32e15ca2c2,Metadata:&PodSandboxMetadata{Name:etcd-embed-certs-358053,Uid:2493765d9dfce0eab5d73d69da98de00,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722282530249722558,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-embed-certs-358053,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2493765d9dfce0eab5d73d69da98de00,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.61.201:2379,kubernetes.io/config.hash: 2493765d9dfce0eab5d73d69da98de00,kubernetes.io/config.seen: 2024-07-29T19:48:49.778170042Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:3188d7c2d42501409f0d49b6d321a48578f3933ff755b770c8fa150cb99ebe1c,
Metadata:&PodSandboxMetadata{Name:kube-controller-manager-embed-certs-358053,Uid:3b5276400f50ad207147bfd9245e9e7a,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722282530249403765,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-embed-certs-358053,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b5276400f50ad207147bfd9245e9e7a,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 3b5276400f50ad207147bfd9245e9e7a,kubernetes.io/config.seen: 2024-07-29T19:48:49.778167767Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:1375928e902a66a31cbca2b1c8ed2b21bbce3a356834beace6c0b992e451aaf4,Metadata:&PodSandboxMetadata{Name:kube-scheduler-embed-certs-358053,Uid:41449673d5f25016910d76931724b851,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722282530245992524,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.
name: POD,io.kubernetes.pod.name: kube-scheduler-embed-certs-358053,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41449673d5f25016910d76931724b851,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 41449673d5f25016910d76931724b851,kubernetes.io/config.seen: 2024-07-29T19:48:49.778168831Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:c4ef49cafc0f8fce748c92ce00dff391468d3be84d256deba94f9eb616d271a2,Metadata:&PodSandboxMetadata{Name:kube-apiserver-embed-certs-358053,Uid:977d36a2ce2b1f645445d678c5b902af,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722282530245326809,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-embed-certs-358053,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 977d36a2ce2b1f645445d678c5b902af,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.6
1.201:8443,kubernetes.io/config.hash: 977d36a2ce2b1f645445d678c5b902af,kubernetes.io/config.seen: 2024-07-29T19:48:49.778163726Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=e39c266b-2974-4489-b2d7-8732a88ec6df name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 29 20:05:38 embed-certs-358053 crio[729]: time="2024-07-29 20:05:38.130915709Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=001086eb-b3e1-41e6-b0cf-feb23305f072 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 20:05:38 embed-certs-358053 crio[729]: time="2024-07-29 20:05:38.131049529Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=001086eb-b3e1-41e6-b0cf-feb23305f072 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 20:05:38 embed-certs-358053 crio[729]: time="2024-07-29 20:05:38.131230867Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1281c537c6df1d88b22bdc206c5ab613efa97b1d395992f2f616d7745a58eb77,PodSandboxId:44a921ee0c2d664f1e9e95884be87a5447982a25b7a8266cc5c7ffacd694f1f8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722282552075821569,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c484501-fa8b-4d2d-b7c7-faea3b6b0891,},Annotations:map[string]string{io.kubernetes.container.hash: 48235422,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9de6d84f7d47e58b1bd321cd36210fdb789f353ebbb1c496b6431f968da98f55,PodSandboxId:c1048e65290aafc14295729559229fa4e00f73c0d8217e3fe3152ed74a19924c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722282551994882669,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-phmxr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 73020161-bb80-445c-ae4f-d1486e18a32e,},Annotations:map[string]string{io.kubernetes.container.hash: ebf7f36,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /de
v/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cce8dbbbfa9e7f5d2c375cc93e0ddfb4aa19a070bb36de2d1b93c9000a1b9609,PodSandboxId:ee62d1e0dc3720347c0a27e9a4d9cf9e058fa3479b27e101aea673444eb02029,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722282551543094536,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rnpqh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd0f6d7f-a55a-4556-b5e3-8ed4e555aaea,},Annotations:map[string]string{io.kubernetes.container.hash: 842f8725,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"conta
inerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4205bd7d485010d54329826a74257b1cdd7fe4b35223a6d236086dfaa12282a,PodSandboxId:1f78dc9468bafb44fe97894af39996605511981bf3804da23b64673d3288dc92,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722282551454833938,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-62wzl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0cf63a3-98a8-4107-8b51-3b9a39695a6
c,},Annotations:map[string]string{io.kubernetes.container.hash: c9a6ded5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aee4a8eb8429506cd6a40a23568ae6fdeb332abcf88402f02b124f8b6e53678b,PodSandboxId:1375928e902a66a31cbca2b1c8ed2b21bbce3a356834beace6c0b992e451aaf4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722282530529688816,Labels:ma
p[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-358053,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41449673d5f25016910d76931724b851,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bfa838a5a4f41e81f9d8cbbd5d5b931b2eb9342d201d22141ee26d00c11be9b4,PodSandboxId:e4c720b5d85637c05297d94da15f125c948adf03da5d47f457a92a32e15ca2c2,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722282530500629428,Labels:map[string]string{io.kub
ernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-358053,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2493765d9dfce0eab5d73d69da98de00,},Annotations:map[string]string{io.kubernetes.container.hash: 793a486f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3f1b2259bc6f29cc226d1e45dc2f2cc4afa8db01e58b6097724a3108fa83551,PodSandboxId:3188d7c2d42501409f0d49b6d321a48578f3933ff755b770c8fa150cb99ebe1c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722282530453328221,Labels:map[string]string{io.kubernetes.container.name:
kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-358053,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b5276400f50ad207147bfd9245e9e7a,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:556c56bb813dcb0e9fe6c39388e409948a2f82151ffd03085641374a44cecc06,PodSandboxId:c4ef49cafc0f8fce748c92ce00dff391468d3be84d256deba94f9eb616d271a2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722282530403249916,Labels:map[string]string{io.kubernetes.container
.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-358053,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 977d36a2ce2b1f645445d678c5b902af,},Annotations:map[string]string{io.kubernetes.container.hash: 29650fbf,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=001086eb-b3e1-41e6-b0cf-feb23305f072 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 20:05:38 embed-certs-358053 crio[729]: time="2024-07-29 20:05:38.159103607Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=cf8945ae-1017-4ff2-91e8-d861df4ec65a name=/runtime.v1.RuntimeService/Version
	Jul 29 20:05:38 embed-certs-358053 crio[729]: time="2024-07-29 20:05:38.159174207Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=cf8945ae-1017-4ff2-91e8-d861df4ec65a name=/runtime.v1.RuntimeService/Version
	Jul 29 20:05:38 embed-certs-358053 crio[729]: time="2024-07-29 20:05:38.160198846Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=df40f9e4-7bc6-4613-9856-ca3cddab5b0f name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 20:05:38 embed-certs-358053 crio[729]: time="2024-07-29 20:05:38.160670949Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722283538160648443,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=df40f9e4-7bc6-4613-9856-ca3cddab5b0f name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 20:05:38 embed-certs-358053 crio[729]: time="2024-07-29 20:05:38.161248414Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e8c84e81-9c5f-4bc4-bc6b-6fc993c4251c name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 20:05:38 embed-certs-358053 crio[729]: time="2024-07-29 20:05:38.161375786Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e8c84e81-9c5f-4bc4-bc6b-6fc993c4251c name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 20:05:38 embed-certs-358053 crio[729]: time="2024-07-29 20:05:38.161575595Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1281c537c6df1d88b22bdc206c5ab613efa97b1d395992f2f616d7745a58eb77,PodSandboxId:44a921ee0c2d664f1e9e95884be87a5447982a25b7a8266cc5c7ffacd694f1f8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722282552075821569,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c484501-fa8b-4d2d-b7c7-faea3b6b0891,},Annotations:map[string]string{io.kubernetes.container.hash: 48235422,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9de6d84f7d47e58b1bd321cd36210fdb789f353ebbb1c496b6431f968da98f55,PodSandboxId:c1048e65290aafc14295729559229fa4e00f73c0d8217e3fe3152ed74a19924c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722282551994882669,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-phmxr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 73020161-bb80-445c-ae4f-d1486e18a32e,},Annotations:map[string]string{io.kubernetes.container.hash: ebf7f36,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /de
v/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cce8dbbbfa9e7f5d2c375cc93e0ddfb4aa19a070bb36de2d1b93c9000a1b9609,PodSandboxId:ee62d1e0dc3720347c0a27e9a4d9cf9e058fa3479b27e101aea673444eb02029,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722282551543094536,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rnpqh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd0f6d7f-a55a-4556-b5e3-8ed4e555aaea,},Annotations:map[string]string{io.kubernetes.container.hash: 842f8725,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"conta
inerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4205bd7d485010d54329826a74257b1cdd7fe4b35223a6d236086dfaa12282a,PodSandboxId:1f78dc9468bafb44fe97894af39996605511981bf3804da23b64673d3288dc92,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722282551454833938,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-62wzl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0cf63a3-98a8-4107-8b51-3b9a39695a6
c,},Annotations:map[string]string{io.kubernetes.container.hash: c9a6ded5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aee4a8eb8429506cd6a40a23568ae6fdeb332abcf88402f02b124f8b6e53678b,PodSandboxId:1375928e902a66a31cbca2b1c8ed2b21bbce3a356834beace6c0b992e451aaf4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722282530529688816,Labels:ma
p[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-358053,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41449673d5f25016910d76931724b851,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bfa838a5a4f41e81f9d8cbbd5d5b931b2eb9342d201d22141ee26d00c11be9b4,PodSandboxId:e4c720b5d85637c05297d94da15f125c948adf03da5d47f457a92a32e15ca2c2,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722282530500629428,Labels:map[string]string{io.kub
ernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-358053,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2493765d9dfce0eab5d73d69da98de00,},Annotations:map[string]string{io.kubernetes.container.hash: 793a486f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3f1b2259bc6f29cc226d1e45dc2f2cc4afa8db01e58b6097724a3108fa83551,PodSandboxId:3188d7c2d42501409f0d49b6d321a48578f3933ff755b770c8fa150cb99ebe1c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722282530453328221,Labels:map[string]string{io.kubernetes.container.name:
kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-358053,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b5276400f50ad207147bfd9245e9e7a,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:556c56bb813dcb0e9fe6c39388e409948a2f82151ffd03085641374a44cecc06,PodSandboxId:c4ef49cafc0f8fce748c92ce00dff391468d3be84d256deba94f9eb616d271a2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722282530403249916,Labels:map[string]string{io.kubernetes.container
.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-358053,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 977d36a2ce2b1f645445d678c5b902af,},Annotations:map[string]string{io.kubernetes.container.hash: 29650fbf,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e8c84e81-9c5f-4bc4-bc6b-6fc993c4251c name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 20:05:38 embed-certs-358053 crio[729]: time="2024-07-29 20:05:38.203078874Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f722602b-84c8-43db-a569-479d4b4e6832 name=/runtime.v1.RuntimeService/Version
	Jul 29 20:05:38 embed-certs-358053 crio[729]: time="2024-07-29 20:05:38.203173517Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f722602b-84c8-43db-a569-479d4b4e6832 name=/runtime.v1.RuntimeService/Version
	Jul 29 20:05:38 embed-certs-358053 crio[729]: time="2024-07-29 20:05:38.204535650Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9f2d17ef-4f44-402d-af5e-b0d54d82533f name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 20:05:38 embed-certs-358053 crio[729]: time="2024-07-29 20:05:38.204946499Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722283538204923606,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9f2d17ef-4f44-402d-af5e-b0d54d82533f name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 20:05:38 embed-certs-358053 crio[729]: time="2024-07-29 20:05:38.205804757Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fd7cd93b-6096-4f17-9e56-01109ea7fc7e name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 20:05:38 embed-certs-358053 crio[729]: time="2024-07-29 20:05:38.205878476Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fd7cd93b-6096-4f17-9e56-01109ea7fc7e name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 20:05:38 embed-certs-358053 crio[729]: time="2024-07-29 20:05:38.206054796Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1281c537c6df1d88b22bdc206c5ab613efa97b1d395992f2f616d7745a58eb77,PodSandboxId:44a921ee0c2d664f1e9e95884be87a5447982a25b7a8266cc5c7ffacd694f1f8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722282552075821569,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c484501-fa8b-4d2d-b7c7-faea3b6b0891,},Annotations:map[string]string{io.kubernetes.container.hash: 48235422,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9de6d84f7d47e58b1bd321cd36210fdb789f353ebbb1c496b6431f968da98f55,PodSandboxId:c1048e65290aafc14295729559229fa4e00f73c0d8217e3fe3152ed74a19924c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722282551994882669,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-phmxr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 73020161-bb80-445c-ae4f-d1486e18a32e,},Annotations:map[string]string{io.kubernetes.container.hash: ebf7f36,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /de
v/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cce8dbbbfa9e7f5d2c375cc93e0ddfb4aa19a070bb36de2d1b93c9000a1b9609,PodSandboxId:ee62d1e0dc3720347c0a27e9a4d9cf9e058fa3479b27e101aea673444eb02029,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722282551543094536,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rnpqh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd0f6d7f-a55a-4556-b5e3-8ed4e555aaea,},Annotations:map[string]string{io.kubernetes.container.hash: 842f8725,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"conta
inerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4205bd7d485010d54329826a74257b1cdd7fe4b35223a6d236086dfaa12282a,PodSandboxId:1f78dc9468bafb44fe97894af39996605511981bf3804da23b64673d3288dc92,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722282551454833938,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-62wzl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0cf63a3-98a8-4107-8b51-3b9a39695a6
c,},Annotations:map[string]string{io.kubernetes.container.hash: c9a6ded5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aee4a8eb8429506cd6a40a23568ae6fdeb332abcf88402f02b124f8b6e53678b,PodSandboxId:1375928e902a66a31cbca2b1c8ed2b21bbce3a356834beace6c0b992e451aaf4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722282530529688816,Labels:ma
p[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-358053,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41449673d5f25016910d76931724b851,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bfa838a5a4f41e81f9d8cbbd5d5b931b2eb9342d201d22141ee26d00c11be9b4,PodSandboxId:e4c720b5d85637c05297d94da15f125c948adf03da5d47f457a92a32e15ca2c2,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722282530500629428,Labels:map[string]string{io.kub
ernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-358053,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2493765d9dfce0eab5d73d69da98de00,},Annotations:map[string]string{io.kubernetes.container.hash: 793a486f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3f1b2259bc6f29cc226d1e45dc2f2cc4afa8db01e58b6097724a3108fa83551,PodSandboxId:3188d7c2d42501409f0d49b6d321a48578f3933ff755b770c8fa150cb99ebe1c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722282530453328221,Labels:map[string]string{io.kubernetes.container.name:
kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-358053,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b5276400f50ad207147bfd9245e9e7a,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:556c56bb813dcb0e9fe6c39388e409948a2f82151ffd03085641374a44cecc06,PodSandboxId:c4ef49cafc0f8fce748c92ce00dff391468d3be84d256deba94f9eb616d271a2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722282530403249916,Labels:map[string]string{io.kubernetes.container
.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-358053,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 977d36a2ce2b1f645445d678c5b902af,},Annotations:map[string]string{io.kubernetes.container.hash: 29650fbf,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fd7cd93b-6096-4f17-9e56-01109ea7fc7e name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 20:05:38 embed-certs-358053 crio[729]: time="2024-07-29 20:05:38.239573293Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e46184fa-7208-4ef3-98ce-60a78fce496d name=/runtime.v1.RuntimeService/Version
	Jul 29 20:05:38 embed-certs-358053 crio[729]: time="2024-07-29 20:05:38.239679713Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e46184fa-7208-4ef3-98ce-60a78fce496d name=/runtime.v1.RuntimeService/Version
	Jul 29 20:05:38 embed-certs-358053 crio[729]: time="2024-07-29 20:05:38.240872336Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ac8adb41-7c6e-4d6d-9ceb-f06243ecc695 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 20:05:38 embed-certs-358053 crio[729]: time="2024-07-29 20:05:38.241401882Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722283538241245213,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ac8adb41-7c6e-4d6d-9ceb-f06243ecc695 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 20:05:38 embed-certs-358053 crio[729]: time="2024-07-29 20:05:38.241820106Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1cc274dd-e47c-4705-a449-ad044fdb0242 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 20:05:38 embed-certs-358053 crio[729]: time="2024-07-29 20:05:38.241884260Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1cc274dd-e47c-4705-a449-ad044fdb0242 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 20:05:38 embed-certs-358053 crio[729]: time="2024-07-29 20:05:38.242109747Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1281c537c6df1d88b22bdc206c5ab613efa97b1d395992f2f616d7745a58eb77,PodSandboxId:44a921ee0c2d664f1e9e95884be87a5447982a25b7a8266cc5c7ffacd694f1f8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722282552075821569,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c484501-fa8b-4d2d-b7c7-faea3b6b0891,},Annotations:map[string]string{io.kubernetes.container.hash: 48235422,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9de6d84f7d47e58b1bd321cd36210fdb789f353ebbb1c496b6431f968da98f55,PodSandboxId:c1048e65290aafc14295729559229fa4e00f73c0d8217e3fe3152ed74a19924c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722282551994882669,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-phmxr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 73020161-bb80-445c-ae4f-d1486e18a32e,},Annotations:map[string]string{io.kubernetes.container.hash: ebf7f36,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /de
v/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cce8dbbbfa9e7f5d2c375cc93e0ddfb4aa19a070bb36de2d1b93c9000a1b9609,PodSandboxId:ee62d1e0dc3720347c0a27e9a4d9cf9e058fa3479b27e101aea673444eb02029,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722282551543094536,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rnpqh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd0f6d7f-a55a-4556-b5e3-8ed4e555aaea,},Annotations:map[string]string{io.kubernetes.container.hash: 842f8725,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"conta
inerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b4205bd7d485010d54329826a74257b1cdd7fe4b35223a6d236086dfaa12282a,PodSandboxId:1f78dc9468bafb44fe97894af39996605511981bf3804da23b64673d3288dc92,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722282551454833938,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-62wzl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0cf63a3-98a8-4107-8b51-3b9a39695a6
c,},Annotations:map[string]string{io.kubernetes.container.hash: c9a6ded5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aee4a8eb8429506cd6a40a23568ae6fdeb332abcf88402f02b124f8b6e53678b,PodSandboxId:1375928e902a66a31cbca2b1c8ed2b21bbce3a356834beace6c0b992e451aaf4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722282530529688816,Labels:ma
p[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-358053,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41449673d5f25016910d76931724b851,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bfa838a5a4f41e81f9d8cbbd5d5b931b2eb9342d201d22141ee26d00c11be9b4,PodSandboxId:e4c720b5d85637c05297d94da15f125c948adf03da5d47f457a92a32e15ca2c2,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722282530500629428,Labels:map[string]string{io.kub
ernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-358053,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2493765d9dfce0eab5d73d69da98de00,},Annotations:map[string]string{io.kubernetes.container.hash: 793a486f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3f1b2259bc6f29cc226d1e45dc2f2cc4afa8db01e58b6097724a3108fa83551,PodSandboxId:3188d7c2d42501409f0d49b6d321a48578f3933ff755b770c8fa150cb99ebe1c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722282530453328221,Labels:map[string]string{io.kubernetes.container.name:
kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-358053,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b5276400f50ad207147bfd9245e9e7a,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:556c56bb813dcb0e9fe6c39388e409948a2f82151ffd03085641374a44cecc06,PodSandboxId:c4ef49cafc0f8fce748c92ce00dff391468d3be84d256deba94f9eb616d271a2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722282530403249916,Labels:map[string]string{io.kubernetes.container
.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-358053,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 977d36a2ce2b1f645445d678c5b902af,},Annotations:map[string]string{io.kubernetes.container.hash: 29650fbf,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1cc274dd-e47c-4705-a449-ad044fdb0242 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	1281c537c6df1       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   16 minutes ago      Running             storage-provisioner       0                   44a921ee0c2d6       storage-provisioner
	9de6d84f7d47e       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1   16 minutes ago      Running             kube-proxy                0                   c1048e65290aa       kube-proxy-phmxr
	cce8dbbbfa9e7       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   16 minutes ago      Running             coredns                   0                   ee62d1e0dc372       coredns-7db6d8ff4d-rnpqh
	b4205bd7d4850       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   16 minutes ago      Running             coredns                   0                   1f78dc9468baf       coredns-7db6d8ff4d-62wzl
	aee4a8eb84295       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2   16 minutes ago      Running             kube-scheduler            2                   1375928e902a6       kube-scheduler-embed-certs-358053
	bfa838a5a4f41       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   16 minutes ago      Running             etcd                      2                   e4c720b5d8563       etcd-embed-certs-358053
	b3f1b2259bc6f       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e   16 minutes ago      Running             kube-controller-manager   2                   3188d7c2d4250       kube-controller-manager-embed-certs-358053
	556c56bb813dc       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d   16 minutes ago      Running             kube-apiserver            2                   c4ef49cafc0f8       kube-apiserver-embed-certs-358053
	
	
	==> coredns [b4205bd7d485010d54329826a74257b1cdd7fe4b35223a6d236086dfaa12282a] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [cce8dbbbfa9e7f5d2c375cc93e0ddfb4aa19a070bb36de2d1b93c9000a1b9609] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               embed-certs-358053
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-358053
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ff26f50543a99741e8a988a34136ffea2b41dbf0
	                    minikube.k8s.io/name=embed-certs-358053
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_29T19_48_56_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 19:48:53 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-358053
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 20:05:34 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 20:04:34 +0000   Mon, 29 Jul 2024 19:48:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 20:04:34 +0000   Mon, 29 Jul 2024 19:48:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 20:04:34 +0000   Mon, 29 Jul 2024 19:48:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 20:04:34 +0000   Mon, 29 Jul 2024 19:48:53 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.201
	  Hostname:    embed-certs-358053
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 919a77fc406c42cbb736d1f4923e4fb9
	  System UUID:                919a77fc-406c-42cb-b736-d1f4923e4fb9
	  Boot ID:                    3e28f549-6640-4789-bb10-01996f19b359
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-62wzl                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 coredns-7db6d8ff4d-rnpqh                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 etcd-embed-certs-358053                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         16m
	  kube-system                 kube-apiserver-embed-certs-358053             250m (12%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-controller-manager-embed-certs-358053    200m (10%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-proxy-phmxr                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-scheduler-embed-certs-358053             100m (5%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 metrics-server-569cc877fc-gpz72               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         16m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 16m                kube-proxy       
	  Normal  Starting                 16m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  16m (x8 over 16m)  kubelet          Node embed-certs-358053 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    16m (x8 over 16m)  kubelet          Node embed-certs-358053 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     16m (x7 over 16m)  kubelet          Node embed-certs-358053 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  16m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 16m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  16m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  16m                kubelet          Node embed-certs-358053 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    16m                kubelet          Node embed-certs-358053 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     16m                kubelet          Node embed-certs-358053 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           16m                node-controller  Node embed-certs-358053 event: Registered Node embed-certs-358053 in Controller
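
The request/limit percentages in the resource tables above are relative to the node's allocatable resources (cpu: 2, memory: 2164184Ki). As a quick sanity check of the "Allocated resources" totals (approximate, rounded the way kubectl rounds):

    950m  / 2000m                 ~= 47%   (cpu requests)
    440Mi / 2164184Ki (~2113Mi)   ~= 20%   (memory requests)
    340Mi / 2164184Ki (~2113Mi)   ~= 16%   (memory limits)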
	
	
	==> dmesg <==
	[  +0.050101] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040142] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.752717] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.409276] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.578997] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000013] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.963264] systemd-fstab-generator[647]: Ignoring "noauto" option for root device
	[  +0.059140] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.067037] systemd-fstab-generator[659]: Ignoring "noauto" option for root device
	[  +0.223536] systemd-fstab-generator[673]: Ignoring "noauto" option for root device
	[  +0.135166] systemd-fstab-generator[685]: Ignoring "noauto" option for root device
	[  +0.307770] systemd-fstab-generator[714]: Ignoring "noauto" option for root device
	[  +4.346772] systemd-fstab-generator[810]: Ignoring "noauto" option for root device
	[  +0.064090] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.787843] systemd-fstab-generator[936]: Ignoring "noauto" option for root device
	[Jul29 19:44] kauditd_printk_skb: 97 callbacks suppressed
	[  +7.788197] kauditd_printk_skb: 84 callbacks suppressed
	[Jul29 19:48] kauditd_printk_skb: 9 callbacks suppressed
	[  +1.298091] systemd-fstab-generator[3604]: Ignoring "noauto" option for root device
	[  +4.540629] kauditd_printk_skb: 53 callbacks suppressed
	[  +1.503562] systemd-fstab-generator[3927]: Ignoring "noauto" option for root device
	[Jul29 19:49] systemd-fstab-generator[4154]: Ignoring "noauto" option for root device
	[  +0.118311] kauditd_printk_skb: 14 callbacks suppressed
	[Jul29 19:50] kauditd_printk_skb: 84 callbacks suppressed
	
	
	==> etcd [bfa838a5a4f41e81f9d8cbbd5d5b931b2eb9342d201d22141ee26d00c11be9b4] <==
	{"level":"info","ts":"2024-07-29T19:48:51.268019Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f000dedbcae268ef is starting a new election at term 1"}
	{"level":"info","ts":"2024-07-29T19:48:51.268165Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f000dedbcae268ef became pre-candidate at term 1"}
	{"level":"info","ts":"2024-07-29T19:48:51.268258Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f000dedbcae268ef received MsgPreVoteResp from f000dedbcae268ef at term 1"}
	{"level":"info","ts":"2024-07-29T19:48:51.268357Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f000dedbcae268ef became candidate at term 2"}
	{"level":"info","ts":"2024-07-29T19:48:51.268382Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f000dedbcae268ef received MsgVoteResp from f000dedbcae268ef at term 2"}
	{"level":"info","ts":"2024-07-29T19:48:51.268461Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f000dedbcae268ef became leader at term 2"}
	{"level":"info","ts":"2024-07-29T19:48:51.268487Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f000dedbcae268ef elected leader f000dedbcae268ef at term 2"}
	{"level":"info","ts":"2024-07-29T19:48:51.272916Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"f000dedbcae268ef","local-member-attributes":"{Name:embed-certs-358053 ClientURLs:[https://192.168.61.201:2379]}","request-path":"/0/members/f000dedbcae268ef/attributes","cluster-id":"334af0e9e11f35f3","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-29T19:48:51.273047Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T19:48:51.276092Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T19:48:51.278366Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T19:48:51.293489Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-29T19:48:51.301351Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-29T19:48:51.318373Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"334af0e9e11f35f3","local-member-id":"f000dedbcae268ef","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T19:48:51.31873Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T19:48:51.319531Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T19:48:51.320098Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.201:2379"}
	{"level":"info","ts":"2024-07-29T19:48:51.320321Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-29T19:58:51.39755Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":717}
	{"level":"info","ts":"2024-07-29T19:58:51.407032Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":717,"took":"8.989705ms","hash":2055595350,"current-db-size-bytes":2334720,"current-db-size":"2.3 MB","current-db-size-in-use-bytes":2334720,"current-db-size-in-use":"2.3 MB"}
	{"level":"info","ts":"2024-07-29T19:58:51.407085Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2055595350,"revision":717,"compact-revision":-1}
	{"level":"info","ts":"2024-07-29T20:03:51.40553Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":961}
	{"level":"info","ts":"2024-07-29T20:03:51.409476Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":961,"took":"3.409448ms","hash":2197949126,"current-db-size-bytes":2334720,"current-db-size":"2.3 MB","current-db-size-in-use-bytes":1597440,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2024-07-29T20:03:51.409551Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2197949126,"revision":961,"compact-revision":717}
	{"level":"info","ts":"2024-07-29T20:04:51.104522Z","caller":"traceutil/trace.go:171","msg":"trace[1976786764] transaction","detail":"{read_only:false; response_revision:1254; number_of_response:1; }","duration":"104.995312ms","start":"2024-07-29T20:04:50.999477Z","end":"2024-07-29T20:04:51.104472Z","steps":["trace[1976786764] 'process raft request'  (duration: 104.393238ms)"],"step_count":1}
	
	
	==> kernel <==
	 20:05:38 up 22 min,  0 users,  load average: 0.06, 0.23, 0.27
	Linux embed-certs-358053 5.10.207 #1 SMP Tue Jul 23 04:25:44 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [556c56bb813dcb0e9fe6c39388e409948a2f82151ffd03085641374a44cecc06] <==
	I0729 19:59:54.022522       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0729 20:01:54.021039       1 handler_proxy.go:93] no RequestInfo found in the context
	E0729 20:01:54.021507       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0729 20:01:54.021547       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0729 20:01:54.022689       1 handler_proxy.go:93] no RequestInfo found in the context
	E0729 20:01:54.022809       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0729 20:01:54.022840       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0729 20:03:53.026987       1 handler_proxy.go:93] no RequestInfo found in the context
	E0729 20:03:53.027105       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0729 20:03:54.027221       1 handler_proxy.go:93] no RequestInfo found in the context
	E0729 20:03:54.028170       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0729 20:03:54.028339       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0729 20:03:54.027444       1 handler_proxy.go:93] no RequestInfo found in the context
	E0729 20:03:54.028480       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0729 20:03:54.029389       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0729 20:04:54.028710       1 handler_proxy.go:93] no RequestInfo found in the context
	E0729 20:04:54.028886       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0729 20:04:54.028915       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0729 20:04:54.029851       1 handler_proxy.go:93] no RequestInfo found in the context
	E0729 20:04:54.029928       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0729 20:04:54.029938       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
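
The repeated 503s above mean the aggregated API v1beta1.metrics.k8s.io has no healthy backend, which is consistent with the metrics-server pod never starting. A minimal manual check, assuming the embed-certs-358053 kubeconfig context is still reachable outside the test harness, would be:

    kubectl --context embed-certs-358053 get apiservice v1beta1.metrics.k8s.io
    kubectl --context embed-certs-358053 get apiservice v1beta1.metrics.k8s.io \
      -o jsonpath='{.status.conditions[?(@.type=="Available")].message}'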
	
	
	==> kube-controller-manager [b3f1b2259bc6f29cc226d1e45dc2f2cc4afa8db01e58b6097724a3108fa83551] <==
	I0729 20:00:09.877825       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0729 20:00:10.655685       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="314.991µs"
	I0729 20:00:21.654893       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="60.754µs"
	E0729 20:00:39.207252       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 20:00:39.885214       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 20:01:09.211672       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 20:01:09.895709       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 20:01:39.217331       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 20:01:39.902975       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 20:02:09.224163       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 20:02:09.910872       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 20:02:39.229511       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 20:02:39.918887       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 20:03:09.234070       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 20:03:09.927555       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 20:03:39.239422       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 20:03:39.935435       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 20:04:09.246327       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 20:04:09.944053       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 20:04:39.250963       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 20:04:39.951693       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 20:05:09.256168       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 20:05:09.961625       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0729 20:05:16.652587       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="345.99µs"
	I0729 20:05:30.656750       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="319.237µs"
	
	
	==> kube-proxy [9de6d84f7d47e58b1bd321cd36210fdb789f353ebbb1c496b6431f968da98f55] <==
	I0729 19:49:12.312326       1 server_linux.go:69] "Using iptables proxy"
	I0729 19:49:12.323877       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.61.201"]
	I0729 19:49:12.370503       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0729 19:49:12.370596       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0729 19:49:12.370632       1 server_linux.go:165] "Using iptables Proxier"
	I0729 19:49:12.373556       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0729 19:49:12.374017       1 server.go:872] "Version info" version="v1.30.3"
	I0729 19:49:12.374084       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 19:49:12.376186       1 config.go:192] "Starting service config controller"
	I0729 19:49:12.376545       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0729 19:49:12.376663       1 config.go:101] "Starting endpoint slice config controller"
	I0729 19:49:12.376745       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0729 19:49:12.378529       1 config.go:319] "Starting node config controller"
	I0729 19:49:12.378556       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0729 19:49:12.477502       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0729 19:49:12.477562       1 shared_informer.go:320] Caches are synced for service config
	I0729 19:49:12.478966       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [aee4a8eb8429506cd6a40a23568ae6fdeb332abcf88402f02b124f8b6e53678b] <==
	E0729 19:48:53.034051       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0729 19:48:53.034058       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0729 19:48:53.034083       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0729 19:48:53.034098       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0729 19:48:53.034126       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0729 19:48:53.034143       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0729 19:48:53.851023       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0729 19:48:53.851072       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0729 19:48:53.885520       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0729 19:48:53.885568       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0729 19:48:54.034922       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0729 19:48:54.035033       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0729 19:48:54.042621       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0729 19:48:54.042740       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0729 19:48:54.057441       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0729 19:48:54.057771       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0729 19:48:54.076694       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0729 19:48:54.078011       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0729 19:48:54.080404       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0729 19:48:54.080471       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0729 19:48:54.172265       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0729 19:48:54.172384       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0729 19:48:54.178049       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0729 19:48:54.178096       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0729 19:48:56.427556       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 29 20:03:05 embed-certs-358053 kubelet[3934]: E0729 20:03:05.638545    3934 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-gpz72" podUID="cb992ca6-11f3-4826-b701-6789d3e3e9c0"
	Jul 29 20:03:18 embed-certs-358053 kubelet[3934]: E0729 20:03:18.636867    3934 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-gpz72" podUID="cb992ca6-11f3-4826-b701-6789d3e3e9c0"
	Jul 29 20:03:30 embed-certs-358053 kubelet[3934]: E0729 20:03:30.637093    3934 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-gpz72" podUID="cb992ca6-11f3-4826-b701-6789d3e3e9c0"
	Jul 29 20:03:45 embed-certs-358053 kubelet[3934]: E0729 20:03:45.637870    3934 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-gpz72" podUID="cb992ca6-11f3-4826-b701-6789d3e3e9c0"
	Jul 29 20:03:55 embed-certs-358053 kubelet[3934]: E0729 20:03:55.661334    3934 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 20:03:55 embed-certs-358053 kubelet[3934]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 20:03:55 embed-certs-358053 kubelet[3934]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 20:03:55 embed-certs-358053 kubelet[3934]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 20:03:55 embed-certs-358053 kubelet[3934]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 20:04:00 embed-certs-358053 kubelet[3934]: E0729 20:04:00.637014    3934 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-gpz72" podUID="cb992ca6-11f3-4826-b701-6789d3e3e9c0"
	Jul 29 20:04:12 embed-certs-358053 kubelet[3934]: E0729 20:04:12.637361    3934 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-gpz72" podUID="cb992ca6-11f3-4826-b701-6789d3e3e9c0"
	Jul 29 20:04:24 embed-certs-358053 kubelet[3934]: E0729 20:04:24.637266    3934 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-gpz72" podUID="cb992ca6-11f3-4826-b701-6789d3e3e9c0"
	Jul 29 20:04:37 embed-certs-358053 kubelet[3934]: E0729 20:04:37.636636    3934 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-gpz72" podUID="cb992ca6-11f3-4826-b701-6789d3e3e9c0"
	Jul 29 20:04:51 embed-certs-358053 kubelet[3934]: E0729 20:04:51.636819    3934 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-gpz72" podUID="cb992ca6-11f3-4826-b701-6789d3e3e9c0"
	Jul 29 20:04:55 embed-certs-358053 kubelet[3934]: E0729 20:04:55.661415    3934 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 20:04:55 embed-certs-358053 kubelet[3934]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 20:04:55 embed-certs-358053 kubelet[3934]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 20:04:55 embed-certs-358053 kubelet[3934]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 20:04:55 embed-certs-358053 kubelet[3934]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 20:05:04 embed-certs-358053 kubelet[3934]: E0729 20:05:04.728561    3934 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jul 29 20:05:04 embed-certs-358053 kubelet[3934]: E0729 20:05:04.728661    3934 kuberuntime_image.go:55] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jul 29 20:05:04 embed-certs-358053 kubelet[3934]: E0729 20:05:04.728941    3934 kuberuntime_manager.go:1256] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2gpj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,Recur
siveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:fals
e,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-569cc877fc-gpz72_kube-system(cb992ca6-11f3-4826-b701-6789d3e3e9c0): ErrImagePull: pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Jul 29 20:05:04 embed-certs-358053 kubelet[3934]: E0729 20:05:04.729010    3934 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-569cc877fc-gpz72" podUID="cb992ca6-11f3-4826-b701-6789d3e3e9c0"
	Jul 29 20:05:16 embed-certs-358053 kubelet[3934]: E0729 20:05:16.636736    3934 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-gpz72" podUID="cb992ca6-11f3-4826-b701-6789d3e3e9c0"
	Jul 29 20:05:30 embed-certs-358053 kubelet[3934]: E0729 20:05:30.637512    3934 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-gpz72" podUID="cb992ca6-11f3-4826-b701-6789d3e3e9c0"
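
The ErrImagePull/ImagePullBackOff entries above are expected rather than a regression: the metrics-server addon is enabled with its registry re-pointed at the unreachable host fake.domain (the same "addons enable metrics-server ... --registries=MetricsServer=fake.domain" invocation is visible for other profiles in the audit table further down), so the kubelet can never resolve the image. One way to confirm the image reference being pulled, assuming the addon's usual Deployment name metrics-server in kube-system:

    kubectl --context embed-certs-358053 -n kube-system get deploy metrics-server \
      -o jsonpath='{.spec.template.spec.containers[0].image}'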
	
	
	==> storage-provisioner [1281c537c6df1d88b22bdc206c5ab613efa97b1d395992f2f616d7745a58eb77] <==
	I0729 19:49:12.261091       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0729 19:49:12.271312       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0729 19:49:12.271395       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0729 19:49:12.282494       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0729 19:49:12.282655       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-358053_2c9f1bf5-6151-42a9-81bc-bc1424d29abf!
	I0729 19:49:12.283660       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"b8d00ce0-d8bf-4c95-9f65-334fbcbb3efa", APIVersion:"v1", ResourceVersion:"445", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-358053_2c9f1bf5-6151-42a9-81bc-bc1424d29abf became leader
	I0729 19:49:12.383511       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-358053_2c9f1bf5-6151-42a9-81bc-bc1424d29abf!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-358053 -n embed-certs-358053
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-358053 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-569cc877fc-gpz72
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-358053 describe pod metrics-server-569cc877fc-gpz72
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-358053 describe pod metrics-server-569cc877fc-gpz72: exit status 1 (56.353885ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-569cc877fc-gpz72" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-358053 describe pod metrics-server-569cc877fc-gpz72: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (440.21s)
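
Judging by the identical default-k8s-diff-port failure below, this test was waiting for a pod labelled k8s-app=kubernetes-dashboard that never appeared (the earlier "addons enable dashboard -p embed-certs-358053" call in the audit table has no recorded end time, suggesting it never completed). The check the test performs could be reproduced manually with something like the following, assuming the profile and context names from the logs above:

    kubectl --context embed-certs-358053 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard
    out/minikube-linux-amd64 -p embed-certs-358053 addons list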

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (456.1s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-024652 -n default-k8s-diff-port-024652
start_stop_delete_test.go:287: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-07-29 20:06:15.388291775 +0000 UTC m=+6547.623135160
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-024652 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-024652 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.676µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-024652 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
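
The follow-up describe above fails after only 1.676µs with "context deadline exceeded", which suggests it reused the already-expired 9m test context rather than reaching the cluster, so no deployment information was actually gathered. A manual re-run of the same query outside the expired context would show whether the scraper deployment exists at all:

    kubectl --context default-k8s-diff-port-024652 -n kubernetes-dashboard describe deploy dashboard-metrics-scraper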
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-024652 -n default-k8s-diff-port-024652
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-024652 logs -n 25
E0729 20:06:16.199153 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/no-preload-843792/client.crt: no such file or directory
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-024652 logs -n 25: (1.14102375s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| addons  | enable metrics-server -p default-k8s-diff-port-024652  | default-k8s-diff-port-024652 | jenkins | v1.33.1 | 29 Jul 24 19:37 UTC | 29 Jul 24 19:37 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-024652 | jenkins | v1.33.1 | 29 Jul 24 19:37 UTC |                     |
	|         | default-k8s-diff-port-024652                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-843792                  | no-preload-843792            | jenkins | v1.33.1 | 29 Jul 24 19:38 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-843792 --memory=2200                     | no-preload-843792            | jenkins | v1.33.1 | 29 Jul 24 19:38 UTC | 29 Jul 24 19:50 UTC |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-021528        | old-k8s-version-021528       | jenkins | v1.33.1 | 29 Jul 24 19:39 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-358053                 | embed-certs-358053           | jenkins | v1.33.1 | 29 Jul 24 19:39 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-358053                                  | embed-certs-358053           | jenkins | v1.33.1 | 29 Jul 24 19:39 UTC | 29 Jul 24 19:49 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-024652       | default-k8s-diff-port-024652 | jenkins | v1.33.1 | 29 Jul 24 19:39 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-024652 | jenkins | v1.33.1 | 29 Jul 24 19:39 UTC | 29 Jul 24 19:49 UTC |
	|         | default-k8s-diff-port-024652                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-021528                              | old-k8s-version-021528       | jenkins | v1.33.1 | 29 Jul 24 19:40 UTC | 29 Jul 24 19:40 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-021528             | old-k8s-version-021528       | jenkins | v1.33.1 | 29 Jul 24 19:40 UTC | 29 Jul 24 19:40 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-021528                              | old-k8s-version-021528       | jenkins | v1.33.1 | 29 Jul 24 19:40 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-021528                              | old-k8s-version-021528       | jenkins | v1.33.1 | 29 Jul 24 20:04 UTC | 29 Jul 24 20:04 UTC |
	| start   | -p newest-cni-584186 --memory=2200 --alsologtostderr   | newest-cni-584186            | jenkins | v1.33.1 | 29 Jul 24 20:04 UTC | 29 Jul 24 20:05 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| delete  | -p no-preload-843792                                   | no-preload-843792            | jenkins | v1.33.1 | 29 Jul 24 20:04 UTC | 29 Jul 24 20:04 UTC |
	| addons  | enable metrics-server -p newest-cni-584186             | newest-cni-584186            | jenkins | v1.33.1 | 29 Jul 24 20:05 UTC | 29 Jul 24 20:05 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-584186                                   | newest-cni-584186            | jenkins | v1.33.1 | 29 Jul 24 20:05 UTC | 29 Jul 24 20:05 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-584186                  | newest-cni-584186            | jenkins | v1.33.1 | 29 Jul 24 20:05 UTC | 29 Jul 24 20:05 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-584186 --memory=2200 --alsologtostderr   | newest-cni-584186            | jenkins | v1.33.1 | 29 Jul 24 20:05 UTC | 29 Jul 24 20:05 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| delete  | -p embed-certs-358053                                  | embed-certs-358053           | jenkins | v1.33.1 | 29 Jul 24 20:05 UTC | 29 Jul 24 20:05 UTC |
	| image   | newest-cni-584186 image list                           | newest-cni-584186            | jenkins | v1.33.1 | 29 Jul 24 20:05 UTC | 29 Jul 24 20:05 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-584186                                   | newest-cni-584186            | jenkins | v1.33.1 | 29 Jul 24 20:05 UTC | 29 Jul 24 20:05 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-584186                                   | newest-cni-584186            | jenkins | v1.33.1 | 29 Jul 24 20:05 UTC | 29 Jul 24 20:05 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-584186                                   | newest-cni-584186            | jenkins | v1.33.1 | 29 Jul 24 20:06 UTC | 29 Jul 24 20:06 UTC |
	| delete  | -p newest-cni-584186                                   | newest-cni-584186            | jenkins | v1.33.1 | 29 Jul 24 20:06 UTC | 29 Jul 24 20:06 UTC |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 20:05:20
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 20:05:20.623953 1127876 out.go:291] Setting OutFile to fd 1 ...
	I0729 20:05:20.624238 1127876 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 20:05:20.624249 1127876 out.go:304] Setting ErrFile to fd 2...
	I0729 20:05:20.624253 1127876 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 20:05:20.624492 1127876 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19312-1055011/.minikube/bin
	I0729 20:05:20.625081 1127876 out.go:298] Setting JSON to false
	I0729 20:05:20.626117 1127876 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":13673,"bootTime":1722269848,"procs":194,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 20:05:20.626181 1127876 start.go:139] virtualization: kvm guest
	I0729 20:05:20.628286 1127876 out.go:177] * [newest-cni-584186] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 20:05:20.629369 1127876 out.go:177]   - MINIKUBE_LOCATION=19312
	I0729 20:05:20.629404 1127876 notify.go:220] Checking for updates...
	I0729 20:05:20.631415 1127876 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 20:05:20.632463 1127876 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19312-1055011/kubeconfig
	I0729 20:05:20.633417 1127876 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19312-1055011/.minikube
	I0729 20:05:20.634330 1127876 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 20:05:20.635298 1127876 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 20:05:20.636833 1127876 config.go:182] Loaded profile config "newest-cni-584186": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0729 20:05:20.637460 1127876 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 20:05:20.637537 1127876 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 20:05:20.652799 1127876 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42985
	I0729 20:05:20.653166 1127876 main.go:141] libmachine: () Calling .GetVersion
	I0729 20:05:20.653708 1127876 main.go:141] libmachine: Using API Version  1
	I0729 20:05:20.653731 1127876 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 20:05:20.654094 1127876 main.go:141] libmachine: () Calling .GetMachineName
	I0729 20:05:20.654309 1127876 main.go:141] libmachine: (newest-cni-584186) Calling .DriverName
	I0729 20:05:20.654568 1127876 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 20:05:20.654910 1127876 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 20:05:20.654958 1127876 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 20:05:20.670607 1127876 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35557
	I0729 20:05:20.671022 1127876 main.go:141] libmachine: () Calling .GetVersion
	I0729 20:05:20.671507 1127876 main.go:141] libmachine: Using API Version  1
	I0729 20:05:20.671529 1127876 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 20:05:20.671829 1127876 main.go:141] libmachine: () Calling .GetMachineName
	I0729 20:05:20.672023 1127876 main.go:141] libmachine: (newest-cni-584186) Calling .DriverName
	I0729 20:05:20.707619 1127876 out.go:177] * Using the kvm2 driver based on existing profile
	I0729 20:05:20.708669 1127876 start.go:297] selected driver: kvm2
	I0729 20:05:20.708683 1127876 start.go:901] validating driver "kvm2" against &{Name:newest-cni-584186 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-584186 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.170 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 20:05:20.708857 1127876 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 20:05:20.709817 1127876 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 20:05:20.709923 1127876 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19312-1055011/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 20:05:20.724888 1127876 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0729 20:05:20.725300 1127876 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0729 20:05:20.725380 1127876 cni.go:84] Creating CNI manager for ""
	I0729 20:05:20.725396 1127876 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 20:05:20.725458 1127876 start.go:340] cluster config:
	{Name:newest-cni-584186 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-584186 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.170 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 20:05:20.725616 1127876 iso.go:125] acquiring lock: {Name:mk0af61c0fec1fd47930e548d03010a532c687b7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 20:05:20.727340 1127876 out.go:177] * Starting "newest-cni-584186" primary control-plane node in "newest-cni-584186" cluster
	I0729 20:05:20.728432 1127876 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0729 20:05:20.728469 1127876 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4
	I0729 20:05:20.728484 1127876 cache.go:56] Caching tarball of preloaded images
	I0729 20:05:20.728581 1127876 preload.go:172] Found /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 20:05:20.728594 1127876 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-beta.0 on crio
	I0729 20:05:20.728692 1127876 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/newest-cni-584186/config.json ...
	I0729 20:05:20.728876 1127876 start.go:360] acquireMachinesLock for newest-cni-584186: {Name:mk0d8d947666df844b5fc2c0e0eebbfed69b4140 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 20:05:20.728928 1127876 start.go:364] duration metric: took 31.787µs to acquireMachinesLock for "newest-cni-584186"
	I0729 20:05:20.728947 1127876 start.go:96] Skipping create...Using existing machine configuration
	I0729 20:05:20.728957 1127876 fix.go:54] fixHost starting: 
	I0729 20:05:20.729220 1127876 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 20:05:20.729260 1127876 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 20:05:20.743619 1127876 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37697
	I0729 20:05:20.744195 1127876 main.go:141] libmachine: () Calling .GetVersion
	I0729 20:05:20.744730 1127876 main.go:141] libmachine: Using API Version  1
	I0729 20:05:20.744758 1127876 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 20:05:20.745090 1127876 main.go:141] libmachine: () Calling .GetMachineName
	I0729 20:05:20.745315 1127876 main.go:141] libmachine: (newest-cni-584186) Calling .DriverName
	I0729 20:05:20.745476 1127876 main.go:141] libmachine: (newest-cni-584186) Calling .GetState
	I0729 20:05:20.747127 1127876 fix.go:112] recreateIfNeeded on newest-cni-584186: state=Stopped err=<nil>
	I0729 20:05:20.747159 1127876 main.go:141] libmachine: (newest-cni-584186) Calling .DriverName
	W0729 20:05:20.747348 1127876 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 20:05:20.749000 1127876 out.go:177] * Restarting existing kvm2 VM for "newest-cni-584186" ...
	I0729 20:05:20.750002 1127876 main.go:141] libmachine: (newest-cni-584186) Calling .Start
	I0729 20:05:20.750165 1127876 main.go:141] libmachine: (newest-cni-584186) Ensuring networks are active...
	I0729 20:05:20.751039 1127876 main.go:141] libmachine: (newest-cni-584186) Ensuring network default is active
	I0729 20:05:20.751420 1127876 main.go:141] libmachine: (newest-cni-584186) Ensuring network mk-newest-cni-584186 is active
	I0729 20:05:20.751760 1127876 main.go:141] libmachine: (newest-cni-584186) Getting domain xml...
	I0729 20:05:20.752414 1127876 main.go:141] libmachine: (newest-cni-584186) Creating domain...
	I0729 20:05:21.970284 1127876 main.go:141] libmachine: (newest-cni-584186) Waiting to get IP...
	I0729 20:05:21.971329 1127876 main.go:141] libmachine: (newest-cni-584186) DBG | domain newest-cni-584186 has defined MAC address 52:54:00:60:e1:97 in network mk-newest-cni-584186
	I0729 20:05:21.971749 1127876 main.go:141] libmachine: (newest-cni-584186) DBG | unable to find current IP address of domain newest-cni-584186 in network mk-newest-cni-584186
	I0729 20:05:21.971842 1127876 main.go:141] libmachine: (newest-cni-584186) DBG | I0729 20:05:21.971733 1127911 retry.go:31] will retry after 282.590845ms: waiting for machine to come up
	I0729 20:05:22.256459 1127876 main.go:141] libmachine: (newest-cni-584186) DBG | domain newest-cni-584186 has defined MAC address 52:54:00:60:e1:97 in network mk-newest-cni-584186
	I0729 20:05:22.256932 1127876 main.go:141] libmachine: (newest-cni-584186) DBG | unable to find current IP address of domain newest-cni-584186 in network mk-newest-cni-584186
	I0729 20:05:22.256963 1127876 main.go:141] libmachine: (newest-cni-584186) DBG | I0729 20:05:22.256880 1127911 retry.go:31] will retry after 313.47593ms: waiting for machine to come up
	I0729 20:05:22.572405 1127876 main.go:141] libmachine: (newest-cni-584186) DBG | domain newest-cni-584186 has defined MAC address 52:54:00:60:e1:97 in network mk-newest-cni-584186
	I0729 20:05:22.572861 1127876 main.go:141] libmachine: (newest-cni-584186) DBG | unable to find current IP address of domain newest-cni-584186 in network mk-newest-cni-584186
	I0729 20:05:22.572894 1127876 main.go:141] libmachine: (newest-cni-584186) DBG | I0729 20:05:22.572812 1127911 retry.go:31] will retry after 473.465375ms: waiting for machine to come up
	I0729 20:05:23.047395 1127876 main.go:141] libmachine: (newest-cni-584186) DBG | domain newest-cni-584186 has defined MAC address 52:54:00:60:e1:97 in network mk-newest-cni-584186
	I0729 20:05:23.047811 1127876 main.go:141] libmachine: (newest-cni-584186) DBG | unable to find current IP address of domain newest-cni-584186 in network mk-newest-cni-584186
	I0729 20:05:23.047877 1127876 main.go:141] libmachine: (newest-cni-584186) DBG | I0729 20:05:23.047796 1127911 retry.go:31] will retry after 578.411567ms: waiting for machine to come up
	I0729 20:05:23.627694 1127876 main.go:141] libmachine: (newest-cni-584186) DBG | domain newest-cni-584186 has defined MAC address 52:54:00:60:e1:97 in network mk-newest-cni-584186
	I0729 20:05:23.628228 1127876 main.go:141] libmachine: (newest-cni-584186) DBG | unable to find current IP address of domain newest-cni-584186 in network mk-newest-cni-584186
	I0729 20:05:23.628267 1127876 main.go:141] libmachine: (newest-cni-584186) DBG | I0729 20:05:23.628167 1127911 retry.go:31] will retry after 477.787564ms: waiting for machine to come up
	I0729 20:05:24.107803 1127876 main.go:141] libmachine: (newest-cni-584186) DBG | domain newest-cni-584186 has defined MAC address 52:54:00:60:e1:97 in network mk-newest-cni-584186
	I0729 20:05:24.108240 1127876 main.go:141] libmachine: (newest-cni-584186) DBG | unable to find current IP address of domain newest-cni-584186 in network mk-newest-cni-584186
	I0729 20:05:24.108271 1127876 main.go:141] libmachine: (newest-cni-584186) DBG | I0729 20:05:24.108182 1127911 retry.go:31] will retry after 837.951524ms: waiting for machine to come up
	I0729 20:05:24.948197 1127876 main.go:141] libmachine: (newest-cni-584186) DBG | domain newest-cni-584186 has defined MAC address 52:54:00:60:e1:97 in network mk-newest-cni-584186
	I0729 20:05:24.948673 1127876 main.go:141] libmachine: (newest-cni-584186) DBG | unable to find current IP address of domain newest-cni-584186 in network mk-newest-cni-584186
	I0729 20:05:24.948703 1127876 main.go:141] libmachine: (newest-cni-584186) DBG | I0729 20:05:24.948616 1127911 retry.go:31] will retry after 934.783435ms: waiting for machine to come up
	I0729 20:05:25.885131 1127876 main.go:141] libmachine: (newest-cni-584186) DBG | domain newest-cni-584186 has defined MAC address 52:54:00:60:e1:97 in network mk-newest-cni-584186
	I0729 20:05:25.885639 1127876 main.go:141] libmachine: (newest-cni-584186) DBG | unable to find current IP address of domain newest-cni-584186 in network mk-newest-cni-584186
	I0729 20:05:25.885668 1127876 main.go:141] libmachine: (newest-cni-584186) DBG | I0729 20:05:25.885587 1127911 retry.go:31] will retry after 1.434878685s: waiting for machine to come up
	I0729 20:05:27.322077 1127876 main.go:141] libmachine: (newest-cni-584186) DBG | domain newest-cni-584186 has defined MAC address 52:54:00:60:e1:97 in network mk-newest-cni-584186
	I0729 20:05:27.322554 1127876 main.go:141] libmachine: (newest-cni-584186) DBG | unable to find current IP address of domain newest-cni-584186 in network mk-newest-cni-584186
	I0729 20:05:27.322583 1127876 main.go:141] libmachine: (newest-cni-584186) DBG | I0729 20:05:27.322504 1127911 retry.go:31] will retry after 1.203743247s: waiting for machine to come up
	I0729 20:05:28.527698 1127876 main.go:141] libmachine: (newest-cni-584186) DBG | domain newest-cni-584186 has defined MAC address 52:54:00:60:e1:97 in network mk-newest-cni-584186
	I0729 20:05:28.528258 1127876 main.go:141] libmachine: (newest-cni-584186) DBG | unable to find current IP address of domain newest-cni-584186 in network mk-newest-cni-584186
	I0729 20:05:28.528288 1127876 main.go:141] libmachine: (newest-cni-584186) DBG | I0729 20:05:28.528214 1127911 retry.go:31] will retry after 1.552796062s: waiting for machine to come up
	I0729 20:05:30.083002 1127876 main.go:141] libmachine: (newest-cni-584186) DBG | domain newest-cni-584186 has defined MAC address 52:54:00:60:e1:97 in network mk-newest-cni-584186
	I0729 20:05:30.083461 1127876 main.go:141] libmachine: (newest-cni-584186) DBG | unable to find current IP address of domain newest-cni-584186 in network mk-newest-cni-584186
	I0729 20:05:30.083490 1127876 main.go:141] libmachine: (newest-cni-584186) DBG | I0729 20:05:30.083421 1127911 retry.go:31] will retry after 1.847859847s: waiting for machine to come up
	I0729 20:05:31.933545 1127876 main.go:141] libmachine: (newest-cni-584186) DBG | domain newest-cni-584186 has defined MAC address 52:54:00:60:e1:97 in network mk-newest-cni-584186
	I0729 20:05:31.934029 1127876 main.go:141] libmachine: (newest-cni-584186) DBG | unable to find current IP address of domain newest-cni-584186 in network mk-newest-cni-584186
	I0729 20:05:31.934094 1127876 main.go:141] libmachine: (newest-cni-584186) DBG | I0729 20:05:31.933995 1127911 retry.go:31] will retry after 3.312803809s: waiting for machine to come up
	I0729 20:05:35.247993 1127876 main.go:141] libmachine: (newest-cni-584186) DBG | domain newest-cni-584186 has defined MAC address 52:54:00:60:e1:97 in network mk-newest-cni-584186
	I0729 20:05:35.248485 1127876 main.go:141] libmachine: (newest-cni-584186) DBG | unable to find current IP address of domain newest-cni-584186 in network mk-newest-cni-584186
	I0729 20:05:35.248525 1127876 main.go:141] libmachine: (newest-cni-584186) DBG | I0729 20:05:35.248451 1127911 retry.go:31] will retry after 4.141517222s: waiting for machine to come up
	I0729 20:05:39.391860 1127876 main.go:141] libmachine: (newest-cni-584186) DBG | domain newest-cni-584186 has defined MAC address 52:54:00:60:e1:97 in network mk-newest-cni-584186
	I0729 20:05:39.392320 1127876 main.go:141] libmachine: (newest-cni-584186) DBG | domain newest-cni-584186 has current primary IP address 192.168.39.170 and MAC address 52:54:00:60:e1:97 in network mk-newest-cni-584186
	I0729 20:05:39.392368 1127876 main.go:141] libmachine: (newest-cni-584186) Found IP for machine: 192.168.39.170
	I0729 20:05:39.392391 1127876 main.go:141] libmachine: (newest-cni-584186) Reserving static IP address...
	I0729 20:05:39.392775 1127876 main.go:141] libmachine: (newest-cni-584186) DBG | found host DHCP lease matching {name: "newest-cni-584186", mac: "52:54:00:60:e1:97", ip: "192.168.39.170"} in network mk-newest-cni-584186: {Iface:virbr1 ExpiryTime:2024-07-29 21:05:31 +0000 UTC Type:0 Mac:52:54:00:60:e1:97 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:newest-cni-584186 Clientid:01:52:54:00:60:e1:97}
	I0729 20:05:39.392807 1127876 main.go:141] libmachine: (newest-cni-584186) DBG | skip adding static IP to network mk-newest-cni-584186 - found existing host DHCP lease matching {name: "newest-cni-584186", mac: "52:54:00:60:e1:97", ip: "192.168.39.170"}
	I0729 20:05:39.392819 1127876 main.go:141] libmachine: (newest-cni-584186) Reserved static IP address: 192.168.39.170
	I0729 20:05:39.392836 1127876 main.go:141] libmachine: (newest-cni-584186) Waiting for SSH to be available...
	I0729 20:05:39.392848 1127876 main.go:141] libmachine: (newest-cni-584186) DBG | Getting to WaitForSSH function...
	I0729 20:05:39.394991 1127876 main.go:141] libmachine: (newest-cni-584186) DBG | domain newest-cni-584186 has defined MAC address 52:54:00:60:e1:97 in network mk-newest-cni-584186
	I0729 20:05:39.395322 1127876 main.go:141] libmachine: (newest-cni-584186) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:e1:97", ip: ""} in network mk-newest-cni-584186: {Iface:virbr1 ExpiryTime:2024-07-29 21:05:31 +0000 UTC Type:0 Mac:52:54:00:60:e1:97 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:newest-cni-584186 Clientid:01:52:54:00:60:e1:97}
	I0729 20:05:39.395372 1127876 main.go:141] libmachine: (newest-cni-584186) DBG | domain newest-cni-584186 has defined IP address 192.168.39.170 and MAC address 52:54:00:60:e1:97 in network mk-newest-cni-584186
	I0729 20:05:39.395453 1127876 main.go:141] libmachine: (newest-cni-584186) DBG | Using SSH client type: external
	I0729 20:05:39.395488 1127876 main.go:141] libmachine: (newest-cni-584186) DBG | Using SSH private key: /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/newest-cni-584186/id_rsa (-rw-------)
	I0729 20:05:39.395523 1127876 main.go:141] libmachine: (newest-cni-584186) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.170 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/newest-cni-584186/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 20:05:39.395550 1127876 main.go:141] libmachine: (newest-cni-584186) DBG | About to run SSH command:
	I0729 20:05:39.395563 1127876 main.go:141] libmachine: (newest-cni-584186) DBG | exit 0
	I0729 20:05:39.527242 1127876 main.go:141] libmachine: (newest-cni-584186) DBG | SSH cmd err, output: <nil>: 
	I0729 20:05:39.527581 1127876 main.go:141] libmachine: (newest-cni-584186) Calling .GetConfigRaw
	I0729 20:05:39.528226 1127876 main.go:141] libmachine: (newest-cni-584186) Calling .GetIP
	I0729 20:05:39.530969 1127876 main.go:141] libmachine: (newest-cni-584186) DBG | domain newest-cni-584186 has defined MAC address 52:54:00:60:e1:97 in network mk-newest-cni-584186
	I0729 20:05:39.531388 1127876 main.go:141] libmachine: (newest-cni-584186) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:e1:97", ip: ""} in network mk-newest-cni-584186: {Iface:virbr1 ExpiryTime:2024-07-29 21:05:31 +0000 UTC Type:0 Mac:52:54:00:60:e1:97 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:newest-cni-584186 Clientid:01:52:54:00:60:e1:97}
	I0729 20:05:39.531415 1127876 main.go:141] libmachine: (newest-cni-584186) DBG | domain newest-cni-584186 has defined IP address 192.168.39.170 and MAC address 52:54:00:60:e1:97 in network mk-newest-cni-584186
	I0729 20:05:39.531669 1127876 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/newest-cni-584186/config.json ...
	I0729 20:05:39.531881 1127876 machine.go:94] provisionDockerMachine start ...
	I0729 20:05:39.531903 1127876 main.go:141] libmachine: (newest-cni-584186) Calling .DriverName
	I0729 20:05:39.532117 1127876 main.go:141] libmachine: (newest-cni-584186) Calling .GetSSHHostname
	I0729 20:05:39.534453 1127876 main.go:141] libmachine: (newest-cni-584186) DBG | domain newest-cni-584186 has defined MAC address 52:54:00:60:e1:97 in network mk-newest-cni-584186
	I0729 20:05:39.534801 1127876 main.go:141] libmachine: (newest-cni-584186) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:e1:97", ip: ""} in network mk-newest-cni-584186: {Iface:virbr1 ExpiryTime:2024-07-29 21:05:31 +0000 UTC Type:0 Mac:52:54:00:60:e1:97 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:newest-cni-584186 Clientid:01:52:54:00:60:e1:97}
	I0729 20:05:39.534832 1127876 main.go:141] libmachine: (newest-cni-584186) DBG | domain newest-cni-584186 has defined IP address 192.168.39.170 and MAC address 52:54:00:60:e1:97 in network mk-newest-cni-584186
	I0729 20:05:39.534983 1127876 main.go:141] libmachine: (newest-cni-584186) Calling .GetSSHPort
	I0729 20:05:39.535161 1127876 main.go:141] libmachine: (newest-cni-584186) Calling .GetSSHKeyPath
	I0729 20:05:39.535341 1127876 main.go:141] libmachine: (newest-cni-584186) Calling .GetSSHKeyPath
	I0729 20:05:39.535534 1127876 main.go:141] libmachine: (newest-cni-584186) Calling .GetSSHUsername
	I0729 20:05:39.535726 1127876 main.go:141] libmachine: Using SSH client type: native
	I0729 20:05:39.535985 1127876 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.170 22 <nil> <nil>}
	I0729 20:05:39.536000 1127876 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 20:05:39.638822 1127876 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0729 20:05:39.638862 1127876 main.go:141] libmachine: (newest-cni-584186) Calling .GetMachineName
	I0729 20:05:39.639137 1127876 buildroot.go:166] provisioning hostname "newest-cni-584186"
	I0729 20:05:39.639165 1127876 main.go:141] libmachine: (newest-cni-584186) Calling .GetMachineName
	I0729 20:05:39.639337 1127876 main.go:141] libmachine: (newest-cni-584186) Calling .GetSSHHostname
	I0729 20:05:40.173675 1127876 main.go:141] libmachine: (newest-cni-584186) DBG | domain newest-cni-584186 has defined MAC address 52:54:00:60:e1:97 in network mk-newest-cni-584186
	I0729 20:05:40.174077 1127876 main.go:141] libmachine: (newest-cni-584186) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:e1:97", ip: ""} in network mk-newest-cni-584186: {Iface:virbr1 ExpiryTime:2024-07-29 21:05:31 +0000 UTC Type:0 Mac:52:54:00:60:e1:97 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:newest-cni-584186 Clientid:01:52:54:00:60:e1:97}
	I0729 20:05:40.174123 1127876 main.go:141] libmachine: (newest-cni-584186) DBG | domain newest-cni-584186 has defined IP address 192.168.39.170 and MAC address 52:54:00:60:e1:97 in network mk-newest-cni-584186
	I0729 20:05:40.174261 1127876 main.go:141] libmachine: (newest-cni-584186) Calling .GetSSHPort
	I0729 20:05:40.174539 1127876 main.go:141] libmachine: (newest-cni-584186) Calling .GetSSHKeyPath
	I0729 20:05:40.174709 1127876 main.go:141] libmachine: (newest-cni-584186) Calling .GetSSHKeyPath
	I0729 20:05:40.174870 1127876 main.go:141] libmachine: (newest-cni-584186) Calling .GetSSHUsername
	I0729 20:05:40.175032 1127876 main.go:141] libmachine: Using SSH client type: native
	I0729 20:05:40.175219 1127876 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.170 22 <nil> <nil>}
	I0729 20:05:40.175234 1127876 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-584186 && echo "newest-cni-584186" | sudo tee /etc/hostname
	I0729 20:05:40.293093 1127876 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-584186
	
	I0729 20:05:40.293126 1127876 main.go:141] libmachine: (newest-cni-584186) Calling .GetSSHHostname
	I0729 20:05:40.296274 1127876 main.go:141] libmachine: (newest-cni-584186) DBG | domain newest-cni-584186 has defined MAC address 52:54:00:60:e1:97 in network mk-newest-cni-584186
	I0729 20:05:40.296681 1127876 main.go:141] libmachine: (newest-cni-584186) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:e1:97", ip: ""} in network mk-newest-cni-584186: {Iface:virbr1 ExpiryTime:2024-07-29 21:05:31 +0000 UTC Type:0 Mac:52:54:00:60:e1:97 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:newest-cni-584186 Clientid:01:52:54:00:60:e1:97}
	I0729 20:05:40.296710 1127876 main.go:141] libmachine: (newest-cni-584186) DBG | domain newest-cni-584186 has defined IP address 192.168.39.170 and MAC address 52:54:00:60:e1:97 in network mk-newest-cni-584186
	I0729 20:05:40.296848 1127876 main.go:141] libmachine: (newest-cni-584186) Calling .GetSSHPort
	I0729 20:05:40.297057 1127876 main.go:141] libmachine: (newest-cni-584186) Calling .GetSSHKeyPath
	I0729 20:05:40.297241 1127876 main.go:141] libmachine: (newest-cni-584186) Calling .GetSSHKeyPath
	I0729 20:05:40.297370 1127876 main.go:141] libmachine: (newest-cni-584186) Calling .GetSSHUsername
	I0729 20:05:40.297556 1127876 main.go:141] libmachine: Using SSH client type: native
	I0729 20:05:40.297756 1127876 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.170 22 <nil> <nil>}
	I0729 20:05:40.297779 1127876 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-584186' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-584186/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-584186' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 20:05:40.407595 1127876 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 20:05:40.407624 1127876 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19312-1055011/.minikube CaCertPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19312-1055011/.minikube}
	I0729 20:05:40.407643 1127876 buildroot.go:174] setting up certificates
	I0729 20:05:40.407651 1127876 provision.go:84] configureAuth start
	I0729 20:05:40.407660 1127876 main.go:141] libmachine: (newest-cni-584186) Calling .GetMachineName
	I0729 20:05:40.407939 1127876 main.go:141] libmachine: (newest-cni-584186) Calling .GetIP
	I0729 20:05:40.410271 1127876 main.go:141] libmachine: (newest-cni-584186) DBG | domain newest-cni-584186 has defined MAC address 52:54:00:60:e1:97 in network mk-newest-cni-584186
	I0729 20:05:40.410636 1127876 main.go:141] libmachine: (newest-cni-584186) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:e1:97", ip: ""} in network mk-newest-cni-584186: {Iface:virbr1 ExpiryTime:2024-07-29 21:05:31 +0000 UTC Type:0 Mac:52:54:00:60:e1:97 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:newest-cni-584186 Clientid:01:52:54:00:60:e1:97}
	I0729 20:05:40.410665 1127876 main.go:141] libmachine: (newest-cni-584186) DBG | domain newest-cni-584186 has defined IP address 192.168.39.170 and MAC address 52:54:00:60:e1:97 in network mk-newest-cni-584186
	I0729 20:05:40.410756 1127876 main.go:141] libmachine: (newest-cni-584186) Calling .GetSSHHostname
	I0729 20:05:40.413177 1127876 main.go:141] libmachine: (newest-cni-584186) DBG | domain newest-cni-584186 has defined MAC address 52:54:00:60:e1:97 in network mk-newest-cni-584186
	I0729 20:05:40.413495 1127876 main.go:141] libmachine: (newest-cni-584186) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:e1:97", ip: ""} in network mk-newest-cni-584186: {Iface:virbr1 ExpiryTime:2024-07-29 21:05:31 +0000 UTC Type:0 Mac:52:54:00:60:e1:97 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:newest-cni-584186 Clientid:01:52:54:00:60:e1:97}
	I0729 20:05:40.413530 1127876 main.go:141] libmachine: (newest-cni-584186) DBG | domain newest-cni-584186 has defined IP address 192.168.39.170 and MAC address 52:54:00:60:e1:97 in network mk-newest-cni-584186
	I0729 20:05:40.413664 1127876 provision.go:143] copyHostCerts
	I0729 20:05:40.413731 1127876 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.pem, removing ...
	I0729 20:05:40.413749 1127876 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.pem
	I0729 20:05:40.413819 1127876 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.pem (1082 bytes)
	I0729 20:05:40.413957 1127876 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-1055011/.minikube/cert.pem, removing ...
	I0729 20:05:40.413971 1127876 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-1055011/.minikube/cert.pem
	I0729 20:05:40.414013 1127876 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19312-1055011/.minikube/cert.pem (1123 bytes)
	I0729 20:05:40.414124 1127876 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-1055011/.minikube/key.pem, removing ...
	I0729 20:05:40.414135 1127876 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-1055011/.minikube/key.pem
	I0729 20:05:40.414167 1127876 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19312-1055011/.minikube/key.pem (1679 bytes)
	I0729 20:05:40.414271 1127876 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca-key.pem org=jenkins.newest-cni-584186 san=[127.0.0.1 192.168.39.170 localhost minikube newest-cni-584186]
	I0729 20:05:40.619119 1127876 provision.go:177] copyRemoteCerts
	I0729 20:05:40.619181 1127876 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 20:05:40.619224 1127876 main.go:141] libmachine: (newest-cni-584186) Calling .GetSSHHostname
	I0729 20:05:40.621852 1127876 main.go:141] libmachine: (newest-cni-584186) DBG | domain newest-cni-584186 has defined MAC address 52:54:00:60:e1:97 in network mk-newest-cni-584186
	I0729 20:05:40.622182 1127876 main.go:141] libmachine: (newest-cni-584186) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:e1:97", ip: ""} in network mk-newest-cni-584186: {Iface:virbr1 ExpiryTime:2024-07-29 21:05:31 +0000 UTC Type:0 Mac:52:54:00:60:e1:97 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:newest-cni-584186 Clientid:01:52:54:00:60:e1:97}
	I0729 20:05:40.622206 1127876 main.go:141] libmachine: (newest-cni-584186) DBG | domain newest-cni-584186 has defined IP address 192.168.39.170 and MAC address 52:54:00:60:e1:97 in network mk-newest-cni-584186
	I0729 20:05:40.622374 1127876 main.go:141] libmachine: (newest-cni-584186) Calling .GetSSHPort
	I0729 20:05:40.622566 1127876 main.go:141] libmachine: (newest-cni-584186) Calling .GetSSHKeyPath
	I0729 20:05:40.622710 1127876 main.go:141] libmachine: (newest-cni-584186) Calling .GetSSHUsername
	I0729 20:05:40.622840 1127876 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/newest-cni-584186/id_rsa Username:docker}
	I0729 20:05:40.704646 1127876 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0729 20:05:40.731007 1127876 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0729 20:05:40.753758 1127876 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0729 20:05:40.776844 1127876 provision.go:87] duration metric: took 369.179383ms to configureAuth
	I0729 20:05:40.776868 1127876 buildroot.go:189] setting minikube options for container-runtime
	I0729 20:05:40.777082 1127876 config.go:182] Loaded profile config "newest-cni-584186": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0729 20:05:40.777167 1127876 main.go:141] libmachine: (newest-cni-584186) Calling .GetSSHHostname
	I0729 20:05:40.780059 1127876 main.go:141] libmachine: (newest-cni-584186) DBG | domain newest-cni-584186 has defined MAC address 52:54:00:60:e1:97 in network mk-newest-cni-584186
	I0729 20:05:40.780425 1127876 main.go:141] libmachine: (newest-cni-584186) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:e1:97", ip: ""} in network mk-newest-cni-584186: {Iface:virbr1 ExpiryTime:2024-07-29 21:05:31 +0000 UTC Type:0 Mac:52:54:00:60:e1:97 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:newest-cni-584186 Clientid:01:52:54:00:60:e1:97}
	I0729 20:05:40.780453 1127876 main.go:141] libmachine: (newest-cni-584186) DBG | domain newest-cni-584186 has defined IP address 192.168.39.170 and MAC address 52:54:00:60:e1:97 in network mk-newest-cni-584186
	I0729 20:05:40.780656 1127876 main.go:141] libmachine: (newest-cni-584186) Calling .GetSSHPort
	I0729 20:05:40.780845 1127876 main.go:141] libmachine: (newest-cni-584186) Calling .GetSSHKeyPath
	I0729 20:05:40.781015 1127876 main.go:141] libmachine: (newest-cni-584186) Calling .GetSSHKeyPath
	I0729 20:05:40.781134 1127876 main.go:141] libmachine: (newest-cni-584186) Calling .GetSSHUsername
	I0729 20:05:40.781313 1127876 main.go:141] libmachine: Using SSH client type: native
	I0729 20:05:40.781517 1127876 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.170 22 <nil> <nil>}
	I0729 20:05:40.781539 1127876 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 20:05:41.048724 1127876 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 20:05:41.048751 1127876 machine.go:97] duration metric: took 1.516854986s to provisionDockerMachine
	I0729 20:05:41.048763 1127876 start.go:293] postStartSetup for "newest-cni-584186" (driver="kvm2")
	I0729 20:05:41.048776 1127876 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 20:05:41.048829 1127876 main.go:141] libmachine: (newest-cni-584186) Calling .DriverName
	I0729 20:05:41.049175 1127876 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 20:05:41.049200 1127876 main.go:141] libmachine: (newest-cni-584186) Calling .GetSSHHostname
	I0729 20:05:41.051829 1127876 main.go:141] libmachine: (newest-cni-584186) DBG | domain newest-cni-584186 has defined MAC address 52:54:00:60:e1:97 in network mk-newest-cni-584186
	I0729 20:05:41.052156 1127876 main.go:141] libmachine: (newest-cni-584186) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:e1:97", ip: ""} in network mk-newest-cni-584186: {Iface:virbr1 ExpiryTime:2024-07-29 21:05:31 +0000 UTC Type:0 Mac:52:54:00:60:e1:97 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:newest-cni-584186 Clientid:01:52:54:00:60:e1:97}
	I0729 20:05:41.052190 1127876 main.go:141] libmachine: (newest-cni-584186) DBG | domain newest-cni-584186 has defined IP address 192.168.39.170 and MAC address 52:54:00:60:e1:97 in network mk-newest-cni-584186
	I0729 20:05:41.052345 1127876 main.go:141] libmachine: (newest-cni-584186) Calling .GetSSHPort
	I0729 20:05:41.052534 1127876 main.go:141] libmachine: (newest-cni-584186) Calling .GetSSHKeyPath
	I0729 20:05:41.052671 1127876 main.go:141] libmachine: (newest-cni-584186) Calling .GetSSHUsername
	I0729 20:05:41.052801 1127876 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/newest-cni-584186/id_rsa Username:docker}
	I0729 20:05:41.133718 1127876 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 20:05:41.137646 1127876 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 20:05:41.137677 1127876 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-1055011/.minikube/addons for local assets ...
	I0729 20:05:41.137747 1127876 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-1055011/.minikube/files for local assets ...
	I0729 20:05:41.137867 1127876 filesync.go:149] local asset: /home/jenkins/minikube-integration/19312-1055011/.minikube/files/etc/ssl/certs/10622722.pem -> 10622722.pem in /etc/ssl/certs
	I0729 20:05:41.137985 1127876 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 20:05:41.146798 1127876 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/files/etc/ssl/certs/10622722.pem --> /etc/ssl/certs/10622722.pem (1708 bytes)
	I0729 20:05:41.169499 1127876 start.go:296] duration metric: took 120.721195ms for postStartSetup
	I0729 20:05:41.169546 1127876 fix.go:56] duration metric: took 20.440587678s for fixHost
	I0729 20:05:41.169575 1127876 main.go:141] libmachine: (newest-cni-584186) Calling .GetSSHHostname
	I0729 20:05:41.172638 1127876 main.go:141] libmachine: (newest-cni-584186) DBG | domain newest-cni-584186 has defined MAC address 52:54:00:60:e1:97 in network mk-newest-cni-584186
	I0729 20:05:41.172989 1127876 main.go:141] libmachine: (newest-cni-584186) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:e1:97", ip: ""} in network mk-newest-cni-584186: {Iface:virbr1 ExpiryTime:2024-07-29 21:05:31 +0000 UTC Type:0 Mac:52:54:00:60:e1:97 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:newest-cni-584186 Clientid:01:52:54:00:60:e1:97}
	I0729 20:05:41.173019 1127876 main.go:141] libmachine: (newest-cni-584186) DBG | domain newest-cni-584186 has defined IP address 192.168.39.170 and MAC address 52:54:00:60:e1:97 in network mk-newest-cni-584186
	I0729 20:05:41.173133 1127876 main.go:141] libmachine: (newest-cni-584186) Calling .GetSSHPort
	I0729 20:05:41.173333 1127876 main.go:141] libmachine: (newest-cni-584186) Calling .GetSSHKeyPath
	I0729 20:05:41.173476 1127876 main.go:141] libmachine: (newest-cni-584186) Calling .GetSSHKeyPath
	I0729 20:05:41.173648 1127876 main.go:141] libmachine: (newest-cni-584186) Calling .GetSSHUsername
	I0729 20:05:41.173885 1127876 main.go:141] libmachine: Using SSH client type: native
	I0729 20:05:41.174053 1127876 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.170 22 <nil> <nil>}
	I0729 20:05:41.174063 1127876 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0729 20:05:41.279560 1127876 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722283541.248209888
	
	I0729 20:05:41.279593 1127876 fix.go:216] guest clock: 1722283541.248209888
	I0729 20:05:41.279603 1127876 fix.go:229] Guest: 2024-07-29 20:05:41.248209888 +0000 UTC Remote: 2024-07-29 20:05:41.169551743 +0000 UTC m=+20.581594119 (delta=78.658145ms)
	I0729 20:05:41.279658 1127876 fix.go:200] guest clock delta is within tolerance: 78.658145ms
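The fix.go lines above compare the guest's clock (read with `date +%s.%N` over SSH) against the host-side timestamp and accept the result when the difference is inside a tolerance. A minimal Go sketch of that comparison follows; the parsing helper and the 2s tolerance are illustrative assumptions, not minikube's actual constants.

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseDateSecNano turns the output of `date +%s.%N` (seconds.nanoseconds)
// into a time.Time. %N always yields nine digits, so the fraction is taken
// as nanoseconds directly.
func parseDateSecNano(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseDateSecNano("1722283541.248209888\n") // sample value from the log
	if err != nil {
		panic(err)
	}
	host := time.Unix(1722283541, 169551743) // host-side reference time (illustrative)
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = 2 * time.Second // assumed tolerance, not minikube's constant
	fmt.Printf("delta=%v withinTolerance=%v\n", delta, delta <= tolerance)
}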
	I0729 20:05:41.279665 1127876 start.go:83] releasing machines lock for "newest-cni-584186", held for 20.550727513s
	I0729 20:05:41.279691 1127876 main.go:141] libmachine: (newest-cni-584186) Calling .DriverName
	I0729 20:05:41.279960 1127876 main.go:141] libmachine: (newest-cni-584186) Calling .GetIP
	I0729 20:05:41.282517 1127876 main.go:141] libmachine: (newest-cni-584186) DBG | domain newest-cni-584186 has defined MAC address 52:54:00:60:e1:97 in network mk-newest-cni-584186
	I0729 20:05:41.282932 1127876 main.go:141] libmachine: (newest-cni-584186) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:e1:97", ip: ""} in network mk-newest-cni-584186: {Iface:virbr1 ExpiryTime:2024-07-29 21:05:31 +0000 UTC Type:0 Mac:52:54:00:60:e1:97 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:newest-cni-584186 Clientid:01:52:54:00:60:e1:97}
	I0729 20:05:41.282955 1127876 main.go:141] libmachine: (newest-cni-584186) DBG | domain newest-cni-584186 has defined IP address 192.168.39.170 and MAC address 52:54:00:60:e1:97 in network mk-newest-cni-584186
	I0729 20:05:41.283145 1127876 main.go:141] libmachine: (newest-cni-584186) Calling .DriverName
	I0729 20:05:41.283704 1127876 main.go:141] libmachine: (newest-cni-584186) Calling .DriverName
	I0729 20:05:41.283887 1127876 main.go:141] libmachine: (newest-cni-584186) Calling .DriverName
	I0729 20:05:41.283979 1127876 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 20:05:41.284032 1127876 main.go:141] libmachine: (newest-cni-584186) Calling .GetSSHHostname
	I0729 20:05:41.284102 1127876 ssh_runner.go:195] Run: cat /version.json
	I0729 20:05:41.284129 1127876 main.go:141] libmachine: (newest-cni-584186) Calling .GetSSHHostname
	I0729 20:05:41.286639 1127876 main.go:141] libmachine: (newest-cni-584186) DBG | domain newest-cni-584186 has defined MAC address 52:54:00:60:e1:97 in network mk-newest-cni-584186
	I0729 20:05:41.286824 1127876 main.go:141] libmachine: (newest-cni-584186) DBG | domain newest-cni-584186 has defined MAC address 52:54:00:60:e1:97 in network mk-newest-cni-584186
	I0729 20:05:41.286974 1127876 main.go:141] libmachine: (newest-cni-584186) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:e1:97", ip: ""} in network mk-newest-cni-584186: {Iface:virbr1 ExpiryTime:2024-07-29 21:05:31 +0000 UTC Type:0 Mac:52:54:00:60:e1:97 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:newest-cni-584186 Clientid:01:52:54:00:60:e1:97}
	I0729 20:05:41.286996 1127876 main.go:141] libmachine: (newest-cni-584186) DBG | domain newest-cni-584186 has defined IP address 192.168.39.170 and MAC address 52:54:00:60:e1:97 in network mk-newest-cni-584186
	I0729 20:05:41.287127 1127876 main.go:141] libmachine: (newest-cni-584186) Calling .GetSSHPort
	I0729 20:05:41.287281 1127876 main.go:141] libmachine: (newest-cni-584186) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:e1:97", ip: ""} in network mk-newest-cni-584186: {Iface:virbr1 ExpiryTime:2024-07-29 21:05:31 +0000 UTC Type:0 Mac:52:54:00:60:e1:97 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:newest-cni-584186 Clientid:01:52:54:00:60:e1:97}
	I0729 20:05:41.287288 1127876 main.go:141] libmachine: (newest-cni-584186) Calling .GetSSHKeyPath
	I0729 20:05:41.287311 1127876 main.go:141] libmachine: (newest-cni-584186) DBG | domain newest-cni-584186 has defined IP address 192.168.39.170 and MAC address 52:54:00:60:e1:97 in network mk-newest-cni-584186
	I0729 20:05:41.287477 1127876 main.go:141] libmachine: (newest-cni-584186) Calling .GetSSHPort
	I0729 20:05:41.287497 1127876 main.go:141] libmachine: (newest-cni-584186) Calling .GetSSHUsername
	I0729 20:05:41.287643 1127876 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/newest-cni-584186/id_rsa Username:docker}
	I0729 20:05:41.287659 1127876 main.go:141] libmachine: (newest-cni-584186) Calling .GetSSHKeyPath
	I0729 20:05:41.287782 1127876 main.go:141] libmachine: (newest-cni-584186) Calling .GetSSHUsername
	I0729 20:05:41.287883 1127876 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/newest-cni-584186/id_rsa Username:docker}
	I0729 20:05:41.364197 1127876 ssh_runner.go:195] Run: systemctl --version
	I0729 20:05:41.392216 1127876 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 20:05:41.538464 1127876 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 20:05:41.544500 1127876 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 20:05:41.544575 1127876 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 20:05:41.560636 1127876 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 20:05:41.560659 1127876 start.go:495] detecting cgroup driver to use...
	I0729 20:05:41.560716 1127876 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 20:05:41.576939 1127876 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 20:05:41.591281 1127876 docker.go:217] disabling cri-docker service (if available) ...
	I0729 20:05:41.591345 1127876 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 20:05:41.604698 1127876 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 20:05:41.618130 1127876 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 20:05:41.732827 1127876 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 20:05:41.867721 1127876 docker.go:233] disabling docker service ...
	I0729 20:05:41.867809 1127876 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 20:05:41.882605 1127876 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 20:05:41.894878 1127876 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 20:05:42.030355 1127876 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 20:05:42.141554 1127876 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
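The block above disables the Docker-based runtimes before the node is switched to CRI-O: stop the cri-docker and docker sockets/services, disable the sockets, mask the services, then confirm docker is no longer active. A rough Go sketch of issuing that sequence is below; minikube actually runs these over SSH through ssh_runner, and ignoring "unit not found"-style failures is an assumption here.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Same order as the log; failures are reported but not fatal, since the
	// units may simply not exist on the guest (assumption for this sketch).
	cmds := [][]string{
		{"systemctl", "stop", "-f", "cri-docker.socket"},
		{"systemctl", "stop", "-f", "cri-docker.service"},
		{"systemctl", "disable", "cri-docker.socket"},
		{"systemctl", "mask", "cri-docker.service"},
		{"systemctl", "stop", "-f", "docker.socket"},
		{"systemctl", "stop", "-f", "docker.service"},
		{"systemctl", "disable", "docker.socket"},
		{"systemctl", "mask", "docker.service"},
		{"systemctl", "is-active", "--quiet", "service", "docker"},
	}
	for _, c := range cmds {
		args := append([]string{"-n"}, c...) // sudo -n: fail instead of prompting
		if out, err := exec.Command("sudo", args...).CombinedOutput(); err != nil {
			fmt.Printf("%v failed (often fine): %v %s\n", c, err, out)
		}
	}
}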
	I0729 20:05:42.154974 1127876 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 20:05:42.172765 1127876 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0729 20:05:42.172831 1127876 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 20:05:42.182727 1127876 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 20:05:42.182794 1127876 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 20:05:42.192726 1127876 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 20:05:42.203399 1127876 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 20:05:42.213415 1127876 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 20:05:42.223446 1127876 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 20:05:42.233711 1127876 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 20:05:42.251668 1127876 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
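The sed run above edits /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image, force the cgroupfs cgroup manager, put conmon into the "pod" cgroup, and seed default_sysctls with net.ipv4.ip_unprivileged_port_start=0. A hedged Go sketch of the same kind of key rewrite applied to a config held in memory; it is a simplified stand-in for the sed pipeline, not CRI-O's own tooling.

package main

import (
	"fmt"
	"regexp"
)

// setKey replaces any existing `key = ...` line with `key = value`,
// mirroring the sed 's|^.*key = .*$|key = value|' edits in the log.
func setKey(conf, key, value string) string {
	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
	return re.ReplaceAllString(conf, key+" = "+value)
}

func main() {
	conf := "pause_image = \"registry.k8s.io/pause:3.9\"\ncgroup_manager = \"systemd\"\n"
	conf = setKey(conf, "pause_image", `"registry.k8s.io/pause:3.10"`)
	conf = setKey(conf, "cgroup_manager", `"cgroupfs"`)
	// The real flow also drops/re-adds conmon_cgroup and appends
	// "net.ipv4.ip_unprivileged_port_start=0" to default_sysctls; omitted here.
	fmt.Print(conf)
}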
	I0729 20:05:42.261617 1127876 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 20:05:42.270627 1127876 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 20:05:42.270683 1127876 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 20:05:42.283559 1127876 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
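The netfilter step above is a probe-then-fallback: `sysctl net.bridge.bridge-nf-call-iptables` fails because br_netfilter is not loaded yet, so the module is loaded with modprobe and IPv4 forwarding is enabled afterwards. A small Go sketch of that fallback, run locally instead of over SSH, with error handling simplified:

package main

import (
	"fmt"
	"os/exec"
)

func run(name string, args ...string) error {
	out, err := exec.Command(name, args...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("%s %v: %w: %s", name, args, err, out)
	}
	return nil
}

func main() {
	// Probe: succeeds only if br_netfilter is already loaded.
	if err := run("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables"); err != nil {
		fmt.Println("probe failed, loading br_netfilter:", err)
		if err := run("sudo", "modprobe", "br_netfilter"); err != nil {
			panic(err)
		}
	}
	// Enable IPv4 forwarding regardless, as the log does.
	if err := run("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward"); err != nil {
		panic(err)
	}
}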
	I0729 20:05:42.292652 1127876 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 20:05:42.410478 1127876 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 20:05:42.558097 1127876 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 20:05:42.558195 1127876 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 20:05:42.563035 1127876 start.go:563] Will wait 60s for crictl version
	I0729 20:05:42.563092 1127876 ssh_runner.go:195] Run: which crictl
	I0729 20:05:42.566893 1127876 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 20:05:42.603579 1127876 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 20:05:42.603653 1127876 ssh_runner.go:195] Run: crio --version
	I0729 20:05:42.631607 1127876 ssh_runner.go:195] Run: crio --version
	I0729 20:05:42.659647 1127876 out.go:177] * Preparing Kubernetes v1.31.0-beta.0 on CRI-O 1.29.1 ...
	I0729 20:05:42.660846 1127876 main.go:141] libmachine: (newest-cni-584186) Calling .GetIP
	I0729 20:05:42.663622 1127876 main.go:141] libmachine: (newest-cni-584186) DBG | domain newest-cni-584186 has defined MAC address 52:54:00:60:e1:97 in network mk-newest-cni-584186
	I0729 20:05:42.663965 1127876 main.go:141] libmachine: (newest-cni-584186) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:e1:97", ip: ""} in network mk-newest-cni-584186: {Iface:virbr1 ExpiryTime:2024-07-29 21:05:31 +0000 UTC Type:0 Mac:52:54:00:60:e1:97 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:newest-cni-584186 Clientid:01:52:54:00:60:e1:97}
	I0729 20:05:42.663998 1127876 main.go:141] libmachine: (newest-cni-584186) DBG | domain newest-cni-584186 has defined IP address 192.168.39.170 and MAC address 52:54:00:60:e1:97 in network mk-newest-cni-584186
	I0729 20:05:42.664225 1127876 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0729 20:05:42.668595 1127876 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
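The host.minikube.internal update above uses a strip-then-append rewrite: drop any line ending in the tab-separated hostname, append the fresh mapping, write the result to a temp file, and `sudo cp` it over /etc/hosts. A hedged Go sketch of the same rewrite, pointed at an arbitrary path so it can be tried safely; blank lines are dropped for simplicity, which the original grep pipeline does not do.

package main

import (
	"fmt"
	"os"
	"strings"
)

// upsertHostsEntry mirrors the shell pipeline in the log: remove any existing
// line that ends in "\t<host>", then append a fresh "ip\thost" mapping.
func upsertHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil && !os.IsNotExist(err) {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if line == "" || strings.HasSuffix(line, "\t"+host) {
			continue // stale mapping for this hostname (blank lines skipped too)
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+host)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	// hosts.sample is an illustrative path; minikube writes the result to a
	// temp file and then copies it over /etc/hosts on the guest with sudo.
	if err := upsertHostsEntry("hosts.sample", "192.168.39.1", "host.minikube.internal"); err != nil {
		fmt.Println("update failed:", err)
	}
}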
	I0729 20:05:42.682728 1127876 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0729 20:05:42.683751 1127876 kubeadm.go:883] updating cluster {Name:newest-cni-584186 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-584186 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.170 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 20:05:42.683871 1127876 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0729 20:05:42.683926 1127876 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 20:05:42.720587 1127876 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0-beta.0". assuming images are not preloaded.
	I0729 20:05:42.720655 1127876 ssh_runner.go:195] Run: which lz4
	I0729 20:05:42.724980 1127876 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0729 20:05:42.729535 1127876 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0729 20:05:42.729566 1127876 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (387176433 bytes)
	I0729 20:05:44.051266 1127876 crio.go:462] duration metric: took 1.326326279s to copy over tarball
	I0729 20:05:44.051355 1127876 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0729 20:05:46.048039 1127876 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.996653912s)
	I0729 20:05:46.048067 1127876 crio.go:469] duration metric: took 1.996772665s to extract the tarball
	I0729 20:05:46.048074 1127876 ssh_runner.go:146] rm: /preloaded.tar.lz4
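The preload sequence above is: ask crictl for loaded images, notice the expected kube-apiserver tag is missing, check whether /preloaded.tar.lz4 already exists on the guest, scp the cached tarball up, untar it into /var with lz4, and remove the tarball. A rough Go sketch of the existence check plus extract step, using local paths and assuming tar and lz4 are on PATH:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	const tarball = "/preloaded.tar.lz4" // target path used in the log
	if _, err := os.Stat(tarball); os.IsNotExist(err) {
		// In minikube this is where the cached tarball is scp'd over from the
		// host's .minikube/cache directory; here we only report the gap.
		fmt.Println("tarball missing, would copy it over first")
		return
	}
	// Extract preserving xattrs/capabilities, matching the tar flags in the log.
	cmd := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", tarball)
	if out, err := cmd.CombinedOutput(); err != nil {
		fmt.Printf("extract failed: %v\n%s\n", err, out)
		return
	}
	_ = exec.Command("sudo", "rm", "-f", tarball).Run() // clean up, mirroring the rm step
}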
	I0729 20:05:46.085151 1127876 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 20:05:46.126742 1127876 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 20:05:46.126771 1127876 cache_images.go:84] Images are preloaded, skipping loading
	I0729 20:05:46.126781 1127876 kubeadm.go:934] updating node { 192.168.39.170 8443 v1.31.0-beta.0 crio true true} ...
	I0729 20:05:46.126955 1127876 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --feature-gates=ServerSideApply=true --hostname-override=newest-cni-584186 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.170
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-584186 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 20:05:46.127050 1127876 ssh_runner.go:195] Run: crio config
	I0729 20:05:46.172442 1127876 cni.go:84] Creating CNI manager for ""
	I0729 20:05:46.172465 1127876 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 20:05:46.172479 1127876 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I0729 20:05:46.172509 1127876 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.39.170 APIServerPort:8443 KubernetesVersion:v1.31.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-584186 NodeName:newest-cni-584186 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.170"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.170 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 20:05:46.172679 1127876 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.170
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-584186"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.170
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.170"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0729 20:05:46.172764 1127876 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0-beta.0
	I0729 20:05:46.182650 1127876 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 20:05:46.182722 1127876 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 20:05:46.191956 1127876 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (361 bytes)
	I0729 20:05:46.207488 1127876 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I0729 20:05:46.223205 1127876 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2292 bytes)
	I0729 20:05:46.239884 1127876 ssh_runner.go:195] Run: grep 192.168.39.170	control-plane.minikube.internal$ /etc/hosts
	I0729 20:05:46.243772 1127876 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.170	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 20:05:46.255733 1127876 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 20:05:46.385974 1127876 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 20:05:46.403142 1127876 certs.go:68] Setting up /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/newest-cni-584186 for IP: 192.168.39.170
	I0729 20:05:46.403172 1127876 certs.go:194] generating shared ca certs ...
	I0729 20:05:46.403195 1127876 certs.go:226] acquiring lock for ca certs: {Name:mkd1f0b3d7e82ac23e713dd6b75409e103935b02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 20:05:46.403398 1127876 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.key
	I0729 20:05:46.403481 1127876 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/proxy-client-ca.key
	I0729 20:05:46.403496 1127876 certs.go:256] generating profile certs ...
	I0729 20:05:46.403599 1127876 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/newest-cni-584186/client.key
	I0729 20:05:46.403685 1127876 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/newest-cni-584186/apiserver.key.33aa0cdf
	I0729 20:05:46.403737 1127876 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/newest-cni-584186/proxy-client.key
	I0729 20:05:46.403870 1127876 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/1062272.pem (1338 bytes)
	W0729 20:05:46.403909 1127876 certs.go:480] ignoring /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/1062272_empty.pem, impossibly tiny 0 bytes
	I0729 20:05:46.403925 1127876 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 20:05:46.403959 1127876 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem (1082 bytes)
	I0729 20:05:46.403992 1127876 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/cert.pem (1123 bytes)
	I0729 20:05:46.404053 1127876 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/key.pem (1679 bytes)
	I0729 20:05:46.404115 1127876 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/files/etc/ssl/certs/10622722.pem (1708 bytes)
	I0729 20:05:46.404728 1127876 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 20:05:46.444311 1127876 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0729 20:05:46.467992 1127876 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 20:05:46.500740 1127876 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0729 20:05:46.531657 1127876 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/newest-cni-584186/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0729 20:05:46.558153 1127876 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/newest-cni-584186/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0729 20:05:46.582215 1127876 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/newest-cni-584186/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 20:05:46.606119 1127876 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/newest-cni-584186/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0729 20:05:46.629662 1127876 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/1062272.pem --> /usr/share/ca-certificates/1062272.pem (1338 bytes)
	I0729 20:05:46.652192 1127876 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/files/etc/ssl/certs/10622722.pem --> /usr/share/ca-certificates/10622722.pem (1708 bytes)
	I0729 20:05:46.674403 1127876 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 20:05:46.696753 1127876 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 20:05:46.713471 1127876 ssh_runner.go:195] Run: openssl version
	I0729 20:05:46.719543 1127876 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 20:05:46.730559 1127876 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 20:05:46.734901 1127876 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 18:17 /usr/share/ca-certificates/minikubeCA.pem
	I0729 20:05:46.734946 1127876 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 20:05:46.740565 1127876 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 20:05:46.750464 1127876 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1062272.pem && ln -fs /usr/share/ca-certificates/1062272.pem /etc/ssl/certs/1062272.pem"
	I0729 20:05:46.760580 1127876 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1062272.pem
	I0729 20:05:46.764902 1127876 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 18:30 /usr/share/ca-certificates/1062272.pem
	I0729 20:05:46.764952 1127876 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1062272.pem
	I0729 20:05:46.770507 1127876 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1062272.pem /etc/ssl/certs/51391683.0"
	I0729 20:05:46.781224 1127876 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10622722.pem && ln -fs /usr/share/ca-certificates/10622722.pem /etc/ssl/certs/10622722.pem"
	I0729 20:05:46.791426 1127876 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10622722.pem
	I0729 20:05:46.795882 1127876 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 18:30 /usr/share/ca-certificates/10622722.pem
	I0729 20:05:46.795934 1127876 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10622722.pem
	I0729 20:05:46.801279 1127876 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10622722.pem /etc/ssl/certs/3ec20f2e.0"
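The certificate trust steps above follow OpenSSL's hashed-directory convention: copy each PEM into /usr/share/ca-certificates, compute its subject hash with `openssl x509 -hash -noout`, and symlink /etc/ssl/certs/<hash>.0 at it so TLS clients can resolve the CA. A hedged Go sketch that derives the hash and prints the symlink that would be created:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	pem := "/usr/share/ca-certificates/minikubeCA.pem" // path from the log
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. b5213941 in the log above
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	// The real flow runs `sudo ln -fs <pem> <link>` only if the link is absent.
	fmt.Printf("would run: sudo ln -fs %s %s\n", pem, link)
}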
	I0729 20:05:46.811300 1127876 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 20:05:46.815696 1127876 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 20:05:46.821455 1127876 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 20:05:46.827027 1127876 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 20:05:46.832697 1127876 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 20:05:46.838026 1127876 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 20:05:46.843558 1127876 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
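The run of `openssl x509 -noout -checkend 86400` calls above is an expiry guard: the command exits non-zero if the certificate will expire within the next 86400 seconds (24 hours). A small Go sketch looping that check over the same certificate list; the paths are copied from the log, and a failure is only reported, not acted on:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	certs := []string{
		"/var/lib/minikube/certs/apiserver-etcd-client.crt",
		"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
		"/var/lib/minikube/certs/etcd/server.crt",
		"/var/lib/minikube/certs/etcd/healthcheck-client.crt",
		"/var/lib/minikube/certs/etcd/peer.crt",
		"/var/lib/minikube/certs/front-proxy-client.crt",
	}
	for _, c := range certs {
		// -checkend 86400: non-zero exit means the cert expires within 24h.
		if err := exec.Command("openssl", "x509", "-noout", "-in", c, "-checkend", "86400").Run(); err != nil {
			fmt.Printf("%s expires within 24h (or could not be read): %v\n", c, err)
		}
	}
}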
	I0729 20:05:46.848961 1127876 kubeadm.go:392] StartCluster: {Name:newest-cni-584186 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-584186 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.170 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 20:05:46.849048 1127876 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 20:05:46.849092 1127876 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 20:05:46.884731 1127876 cri.go:89] found id: ""
	I0729 20:05:46.884797 1127876 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 20:05:46.895287 1127876 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0729 20:05:46.895311 1127876 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0729 20:05:46.895364 1127876 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0729 20:05:46.905064 1127876 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0729 20:05:46.905696 1127876 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-584186" does not appear in /home/jenkins/minikube-integration/19312-1055011/kubeconfig
	I0729 20:05:46.905966 1127876 kubeconfig.go:62] /home/jenkins/minikube-integration/19312-1055011/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-584186" cluster setting kubeconfig missing "newest-cni-584186" context setting]
	I0729 20:05:46.906393 1127876 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-1055011/kubeconfig: {Name:mkf834b33d9b214f3561db5b8f8958d26700afbb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 20:05:46.907684 1127876 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0729 20:05:46.917286 1127876 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.170
	I0729 20:05:46.917314 1127876 kubeadm.go:1160] stopping kube-system containers ...
	I0729 20:05:46.917328 1127876 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0729 20:05:46.917384 1127876 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 20:05:46.960717 1127876 cri.go:89] found id: ""
	I0729 20:05:46.960794 1127876 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0729 20:05:46.977982 1127876 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 20:05:46.987581 1127876 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 20:05:46.987598 1127876 kubeadm.go:157] found existing configuration files:
	
	I0729 20:05:46.987635 1127876 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 20:05:46.996530 1127876 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 20:05:46.996569 1127876 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 20:05:47.005758 1127876 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 20:05:47.014435 1127876 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 20:05:47.014486 1127876 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 20:05:47.023414 1127876 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 20:05:47.033798 1127876 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 20:05:47.033852 1127876 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 20:05:47.043006 1127876 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 20:05:47.051457 1127876 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 20:05:47.051501 1127876 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
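The four grep/rm pairs above implement the stale-config cleanup: each kubeconfig under /etc/kubernetes is kept only if it already points at https://control-plane.minikube.internal:8443, and otherwise deleted so kubeadm can regenerate it. A minimal Go sketch of that decision, reading the files directly instead of shelling out to grep (running it against /etc/kubernetes would of course need root):

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8443"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil || !strings.Contains(string(data), endpoint) {
			// Missing or pointing elsewhere: remove it so `kubeadm init phase
			// kubeconfig` can regenerate it against the expected endpoint.
			fmt.Println("removing stale config:", f)
			_ = os.Remove(f)
		}
	}
}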
	I0729 20:05:47.060515 1127876 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 20:05:47.070555 1127876 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 20:05:47.184779 1127876 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 20:05:48.414996 1127876 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.230178127s)
	I0729 20:05:48.415037 1127876 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0729 20:05:48.620880 1127876 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 20:05:48.697951 1127876 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
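Rather than a full `kubeadm init`, the restart path above replays individual phases against the generated /var/tmp/minikube/kubeadm.yaml: certs, kubeconfig, kubelet-start, control-plane, then etcd. A hedged Go sketch of running that phase sequence; the binary and config paths are copied from the log, while the sudo/PATH handling is simplified compared to the bash wrapper minikube uses.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	kubeadm := "/var/lib/minikube/binaries/v1.31.0-beta.0/kubeadm"
	cfg := "/var/tmp/minikube/kubeadm.yaml"
	phases := [][]string{
		{"init", "phase", "certs", "all"},
		{"init", "phase", "kubeconfig", "all"},
		{"init", "phase", "kubelet-start"},
		{"init", "phase", "control-plane", "all"},
		{"init", "phase", "etcd", "local"},
	}
	for _, p := range phases {
		args := append(append([]string{kubeadm}, p...), "--config", cfg)
		if out, err := exec.Command("sudo", args...).CombinedOutput(); err != nil {
			fmt.Printf("phase %v failed: %v\n%s\n", p, err, out)
			return // later phases depend on the earlier ones succeeding
		}
	}
}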
	I0729 20:05:48.782324 1127876 api_server.go:52] waiting for apiserver process to appear ...
	I0729 20:05:48.782425 1127876 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 20:05:49.283039 1127876 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 20:05:49.783048 1127876 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 20:05:50.282699 1127876 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 20:05:50.312442 1127876 api_server.go:72] duration metric: took 1.53012026s to wait for apiserver process to appear ...
	I0729 20:05:50.312475 1127876 api_server.go:88] waiting for apiserver healthz status ...
	I0729 20:05:50.312496 1127876 api_server.go:253] Checking apiserver healthz at https://192.168.39.170:8443/healthz ...
	I0729 20:05:50.313024 1127876 api_server.go:269] stopped: https://192.168.39.170:8443/healthz: Get "https://192.168.39.170:8443/healthz": dial tcp 192.168.39.170:8443: connect: connection refused
	I0729 20:05:50.812905 1127876 api_server.go:253] Checking apiserver healthz at https://192.168.39.170:8443/healthz ...
	I0729 20:05:53.431469 1127876 api_server.go:279] https://192.168.39.170:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 20:05:53.431502 1127876 api_server.go:103] status: https://192.168.39.170:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 20:05:53.431516 1127876 api_server.go:253] Checking apiserver healthz at https://192.168.39.170:8443/healthz ...
	I0729 20:05:53.480864 1127876 api_server.go:279] https://192.168.39.170:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 20:05:53.480900 1127876 api_server.go:103] status: https://192.168.39.170:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 20:05:53.813333 1127876 api_server.go:253] Checking apiserver healthz at https://192.168.39.170:8443/healthz ...
	I0729 20:05:53.837115 1127876 api_server.go:279] https://192.168.39.170:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 20:05:53.837173 1127876 api_server.go:103] status: https://192.168.39.170:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 20:05:54.312607 1127876 api_server.go:253] Checking apiserver healthz at https://192.168.39.170:8443/healthz ...
	I0729 20:05:54.318333 1127876 api_server.go:279] https://192.168.39.170:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 20:05:54.318367 1127876 api_server.go:103] status: https://192.168.39.170:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 20:05:54.812712 1127876 api_server.go:253] Checking apiserver healthz at https://192.168.39.170:8443/healthz ...
	I0729 20:05:54.819296 1127876 api_server.go:279] https://192.168.39.170:8443/healthz returned 200:
	ok
	I0729 20:05:54.825713 1127876 api_server.go:141] control plane version: v1.31.0-beta.0
	I0729 20:05:54.825742 1127876 api_server.go:131] duration metric: took 4.513259636s to wait for apiserver health ...
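The healthz wait above tolerates a connection refused first, then 403s while RBAC bootstraps, then 500s while post-start hooks finish, and stops at the first 200. A rough Go sketch of polling an apiserver /healthz with that shape; TLS verification is skipped purely for illustration, and the overall budget and poll interval are assumptions.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	url := "https://192.168.39.170:8443/healthz" // endpoint from the log
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(4 * time.Minute) // assumed overall budget
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			ok := resp.StatusCode == http.StatusOK
			resp.Body.Close()
			if ok {
				fmt.Println("apiserver healthy")
				return
			}
			fmt.Println("not ready yet, status:", resp.StatusCode) // 403/500 during bootstrap
		} else {
			fmt.Println("not reachable yet:", err) // connection refused early on
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("gave up waiting for /healthz")
}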
	I0729 20:05:54.825754 1127876 cni.go:84] Creating CNI manager for ""
	I0729 20:05:54.825763 1127876 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 20:05:54.827416 1127876 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 20:05:54.828461 1127876 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 20:05:54.839562 1127876 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0729 20:05:54.857562 1127876 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 20:05:54.866473 1127876 system_pods.go:59] 8 kube-system pods found
	I0729 20:05:54.866502 1127876 system_pods.go:61] "coredns-5cfdc65f69-6cq52" [c6781167-ef5d-425d-a210-9db64e7d491e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0729 20:05:54.866509 1127876 system_pods.go:61] "etcd-newest-cni-584186" [b6d714cb-3cae-4e29-880a-ceb046b03878] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0729 20:05:54.866516 1127876 system_pods.go:61] "kube-apiserver-newest-cni-584186" [bfd3be81-d2fd-41b6-ac95-4f3bee0dfafe] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0729 20:05:54.866521 1127876 system_pods.go:61] "kube-controller-manager-newest-cni-584186" [4f049807-a7d9-4658-87c2-9f699035963e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0729 20:05:54.866526 1127876 system_pods.go:61] "kube-proxy-4jkpj" [3f4c4e71-633a-469e-8ae3-22353daa4958] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0729 20:05:54.866531 1127876 system_pods.go:61] "kube-scheduler-newest-cni-584186" [19f06508-b4c8-4138-bfe5-bb3b5682ce17] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0729 20:05:54.866536 1127876 system_pods.go:61] "metrics-server-78fcd8795b-xxwn5" [79c9c5b7-a270-469a-a5c9-01c7760e2372] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 20:05:54.866542 1127876 system_pods.go:61] "storage-provisioner" [7b7526c7-757a-4349-ae25-a90319eaea0b] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0729 20:05:54.866548 1127876 system_pods.go:74] duration metric: took 8.966544ms to wait for pod list to return data ...
	I0729 20:05:54.866559 1127876 node_conditions.go:102] verifying NodePressure condition ...
	I0729 20:05:54.870627 1127876 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 20:05:54.870649 1127876 node_conditions.go:123] node cpu capacity is 2
	I0729 20:05:54.870661 1127876 node_conditions.go:105] duration metric: took 4.096898ms to run NodePressure ...
	I0729 20:05:54.870678 1127876 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 20:05:55.156530 1127876 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0729 20:05:55.167899 1127876 ops.go:34] apiserver oom_adj: -16
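The two lines above read /proc/<apiserver pid>/oom_adj to confirm the kube-apiserver is shielded from the OOM killer (-16 here). A tiny Go sketch of the same read; the pgrep-based PID lookup is kept, and only the first matching PID is inspected.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("pgrep", "kube-apiserver").Output()
	if err != nil {
		fmt.Println("kube-apiserver not running:", err)
		return
	}
	pid := strings.Fields(string(out))[0]
	adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
	if err != nil {
		fmt.Println("read failed:", err)
		return
	}
	fmt.Printf("kube-apiserver pid %s oom_adj=%s", pid, adj)
}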
	I0729 20:05:55.167928 1127876 kubeadm.go:597] duration metric: took 8.272609034s to restartPrimaryControlPlane
	I0729 20:05:55.167940 1127876 kubeadm.go:394] duration metric: took 8.31898525s to StartCluster
	I0729 20:05:55.167959 1127876 settings.go:142] acquiring lock: {Name:mk8657322241b3b1f65443d6cee1b2ccb99f315e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 20:05:55.168047 1127876 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19312-1055011/kubeconfig
	I0729 20:05:55.168998 1127876 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-1055011/kubeconfig: {Name:mkf834b33d9b214f3561db5b8f8958d26700afbb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 20:05:55.169257 1127876 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.170 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 20:05:55.169313 1127876 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0729 20:05:55.169404 1127876 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-584186"
	I0729 20:05:55.169443 1127876 addons.go:234] Setting addon storage-provisioner=true in "newest-cni-584186"
	I0729 20:05:55.169441 1127876 addons.go:69] Setting default-storageclass=true in profile "newest-cni-584186"
	W0729 20:05:55.169456 1127876 addons.go:243] addon storage-provisioner should already be in state true
	I0729 20:05:55.169458 1127876 addons.go:69] Setting dashboard=true in profile "newest-cni-584186"
	I0729 20:05:55.169476 1127876 config.go:182] Loaded profile config "newest-cni-584186": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0729 20:05:55.169482 1127876 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-584186"
	I0729 20:05:55.169489 1127876 host.go:66] Checking if "newest-cni-584186" exists ...
	I0729 20:05:55.169500 1127876 addons.go:234] Setting addon dashboard=true in "newest-cni-584186"
	I0729 20:05:55.169480 1127876 addons.go:69] Setting metrics-server=true in profile "newest-cni-584186"
	W0729 20:05:55.169514 1127876 addons.go:243] addon dashboard should already be in state true
	I0729 20:05:55.169536 1127876 addons.go:234] Setting addon metrics-server=true in "newest-cni-584186"
	W0729 20:05:55.169550 1127876 addons.go:243] addon metrics-server should already be in state true
	I0729 20:05:55.169551 1127876 host.go:66] Checking if "newest-cni-584186" exists ...
	I0729 20:05:55.169581 1127876 host.go:66] Checking if "newest-cni-584186" exists ...
	I0729 20:05:55.169822 1127876 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 20:05:55.169874 1127876 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 20:05:55.169914 1127876 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 20:05:55.169924 1127876 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 20:05:55.169915 1127876 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 20:05:55.169962 1127876 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 20:05:55.169983 1127876 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 20:05:55.169949 1127876 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 20:05:55.170884 1127876 out.go:177] * Verifying Kubernetes components...
	I0729 20:05:55.172150 1127876 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 20:05:55.185881 1127876 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37301
	I0729 20:05:55.186051 1127876 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45559
	I0729 20:05:55.186212 1127876 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40185
	I0729 20:05:55.186429 1127876 main.go:141] libmachine: () Calling .GetVersion
	I0729 20:05:55.186508 1127876 main.go:141] libmachine: () Calling .GetVersion
	I0729 20:05:55.186614 1127876 main.go:141] libmachine: () Calling .GetVersion
	I0729 20:05:55.186992 1127876 main.go:141] libmachine: Using API Version  1
	I0729 20:05:55.187019 1127876 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 20:05:55.187141 1127876 main.go:141] libmachine: Using API Version  1
	I0729 20:05:55.187164 1127876 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 20:05:55.187329 1127876 main.go:141] libmachine: Using API Version  1
	I0729 20:05:55.187357 1127876 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 20:05:55.187376 1127876 main.go:141] libmachine: () Calling .GetMachineName
	I0729 20:05:55.187531 1127876 main.go:141] libmachine: () Calling .GetMachineName
	I0729 20:05:55.187696 1127876 main.go:141] libmachine: () Calling .GetMachineName
	I0729 20:05:55.187981 1127876 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 20:05:55.188027 1127876 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 20:05:55.188145 1127876 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 20:05:55.188179 1127876 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 20:05:55.188324 1127876 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 20:05:55.188326 1127876 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33117
	I0729 20:05:55.188365 1127876 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 20:05:55.188686 1127876 main.go:141] libmachine: () Calling .GetVersion
	I0729 20:05:55.189114 1127876 main.go:141] libmachine: Using API Version  1
	I0729 20:05:55.189134 1127876 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 20:05:55.189457 1127876 main.go:141] libmachine: () Calling .GetMachineName
	I0729 20:05:55.189644 1127876 main.go:141] libmachine: (newest-cni-584186) Calling .GetState
	I0729 20:05:55.200213 1127876 addons.go:234] Setting addon default-storageclass=true in "newest-cni-584186"
	W0729 20:05:55.200231 1127876 addons.go:243] addon default-storageclass should already be in state true
	I0729 20:05:55.200274 1127876 host.go:66] Checking if "newest-cni-584186" exists ...
	I0729 20:05:55.200690 1127876 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 20:05:55.200745 1127876 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 20:05:55.206649 1127876 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35843
	I0729 20:05:55.207071 1127876 main.go:141] libmachine: () Calling .GetVersion
	I0729 20:05:55.207424 1127876 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38965
	I0729 20:05:55.207535 1127876 main.go:141] libmachine: Using API Version  1
	I0729 20:05:55.207552 1127876 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 20:05:55.207865 1127876 main.go:141] libmachine: () Calling .GetVersion
	I0729 20:05:55.207944 1127876 main.go:141] libmachine: () Calling .GetMachineName
	I0729 20:05:55.208162 1127876 main.go:141] libmachine: (newest-cni-584186) Calling .GetState
	I0729 20:05:55.208762 1127876 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38239
	I0729 20:05:55.208860 1127876 main.go:141] libmachine: Using API Version  1
	I0729 20:05:55.208876 1127876 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 20:05:55.209214 1127876 main.go:141] libmachine: () Calling .GetVersion
	I0729 20:05:55.209266 1127876 main.go:141] libmachine: () Calling .GetMachineName
	I0729 20:05:55.209484 1127876 main.go:141] libmachine: (newest-cni-584186) Calling .GetState
	I0729 20:05:55.209710 1127876 main.go:141] libmachine: Using API Version  1
	I0729 20:05:55.209738 1127876 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 20:05:55.210094 1127876 main.go:141] libmachine: () Calling .GetMachineName
	I0729 20:05:55.210227 1127876 main.go:141] libmachine: (newest-cni-584186) Calling .GetState
	I0729 20:05:55.210507 1127876 main.go:141] libmachine: (newest-cni-584186) Calling .DriverName
	I0729 20:05:55.211720 1127876 main.go:141] libmachine: (newest-cni-584186) Calling .DriverName
	I0729 20:05:55.212215 1127876 main.go:141] libmachine: (newest-cni-584186) Calling .DriverName
	I0729 20:05:55.212363 1127876 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0729 20:05:55.213071 1127876 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 20:05:55.213821 1127876 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0729 20:05:55.214402 1127876 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0729 20:05:55.214422 1127876 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0729 20:05:55.214441 1127876 main.go:141] libmachine: (newest-cni-584186) Calling .GetSSHHostname
	I0729 20:05:55.214475 1127876 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 20:05:55.214490 1127876 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0729 20:05:55.214507 1127876 main.go:141] libmachine: (newest-cni-584186) Calling .GetSSHHostname
	I0729 20:05:55.215947 1127876 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0729 20:05:55.216867 1127876 addons.go:431] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0729 20:05:55.216880 1127876 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0729 20:05:55.216894 1127876 main.go:141] libmachine: (newest-cni-584186) Calling .GetSSHHostname
	I0729 20:05:55.218386 1127876 main.go:141] libmachine: (newest-cni-584186) DBG | domain newest-cni-584186 has defined MAC address 52:54:00:60:e1:97 in network mk-newest-cni-584186
	I0729 20:05:55.218458 1127876 main.go:141] libmachine: (newest-cni-584186) DBG | domain newest-cni-584186 has defined MAC address 52:54:00:60:e1:97 in network mk-newest-cni-584186
	I0729 20:05:55.218875 1127876 main.go:141] libmachine: (newest-cni-584186) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:e1:97", ip: ""} in network mk-newest-cni-584186: {Iface:virbr1 ExpiryTime:2024-07-29 21:05:31 +0000 UTC Type:0 Mac:52:54:00:60:e1:97 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:newest-cni-584186 Clientid:01:52:54:00:60:e1:97}
	I0729 20:05:55.218902 1127876 main.go:141] libmachine: (newest-cni-584186) DBG | domain newest-cni-584186 has defined IP address 192.168.39.170 and MAC address 52:54:00:60:e1:97 in network mk-newest-cni-584186
	I0729 20:05:55.219043 1127876 main.go:141] libmachine: (newest-cni-584186) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:e1:97", ip: ""} in network mk-newest-cni-584186: {Iface:virbr1 ExpiryTime:2024-07-29 21:05:31 +0000 UTC Type:0 Mac:52:54:00:60:e1:97 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:newest-cni-584186 Clientid:01:52:54:00:60:e1:97}
	I0729 20:05:55.219081 1127876 main.go:141] libmachine: (newest-cni-584186) DBG | domain newest-cni-584186 has defined IP address 192.168.39.170 and MAC address 52:54:00:60:e1:97 in network mk-newest-cni-584186
	I0729 20:05:55.219233 1127876 main.go:141] libmachine: (newest-cni-584186) Calling .GetSSHPort
	I0729 20:05:55.219386 1127876 main.go:141] libmachine: (newest-cni-584186) Calling .GetSSHPort
	I0729 20:05:55.219450 1127876 main.go:141] libmachine: (newest-cni-584186) Calling .GetSSHKeyPath
	I0729 20:05:55.219634 1127876 main.go:141] libmachine: (newest-cni-584186) Calling .GetSSHKeyPath
	I0729 20:05:55.219669 1127876 main.go:141] libmachine: (newest-cni-584186) Calling .GetSSHUsername
	I0729 20:05:55.219800 1127876 main.go:141] libmachine: (newest-cni-584186) Calling .GetSSHUsername
	I0729 20:05:55.219831 1127876 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/newest-cni-584186/id_rsa Username:docker}
	I0729 20:05:55.220397 1127876 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/newest-cni-584186/id_rsa Username:docker}
	I0729 20:05:55.221043 1127876 main.go:141] libmachine: (newest-cni-584186) DBG | domain newest-cni-584186 has defined MAC address 52:54:00:60:e1:97 in network mk-newest-cni-584186
	I0729 20:05:55.221354 1127876 main.go:141] libmachine: (newest-cni-584186) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:e1:97", ip: ""} in network mk-newest-cni-584186: {Iface:virbr1 ExpiryTime:2024-07-29 21:05:31 +0000 UTC Type:0 Mac:52:54:00:60:e1:97 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:newest-cni-584186 Clientid:01:52:54:00:60:e1:97}
	I0729 20:05:55.221385 1127876 main.go:141] libmachine: (newest-cni-584186) DBG | domain newest-cni-584186 has defined IP address 192.168.39.170 and MAC address 52:54:00:60:e1:97 in network mk-newest-cni-584186
	I0729 20:05:55.221478 1127876 main.go:141] libmachine: (newest-cni-584186) Calling .GetSSHPort
	I0729 20:05:55.221635 1127876 main.go:141] libmachine: (newest-cni-584186) Calling .GetSSHKeyPath
	I0729 20:05:55.221730 1127876 main.go:141] libmachine: (newest-cni-584186) Calling .GetSSHUsername
	I0729 20:05:55.221894 1127876 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/newest-cni-584186/id_rsa Username:docker}
	I0729 20:05:55.223021 1127876 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40607
	I0729 20:05:55.223407 1127876 main.go:141] libmachine: () Calling .GetVersion
	I0729 20:05:55.223896 1127876 main.go:141] libmachine: Using API Version  1
	I0729 20:05:55.223908 1127876 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 20:05:55.224153 1127876 main.go:141] libmachine: () Calling .GetMachineName
	I0729 20:05:55.224570 1127876 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 20:05:55.224598 1127876 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 20:05:55.239165 1127876 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46623
	I0729 20:05:55.239602 1127876 main.go:141] libmachine: () Calling .GetVersion
	I0729 20:05:55.240024 1127876 main.go:141] libmachine: Using API Version  1
	I0729 20:05:55.240039 1127876 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 20:05:55.240436 1127876 main.go:141] libmachine: () Calling .GetMachineName
	I0729 20:05:55.240784 1127876 main.go:141] libmachine: (newest-cni-584186) Calling .GetState
	I0729 20:05:55.242399 1127876 main.go:141] libmachine: (newest-cni-584186) Calling .DriverName
	I0729 20:05:55.243078 1127876 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0729 20:05:55.243090 1127876 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0729 20:05:55.243104 1127876 main.go:141] libmachine: (newest-cni-584186) Calling .GetSSHHostname
	I0729 20:05:55.247511 1127876 main.go:141] libmachine: (newest-cni-584186) DBG | domain newest-cni-584186 has defined MAC address 52:54:00:60:e1:97 in network mk-newest-cni-584186
	I0729 20:05:55.248105 1127876 main.go:141] libmachine: (newest-cni-584186) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:e1:97", ip: ""} in network mk-newest-cni-584186: {Iface:virbr1 ExpiryTime:2024-07-29 21:05:31 +0000 UTC Type:0 Mac:52:54:00:60:e1:97 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:newest-cni-584186 Clientid:01:52:54:00:60:e1:97}
	I0729 20:05:55.248148 1127876 main.go:141] libmachine: (newest-cni-584186) DBG | domain newest-cni-584186 has defined IP address 192.168.39.170 and MAC address 52:54:00:60:e1:97 in network mk-newest-cni-584186
	I0729 20:05:55.248373 1127876 main.go:141] libmachine: (newest-cni-584186) Calling .GetSSHPort
	I0729 20:05:55.248528 1127876 main.go:141] libmachine: (newest-cni-584186) Calling .GetSSHKeyPath
	I0729 20:05:55.248666 1127876 main.go:141] libmachine: (newest-cni-584186) Calling .GetSSHUsername
	I0729 20:05:55.248790 1127876 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/newest-cni-584186/id_rsa Username:docker}
	I0729 20:05:55.350273 1127876 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 20:05:55.369553 1127876 api_server.go:52] waiting for apiserver process to appear ...
	I0729 20:05:55.369668 1127876 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 20:05:55.384144 1127876 api_server.go:72] duration metric: took 214.852893ms to wait for apiserver process to appear ...
	I0729 20:05:55.384176 1127876 api_server.go:88] waiting for apiserver healthz status ...
	I0729 20:05:55.384198 1127876 api_server.go:253] Checking apiserver healthz at https://192.168.39.170:8443/healthz ...
	I0729 20:05:55.388239 1127876 api_server.go:279] https://192.168.39.170:8443/healthz returned 200:
	ok
	I0729 20:05:55.389153 1127876 api_server.go:141] control plane version: v1.31.0-beta.0
	I0729 20:05:55.389178 1127876 api_server.go:131] duration metric: took 4.987594ms to wait for apiserver health ...
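The healthz probe above is a plain HTTPS GET against the apiserver endpoint. Assuming anonymous access to /healthz is allowed (the Kubernetes default via the system:public-info-viewer binding), the same check can be reproduced with curl; -k skips verification of the cluster CA:

	curl -sk https://192.168.39.170:8443/healthz
	# expected output: ok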
	I0729 20:05:55.389185 1127876 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 20:05:55.395396 1127876 system_pods.go:59] 8 kube-system pods found
	I0729 20:05:55.395423 1127876 system_pods.go:61] "coredns-5cfdc65f69-6cq52" [c6781167-ef5d-425d-a210-9db64e7d491e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0729 20:05:55.395430 1127876 system_pods.go:61] "etcd-newest-cni-584186" [b6d714cb-3cae-4e29-880a-ceb046b03878] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0729 20:05:55.395445 1127876 system_pods.go:61] "kube-apiserver-newest-cni-584186" [bfd3be81-d2fd-41b6-ac95-4f3bee0dfafe] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0729 20:05:55.395456 1127876 system_pods.go:61] "kube-controller-manager-newest-cni-584186" [4f049807-a7d9-4658-87c2-9f699035963e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0729 20:05:55.395466 1127876 system_pods.go:61] "kube-proxy-4jkpj" [3f4c4e71-633a-469e-8ae3-22353daa4958] Running
	I0729 20:05:55.395474 1127876 system_pods.go:61] "kube-scheduler-newest-cni-584186" [19f06508-b4c8-4138-bfe5-bb3b5682ce17] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0729 20:05:55.395481 1127876 system_pods.go:61] "metrics-server-78fcd8795b-xxwn5" [79c9c5b7-a270-469a-a5c9-01c7760e2372] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 20:05:55.395488 1127876 system_pods.go:61] "storage-provisioner" [7b7526c7-757a-4349-ae25-a90319eaea0b] Running
	I0729 20:05:55.395494 1127876 system_pods.go:74] duration metric: took 6.303482ms to wait for pod list to return data ...
	I0729 20:05:55.395503 1127876 default_sa.go:34] waiting for default service account to be created ...
	I0729 20:05:55.399523 1127876 default_sa.go:45] found service account: "default"
	I0729 20:05:55.399543 1127876 default_sa.go:55] duration metric: took 4.03491ms for default service account to be created ...
	I0729 20:05:55.399553 1127876 kubeadm.go:582] duration metric: took 230.269677ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0729 20:05:55.399567 1127876 node_conditions.go:102] verifying NodePressure condition ...
	I0729 20:05:55.402301 1127876 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 20:05:55.402324 1127876 node_conditions.go:123] node cpu capacity is 2
	I0729 20:05:55.402336 1127876 node_conditions.go:105] duration metric: took 2.764344ms to run NodePressure ...
	I0729 20:05:55.402349 1127876 start.go:241] waiting for startup goroutines ...
	I0729 20:05:55.466493 1127876 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0729 20:05:55.466518 1127876 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0729 20:05:55.499945 1127876 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0729 20:05:55.499969 1127876 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0729 20:05:55.502182 1127876 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 20:05:55.513346 1127876 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0729 20:05:55.547575 1127876 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0729 20:05:55.547622 1127876 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0729 20:05:55.596509 1127876 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0729 20:05:55.596541 1127876 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0729 20:05:55.621151 1127876 addons.go:431] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0729 20:05:55.621177 1127876 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0729 20:05:55.684738 1127876 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 20:05:55.684765 1127876 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0729 20:05:55.716750 1127876 addons.go:431] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0729 20:05:55.716772 1127876 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0729 20:05:55.746617 1127876 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 20:05:55.774960 1127876 addons.go:431] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0729 20:05:55.774992 1127876 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0729 20:05:55.798272 1127876 addons.go:431] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0729 20:05:55.798299 1127876 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0729 20:05:55.860428 1127876 addons.go:431] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0729 20:05:55.860462 1127876 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0729 20:05:55.949166 1127876 addons.go:431] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0729 20:05:55.949202 1127876 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0729 20:05:56.021370 1127876 addons.go:431] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0729 20:05:56.021400 1127876 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0729 20:05:56.075380 1127876 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
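Once the metrics-server and dashboard manifests staged above have been applied, the resulting objects can be checked with kubectl. The namespaces below (kubernetes-dashboard for the dashboard addon, kube-system for metrics-server) follow minikube's bundled manifests and are shown only as a sketch:

	kubectl --context newest-cni-584186 -n kubernetes-dashboard get deploy,svc
	kubectl --context newest-cni-584186 -n kube-system get deploy metrics-server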
	I0729 20:05:57.008132 1127876 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.494734296s)
	I0729 20:05:57.008205 1127876 main.go:141] libmachine: Making call to close driver server
	I0729 20:05:57.008216 1127876 main.go:141] libmachine: (newest-cni-584186) Calling .Close
	I0729 20:05:57.008381 1127876 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.506165103s)
	I0729 20:05:57.008426 1127876 main.go:141] libmachine: Making call to close driver server
	I0729 20:05:57.008439 1127876 main.go:141] libmachine: (newest-cni-584186) Calling .Close
	I0729 20:05:57.008561 1127876 main.go:141] libmachine: Successfully made call to close driver server
	I0729 20:05:57.008594 1127876 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 20:05:57.008616 1127876 main.go:141] libmachine: Making call to close driver server
	I0729 20:05:57.008626 1127876 main.go:141] libmachine: (newest-cni-584186) Calling .Close
	I0729 20:05:57.008765 1127876 main.go:141] libmachine: Successfully made call to close driver server
	I0729 20:05:57.008781 1127876 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 20:05:57.008790 1127876 main.go:141] libmachine: Making call to close driver server
	I0729 20:05:57.008797 1127876 main.go:141] libmachine: (newest-cni-584186) Calling .Close
	I0729 20:05:57.008795 1127876 main.go:141] libmachine: (newest-cni-584186) DBG | Closing plugin on server side
	I0729 20:05:57.008888 1127876 main.go:141] libmachine: Successfully made call to close driver server
	I0729 20:05:57.008907 1127876 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 20:05:57.009285 1127876 main.go:141] libmachine: (newest-cni-584186) DBG | Closing plugin on server side
	I0729 20:05:57.009321 1127876 main.go:141] libmachine: Successfully made call to close driver server
	I0729 20:05:57.009332 1127876 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 20:05:57.015693 1127876 main.go:141] libmachine: Making call to close driver server
	I0729 20:05:57.015715 1127876 main.go:141] libmachine: (newest-cni-584186) Calling .Close
	I0729 20:05:57.015968 1127876 main.go:141] libmachine: Successfully made call to close driver server
	I0729 20:05:57.015984 1127876 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 20:05:57.203560 1127876 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.456893023s)
	I0729 20:05:57.203645 1127876 main.go:141] libmachine: Making call to close driver server
	I0729 20:05:57.203663 1127876 main.go:141] libmachine: (newest-cni-584186) Calling .Close
	I0729 20:05:57.204009 1127876 main.go:141] libmachine: Successfully made call to close driver server
	I0729 20:05:57.204028 1127876 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 20:05:57.204039 1127876 main.go:141] libmachine: Making call to close driver server
	I0729 20:05:57.204059 1127876 main.go:141] libmachine: (newest-cni-584186) Calling .Close
	I0729 20:05:57.204399 1127876 main.go:141] libmachine: (newest-cni-584186) DBG | Closing plugin on server side
	I0729 20:05:57.204399 1127876 main.go:141] libmachine: Successfully made call to close driver server
	I0729 20:05:57.204423 1127876 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 20:05:57.204433 1127876 addons.go:475] Verifying addon metrics-server=true in "newest-cni-584186"
	I0729 20:05:57.453620 1127876 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.378187608s)
	I0729 20:05:57.453678 1127876 main.go:141] libmachine: Making call to close driver server
	I0729 20:05:57.453692 1127876 main.go:141] libmachine: (newest-cni-584186) Calling .Close
	I0729 20:05:57.454004 1127876 main.go:141] libmachine: Successfully made call to close driver server
	I0729 20:05:57.454021 1127876 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 20:05:57.454028 1127876 main.go:141] libmachine: Making call to close driver server
	I0729 20:05:57.454036 1127876 main.go:141] libmachine: (newest-cni-584186) Calling .Close
	I0729 20:05:57.454371 1127876 main.go:141] libmachine: Successfully made call to close driver server
	I0729 20:05:57.454394 1127876 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 20:05:57.454397 1127876 main.go:141] libmachine: (newest-cni-584186) DBG | Closing plugin on server side
	I0729 20:05:57.455694 1127876 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-584186 addons enable metrics-server
	
	I0729 20:05:57.457040 1127876 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0729 20:05:57.458072 1127876 addons.go:510] duration metric: took 2.288763469s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
	I0729 20:05:57.458113 1127876 start.go:246] waiting for cluster config update ...
	I0729 20:05:57.458128 1127876 start.go:255] writing updated cluster config ...
	I0729 20:05:57.458441 1127876 ssh_runner.go:195] Run: rm -f paused
	I0729 20:05:57.521323 1127876 start.go:600] kubectl: 1.30.3, cluster: 1.31.0-beta.0 (minor skew: 1)
	I0729 20:05:57.522460 1127876 out.go:177] * Done! kubectl is now configured to use "newest-cni-584186" cluster and "default" namespace by default
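The final status line reports a client/server minor skew of 1 (kubectl 1.30.3 against a 1.31.0-beta.0 cluster), which is within kubectl's supported version skew. A quick post-start sanity check, assuming the profile name used above, could be:

	kubectl version                                # compare client and server versions
	minikube -p newest-cni-584186 addons list      # confirm storage-provisioner, default-storageclass, metrics-server, dashboard are enabled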
	
	
	==> CRI-O <==
	Jul 29 20:06:15 default-k8s-diff-port-024652 crio[729]: time="2024-07-29 20:06:15.966074945Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722283575966050740,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d12c8114-6b1c-43ec-8eb8-bbe5a59641e0 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 20:06:15 default-k8s-diff-port-024652 crio[729]: time="2024-07-29 20:06:15.966614636Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ebe3b081-eaaa-4994-96db-47a81b8cdb58 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 20:06:15 default-k8s-diff-port-024652 crio[729]: time="2024-07-29 20:06:15.966685886Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ebe3b081-eaaa-4994-96db-47a81b8cdb58 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 20:06:15 default-k8s-diff-port-024652 crio[729]: time="2024-07-29 20:06:15.966871552Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d85df72861021ccd67cf5c078798f7bd9719ff9156206d1d144e9f1541652238,PodSandboxId:f3e2a2df8526b9d80ad150567b79950e22445d9ff5137a03270d8ff19b9c5ff7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722282573643456575,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-wqbpm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96db74e9-67ca-4065-8758-a27a14b6d3d5,},Annotations:map[string]string{io.kubernetes.container.hash: 51562b2b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\
":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:544de27dfe841475c85d73e7db83bb7871287b4e97412e6d52b54dffedecc566,PodSandboxId:4f7010b89f1f04ac7a1339b62625e3f27946da764fdd8ca27d13564cb9f27892,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722282573419225128,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namesp
ace: kube-system,io.kubernetes.pod.uid: ce612854-895f-44d4-8c33-30c3a7eff802,},Annotations:map[string]string{io.kubernetes.container.hash: 4ddbaec2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:587b5ee91e4d9de1795d7ffd93ef21fd9a8b3196be1f8eb526ada5a7c8083cac,PodSandboxId:02a5300c32d545139c049749fc818e0561e1d3aa4e281c932e3130f18aebb1a4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722282573445829530,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wfr8f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: 86699d3a-0843-4b82-b772-23c8f5b7c88a,},Annotations:map[string]string{io.kubernetes.container.hash: 901a1108,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43f80c510edb53dc4f840ab840eaabd2c0173459ddc9df972df6f5dd4a75b7b0,PodSandboxId:2a584dad4937efae9054139deaa68198031d68a1a93f33b4e9527d5adab2a3da,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722282573111829626,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-z8mxw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12aa4a13-f4af-4cda-b099-
5e0e44836300,},Annotations:map[string]string{io.kubernetes.container.hash: 10509c03,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dcc3f9ab02e735408a8de4f6ba0fce3870dcba7510b0b9f8463dea41e2016481,PodSandboxId:ae4afd327a6da9207d8698af48ac71ef46074ef7016f8ec0c1754ac2aad6d86b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:172228255
2461264017,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-024652,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23dfbba9e22325c54719eaf295544c1b,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87388e1df32b7bf04a13c912a3c2e7b8c7c944032ed1f8de11c7b26132aaa015,PodSandboxId:213c981cebf8d8b0fec2e4e10323a7dd280c287fd24cc90e02852c03ca7b4d04,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,Cre
atedAt:1722282552489156803,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-024652,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 654e6e14d2769f400fa96eb4f3a95c0e,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ec7ffdb7235b9394901555c7b4a01d557093decd5c6f5ce7e70834a366d9f1e,PodSandboxId:95163f9834b92a4cf84ff19d6714b6887be87f48e325ee9f4178cccc94e09353,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,Creat
edAt:1722282552389322270,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-024652,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 806e72ccd89ceb8c7d450a80d54242a2,},Annotations:map[string]string{io.kubernetes.container.hash: f47a04ed,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b8f3542dce58de6f654b924f281bc6381f1113086254ff9d9c8c90c9d084a0f,PodSandboxId:0eed853799d51994d9b49218d4ea1221f08e7cbacd8adb00facd25ad675af939,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722282
552365255032,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-024652,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 34c7c20f10d633ac02b96a5da6dddf85,},Annotations:map[string]string{io.kubernetes.container.hash: 93e15cdd,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ebe3b081-eaaa-4994-96db-47a81b8cdb58 name=/runtime.v1.RuntimeService/ListContainers
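The CRI-O entries above are the runtime's responses to the kubelet's ListContainers calls. The same container list can be pulled directly from CRI-O with crictl inside the node (for example via minikube -p default-k8s-diff-port-024652 ssh); the flags shown are standard crictl options, not taken from this log:

	sudo crictl ps -a                      # all containers known to CRI-O
	sudo crictl ps --name kube-apiserver   # filter by container name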
	Jul 29 20:06:16 default-k8s-diff-port-024652 crio[729]: time="2024-07-29 20:06:16.009395341Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=687542e4-a4ff-4a0b-867b-4c74b47aa0dd name=/runtime.v1.RuntimeService/Version
	Jul 29 20:06:16 default-k8s-diff-port-024652 crio[729]: time="2024-07-29 20:06:16.009486206Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=687542e4-a4ff-4a0b-867b-4c74b47aa0dd name=/runtime.v1.RuntimeService/Version
	Jul 29 20:06:16 default-k8s-diff-port-024652 crio[729]: time="2024-07-29 20:06:16.011201108Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=db10c6fc-3865-49f2-ad06-e534523a71ae name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 20:06:16 default-k8s-diff-port-024652 crio[729]: time="2024-07-29 20:06:16.011653685Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722283576011629903,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=db10c6fc-3865-49f2-ad06-e534523a71ae name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 20:06:16 default-k8s-diff-port-024652 crio[729]: time="2024-07-29 20:06:16.012393776Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=65555cc2-3f41-4769-b29e-24f06abc8ee5 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 20:06:16 default-k8s-diff-port-024652 crio[729]: time="2024-07-29 20:06:16.012450048Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=65555cc2-3f41-4769-b29e-24f06abc8ee5 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 20:06:16 default-k8s-diff-port-024652 crio[729]: time="2024-07-29 20:06:16.012683895Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d85df72861021ccd67cf5c078798f7bd9719ff9156206d1d144e9f1541652238,PodSandboxId:f3e2a2df8526b9d80ad150567b79950e22445d9ff5137a03270d8ff19b9c5ff7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722282573643456575,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-wqbpm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96db74e9-67ca-4065-8758-a27a14b6d3d5,},Annotations:map[string]string{io.kubernetes.container.hash: 51562b2b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\
":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:544de27dfe841475c85d73e7db83bb7871287b4e97412e6d52b54dffedecc566,PodSandboxId:4f7010b89f1f04ac7a1339b62625e3f27946da764fdd8ca27d13564cb9f27892,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722282573419225128,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namesp
ace: kube-system,io.kubernetes.pod.uid: ce612854-895f-44d4-8c33-30c3a7eff802,},Annotations:map[string]string{io.kubernetes.container.hash: 4ddbaec2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:587b5ee91e4d9de1795d7ffd93ef21fd9a8b3196be1f8eb526ada5a7c8083cac,PodSandboxId:02a5300c32d545139c049749fc818e0561e1d3aa4e281c932e3130f18aebb1a4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722282573445829530,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wfr8f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: 86699d3a-0843-4b82-b772-23c8f5b7c88a,},Annotations:map[string]string{io.kubernetes.container.hash: 901a1108,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43f80c510edb53dc4f840ab840eaabd2c0173459ddc9df972df6f5dd4a75b7b0,PodSandboxId:2a584dad4937efae9054139deaa68198031d68a1a93f33b4e9527d5adab2a3da,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722282573111829626,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-z8mxw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12aa4a13-f4af-4cda-b099-
5e0e44836300,},Annotations:map[string]string{io.kubernetes.container.hash: 10509c03,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dcc3f9ab02e735408a8de4f6ba0fce3870dcba7510b0b9f8463dea41e2016481,PodSandboxId:ae4afd327a6da9207d8698af48ac71ef46074ef7016f8ec0c1754ac2aad6d86b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:172228255
2461264017,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-024652,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23dfbba9e22325c54719eaf295544c1b,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87388e1df32b7bf04a13c912a3c2e7b8c7c944032ed1f8de11c7b26132aaa015,PodSandboxId:213c981cebf8d8b0fec2e4e10323a7dd280c287fd24cc90e02852c03ca7b4d04,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,Cre
atedAt:1722282552489156803,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-024652,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 654e6e14d2769f400fa96eb4f3a95c0e,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ec7ffdb7235b9394901555c7b4a01d557093decd5c6f5ce7e70834a366d9f1e,PodSandboxId:95163f9834b92a4cf84ff19d6714b6887be87f48e325ee9f4178cccc94e09353,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,Creat
edAt:1722282552389322270,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-024652,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 806e72ccd89ceb8c7d450a80d54242a2,},Annotations:map[string]string{io.kubernetes.container.hash: f47a04ed,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b8f3542dce58de6f654b924f281bc6381f1113086254ff9d9c8c90c9d084a0f,PodSandboxId:0eed853799d51994d9b49218d4ea1221f08e7cbacd8adb00facd25ad675af939,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722282
552365255032,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-024652,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 34c7c20f10d633ac02b96a5da6dddf85,},Annotations:map[string]string{io.kubernetes.container.hash: 93e15cdd,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=65555cc2-3f41-4769-b29e-24f06abc8ee5 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 20:06:16 default-k8s-diff-port-024652 crio[729]: time="2024-07-29 20:06:16.053404684Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6c7dedb3-f971-46ab-9fda-44210d490177 name=/runtime.v1.RuntimeService/Version
	Jul 29 20:06:16 default-k8s-diff-port-024652 crio[729]: time="2024-07-29 20:06:16.053500683Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6c7dedb3-f971-46ab-9fda-44210d490177 name=/runtime.v1.RuntimeService/Version
	Jul 29 20:06:16 default-k8s-diff-port-024652 crio[729]: time="2024-07-29 20:06:16.054969205Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ef888883-ac01-4e55-8237-3bef2c7e7a90 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 20:06:16 default-k8s-diff-port-024652 crio[729]: time="2024-07-29 20:06:16.055504916Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722283576055480569,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ef888883-ac01-4e55-8237-3bef2c7e7a90 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 20:06:16 default-k8s-diff-port-024652 crio[729]: time="2024-07-29 20:06:16.056500632Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=56e587b2-c63e-46f5-b850-ca0b65d5af74 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 20:06:16 default-k8s-diff-port-024652 crio[729]: time="2024-07-29 20:06:16.056623201Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=56e587b2-c63e-46f5-b850-ca0b65d5af74 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 20:06:16 default-k8s-diff-port-024652 crio[729]: time="2024-07-29 20:06:16.056801871Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d85df72861021ccd67cf5c078798f7bd9719ff9156206d1d144e9f1541652238,PodSandboxId:f3e2a2df8526b9d80ad150567b79950e22445d9ff5137a03270d8ff19b9c5ff7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722282573643456575,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-wqbpm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96db74e9-67ca-4065-8758-a27a14b6d3d5,},Annotations:map[string]string{io.kubernetes.container.hash: 51562b2b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\
":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:544de27dfe841475c85d73e7db83bb7871287b4e97412e6d52b54dffedecc566,PodSandboxId:4f7010b89f1f04ac7a1339b62625e3f27946da764fdd8ca27d13564cb9f27892,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722282573419225128,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namesp
ace: kube-system,io.kubernetes.pod.uid: ce612854-895f-44d4-8c33-30c3a7eff802,},Annotations:map[string]string{io.kubernetes.container.hash: 4ddbaec2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:587b5ee91e4d9de1795d7ffd93ef21fd9a8b3196be1f8eb526ada5a7c8083cac,PodSandboxId:02a5300c32d545139c049749fc818e0561e1d3aa4e281c932e3130f18aebb1a4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722282573445829530,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wfr8f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: 86699d3a-0843-4b82-b772-23c8f5b7c88a,},Annotations:map[string]string{io.kubernetes.container.hash: 901a1108,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43f80c510edb53dc4f840ab840eaabd2c0173459ddc9df972df6f5dd4a75b7b0,PodSandboxId:2a584dad4937efae9054139deaa68198031d68a1a93f33b4e9527d5adab2a3da,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722282573111829626,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-z8mxw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12aa4a13-f4af-4cda-b099-
5e0e44836300,},Annotations:map[string]string{io.kubernetes.container.hash: 10509c03,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dcc3f9ab02e735408a8de4f6ba0fce3870dcba7510b0b9f8463dea41e2016481,PodSandboxId:ae4afd327a6da9207d8698af48ac71ef46074ef7016f8ec0c1754ac2aad6d86b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:172228255
2461264017,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-024652,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23dfbba9e22325c54719eaf295544c1b,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87388e1df32b7bf04a13c912a3c2e7b8c7c944032ed1f8de11c7b26132aaa015,PodSandboxId:213c981cebf8d8b0fec2e4e10323a7dd280c287fd24cc90e02852c03ca7b4d04,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,Cre
atedAt:1722282552489156803,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-024652,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 654e6e14d2769f400fa96eb4f3a95c0e,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ec7ffdb7235b9394901555c7b4a01d557093decd5c6f5ce7e70834a366d9f1e,PodSandboxId:95163f9834b92a4cf84ff19d6714b6887be87f48e325ee9f4178cccc94e09353,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,Creat
edAt:1722282552389322270,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-024652,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 806e72ccd89ceb8c7d450a80d54242a2,},Annotations:map[string]string{io.kubernetes.container.hash: f47a04ed,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b8f3542dce58de6f654b924f281bc6381f1113086254ff9d9c8c90c9d084a0f,PodSandboxId:0eed853799d51994d9b49218d4ea1221f08e7cbacd8adb00facd25ad675af939,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722282
552365255032,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-024652,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 34c7c20f10d633ac02b96a5da6dddf85,},Annotations:map[string]string{io.kubernetes.container.hash: 93e15cdd,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=56e587b2-c63e-46f5-b850-ca0b65d5af74 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 20:06:16 default-k8s-diff-port-024652 crio[729]: time="2024-07-29 20:06:16.090936014Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f22c5b69-85d1-4d0c-9f1c-ebd40d8eaf55 name=/runtime.v1.RuntimeService/Version
	Jul 29 20:06:16 default-k8s-diff-port-024652 crio[729]: time="2024-07-29 20:06:16.091014246Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f22c5b69-85d1-4d0c-9f1c-ebd40d8eaf55 name=/runtime.v1.RuntimeService/Version
	Jul 29 20:06:16 default-k8s-diff-port-024652 crio[729]: time="2024-07-29 20:06:16.092418427Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b63dea45-f0a1-4f0f-8906-77f77e08d439 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 20:06:16 default-k8s-diff-port-024652 crio[729]: time="2024-07-29 20:06:16.092891718Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722283576092863499,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133285,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b63dea45-f0a1-4f0f-8906-77f77e08d439 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 20:06:16 default-k8s-diff-port-024652 crio[729]: time="2024-07-29 20:06:16.093433115Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5be6ec5b-c49a-404e-877d-90b81704bfd3 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 20:06:16 default-k8s-diff-port-024652 crio[729]: time="2024-07-29 20:06:16.093503956Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5be6ec5b-c49a-404e-877d-90b81704bfd3 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 20:06:16 default-k8s-diff-port-024652 crio[729]: time="2024-07-29 20:06:16.093771448Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d85df72861021ccd67cf5c078798f7bd9719ff9156206d1d144e9f1541652238,PodSandboxId:f3e2a2df8526b9d80ad150567b79950e22445d9ff5137a03270d8ff19b9c5ff7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722282573643456575,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-wqbpm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96db74e9-67ca-4065-8758-a27a14b6d3d5,},Annotations:map[string]string{io.kubernetes.container.hash: 51562b2b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\
":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:544de27dfe841475c85d73e7db83bb7871287b4e97412e6d52b54dffedecc566,PodSandboxId:4f7010b89f1f04ac7a1339b62625e3f27946da764fdd8ca27d13564cb9f27892,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722282573419225128,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namesp
ace: kube-system,io.kubernetes.pod.uid: ce612854-895f-44d4-8c33-30c3a7eff802,},Annotations:map[string]string{io.kubernetes.container.hash: 4ddbaec2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:587b5ee91e4d9de1795d7ffd93ef21fd9a8b3196be1f8eb526ada5a7c8083cac,PodSandboxId:02a5300c32d545139c049749fc818e0561e1d3aa4e281c932e3130f18aebb1a4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722282573445829530,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wfr8f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: 86699d3a-0843-4b82-b772-23c8f5b7c88a,},Annotations:map[string]string{io.kubernetes.container.hash: 901a1108,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43f80c510edb53dc4f840ab840eaabd2c0173459ddc9df972df6f5dd4a75b7b0,PodSandboxId:2a584dad4937efae9054139deaa68198031d68a1a93f33b4e9527d5adab2a3da,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722282573111829626,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-z8mxw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12aa4a13-f4af-4cda-b099-
5e0e44836300,},Annotations:map[string]string{io.kubernetes.container.hash: 10509c03,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dcc3f9ab02e735408a8de4f6ba0fce3870dcba7510b0b9f8463dea41e2016481,PodSandboxId:ae4afd327a6da9207d8698af48ac71ef46074ef7016f8ec0c1754ac2aad6d86b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:172228255
2461264017,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-024652,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23dfbba9e22325c54719eaf295544c1b,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87388e1df32b7bf04a13c912a3c2e7b8c7c944032ed1f8de11c7b26132aaa015,PodSandboxId:213c981cebf8d8b0fec2e4e10323a7dd280c287fd24cc90e02852c03ca7b4d04,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,Cre
atedAt:1722282552489156803,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-024652,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 654e6e14d2769f400fa96eb4f3a95c0e,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ec7ffdb7235b9394901555c7b4a01d557093decd5c6f5ce7e70834a366d9f1e,PodSandboxId:95163f9834b92a4cf84ff19d6714b6887be87f48e325ee9f4178cccc94e09353,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,Creat
edAt:1722282552389322270,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-024652,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 806e72ccd89ceb8c7d450a80d54242a2,},Annotations:map[string]string{io.kubernetes.container.hash: f47a04ed,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b8f3542dce58de6f654b924f281bc6381f1113086254ff9d9c8c90c9d084a0f,PodSandboxId:0eed853799d51994d9b49218d4ea1221f08e7cbacd8adb00facd25ad675af939,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722282
552365255032,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-024652,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 34c7c20f10d633ac02b96a5da6dddf85,},Annotations:map[string]string{io.kubernetes.container.hash: 93e15cdd,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5be6ec5b-c49a-404e-877d-90b81704bfd3 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	d85df72861021       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   16 minutes ago      Running             coredns                   0                   f3e2a2df8526b       coredns-7db6d8ff4d-wqbpm
	587b5ee91e4d9       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1   16 minutes ago      Running             kube-proxy                0                   02a5300c32d54       kube-proxy-wfr8f
	544de27dfe841       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   16 minutes ago      Running             storage-provisioner       0                   4f7010b89f1f0       storage-provisioner
	43f80c510edb5       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   16 minutes ago      Running             coredns                   0                   2a584dad4937e       coredns-7db6d8ff4d-z8mxw
	87388e1df32b7       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2   17 minutes ago      Running             kube-scheduler            2                   213c981cebf8d       kube-scheduler-default-k8s-diff-port-024652
	dcc3f9ab02e73       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e   17 minutes ago      Running             kube-controller-manager   2                   ae4afd327a6da       kube-controller-manager-default-k8s-diff-port-024652
	2ec7ffdb7235b       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d   17 minutes ago      Running             kube-apiserver            2                   95163f9834b92       kube-apiserver-default-k8s-diff-port-024652
	1b8f3542dce58       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   17 minutes ago      Running             etcd                      2                   0eed853799d51       etcd-default-k8s-diff-port-024652
	
	
	==> coredns [43f80c510edb53dc4f840ab840eaabd2c0173459ddc9df972df6f5dd4a75b7b0] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [d85df72861021ccd67cf5c078798f7bd9719ff9156206d1d144e9f1541652238] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-024652
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-024652
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ff26f50543a99741e8a988a34136ffea2b41dbf0
	                    minikube.k8s.io/name=default-k8s-diff-port-024652
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_29T19_49_18_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 19:49:15 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-024652
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 20:06:06 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 20:04:56 +0000   Mon, 29 Jul 2024 19:49:13 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 20:04:56 +0000   Mon, 29 Jul 2024 19:49:13 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 20:04:56 +0000   Mon, 29 Jul 2024 19:49:13 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 20:04:56 +0000   Mon, 29 Jul 2024 19:49:15 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.100
	  Hostname:    default-k8s-diff-port-024652
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 5ec965039dcb4ac6a46f5f8483481744
	  System UUID:                5ec96503-9dcb-4ac6-a46f-5f8483481744
	  Boot ID:                    a1fbd365-084b-4db4-88a6-674afca14f68
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-wqbpm                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 coredns-7db6d8ff4d-z8mxw                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 etcd-default-k8s-diff-port-024652                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         16m
	  kube-system                 kube-apiserver-default-k8s-diff-port-024652             250m (12%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-024652    200m (10%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-proxy-wfr8f                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-scheduler-default-k8s-diff-port-024652             100m (5%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 metrics-server-569cc877fc-rp2fk                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         16m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 16m                kube-proxy       
	  Normal  NodeHasSufficientMemory  17m (x8 over 17m)  kubelet          Node default-k8s-diff-port-024652 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    17m (x8 over 17m)  kubelet          Node default-k8s-diff-port-024652 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     17m (x7 over 17m)  kubelet          Node default-k8s-diff-port-024652 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  17m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 16m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  16m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  16m                kubelet          Node default-k8s-diff-port-024652 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    16m                kubelet          Node default-k8s-diff-port-024652 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     16m                kubelet          Node default-k8s-diff-port-024652 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           16m                node-controller  Node default-k8s-diff-port-024652 event: Registered Node default-k8s-diff-port-024652 in Controller
	
	
	==> dmesg <==
	[  +0.050310] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.039281] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[Jul29 19:44] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.500985] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	[  +1.589794] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.249980] systemd-fstab-generator[645]: Ignoring "noauto" option for root device
	[  +0.063818] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.057716] systemd-fstab-generator[657]: Ignoring "noauto" option for root device
	[  +0.214001] systemd-fstab-generator[671]: Ignoring "noauto" option for root device
	[  +0.130028] systemd-fstab-generator[683]: Ignoring "noauto" option for root device
	[  +0.281544] systemd-fstab-generator[712]: Ignoring "noauto" option for root device
	[  +4.437182] systemd-fstab-generator[812]: Ignoring "noauto" option for root device
	[  +0.058597] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.103684] systemd-fstab-generator[934]: Ignoring "noauto" option for root device
	[  +5.573390] kauditd_printk_skb: 97 callbacks suppressed
	[  +9.113437] kauditd_printk_skb: 84 callbacks suppressed
	[Jul29 19:49] kauditd_printk_skb: 7 callbacks suppressed
	[  +1.739011] systemd-fstab-generator[3582]: Ignoring "noauto" option for root device
	[  +4.455558] kauditd_printk_skb: 55 callbacks suppressed
	[  +1.601107] systemd-fstab-generator[3902]: Ignoring "noauto" option for root device
	[ +14.341730] systemd-fstab-generator[4105]: Ignoring "noauto" option for root device
	[  +0.119466] kauditd_printk_skb: 14 callbacks suppressed
	[Jul29 19:50] kauditd_printk_skb: 82 callbacks suppressed
	
	
	==> etcd [1b8f3542dce58de6f654b924f281bc6381f1113086254ff9d9c8c90c9d084a0f] <==
	{"level":"info","ts":"2024-07-29T19:49:13.667698Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"cfb89c251def23d6 received MsgPreVoteResp from cfb89c251def23d6 at term 1"}
	{"level":"info","ts":"2024-07-29T19:49:13.667714Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"cfb89c251def23d6 became candidate at term 2"}
	{"level":"info","ts":"2024-07-29T19:49:13.667722Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"cfb89c251def23d6 received MsgVoteResp from cfb89c251def23d6 at term 2"}
	{"level":"info","ts":"2024-07-29T19:49:13.667733Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"cfb89c251def23d6 became leader at term 2"}
	{"level":"info","ts":"2024-07-29T19:49:13.667743Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: cfb89c251def23d6 elected leader cfb89c251def23d6 at term 2"}
	{"level":"info","ts":"2024-07-29T19:49:13.671599Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"cfb89c251def23d6","local-member-attributes":"{Name:default-k8s-diff-port-024652 ClientURLs:[https://192.168.72.100:2379]}","request-path":"/0/members/cfb89c251def23d6/attributes","cluster-id":"9f4804f49c08bcf7","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-29T19:49:13.671744Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T19:49:13.671858Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T19:49:13.672169Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-29T19:49:13.672199Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-29T19:49:13.675932Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-29T19:49:13.694704Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T19:49:13.701286Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"9f4804f49c08bcf7","local-member-id":"cfb89c251def23d6","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T19:49:13.701378Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T19:49:13.701415Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T19:49:13.729447Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.72.100:2379"}
	{"level":"info","ts":"2024-07-29T19:59:13.728553Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":714}
	{"level":"info","ts":"2024-07-29T19:59:13.740054Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":714,"took":"11.117257ms","hash":1102647304,"current-db-size-bytes":2351104,"current-db-size":"2.4 MB","current-db-size-in-use-bytes":2351104,"current-db-size-in-use":"2.4 MB"}
	{"level":"info","ts":"2024-07-29T19:59:13.740251Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1102647304,"revision":714,"compact-revision":-1}
	{"level":"info","ts":"2024-07-29T20:04:13.736266Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":956}
	{"level":"info","ts":"2024-07-29T20:04:13.740742Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":956,"took":"4.031205ms","hash":183695164,"current-db-size-bytes":2351104,"current-db-size":"2.4 MB","current-db-size-in-use-bytes":1642496,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2024-07-29T20:04:13.740795Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":183695164,"revision":956,"compact-revision":714}
	{"level":"warn","ts":"2024-07-29T20:04:52.836384Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"355.890911ms","expected-duration":"100ms","prefix":"","request":"header:<ID:2582410865700335606 > lease_revoke:<id:23d691000a2d6f9f>","response":"size:29"}
	{"level":"info","ts":"2024-07-29T20:04:52.836831Z","caller":"traceutil/trace.go:171","msg":"trace[2055535145] transaction","detail":"{read_only:false; response_revision:1233; number_of_response:1; }","duration":"253.299047ms","start":"2024-07-29T20:04:52.583492Z","end":"2024-07-29T20:04:52.836791Z","steps":["trace[2055535145] 'process raft request'  (duration: 253.183971ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-29T20:05:48.825847Z","caller":"traceutil/trace.go:171","msg":"trace[755986548] transaction","detail":"{read_only:false; response_revision:1281; number_of_response:1; }","duration":"130.606111ms","start":"2024-07-29T20:05:48.695197Z","end":"2024-07-29T20:05:48.825803Z","steps":["trace[755986548] 'process raft request'  (duration: 130.304391ms)"],"step_count":1}
	
	
	==> kernel <==
	 20:06:16 up 22 min,  0 users,  load average: 0.22, 0.16, 0.11
	Linux default-k8s-diff-port-024652 5.10.207 #1 SMP Tue Jul 23 04:25:44 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [2ec7ffdb7235b9394901555c7b4a01d557093decd5c6f5ce7e70834a366d9f1e] <==
	I0729 20:00:16.341979       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0729 20:02:16.341848       1 handler_proxy.go:93] no RequestInfo found in the context
	W0729 20:02:16.342205       1 handler_proxy.go:93] no RequestInfo found in the context
	E0729 20:02:16.342278       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0729 20:02:16.342406       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0729 20:02:16.342432       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0729 20:02:16.343771       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0729 20:04:15.345987       1 handler_proxy.go:93] no RequestInfo found in the context
	E0729 20:04:15.346291       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0729 20:04:16.346676       1 handler_proxy.go:93] no RequestInfo found in the context
	E0729 20:04:16.346743       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0729 20:04:16.346755       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0729 20:04:16.346834       1 handler_proxy.go:93] no RequestInfo found in the context
	E0729 20:04:16.346945       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0729 20:04:16.347923       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0729 20:05:16.347646       1 handler_proxy.go:93] no RequestInfo found in the context
	E0729 20:05:16.347765       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0729 20:05:16.347794       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0729 20:05:16.348985       1 handler_proxy.go:93] no RequestInfo found in the context
	E0729 20:05:16.349078       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0729 20:05:16.349104       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [dcc3f9ab02e735408a8de4f6ba0fce3870dcba7510b0b9f8463dea41e2016481] <==
	I0729 20:00:43.675174       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="105.782µs"
	E0729 20:01:00.998218       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 20:01:01.592738       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 20:01:31.004241       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 20:01:31.601296       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 20:02:01.010260       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 20:02:01.608902       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 20:02:31.019257       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 20:02:31.617863       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 20:03:01.024167       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 20:03:01.626757       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 20:03:31.028975       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 20:03:31.635191       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 20:04:01.034490       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 20:04:01.643979       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 20:04:31.039176       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 20:04:31.653334       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 20:05:01.045253       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 20:05:01.664795       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 20:05:31.050980       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 20:05:31.673240       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0729 20:05:33.680239       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="434.135µs"
	I0729 20:05:44.676914       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="197.278µs"
	E0729 20:06:01.058006       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0729 20:06:01.681419       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [587b5ee91e4d9de1795d7ffd93ef21fd9a8b3196be1f8eb526ada5a7c8083cac] <==
	I0729 19:49:33.763181       1 server_linux.go:69] "Using iptables proxy"
	I0729 19:49:33.798031       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.72.100"]
	I0729 19:49:33.912640       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0729 19:49:33.913836       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0729 19:49:33.913897       1 server_linux.go:165] "Using iptables Proxier"
	I0729 19:49:33.928784       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0729 19:49:33.929329       1 server.go:872] "Version info" version="v1.30.3"
	I0729 19:49:33.929642       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 19:49:33.930843       1 config.go:192] "Starting service config controller"
	I0729 19:49:33.931032       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0729 19:49:33.931109       1 config.go:101] "Starting endpoint slice config controller"
	I0729 19:49:33.931565       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0729 19:49:33.944858       1 config.go:319] "Starting node config controller"
	I0729 19:49:33.944926       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0729 19:49:34.031992       1 shared_informer.go:320] Caches are synced for service config
	I0729 19:49:34.037343       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0729 19:49:34.045481       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [87388e1df32b7bf04a13c912a3c2e7b8c7c944032ed1f8de11c7b26132aaa015] <==
	W0729 19:49:15.345719       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0729 19:49:15.345792       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0729 19:49:16.187222       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0729 19:49:16.187333       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0729 19:49:16.224696       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0729 19:49:16.224743       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0729 19:49:16.286658       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0729 19:49:16.286884       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0729 19:49:16.364689       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0729 19:49:16.364790       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0729 19:49:16.373739       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0729 19:49:16.373972       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0729 19:49:16.417119       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0729 19:49:16.417234       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0729 19:49:16.422472       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0729 19:49:16.422553       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0729 19:49:16.498648       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0729 19:49:16.498697       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0729 19:49:16.521210       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0729 19:49:16.521260       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0729 19:49:16.536464       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0729 19:49:16.536508       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0729 19:49:16.614734       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0729 19:49:16.614808       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0729 19:49:18.937975       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 29 20:03:44 default-k8s-diff-port-024652 kubelet[3909]: E0729 20:03:44.659725    3909 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-rp2fk" podUID="826ffadd-1c1c-4666-8c09-f43a82262912"
	Jul 29 20:03:59 default-k8s-diff-port-024652 kubelet[3909]: E0729 20:03:59.660555    3909 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-rp2fk" podUID="826ffadd-1c1c-4666-8c09-f43a82262912"
	Jul 29 20:04:13 default-k8s-diff-port-024652 kubelet[3909]: E0729 20:04:13.661443    3909 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-rp2fk" podUID="826ffadd-1c1c-4666-8c09-f43a82262912"
	Jul 29 20:04:17 default-k8s-diff-port-024652 kubelet[3909]: E0729 20:04:17.674320    3909 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 20:04:17 default-k8s-diff-port-024652 kubelet[3909]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 20:04:17 default-k8s-diff-port-024652 kubelet[3909]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 20:04:17 default-k8s-diff-port-024652 kubelet[3909]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 20:04:17 default-k8s-diff-port-024652 kubelet[3909]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 20:04:26 default-k8s-diff-port-024652 kubelet[3909]: E0729 20:04:26.659306    3909 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-rp2fk" podUID="826ffadd-1c1c-4666-8c09-f43a82262912"
	Jul 29 20:04:39 default-k8s-diff-port-024652 kubelet[3909]: E0729 20:04:39.660189    3909 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-rp2fk" podUID="826ffadd-1c1c-4666-8c09-f43a82262912"
	Jul 29 20:04:53 default-k8s-diff-port-024652 kubelet[3909]: E0729 20:04:53.660122    3909 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-rp2fk" podUID="826ffadd-1c1c-4666-8c09-f43a82262912"
	Jul 29 20:05:05 default-k8s-diff-port-024652 kubelet[3909]: E0729 20:05:05.659580    3909 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-rp2fk" podUID="826ffadd-1c1c-4666-8c09-f43a82262912"
	Jul 29 20:05:17 default-k8s-diff-port-024652 kubelet[3909]: E0729 20:05:17.672679    3909 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jul 29 20:05:17 default-k8s-diff-port-024652 kubelet[3909]: E0729 20:05:17.672758    3909 kuberuntime_image.go:55] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jul 29 20:05:17 default-k8s-diff-port-024652 kubelet[3909]: E0729 20:05:17.673060    3909 kuberuntime_manager.go:1256] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4m7k7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathE
xpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,Stdi
nOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-569cc877fc-rp2fk_kube-system(826ffadd-1c1c-4666-8c09-f43a82262912): ErrImagePull: pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Jul 29 20:05:17 default-k8s-diff-port-024652 kubelet[3909]: E0729 20:05:17.673104    3909 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-569cc877fc-rp2fk" podUID="826ffadd-1c1c-4666-8c09-f43a82262912"
	Jul 29 20:05:17 default-k8s-diff-port-024652 kubelet[3909]: E0729 20:05:17.676648    3909 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 20:05:17 default-k8s-diff-port-024652 kubelet[3909]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 20:05:17 default-k8s-diff-port-024652 kubelet[3909]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 20:05:17 default-k8s-diff-port-024652 kubelet[3909]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 20:05:17 default-k8s-diff-port-024652 kubelet[3909]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 20:05:33 default-k8s-diff-port-024652 kubelet[3909]: E0729 20:05:33.661063    3909 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-rp2fk" podUID="826ffadd-1c1c-4666-8c09-f43a82262912"
	Jul 29 20:05:44 default-k8s-diff-port-024652 kubelet[3909]: E0729 20:05:44.659845    3909 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-rp2fk" podUID="826ffadd-1c1c-4666-8c09-f43a82262912"
	Jul 29 20:05:58 default-k8s-diff-port-024652 kubelet[3909]: E0729 20:05:58.660021    3909 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-rp2fk" podUID="826ffadd-1c1c-4666-8c09-f43a82262912"
	Jul 29 20:06:12 default-k8s-diff-port-024652 kubelet[3909]: E0729 20:06:12.659507    3909 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-rp2fk" podUID="826ffadd-1c1c-4666-8c09-f43a82262912"
	
	
	==> storage-provisioner [544de27dfe841475c85d73e7db83bb7871287b4e97412e6d52b54dffedecc566] <==
	I0729 19:49:33.626112       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0729 19:49:33.647034       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0729 19:49:33.648086       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0729 19:49:33.664789       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0729 19:49:33.664940       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-024652_8bf95328-5337-4224-8df9-f8a43e81c1bb!
	I0729 19:49:33.679596       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"fc68dd89-aa3d-4569-94fa-81c1711986d7", APIVersion:"v1", ResourceVersion:"431", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-024652_8bf95328-5337-4224-8df9-f8a43e81c1bb became leader
	I0729 19:49:33.765700       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-024652_8bf95328-5337-4224-8df9-f8a43e81c1bb!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-024652 -n default-k8s-diff-port-024652
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-024652 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-569cc877fc-rp2fk
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-024652 describe pod metrics-server-569cc877fc-rp2fk
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-024652 describe pod metrics-server-569cc877fc-rp2fk: exit status 1 (59.413087ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-569cc877fc-rp2fk" not found

** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-024652 describe pod metrics-server-569cc877fc-rp2fk: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (456.10s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (324.08s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/no-preload/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-843792 -n no-preload-843792
start_stop_delete_test.go:287: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-07-29 20:04:44.293300474 +0000 UTC m=+6456.528143871
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-843792 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-843792 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.919µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-843792 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-843792 -n no-preload-843792
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-843792 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-843792 logs -n 25: (1.283030374s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-184620 sudo                                  | bridge-184620                | jenkins | v1.33.1 | 29 Jul 24 19:36 UTC | 29 Jul 24 19:36 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-184620 sudo                                  | bridge-184620                | jenkins | v1.33.1 | 29 Jul 24 19:36 UTC | 29 Jul 24 19:36 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-184620 sudo find                             | bridge-184620                | jenkins | v1.33.1 | 29 Jul 24 19:36 UTC | 29 Jul 24 19:36 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-184620 sudo crio                             | bridge-184620                | jenkins | v1.33.1 | 29 Jul 24 19:36 UTC | 29 Jul 24 19:36 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-184620                                       | bridge-184620                | jenkins | v1.33.1 | 29 Jul 24 19:36 UTC | 29 Jul 24 19:36 UTC |
	| delete  | -p                                                     | disable-driver-mounts-251895 | jenkins | v1.33.1 | 29 Jul 24 19:36 UTC | 29 Jul 24 19:36 UTC |
	|         | disable-driver-mounts-251895                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-024652 | jenkins | v1.33.1 | 29 Jul 24 19:36 UTC | 29 Jul 24 19:37 UTC |
	|         | default-k8s-diff-port-024652                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-843792             | no-preload-843792            | jenkins | v1.33.1 | 29 Jul 24 19:36 UTC | 29 Jul 24 19:36 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-843792                                   | no-preload-843792            | jenkins | v1.33.1 | 29 Jul 24 19:36 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-358053            | embed-certs-358053           | jenkins | v1.33.1 | 29 Jul 24 19:36 UTC | 29 Jul 24 19:36 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-358053                                  | embed-certs-358053           | jenkins | v1.33.1 | 29 Jul 24 19:36 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-024652  | default-k8s-diff-port-024652 | jenkins | v1.33.1 | 29 Jul 24 19:37 UTC | 29 Jul 24 19:37 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-024652 | jenkins | v1.33.1 | 29 Jul 24 19:37 UTC |                     |
	|         | default-k8s-diff-port-024652                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-843792                  | no-preload-843792            | jenkins | v1.33.1 | 29 Jul 24 19:38 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-843792 --memory=2200                     | no-preload-843792            | jenkins | v1.33.1 | 29 Jul 24 19:38 UTC | 29 Jul 24 19:50 UTC |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-021528        | old-k8s-version-021528       | jenkins | v1.33.1 | 29 Jul 24 19:39 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-358053                 | embed-certs-358053           | jenkins | v1.33.1 | 29 Jul 24 19:39 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-358053                                  | embed-certs-358053           | jenkins | v1.33.1 | 29 Jul 24 19:39 UTC | 29 Jul 24 19:49 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-024652       | default-k8s-diff-port-024652 | jenkins | v1.33.1 | 29 Jul 24 19:39 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-024652 | jenkins | v1.33.1 | 29 Jul 24 19:39 UTC | 29 Jul 24 19:49 UTC |
	|         | default-k8s-diff-port-024652                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-021528                              | old-k8s-version-021528       | jenkins | v1.33.1 | 29 Jul 24 19:40 UTC | 29 Jul 24 19:40 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-021528             | old-k8s-version-021528       | jenkins | v1.33.1 | 29 Jul 24 19:40 UTC | 29 Jul 24 19:40 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-021528                              | old-k8s-version-021528       | jenkins | v1.33.1 | 29 Jul 24 19:40 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-021528                              | old-k8s-version-021528       | jenkins | v1.33.1 | 29 Jul 24 20:04 UTC | 29 Jul 24 20:04 UTC |
	| start   | -p newest-cni-584186 --memory=2200 --alsologtostderr   | newest-cni-584186            | jenkins | v1.33.1 | 29 Jul 24 20:04 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 20:04:18
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 20:04:18.030280 1127135 out.go:291] Setting OutFile to fd 1 ...
	I0729 20:04:18.030406 1127135 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 20:04:18.030416 1127135 out.go:304] Setting ErrFile to fd 2...
	I0729 20:04:18.030423 1127135 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 20:04:18.030608 1127135 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19312-1055011/.minikube/bin
	I0729 20:04:18.031255 1127135 out.go:298] Setting JSON to false
	I0729 20:04:18.032307 1127135 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":13610,"bootTime":1722269848,"procs":204,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 20:04:18.032368 1127135 start.go:139] virtualization: kvm guest
	I0729 20:04:18.034420 1127135 out.go:177] * [newest-cni-584186] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 20:04:18.035596 1127135 out.go:177]   - MINIKUBE_LOCATION=19312
	I0729 20:04:18.035626 1127135 notify.go:220] Checking for updates...
	I0729 20:04:18.037741 1127135 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 20:04:18.039030 1127135 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19312-1055011/kubeconfig
	I0729 20:04:18.040179 1127135 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19312-1055011/.minikube
	I0729 20:04:18.041179 1127135 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 20:04:18.042282 1127135 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 20:04:18.043743 1127135 config.go:182] Loaded profile config "default-k8s-diff-port-024652": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 20:04:18.043854 1127135 config.go:182] Loaded profile config "embed-certs-358053": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 20:04:18.043988 1127135 config.go:182] Loaded profile config "no-preload-843792": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0729 20:04:18.044149 1127135 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 20:04:18.082201 1127135 out.go:177] * Using the kvm2 driver based on user configuration
	I0729 20:04:18.083281 1127135 start.go:297] selected driver: kvm2
	I0729 20:04:18.083298 1127135 start.go:901] validating driver "kvm2" against <nil>
	I0729 20:04:18.083322 1127135 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 20:04:18.084167 1127135 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 20:04:18.084242 1127135 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19312-1055011/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 20:04:18.100560 1127135 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0729 20:04:18.100617 1127135 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W0729 20:04:18.100673 1127135 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0729 20:04:18.100970 1127135 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0729 20:04:18.101067 1127135 cni.go:84] Creating CNI manager for ""
	I0729 20:04:18.101086 1127135 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 20:04:18.101097 1127135 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 20:04:18.101189 1127135 start.go:340] cluster config:
	{Name:newest-cni-584186 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-584186 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Con
tainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableM
etrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 20:04:18.101360 1127135 iso.go:125] acquiring lock: {Name:mk0af61c0fec1fd47930e548d03010a532c687b7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 20:04:18.103294 1127135 out.go:177] * Starting "newest-cni-584186" primary control-plane node in "newest-cni-584186" cluster
	I0729 20:04:18.104623 1127135 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0729 20:04:18.104662 1127135 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4
	I0729 20:04:18.104669 1127135 cache.go:56] Caching tarball of preloaded images
	I0729 20:04:18.104758 1127135 preload.go:172] Found /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 20:04:18.104768 1127135 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-beta.0 on crio
	I0729 20:04:18.104856 1127135 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/newest-cni-584186/config.json ...
	I0729 20:04:18.104874 1127135 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/newest-cni-584186/config.json: {Name:mk573190fbfe1d427634958d04a69a0c6c3c05b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 20:04:18.105028 1127135 start.go:360] acquireMachinesLock for newest-cni-584186: {Name:mk0d8d947666df844b5fc2c0e0eebbfed69b4140 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 20:04:18.105057 1127135 start.go:364] duration metric: took 14.961µs to acquireMachinesLock for "newest-cni-584186"
	I0729 20:04:18.105075 1127135 start.go:93] Provisioning new machine with config: &{Name:newest-cni-584186 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-584186 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minik
ube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 20:04:18.105134 1127135 start.go:125] createHost starting for "" (driver="kvm2")
	I0729 20:04:18.107347 1127135 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 20:04:18.107500 1127135 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 20:04:18.107553 1127135 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 20:04:18.122490 1127135 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41655
	I0729 20:04:18.122947 1127135 main.go:141] libmachine: () Calling .GetVersion
	I0729 20:04:18.123496 1127135 main.go:141] libmachine: Using API Version  1
	I0729 20:04:18.123525 1127135 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 20:04:18.124000 1127135 main.go:141] libmachine: () Calling .GetMachineName
	I0729 20:04:18.124186 1127135 main.go:141] libmachine: (newest-cni-584186) Calling .GetMachineName
	I0729 20:04:18.124371 1127135 main.go:141] libmachine: (newest-cni-584186) Calling .DriverName
	I0729 20:04:18.124542 1127135 start.go:159] libmachine.API.Create for "newest-cni-584186" (driver="kvm2")
	I0729 20:04:18.124572 1127135 client.go:168] LocalClient.Create starting
	I0729 20:04:18.124679 1127135 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem
	I0729 20:04:18.124712 1127135 main.go:141] libmachine: Decoding PEM data...
	I0729 20:04:18.124735 1127135 main.go:141] libmachine: Parsing certificate...
	I0729 20:04:18.124796 1127135 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/cert.pem
	I0729 20:04:18.124814 1127135 main.go:141] libmachine: Decoding PEM data...
	I0729 20:04:18.124829 1127135 main.go:141] libmachine: Parsing certificate...
	I0729 20:04:18.124845 1127135 main.go:141] libmachine: Running pre-create checks...
	I0729 20:04:18.124854 1127135 main.go:141] libmachine: (newest-cni-584186) Calling .PreCreateCheck
	I0729 20:04:18.125166 1127135 main.go:141] libmachine: (newest-cni-584186) Calling .GetConfigRaw
	I0729 20:04:18.125584 1127135 main.go:141] libmachine: Creating machine...
	I0729 20:04:18.125603 1127135 main.go:141] libmachine: (newest-cni-584186) Calling .Create
	I0729 20:04:18.125726 1127135 main.go:141] libmachine: (newest-cni-584186) Creating KVM machine...
	I0729 20:04:18.127055 1127135 main.go:141] libmachine: (newest-cni-584186) DBG | found existing default KVM network
	I0729 20:04:18.128856 1127135 main.go:141] libmachine: (newest-cni-584186) DBG | I0729 20:04:18.128697 1127158 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00012df80}
	I0729 20:04:18.128879 1127135 main.go:141] libmachine: (newest-cni-584186) DBG | created network xml: 
	I0729 20:04:18.128904 1127135 main.go:141] libmachine: (newest-cni-584186) DBG | <network>
	I0729 20:04:18.128917 1127135 main.go:141] libmachine: (newest-cni-584186) DBG |   <name>mk-newest-cni-584186</name>
	I0729 20:04:18.128925 1127135 main.go:141] libmachine: (newest-cni-584186) DBG |   <dns enable='no'/>
	I0729 20:04:18.128932 1127135 main.go:141] libmachine: (newest-cni-584186) DBG |   
	I0729 20:04:18.128939 1127135 main.go:141] libmachine: (newest-cni-584186) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0729 20:04:18.128949 1127135 main.go:141] libmachine: (newest-cni-584186) DBG |     <dhcp>
	I0729 20:04:18.128962 1127135 main.go:141] libmachine: (newest-cni-584186) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0729 20:04:18.128976 1127135 main.go:141] libmachine: (newest-cni-584186) DBG |     </dhcp>
	I0729 20:04:18.128989 1127135 main.go:141] libmachine: (newest-cni-584186) DBG |   </ip>
	I0729 20:04:18.128999 1127135 main.go:141] libmachine: (newest-cni-584186) DBG |   
	I0729 20:04:18.129007 1127135 main.go:141] libmachine: (newest-cni-584186) DBG | </network>
	I0729 20:04:18.129016 1127135 main.go:141] libmachine: (newest-cni-584186) DBG | 
	I0729 20:04:18.134070 1127135 main.go:141] libmachine: (newest-cni-584186) DBG | trying to create private KVM network mk-newest-cni-584186 192.168.39.0/24...
	I0729 20:04:18.204705 1127135 main.go:141] libmachine: (newest-cni-584186) DBG | private KVM network mk-newest-cni-584186 192.168.39.0/24 created
	I0729 20:04:18.204811 1127135 main.go:141] libmachine: (newest-cni-584186) Setting up store path in /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/newest-cni-584186 ...
	I0729 20:04:18.204856 1127135 main.go:141] libmachine: (newest-cni-584186) Building disk image from file:///home/jenkins/minikube-integration/19312-1055011/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso
	I0729 20:04:18.204877 1127135 main.go:141] libmachine: (newest-cni-584186) DBG | I0729 20:04:18.204663 1127158 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19312-1055011/.minikube
	I0729 20:04:18.204897 1127135 main.go:141] libmachine: (newest-cni-584186) Downloading /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19312-1055011/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso...
	I0729 20:04:18.513304 1127135 main.go:141] libmachine: (newest-cni-584186) DBG | I0729 20:04:18.513161 1127158 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/newest-cni-584186/id_rsa...
	I0729 20:04:18.819501 1127135 main.go:141] libmachine: (newest-cni-584186) DBG | I0729 20:04:18.819372 1127158 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/newest-cni-584186/newest-cni-584186.rawdisk...
	I0729 20:04:18.819530 1127135 main.go:141] libmachine: (newest-cni-584186) DBG | Writing magic tar header
	I0729 20:04:18.819544 1127135 main.go:141] libmachine: (newest-cni-584186) DBG | Writing SSH key tar header
	I0729 20:04:18.819552 1127135 main.go:141] libmachine: (newest-cni-584186) DBG | I0729 20:04:18.819512 1127158 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/newest-cni-584186 ...
	I0729 20:04:18.819681 1127135 main.go:141] libmachine: (newest-cni-584186) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/newest-cni-584186
	I0729 20:04:18.819731 1127135 main.go:141] libmachine: (newest-cni-584186) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19312-1055011/.minikube/machines
	I0729 20:04:18.819745 1127135 main.go:141] libmachine: (newest-cni-584186) Setting executable bit set on /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/newest-cni-584186 (perms=drwx------)
	I0729 20:04:18.819770 1127135 main.go:141] libmachine: (newest-cni-584186) Setting executable bit set on /home/jenkins/minikube-integration/19312-1055011/.minikube/machines (perms=drwxr-xr-x)
	I0729 20:04:18.819778 1127135 main.go:141] libmachine: (newest-cni-584186) Setting executable bit set on /home/jenkins/minikube-integration/19312-1055011/.minikube (perms=drwxr-xr-x)
	I0729 20:04:18.819788 1127135 main.go:141] libmachine: (newest-cni-584186) Setting executable bit set on /home/jenkins/minikube-integration/19312-1055011 (perms=drwxrwxr-x)
	I0729 20:04:18.819794 1127135 main.go:141] libmachine: (newest-cni-584186) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0729 20:04:18.819804 1127135 main.go:141] libmachine: (newest-cni-584186) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19312-1055011/.minikube
	I0729 20:04:18.819810 1127135 main.go:141] libmachine: (newest-cni-584186) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0729 20:04:18.819822 1127135 main.go:141] libmachine: (newest-cni-584186) Creating domain...
	I0729 20:04:18.819840 1127135 main.go:141] libmachine: (newest-cni-584186) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19312-1055011
	I0729 20:04:18.819856 1127135 main.go:141] libmachine: (newest-cni-584186) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0729 20:04:18.819868 1127135 main.go:141] libmachine: (newest-cni-584186) DBG | Checking permissions on dir: /home/jenkins
	I0729 20:04:18.819876 1127135 main.go:141] libmachine: (newest-cni-584186) DBG | Checking permissions on dir: /home
	I0729 20:04:18.819905 1127135 main.go:141] libmachine: (newest-cni-584186) DBG | Skipping /home - not owner
	I0729 20:04:18.821204 1127135 main.go:141] libmachine: (newest-cni-584186) define libvirt domain using xml: 
	I0729 20:04:18.821225 1127135 main.go:141] libmachine: (newest-cni-584186) <domain type='kvm'>
	I0729 20:04:18.821235 1127135 main.go:141] libmachine: (newest-cni-584186)   <name>newest-cni-584186</name>
	I0729 20:04:18.821250 1127135 main.go:141] libmachine: (newest-cni-584186)   <memory unit='MiB'>2200</memory>
	I0729 20:04:18.821263 1127135 main.go:141] libmachine: (newest-cni-584186)   <vcpu>2</vcpu>
	I0729 20:04:18.821286 1127135 main.go:141] libmachine: (newest-cni-584186)   <features>
	I0729 20:04:18.821298 1127135 main.go:141] libmachine: (newest-cni-584186)     <acpi/>
	I0729 20:04:18.821306 1127135 main.go:141] libmachine: (newest-cni-584186)     <apic/>
	I0729 20:04:18.821316 1127135 main.go:141] libmachine: (newest-cni-584186)     <pae/>
	I0729 20:04:18.821331 1127135 main.go:141] libmachine: (newest-cni-584186)     
	I0729 20:04:18.821377 1127135 main.go:141] libmachine: (newest-cni-584186)   </features>
	I0729 20:04:18.821402 1127135 main.go:141] libmachine: (newest-cni-584186)   <cpu mode='host-passthrough'>
	I0729 20:04:18.821414 1127135 main.go:141] libmachine: (newest-cni-584186)   
	I0729 20:04:18.821427 1127135 main.go:141] libmachine: (newest-cni-584186)   </cpu>
	I0729 20:04:18.821454 1127135 main.go:141] libmachine: (newest-cni-584186)   <os>
	I0729 20:04:18.821472 1127135 main.go:141] libmachine: (newest-cni-584186)     <type>hvm</type>
	I0729 20:04:18.821490 1127135 main.go:141] libmachine: (newest-cni-584186)     <boot dev='cdrom'/>
	I0729 20:04:18.821507 1127135 main.go:141] libmachine: (newest-cni-584186)     <boot dev='hd'/>
	I0729 20:04:18.821528 1127135 main.go:141] libmachine: (newest-cni-584186)     <bootmenu enable='no'/>
	I0729 20:04:18.821537 1127135 main.go:141] libmachine: (newest-cni-584186)   </os>
	I0729 20:04:18.821547 1127135 main.go:141] libmachine: (newest-cni-584186)   <devices>
	I0729 20:04:18.821559 1127135 main.go:141] libmachine: (newest-cni-584186)     <disk type='file' device='cdrom'>
	I0729 20:04:18.821577 1127135 main.go:141] libmachine: (newest-cni-584186)       <source file='/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/newest-cni-584186/boot2docker.iso'/>
	I0729 20:04:18.821592 1127135 main.go:141] libmachine: (newest-cni-584186)       <target dev='hdc' bus='scsi'/>
	I0729 20:04:18.821604 1127135 main.go:141] libmachine: (newest-cni-584186)       <readonly/>
	I0729 20:04:18.821612 1127135 main.go:141] libmachine: (newest-cni-584186)     </disk>
	I0729 20:04:18.821632 1127135 main.go:141] libmachine: (newest-cni-584186)     <disk type='file' device='disk'>
	I0729 20:04:18.821645 1127135 main.go:141] libmachine: (newest-cni-584186)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0729 20:04:18.821664 1127135 main.go:141] libmachine: (newest-cni-584186)       <source file='/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/newest-cni-584186/newest-cni-584186.rawdisk'/>
	I0729 20:04:18.821680 1127135 main.go:141] libmachine: (newest-cni-584186)       <target dev='hda' bus='virtio'/>
	I0729 20:04:18.821691 1127135 main.go:141] libmachine: (newest-cni-584186)     </disk>
	I0729 20:04:18.821703 1127135 main.go:141] libmachine: (newest-cni-584186)     <interface type='network'>
	I0729 20:04:18.821714 1127135 main.go:141] libmachine: (newest-cni-584186)       <source network='mk-newest-cni-584186'/>
	I0729 20:04:18.821725 1127135 main.go:141] libmachine: (newest-cni-584186)       <model type='virtio'/>
	I0729 20:04:18.821738 1127135 main.go:141] libmachine: (newest-cni-584186)     </interface>
	I0729 20:04:18.821756 1127135 main.go:141] libmachine: (newest-cni-584186)     <interface type='network'>
	I0729 20:04:18.821770 1127135 main.go:141] libmachine: (newest-cni-584186)       <source network='default'/>
	I0729 20:04:18.821780 1127135 main.go:141] libmachine: (newest-cni-584186)       <model type='virtio'/>
	I0729 20:04:18.821789 1127135 main.go:141] libmachine: (newest-cni-584186)     </interface>
	I0729 20:04:18.821800 1127135 main.go:141] libmachine: (newest-cni-584186)     <serial type='pty'>
	I0729 20:04:18.821810 1127135 main.go:141] libmachine: (newest-cni-584186)       <target port='0'/>
	I0729 20:04:18.821821 1127135 main.go:141] libmachine: (newest-cni-584186)     </serial>
	I0729 20:04:18.821832 1127135 main.go:141] libmachine: (newest-cni-584186)     <console type='pty'>
	I0729 20:04:18.821850 1127135 main.go:141] libmachine: (newest-cni-584186)       <target type='serial' port='0'/>
	I0729 20:04:18.821865 1127135 main.go:141] libmachine: (newest-cni-584186)     </console>
	I0729 20:04:18.821882 1127135 main.go:141] libmachine: (newest-cni-584186)     <rng model='virtio'>
	I0729 20:04:18.821897 1127135 main.go:141] libmachine: (newest-cni-584186)       <backend model='random'>/dev/random</backend>
	I0729 20:04:18.821905 1127135 main.go:141] libmachine: (newest-cni-584186)     </rng>
	I0729 20:04:18.821909 1127135 main.go:141] libmachine: (newest-cni-584186)     
	I0729 20:04:18.821916 1127135 main.go:141] libmachine: (newest-cni-584186)     
	I0729 20:04:18.821921 1127135 main.go:141] libmachine: (newest-cni-584186)   </devices>
	I0729 20:04:18.821927 1127135 main.go:141] libmachine: (newest-cni-584186) </domain>
	I0729 20:04:18.821931 1127135 main.go:141] libmachine: (newest-cni-584186) 
	I0729 20:04:18.825914 1127135 main.go:141] libmachine: (newest-cni-584186) DBG | domain newest-cni-584186 has defined MAC address 52:54:00:17:65:02 in network default
	I0729 20:04:18.826437 1127135 main.go:141] libmachine: (newest-cni-584186) Ensuring networks are active...
	I0729 20:04:18.826457 1127135 main.go:141] libmachine: (newest-cni-584186) DBG | domain newest-cni-584186 has defined MAC address 52:54:00:60:e1:97 in network mk-newest-cni-584186
	I0729 20:04:18.827134 1127135 main.go:141] libmachine: (newest-cni-584186) Ensuring network default is active
	I0729 20:04:18.827457 1127135 main.go:141] libmachine: (newest-cni-584186) Ensuring network mk-newest-cni-584186 is active
	I0729 20:04:18.827972 1127135 main.go:141] libmachine: (newest-cni-584186) Getting domain xml...
	I0729 20:04:18.828667 1127135 main.go:141] libmachine: (newest-cni-584186) Creating domain...
	I0729 20:04:20.064361 1127135 main.go:141] libmachine: (newest-cni-584186) Waiting to get IP...
	I0729 20:04:20.065818 1127135 main.go:141] libmachine: (newest-cni-584186) DBG | domain newest-cni-584186 has defined MAC address 52:54:00:60:e1:97 in network mk-newest-cni-584186
	I0729 20:04:20.066344 1127135 main.go:141] libmachine: (newest-cni-584186) DBG | unable to find current IP address of domain newest-cni-584186 in network mk-newest-cni-584186
	I0729 20:04:20.066421 1127135 main.go:141] libmachine: (newest-cni-584186) DBG | I0729 20:04:20.066336 1127158 retry.go:31] will retry after 275.325263ms: waiting for machine to come up
	I0729 20:04:20.343748 1127135 main.go:141] libmachine: (newest-cni-584186) DBG | domain newest-cni-584186 has defined MAC address 52:54:00:60:e1:97 in network mk-newest-cni-584186
	I0729 20:04:20.344326 1127135 main.go:141] libmachine: (newest-cni-584186) DBG | unable to find current IP address of domain newest-cni-584186 in network mk-newest-cni-584186
	I0729 20:04:20.344359 1127135 main.go:141] libmachine: (newest-cni-584186) DBG | I0729 20:04:20.344267 1127158 retry.go:31] will retry after 296.393112ms: waiting for machine to come up
	I0729 20:04:20.642677 1127135 main.go:141] libmachine: (newest-cni-584186) DBG | domain newest-cni-584186 has defined MAC address 52:54:00:60:e1:97 in network mk-newest-cni-584186
	I0729 20:04:20.643175 1127135 main.go:141] libmachine: (newest-cni-584186) DBG | unable to find current IP address of domain newest-cni-584186 in network mk-newest-cni-584186
	I0729 20:04:20.643214 1127135 main.go:141] libmachine: (newest-cni-584186) DBG | I0729 20:04:20.643142 1127158 retry.go:31] will retry after 374.755916ms: waiting for machine to come up
	I0729 20:04:21.019746 1127135 main.go:141] libmachine: (newest-cni-584186) DBG | domain newest-cni-584186 has defined MAC address 52:54:00:60:e1:97 in network mk-newest-cni-584186
	I0729 20:04:21.020199 1127135 main.go:141] libmachine: (newest-cni-584186) DBG | unable to find current IP address of domain newest-cni-584186 in network mk-newest-cni-584186
	I0729 20:04:21.020222 1127135 main.go:141] libmachine: (newest-cni-584186) DBG | I0729 20:04:21.020152 1127158 retry.go:31] will retry after 441.407753ms: waiting for machine to come up
	I0729 20:04:21.462776 1127135 main.go:141] libmachine: (newest-cni-584186) DBG | domain newest-cni-584186 has defined MAC address 52:54:00:60:e1:97 in network mk-newest-cni-584186
	I0729 20:04:21.463334 1127135 main.go:141] libmachine: (newest-cni-584186) DBG | unable to find current IP address of domain newest-cni-584186 in network mk-newest-cni-584186
	I0729 20:04:21.463371 1127135 main.go:141] libmachine: (newest-cni-584186) DBG | I0729 20:04:21.463283 1127158 retry.go:31] will retry after 653.829518ms: waiting for machine to come up
	I0729 20:04:22.118782 1127135 main.go:141] libmachine: (newest-cni-584186) DBG | domain newest-cni-584186 has defined MAC address 52:54:00:60:e1:97 in network mk-newest-cni-584186
	I0729 20:04:22.119296 1127135 main.go:141] libmachine: (newest-cni-584186) DBG | unable to find current IP address of domain newest-cni-584186 in network mk-newest-cni-584186
	I0729 20:04:22.119320 1127135 main.go:141] libmachine: (newest-cni-584186) DBG | I0729 20:04:22.119263 1127158 retry.go:31] will retry after 939.839969ms: waiting for machine to come up
	I0729 20:04:23.061044 1127135 main.go:141] libmachine: (newest-cni-584186) DBG | domain newest-cni-584186 has defined MAC address 52:54:00:60:e1:97 in network mk-newest-cni-584186
	I0729 20:04:23.061585 1127135 main.go:141] libmachine: (newest-cni-584186) DBG | unable to find current IP address of domain newest-cni-584186 in network mk-newest-cni-584186
	I0729 20:04:23.061660 1127135 main.go:141] libmachine: (newest-cni-584186) DBG | I0729 20:04:23.061542 1127158 retry.go:31] will retry after 956.436445ms: waiting for machine to come up
	I0729 20:04:24.019583 1127135 main.go:141] libmachine: (newest-cni-584186) DBG | domain newest-cni-584186 has defined MAC address 52:54:00:60:e1:97 in network mk-newest-cni-584186
	I0729 20:04:24.020096 1127135 main.go:141] libmachine: (newest-cni-584186) DBG | unable to find current IP address of domain newest-cni-584186 in network mk-newest-cni-584186
	I0729 20:04:24.020122 1127135 main.go:141] libmachine: (newest-cni-584186) DBG | I0729 20:04:24.020034 1127158 retry.go:31] will retry after 1.251149155s: waiting for machine to come up
	I0729 20:04:25.272806 1127135 main.go:141] libmachine: (newest-cni-584186) DBG | domain newest-cni-584186 has defined MAC address 52:54:00:60:e1:97 in network mk-newest-cni-584186
	I0729 20:04:25.273240 1127135 main.go:141] libmachine: (newest-cni-584186) DBG | unable to find current IP address of domain newest-cni-584186 in network mk-newest-cni-584186
	I0729 20:04:25.273272 1127135 main.go:141] libmachine: (newest-cni-584186) DBG | I0729 20:04:25.273186 1127158 retry.go:31] will retry after 1.273184074s: waiting for machine to come up
	I0729 20:04:26.547867 1127135 main.go:141] libmachine: (newest-cni-584186) DBG | domain newest-cni-584186 has defined MAC address 52:54:00:60:e1:97 in network mk-newest-cni-584186
	I0729 20:04:26.548232 1127135 main.go:141] libmachine: (newest-cni-584186) DBG | unable to find current IP address of domain newest-cni-584186 in network mk-newest-cni-584186
	I0729 20:04:26.548259 1127135 main.go:141] libmachine: (newest-cni-584186) DBG | I0729 20:04:26.548182 1127158 retry.go:31] will retry after 1.799941911s: waiting for machine to come up
	I0729 20:04:28.350076 1127135 main.go:141] libmachine: (newest-cni-584186) DBG | domain newest-cni-584186 has defined MAC address 52:54:00:60:e1:97 in network mk-newest-cni-584186
	I0729 20:04:28.350560 1127135 main.go:141] libmachine: (newest-cni-584186) DBG | unable to find current IP address of domain newest-cni-584186 in network mk-newest-cni-584186
	I0729 20:04:28.350591 1127135 main.go:141] libmachine: (newest-cni-584186) DBG | I0729 20:04:28.350518 1127158 retry.go:31] will retry after 2.756190053s: waiting for machine to come up
	I0729 20:04:31.109446 1127135 main.go:141] libmachine: (newest-cni-584186) DBG | domain newest-cni-584186 has defined MAC address 52:54:00:60:e1:97 in network mk-newest-cni-584186
	I0729 20:04:31.110005 1127135 main.go:141] libmachine: (newest-cni-584186) DBG | unable to find current IP address of domain newest-cni-584186 in network mk-newest-cni-584186
	I0729 20:04:31.110029 1127135 main.go:141] libmachine: (newest-cni-584186) DBG | I0729 20:04:31.109932 1127158 retry.go:31] will retry after 2.868649312s: waiting for machine to come up
	I0729 20:04:33.980000 1127135 main.go:141] libmachine: (newest-cni-584186) DBG | domain newest-cni-584186 has defined MAC address 52:54:00:60:e1:97 in network mk-newest-cni-584186
	I0729 20:04:33.980481 1127135 main.go:141] libmachine: (newest-cni-584186) DBG | unable to find current IP address of domain newest-cni-584186 in network mk-newest-cni-584186
	I0729 20:04:33.980512 1127135 main.go:141] libmachine: (newest-cni-584186) DBG | I0729 20:04:33.980439 1127158 retry.go:31] will retry after 3.538482955s: waiting for machine to come up
	I0729 20:04:37.520687 1127135 main.go:141] libmachine: (newest-cni-584186) DBG | domain newest-cni-584186 has defined MAC address 52:54:00:60:e1:97 in network mk-newest-cni-584186
	I0729 20:04:37.521143 1127135 main.go:141] libmachine: (newest-cni-584186) DBG | unable to find current IP address of domain newest-cni-584186 in network mk-newest-cni-584186
	I0729 20:04:37.521171 1127135 main.go:141] libmachine: (newest-cni-584186) DBG | I0729 20:04:37.521091 1127158 retry.go:31] will retry after 3.801509049s: waiting for machine to come up
	I0729 20:04:41.326708 1127135 main.go:141] libmachine: (newest-cni-584186) DBG | domain newest-cni-584186 has defined MAC address 52:54:00:60:e1:97 in network mk-newest-cni-584186
	I0729 20:04:41.327227 1127135 main.go:141] libmachine: (newest-cni-584186) DBG | domain newest-cni-584186 has current primary IP address 192.168.39.170 and MAC address 52:54:00:60:e1:97 in network mk-newest-cni-584186
	I0729 20:04:41.327246 1127135 main.go:141] libmachine: (newest-cni-584186) Found IP for machine: 192.168.39.170
	I0729 20:04:41.327259 1127135 main.go:141] libmachine: (newest-cni-584186) Reserving static IP address...
	I0729 20:04:41.327590 1127135 main.go:141] libmachine: (newest-cni-584186) DBG | unable to find host DHCP lease matching {name: "newest-cni-584186", mac: "52:54:00:60:e1:97", ip: "192.168.39.170"} in network mk-newest-cni-584186
	I0729 20:04:41.402825 1127135 main.go:141] libmachine: (newest-cni-584186) DBG | Getting to WaitForSSH function...
	I0729 20:04:41.402876 1127135 main.go:141] libmachine: (newest-cni-584186) Reserved static IP address: 192.168.39.170
	I0729 20:04:41.402890 1127135 main.go:141] libmachine: (newest-cni-584186) Waiting for SSH to be available...
	I0729 20:04:41.405374 1127135 main.go:141] libmachine: (newest-cni-584186) DBG | domain newest-cni-584186 has defined MAC address 52:54:00:60:e1:97 in network mk-newest-cni-584186
	I0729 20:04:41.405587 1127135 main.go:141] libmachine: (newest-cni-584186) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:60:e1:97", ip: ""} in network mk-newest-cni-584186
	I0729 20:04:41.405624 1127135 main.go:141] libmachine: (newest-cni-584186) DBG | unable to find defined IP address of network mk-newest-cni-584186 interface with MAC address 52:54:00:60:e1:97
	I0729 20:04:41.405829 1127135 main.go:141] libmachine: (newest-cni-584186) DBG | Using SSH client type: external
	I0729 20:04:41.405866 1127135 main.go:141] libmachine: (newest-cni-584186) DBG | Using SSH private key: /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/newest-cni-584186/id_rsa (-rw-------)
	I0729 20:04:41.405893 1127135 main.go:141] libmachine: (newest-cni-584186) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/newest-cni-584186/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 20:04:41.405922 1127135 main.go:141] libmachine: (newest-cni-584186) DBG | About to run SSH command:
	I0729 20:04:41.405957 1127135 main.go:141] libmachine: (newest-cni-584186) DBG | exit 0
	I0729 20:04:41.409700 1127135 main.go:141] libmachine: (newest-cni-584186) DBG | SSH cmd err, output: exit status 255: 
	I0729 20:04:41.409719 1127135 main.go:141] libmachine: (newest-cni-584186) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0729 20:04:41.409727 1127135 main.go:141] libmachine: (newest-cni-584186) DBG | command : exit 0
	I0729 20:04:41.409731 1127135 main.go:141] libmachine: (newest-cni-584186) DBG | err     : exit status 255
	I0729 20:04:41.409740 1127135 main.go:141] libmachine: (newest-cni-584186) DBG | output  : 
	
	
	==> CRI-O <==
	Jul 29 20:04:44 no-preload-843792 crio[719]: time="2024-07-29 20:04:44.914152147Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722283484914119280,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100741,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c043fe20-52a4-49d1-b0c3-92d2b9e8a35e name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 20:04:44 no-preload-843792 crio[719]: time="2024-07-29 20:04:44.914732614Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e6a5f727-6e68-4b89-8d30-0e02e6553ea3 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 20:04:44 no-preload-843792 crio[719]: time="2024-07-29 20:04:44.914792872Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e6a5f727-6e68-4b89-8d30-0e02e6553ea3 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 20:04:44 no-preload-843792 crio[719]: time="2024-07-29 20:04:44.915023951Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:772f7ef98746fa60a8e2262a85311c9fe639aef2d98e574b9f00f587e1144972,PodSandboxId:b3ab63ee2ceea51c76cf7f6dcdea29046098a25614c04587d8962a5de293229f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722282610130299421,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee09516d-7ef7-4d66-9acf-7fd4cde3c673,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ba81073ec15900cb92fea4c791e913ad8305447171071a15b0477799633b0c4,PodSandboxId:a1bc8706f98e05e7203230723d8725567bbc724002ee85086cdcb016e69252dc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:1722282609834951055,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8hbrf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b64c7b2-cbed-4c0e-bc1b-2cef107b115c,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /de
v/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d437b9d8891a8dcb3834a850c710ba90c5ae7d4802c66e3f55c23f1383db1e1,PodSandboxId:9da01f8ff1e9c9a29b650e4606779d9b1d435ec799cf144d9e930b8039287cea,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722282609508840856,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-ck5zf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad6c9c9b-740c-464d-85c2-a9ae44663f63,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"conta
inerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6181b7c2844e78311a5d7a07a4b2f9fceb8bfe0a05da76b1a870e281cd4dd91b,PodSandboxId:f797c6b1fabcdfd932d66830426deba87b2af65cad778b8d128cbe6bfc376b46,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722282609410875480,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-bk2nx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 662b0879-7c15-4ec3-a6b6-e49fd9597dc
f,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0921f30a2e42fb71936409c30c53c0b6b856a24a57bcb95bea0e609961da6de,PodSandboxId:392ed8effcf659a2dcb125408b455a45336fc5acabc8a09def67149c4e3f3415,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1722282597866134775,Labels:ma
p[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-843792,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7e90e534d1ee4da28c8e37201501ec1,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80c048960842df86f1ad88dd2498dd475f902142b8f50fe265072e88d15b6e1b,PodSandboxId:3e09c1e3540af7124bd624cb9ccb03e795f432f4ca434c9556a92ea79120d3c0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1722282597824476874,Labels:map[string]stri
ng{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-843792,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c861cce481c417dae420092bf20933b2,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44953b90e4fb7accc9705cc1f9fed98ecc10f90ffbf1591894de47953c20f23c,PodSandboxId:0f3fdb075b25a3355aa129142676af9c2b366e189e0fde3631f5716a7d89540e,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1722282597790113356,Labels:map[string]string{io.kubernetes.contai
ner.name: etcd,io.kubernetes.pod.name: etcd-no-preload-843792,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd1c585c541aadf31eb1a7ad5f096350,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b23b493276c6a6ea9ac497cb471850b3cdbc0080e08065f384170870dab57e2e,PodSandboxId:0d973e41faa6bf987a0848649278cb35cbdc9ddef747abbbb9aac6209d71bda9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1722282597719967957,Labels:map[string]string{io.kubernetes.container.name: kube-controller
-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-843792,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8970eba08cb5dcc05a1fff54b1a9d707,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9da00e1c3c330c41b9bd6c72c7b3746a5971698d0adc79c92379011377b4bbf,PodSandboxId:139bad8d2bd15055ca3b3bcbdb34f6c5d92594ca20085742d7ac1e5e744b4d73,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_EXITED,CreatedAt:1722282311568803086,Labels:map[string]string{io.kubernetes.container.name: kube-apiser
ver,io.kubernetes.pod.name: kube-apiserver-no-preload-843792,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c861cce481c417dae420092bf20933b2,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e6a5f727-6e68-4b89-8d30-0e02e6553ea3 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 20:04:44 no-preload-843792 crio[719]: time="2024-07-29 20:04:44.959042414Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=079ea32f-7db6-4497-bc6d-309cef6d26f3 name=/runtime.v1.RuntimeService/Version
	Jul 29 20:04:44 no-preload-843792 crio[719]: time="2024-07-29 20:04:44.959125201Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=079ea32f-7db6-4497-bc6d-309cef6d26f3 name=/runtime.v1.RuntimeService/Version
	Jul 29 20:04:44 no-preload-843792 crio[719]: time="2024-07-29 20:04:44.964025403Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5073e1b1-47cc-4869-ba0f-a58079e9854c name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 20:04:44 no-preload-843792 crio[719]: time="2024-07-29 20:04:44.964391327Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722283484964369997,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100741,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5073e1b1-47cc-4869-ba0f-a58079e9854c name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 20:04:44 no-preload-843792 crio[719]: time="2024-07-29 20:04:44.965093341Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9a3fe018-9d4e-40e5-943f-9ee3f702d093 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 20:04:44 no-preload-843792 crio[719]: time="2024-07-29 20:04:44.965172371Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9a3fe018-9d4e-40e5-943f-9ee3f702d093 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 20:04:44 no-preload-843792 crio[719]: time="2024-07-29 20:04:44.965367269Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:772f7ef98746fa60a8e2262a85311c9fe639aef2d98e574b9f00f587e1144972,PodSandboxId:b3ab63ee2ceea51c76cf7f6dcdea29046098a25614c04587d8962a5de293229f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722282610130299421,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee09516d-7ef7-4d66-9acf-7fd4cde3c673,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ba81073ec15900cb92fea4c791e913ad8305447171071a15b0477799633b0c4,PodSandboxId:a1bc8706f98e05e7203230723d8725567bbc724002ee85086cdcb016e69252dc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:1722282609834951055,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8hbrf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b64c7b2-cbed-4c0e-bc1b-2cef107b115c,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /de
v/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d437b9d8891a8dcb3834a850c710ba90c5ae7d4802c66e3f55c23f1383db1e1,PodSandboxId:9da01f8ff1e9c9a29b650e4606779d9b1d435ec799cf144d9e930b8039287cea,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722282609508840856,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-ck5zf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad6c9c9b-740c-464d-85c2-a9ae44663f63,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"conta
inerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6181b7c2844e78311a5d7a07a4b2f9fceb8bfe0a05da76b1a870e281cd4dd91b,PodSandboxId:f797c6b1fabcdfd932d66830426deba87b2af65cad778b8d128cbe6bfc376b46,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722282609410875480,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-bk2nx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 662b0879-7c15-4ec3-a6b6-e49fd9597dc
f,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0921f30a2e42fb71936409c30c53c0b6b856a24a57bcb95bea0e609961da6de,PodSandboxId:392ed8effcf659a2dcb125408b455a45336fc5acabc8a09def67149c4e3f3415,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1722282597866134775,Labels:ma
p[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-843792,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7e90e534d1ee4da28c8e37201501ec1,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80c048960842df86f1ad88dd2498dd475f902142b8f50fe265072e88d15b6e1b,PodSandboxId:3e09c1e3540af7124bd624cb9ccb03e795f432f4ca434c9556a92ea79120d3c0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1722282597824476874,Labels:map[string]stri
ng{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-843792,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c861cce481c417dae420092bf20933b2,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44953b90e4fb7accc9705cc1f9fed98ecc10f90ffbf1591894de47953c20f23c,PodSandboxId:0f3fdb075b25a3355aa129142676af9c2b366e189e0fde3631f5716a7d89540e,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1722282597790113356,Labels:map[string]string{io.kubernetes.contai
ner.name: etcd,io.kubernetes.pod.name: etcd-no-preload-843792,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd1c585c541aadf31eb1a7ad5f096350,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b23b493276c6a6ea9ac497cb471850b3cdbc0080e08065f384170870dab57e2e,PodSandboxId:0d973e41faa6bf987a0848649278cb35cbdc9ddef747abbbb9aac6209d71bda9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1722282597719967957,Labels:map[string]string{io.kubernetes.container.name: kube-controller
-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-843792,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8970eba08cb5dcc05a1fff54b1a9d707,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9da00e1c3c330c41b9bd6c72c7b3746a5971698d0adc79c92379011377b4bbf,PodSandboxId:139bad8d2bd15055ca3b3bcbdb34f6c5d92594ca20085742d7ac1e5e744b4d73,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_EXITED,CreatedAt:1722282311568803086,Labels:map[string]string{io.kubernetes.container.name: kube-apiser
ver,io.kubernetes.pod.name: kube-apiserver-no-preload-843792,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c861cce481c417dae420092bf20933b2,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9a3fe018-9d4e-40e5-943f-9ee3f702d093 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 20:04:45 no-preload-843792 crio[719]: time="2024-07-29 20:04:45.009121375Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=efcb455c-f311-42f6-8113-abc9ae9bd70b name=/runtime.v1.RuntimeService/Version
	Jul 29 20:04:45 no-preload-843792 crio[719]: time="2024-07-29 20:04:45.009239606Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=efcb455c-f311-42f6-8113-abc9ae9bd70b name=/runtime.v1.RuntimeService/Version
	Jul 29 20:04:45 no-preload-843792 crio[719]: time="2024-07-29 20:04:45.011842542Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=98d29d4a-f064-48e3-b25d-5f90ff5e639d name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 20:04:45 no-preload-843792 crio[719]: time="2024-07-29 20:04:45.013199017Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722283485013085805,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100741,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=98d29d4a-f064-48e3-b25d-5f90ff5e639d name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 20:04:45 no-preload-843792 crio[719]: time="2024-07-29 20:04:45.014319785Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=62e2be72-f758-4fbc-a89e-3cbba477df69 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 20:04:45 no-preload-843792 crio[719]: time="2024-07-29 20:04:45.014403173Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=62e2be72-f758-4fbc-a89e-3cbba477df69 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 20:04:45 no-preload-843792 crio[719]: time="2024-07-29 20:04:45.014688962Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:772f7ef98746fa60a8e2262a85311c9fe639aef2d98e574b9f00f587e1144972,PodSandboxId:b3ab63ee2ceea51c76cf7f6dcdea29046098a25614c04587d8962a5de293229f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722282610130299421,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee09516d-7ef7-4d66-9acf-7fd4cde3c673,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ba81073ec15900cb92fea4c791e913ad8305447171071a15b0477799633b0c4,PodSandboxId:a1bc8706f98e05e7203230723d8725567bbc724002ee85086cdcb016e69252dc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:1722282609834951055,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8hbrf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b64c7b2-cbed-4c0e-bc1b-2cef107b115c,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /de
v/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d437b9d8891a8dcb3834a850c710ba90c5ae7d4802c66e3f55c23f1383db1e1,PodSandboxId:9da01f8ff1e9c9a29b650e4606779d9b1d435ec799cf144d9e930b8039287cea,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722282609508840856,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-ck5zf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad6c9c9b-740c-464d-85c2-a9ae44663f63,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"conta
inerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6181b7c2844e78311a5d7a07a4b2f9fceb8bfe0a05da76b1a870e281cd4dd91b,PodSandboxId:f797c6b1fabcdfd932d66830426deba87b2af65cad778b8d128cbe6bfc376b46,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722282609410875480,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-bk2nx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 662b0879-7c15-4ec3-a6b6-e49fd9597dc
f,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0921f30a2e42fb71936409c30c53c0b6b856a24a57bcb95bea0e609961da6de,PodSandboxId:392ed8effcf659a2dcb125408b455a45336fc5acabc8a09def67149c4e3f3415,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1722282597866134775,Labels:ma
p[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-843792,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7e90e534d1ee4da28c8e37201501ec1,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80c048960842df86f1ad88dd2498dd475f902142b8f50fe265072e88d15b6e1b,PodSandboxId:3e09c1e3540af7124bd624cb9ccb03e795f432f4ca434c9556a92ea79120d3c0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1722282597824476874,Labels:map[string]stri
ng{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-843792,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c861cce481c417dae420092bf20933b2,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44953b90e4fb7accc9705cc1f9fed98ecc10f90ffbf1591894de47953c20f23c,PodSandboxId:0f3fdb075b25a3355aa129142676af9c2b366e189e0fde3631f5716a7d89540e,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1722282597790113356,Labels:map[string]string{io.kubernetes.contai
ner.name: etcd,io.kubernetes.pod.name: etcd-no-preload-843792,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd1c585c541aadf31eb1a7ad5f096350,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b23b493276c6a6ea9ac497cb471850b3cdbc0080e08065f384170870dab57e2e,PodSandboxId:0d973e41faa6bf987a0848649278cb35cbdc9ddef747abbbb9aac6209d71bda9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1722282597719967957,Labels:map[string]string{io.kubernetes.container.name: kube-controller
-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-843792,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8970eba08cb5dcc05a1fff54b1a9d707,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9da00e1c3c330c41b9bd6c72c7b3746a5971698d0adc79c92379011377b4bbf,PodSandboxId:139bad8d2bd15055ca3b3bcbdb34f6c5d92594ca20085742d7ac1e5e744b4d73,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_EXITED,CreatedAt:1722282311568803086,Labels:map[string]string{io.kubernetes.container.name: kube-apiser
ver,io.kubernetes.pod.name: kube-apiserver-no-preload-843792,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c861cce481c417dae420092bf20933b2,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=62e2be72-f758-4fbc-a89e-3cbba477df69 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 20:04:45 no-preload-843792 crio[719]: time="2024-07-29 20:04:45.063392283Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=233712af-adfa-4b8c-b3bc-899c33d336b9 name=/runtime.v1.RuntimeService/Version
	Jul 29 20:04:45 no-preload-843792 crio[719]: time="2024-07-29 20:04:45.063480738Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=233712af-adfa-4b8c-b3bc-899c33d336b9 name=/runtime.v1.RuntimeService/Version
	Jul 29 20:04:45 no-preload-843792 crio[719]: time="2024-07-29 20:04:45.064687978Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2f757dda-172b-456f-ae1e-2cbb80ea3108 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 20:04:45 no-preload-843792 crio[719]: time="2024-07-29 20:04:45.065079092Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722283485065058413,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100741,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2f757dda-172b-456f-ae1e-2cbb80ea3108 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 20:04:45 no-preload-843792 crio[719]: time="2024-07-29 20:04:45.065577080Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b4077def-d2d5-4ed9-8348-f38cc5827b32 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 20:04:45 no-preload-843792 crio[719]: time="2024-07-29 20:04:45.065647937Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b4077def-d2d5-4ed9-8348-f38cc5827b32 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 20:04:45 no-preload-843792 crio[719]: time="2024-07-29 20:04:45.065824434Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:772f7ef98746fa60a8e2262a85311c9fe639aef2d98e574b9f00f587e1144972,PodSandboxId:b3ab63ee2ceea51c76cf7f6dcdea29046098a25614c04587d8962a5de293229f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722282610130299421,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee09516d-7ef7-4d66-9acf-7fd4cde3c673,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ba81073ec15900cb92fea4c791e913ad8305447171071a15b0477799633b0c4,PodSandboxId:a1bc8706f98e05e7203230723d8725567bbc724002ee85086cdcb016e69252dc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:1722282609834951055,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8hbrf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b64c7b2-cbed-4c0e-bc1b-2cef107b115c,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /de
v/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d437b9d8891a8dcb3834a850c710ba90c5ae7d4802c66e3f55c23f1383db1e1,PodSandboxId:9da01f8ff1e9c9a29b650e4606779d9b1d435ec799cf144d9e930b8039287cea,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722282609508840856,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-ck5zf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad6c9c9b-740c-464d-85c2-a9ae44663f63,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"conta
inerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6181b7c2844e78311a5d7a07a4b2f9fceb8bfe0a05da76b1a870e281cd4dd91b,PodSandboxId:f797c6b1fabcdfd932d66830426deba87b2af65cad778b8d128cbe6bfc376b46,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722282609410875480,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-bk2nx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 662b0879-7c15-4ec3-a6b6-e49fd9597dc
f,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0921f30a2e42fb71936409c30c53c0b6b856a24a57bcb95bea0e609961da6de,PodSandboxId:392ed8effcf659a2dcb125408b455a45336fc5acabc8a09def67149c4e3f3415,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1722282597866134775,Labels:ma
p[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-843792,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d7e90e534d1ee4da28c8e37201501ec1,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80c048960842df86f1ad88dd2498dd475f902142b8f50fe265072e88d15b6e1b,PodSandboxId:3e09c1e3540af7124bd624cb9ccb03e795f432f4ca434c9556a92ea79120d3c0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1722282597824476874,Labels:map[string]stri
ng{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-843792,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c861cce481c417dae420092bf20933b2,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44953b90e4fb7accc9705cc1f9fed98ecc10f90ffbf1591894de47953c20f23c,PodSandboxId:0f3fdb075b25a3355aa129142676af9c2b366e189e0fde3631f5716a7d89540e,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1722282597790113356,Labels:map[string]string{io.kubernetes.contai
ner.name: etcd,io.kubernetes.pod.name: etcd-no-preload-843792,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bd1c585c541aadf31eb1a7ad5f096350,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b23b493276c6a6ea9ac497cb471850b3cdbc0080e08065f384170870dab57e2e,PodSandboxId:0d973e41faa6bf987a0848649278cb35cbdc9ddef747abbbb9aac6209d71bda9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1722282597719967957,Labels:map[string]string{io.kubernetes.container.name: kube-controller
-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-843792,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8970eba08cb5dcc05a1fff54b1a9d707,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9da00e1c3c330c41b9bd6c72c7b3746a5971698d0adc79c92379011377b4bbf,PodSandboxId:139bad8d2bd15055ca3b3bcbdb34f6c5d92594ca20085742d7ac1e5e744b4d73,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_EXITED,CreatedAt:1722282311568803086,Labels:map[string]string{io.kubernetes.container.name: kube-apiser
ver,io.kubernetes.pod.name: kube-apiserver-no-preload-843792,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c861cce481c417dae420092bf20933b2,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b4077def-d2d5-4ed9-8348-f38cc5827b32 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	772f7ef98746f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   14 minutes ago      Running             storage-provisioner       0                   b3ab63ee2ceea       storage-provisioner
	4ba81073ec159       c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899   14 minutes ago      Running             kube-proxy                0                   a1bc8706f98e0       kube-proxy-8hbrf
	1d437b9d8891a       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   14 minutes ago      Running             coredns                   0                   9da01f8ff1e9c       coredns-5cfdc65f69-ck5zf
	6181b7c2844e7       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   14 minutes ago      Running             coredns                   0                   f797c6b1fabcd       coredns-5cfdc65f69-bk2nx
	b0921f30a2e42       d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b   14 minutes ago      Running             kube-scheduler            2                   392ed8effcf65       kube-scheduler-no-preload-843792
	80c048960842d       f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938   14 minutes ago      Running             kube-apiserver            2                   3e09c1e3540af       kube-apiserver-no-preload-843792
	44953b90e4fb7       cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa   14 minutes ago      Running             etcd                      2                   0f3fdb075b25a       etcd-no-preload-843792
	b23b493276c6a       63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5   14 minutes ago      Running             kube-controller-manager   2                   0d973e41faa6b       kube-controller-manager-no-preload-843792
	f9da00e1c3c33       f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938   19 minutes ago      Exited              kube-apiserver            1                   139bad8d2bd15       kube-apiserver-no-preload-843792
	
	
	==> coredns [1d437b9d8891a8dcb3834a850c710ba90c5ae7d4802c66e3f55c23f1383db1e1] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [6181b7c2844e78311a5d7a07a4b2f9fceb8bfe0a05da76b1a870e281cd4dd91b] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               no-preload-843792
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-843792
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ff26f50543a99741e8a988a34136ffea2b41dbf0
	                    minikube.k8s.io/name=no-preload-843792
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_29T19_50_04_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 19:50:00 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-843792
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 20:04:41 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 20:00:26 +0000   Mon, 29 Jul 2024 19:49:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 20:00:26 +0000   Mon, 29 Jul 2024 19:49:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 20:00:26 +0000   Mon, 29 Jul 2024 19:49:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 20:00:26 +0000   Mon, 29 Jul 2024 19:50:00 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.248
	  Hostname:    no-preload-843792
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 731d84f987e647f0962ad04553af0b38
	  System UUID:                731d84f9-87e6-47f0-962a-d04553af0b38
	  Boot ID:                    cfb8dee4-3bb7-481c-9b07-74f42c91c88e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5cfdc65f69-bk2nx                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 coredns-5cfdc65f69-ck5zf                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 etcd-no-preload-843792                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         14m
	  kube-system                 kube-apiserver-no-preload-843792             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-no-preload-843792    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-8hbrf                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-no-preload-843792             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 metrics-server-78fcd8795b-fzt2k              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         14m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 14m                kube-proxy       
	  Normal  Starting                 14m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  14m (x8 over 14m)  kubelet          Node no-preload-843792 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m (x8 over 14m)  kubelet          Node no-preload-843792 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m (x7 over 14m)  kubelet          Node no-preload-843792 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  14m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 14m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  14m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  14m                kubelet          Node no-preload-843792 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m                kubelet          Node no-preload-843792 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m                kubelet          Node no-preload-843792 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           14m                node-controller  Node no-preload-843792 event: Registered Node no-preload-843792 in Controller
	
	
	==> dmesg <==
	[  +0.061282] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.197307] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.480976] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.626482] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.479075] systemd-fstab-generator[634]: Ignoring "noauto" option for root device
	[  +0.066913] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.051707] systemd-fstab-generator[646]: Ignoring "noauto" option for root device
	[  +0.188053] systemd-fstab-generator[660]: Ignoring "noauto" option for root device
	[  +0.136120] systemd-fstab-generator[672]: Ignoring "noauto" option for root device
	[  +0.295274] systemd-fstab-generator[702]: Ignoring "noauto" option for root device
	[Jul29 19:45] systemd-fstab-generator[1227]: Ignoring "noauto" option for root device
	[  +0.059576] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.157803] systemd-fstab-generator[1350]: Ignoring "noauto" option for root device
	[  +3.435967] kauditd_printk_skb: 97 callbacks suppressed
	[  +5.354712] kauditd_printk_skb: 53 callbacks suppressed
	[  +8.740976] kauditd_printk_skb: 30 callbacks suppressed
	[Jul29 19:49] systemd-fstab-generator[3012]: Ignoring "noauto" option for root device
	[  +0.073038] kauditd_printk_skb: 8 callbacks suppressed
	[Jul29 19:50] systemd-fstab-generator[3335]: Ignoring "noauto" option for root device
	[  +0.096485] kauditd_printk_skb: 54 callbacks suppressed
	[  +4.851222] systemd-fstab-generator[3458]: Ignoring "noauto" option for root device
	[  +0.579935] kauditd_printk_skb: 34 callbacks suppressed
	[  +5.523859] kauditd_printk_skb: 60 callbacks suppressed
	
	
	==> etcd [44953b90e4fb7accc9705cc1f9fed98ecc10f90ffbf1591894de47953c20f23c] <==
	{"level":"info","ts":"2024-07-29T19:49:58.167612Z","caller":"embed/etcd.go:858","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-29T19:49:58.169011Z","caller":"embed/etcd.go:570","msg":"cmux::serve","address":"192.168.50.248:2380"}
	{"level":"info","ts":"2024-07-29T19:49:58.725003Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b63441e4e9d891b is starting a new election at term 1"}
	{"level":"info","ts":"2024-07-29T19:49:58.725199Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b63441e4e9d891b became pre-candidate at term 1"}
	{"level":"info","ts":"2024-07-29T19:49:58.725312Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b63441e4e9d891b received MsgPreVoteResp from b63441e4e9d891b at term 1"}
	{"level":"info","ts":"2024-07-29T19:49:58.725343Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b63441e4e9d891b became candidate at term 2"}
	{"level":"info","ts":"2024-07-29T19:49:58.726974Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b63441e4e9d891b received MsgVoteResp from b63441e4e9d891b at term 2"}
	{"level":"info","ts":"2024-07-29T19:49:58.727098Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b63441e4e9d891b became leader at term 2"}
	{"level":"info","ts":"2024-07-29T19:49:58.727128Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b63441e4e9d891b elected leader b63441e4e9d891b at term 2"}
	{"level":"info","ts":"2024-07-29T19:49:58.737391Z","caller":"etcdserver/server.go:2628","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T19:49:58.740622Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"b63441e4e9d891b","local-member-attributes":"{Name:no-preload-843792 ClientURLs:[https://192.168.50.248:2379]}","request-path":"/0/members/b63441e4e9d891b/attributes","cluster-id":"cc455d5c8c0bfc1b","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-29T19:49:58.740813Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T19:49:58.741709Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T19:49:58.744685Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-07-29T19:49:58.750756Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.248:2379"}
	{"level":"info","ts":"2024-07-29T19:49:58.754164Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-29T19:49:58.75429Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-29T19:49:58.75541Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-07-29T19:49:58.763587Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-29T19:49:58.76757Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"cc455d5c8c0bfc1b","local-member-id":"b63441e4e9d891b","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T19:49:58.772095Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T19:49:58.772214Z","caller":"etcdserver/server.go:2652","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T19:59:58.838129Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":726}
	{"level":"info","ts":"2024-07-29T19:59:58.848128Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":726,"took":"9.439497ms","hash":4159607753,"current-db-size-bytes":2195456,"current-db-size":"2.2 MB","current-db-size-in-use-bytes":2195456,"current-db-size-in-use":"2.2 MB"}
	{"level":"info","ts":"2024-07-29T19:59:58.848201Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":4159607753,"revision":726,"compact-revision":-1}
	
	
	==> kernel <==
	 20:04:45 up 20 min,  0 users,  load average: 0.03, 0.17, 0.17
	Linux no-preload-843792 5.10.207 #1 SMP Tue Jul 23 04:25:44 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [80c048960842df86f1ad88dd2498dd475f902142b8f50fe265072e88d15b6e1b] <==
	W0729 20:00:01.391596       1 handler_proxy.go:99] no RequestInfo found in the context
	E0729 20:00:01.391693       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0729 20:00:01.392688       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0729 20:00:01.392732       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0729 20:01:01.393200       1 handler_proxy.go:99] no RequestInfo found in the context
	E0729 20:01:01.393353       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0729 20:01:01.393389       1 handler_proxy.go:99] no RequestInfo found in the context
	E0729 20:01:01.393406       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0729 20:01:01.394497       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0729 20:01:01.394548       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0729 20:03:01.395390       1 handler_proxy.go:99] no RequestInfo found in the context
	E0729 20:03:01.395789       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0729 20:03:01.395400       1 handler_proxy.go:99] no RequestInfo found in the context
	E0729 20:03:01.395948       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0729 20:03:01.397108       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0729 20:03:01.397150       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-apiserver [f9da00e1c3c330c41b9bd6c72c7b3746a5971698d0adc79c92379011377b4bbf] <==
	W0729 19:49:51.533139       1 logging.go:55] [core] [Channel #31 SubChannel #32]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:49:51.547795       1 logging.go:55] [core] [Channel #103 SubChannel #104]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:49:51.599225       1 logging.go:55] [core] [Channel #85 SubChannel #86]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:49:51.620998       1 logging.go:55] [core] [Channel #160 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:49:51.693447       1 logging.go:55] [core] [Channel #88 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:49:51.797227       1 logging.go:55] [core] [Channel #127 SubChannel #128]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:49:51.816144       1 logging.go:55] [core] [Channel #58 SubChannel #59]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:49:51.850594       1 logging.go:55] [core] [Channel #121 SubChannel #122]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:49:51.895356       1 logging.go:55] [core] [Channel #82 SubChannel #83]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:49:51.895487       1 logging.go:55] [core] [Channel #205 SubChannel #206]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:49:51.908098       1 logging.go:55] [core] [Channel #100 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:49:51.911673       1 logging.go:55] [core] [Channel #130 SubChannel #131]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:49:51.922177       1 logging.go:55] [core] [Channel #64 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:49:51.926678       1 logging.go:55] [core] [Channel #28 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:49:51.930291       1 logging.go:55] [core] [Channel #148 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:49:51.967525       1 logging.go:55] [core] [Channel #91 SubChannel #92]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:49:52.035743       1 logging.go:55] [core] [Channel #79 SubChannel #80]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:49:52.069236       1 logging.go:55] [core] [Channel #175 SubChannel #176]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:49:52.109784       1 logging.go:55] [core] [Channel #151 SubChannel #152]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:49:52.131731       1 logging.go:55] [core] [Channel #46 SubChannel #47]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:49:52.197099       1 logging.go:55] [core] [Channel #184 SubChannel #185]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:49:52.231149       1 logging.go:55] [core] [Channel #142 SubChannel #143]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:49:52.609783       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:49:52.669133       1 logging.go:55] [core] [Channel #76 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 19:49:53.222596       1 logging.go:55] [core] [Channel #205 SubChannel #206]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [b23b493276c6a6ea9ac497cb471850b3cdbc0080e08065f384170870dab57e2e] <==
	E0729 19:59:38.362682       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0729 19:59:38.435972       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 20:00:08.370004       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0729 20:00:08.445403       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0729 20:00:26.821619       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="no-preload-843792"
	E0729 20:00:38.377177       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0729 20:00:38.454317       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 20:01:08.383836       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0729 20:01:08.461637       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0729 20:01:11.512993       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-78fcd8795b" duration="166.381µs"
	I0729 20:01:23.512946       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-78fcd8795b" duration="153.34µs"
	E0729 20:01:38.390385       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0729 20:01:38.470172       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 20:02:08.398143       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0729 20:02:08.479614       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 20:02:38.405755       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0729 20:02:38.488833       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 20:03:08.413944       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0729 20:03:08.497822       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 20:03:38.420062       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0729 20:03:38.510549       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 20:04:08.427617       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0729 20:04:08.520464       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0729 20:04:38.435040       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0729 20:04:38.528503       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [4ba81073ec15900cb92fea4c791e913ad8305447171071a15b0477799633b0c4] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0729 19:50:10.239727       1 proxier.go:705] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0729 19:50:10.271001       1 server.go:682] "Successfully retrieved node IP(s)" IPs=["192.168.50.248"]
	E0729 19:50:10.271088       1 server.go:235] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0729 19:50:10.327478       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0729 19:50:10.327628       1 server.go:246] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0729 19:50:10.327682       1 server_linux.go:170] "Using iptables Proxier"
	I0729 19:50:10.332283       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0729 19:50:10.332599       1 server.go:488] "Version info" version="v1.31.0-beta.0"
	I0729 19:50:10.332628       1 server.go:490] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 19:50:10.335645       1 config.go:197] "Starting service config controller"
	I0729 19:50:10.335677       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0729 19:50:10.335703       1 config.go:104] "Starting endpoint slice config controller"
	I0729 19:50:10.335708       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0729 19:50:10.339385       1 config.go:326] "Starting node config controller"
	I0729 19:50:10.339451       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0729 19:50:10.436697       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0729 19:50:10.436834       1 shared_informer.go:320] Caches are synced for service config
	I0729 19:50:10.439581       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [b0921f30a2e42fb71936409c30c53c0b6b856a24a57bcb95bea0e609961da6de] <==
	W0729 19:50:01.374603       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0729 19:50:01.374657       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0729 19:50:01.390184       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0729 19:50:01.390245       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0729 19:50:01.497150       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0729 19:50:01.497320       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0729 19:50:01.516710       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0729 19:50:01.516798       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0729 19:50:01.534178       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0729 19:50:01.534305       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0729 19:50:01.536269       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0729 19:50:01.536365       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0729 19:50:01.575516       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0729 19:50:01.575776       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0729 19:50:01.593553       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0729 19:50:01.593612       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0729 19:50:01.673114       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0729 19:50:01.673213       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0729 19:50:01.674040       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0729 19:50:01.674085       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0729 19:50:01.744186       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0729 19:50:01.744294       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0729 19:50:01.866802       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0729 19:50:01.866955       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I0729 19:50:04.311427       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 29 20:02:03 no-preload-843792 kubelet[3342]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 20:02:03 no-preload-843792 kubelet[3342]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 20:02:09 no-preload-843792 kubelet[3342]: E0729 20:02:09.494744    3342 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-fzt2k" podUID="180acfb0-ec43-4f2e-b04a-048253d4b79e"
	Jul 29 20:02:20 no-preload-843792 kubelet[3342]: E0729 20:02:20.495530    3342 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-fzt2k" podUID="180acfb0-ec43-4f2e-b04a-048253d4b79e"
	Jul 29 20:02:34 no-preload-843792 kubelet[3342]: E0729 20:02:34.494475    3342 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-fzt2k" podUID="180acfb0-ec43-4f2e-b04a-048253d4b79e"
	Jul 29 20:02:49 no-preload-843792 kubelet[3342]: E0729 20:02:49.495184    3342 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-fzt2k" podUID="180acfb0-ec43-4f2e-b04a-048253d4b79e"
	Jul 29 20:03:01 no-preload-843792 kubelet[3342]: E0729 20:03:01.494347    3342 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-fzt2k" podUID="180acfb0-ec43-4f2e-b04a-048253d4b79e"
	Jul 29 20:03:03 no-preload-843792 kubelet[3342]: E0729 20:03:03.517240    3342 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 20:03:03 no-preload-843792 kubelet[3342]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 20:03:03 no-preload-843792 kubelet[3342]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 20:03:03 no-preload-843792 kubelet[3342]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 20:03:03 no-preload-843792 kubelet[3342]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 20:03:12 no-preload-843792 kubelet[3342]: E0729 20:03:12.498426    3342 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-fzt2k" podUID="180acfb0-ec43-4f2e-b04a-048253d4b79e"
	Jul 29 20:03:26 no-preload-843792 kubelet[3342]: E0729 20:03:26.494373    3342 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-fzt2k" podUID="180acfb0-ec43-4f2e-b04a-048253d4b79e"
	Jul 29 20:03:39 no-preload-843792 kubelet[3342]: E0729 20:03:39.496188    3342 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-fzt2k" podUID="180acfb0-ec43-4f2e-b04a-048253d4b79e"
	Jul 29 20:03:54 no-preload-843792 kubelet[3342]: E0729 20:03:54.495573    3342 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-fzt2k" podUID="180acfb0-ec43-4f2e-b04a-048253d4b79e"
	Jul 29 20:04:03 no-preload-843792 kubelet[3342]: E0729 20:04:03.517867    3342 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 20:04:03 no-preload-843792 kubelet[3342]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 20:04:03 no-preload-843792 kubelet[3342]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 20:04:03 no-preload-843792 kubelet[3342]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 20:04:03 no-preload-843792 kubelet[3342]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 20:04:07 no-preload-843792 kubelet[3342]: E0729 20:04:07.500158    3342 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-fzt2k" podUID="180acfb0-ec43-4f2e-b04a-048253d4b79e"
	Jul 29 20:04:19 no-preload-843792 kubelet[3342]: E0729 20:04:19.495339    3342 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-fzt2k" podUID="180acfb0-ec43-4f2e-b04a-048253d4b79e"
	Jul 29 20:04:33 no-preload-843792 kubelet[3342]: E0729 20:04:33.496129    3342 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-fzt2k" podUID="180acfb0-ec43-4f2e-b04a-048253d4b79e"
	Jul 29 20:04:45 no-preload-843792 kubelet[3342]: E0729 20:04:45.495375    3342 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-78fcd8795b-fzt2k" podUID="180acfb0-ec43-4f2e-b04a-048253d4b79e"
	
	
	==> storage-provisioner [772f7ef98746fa60a8e2262a85311c9fe639aef2d98e574b9f00f587e1144972] <==
	I0729 19:50:10.289851       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0729 19:50:10.313626       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0729 19:50:10.314679       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0729 19:50:10.331819       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0729 19:50:10.333966       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"f94761bd-7a88-4939-8065-5bbf4aab4fd1", APIVersion:"v1", ResourceVersion:"437", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-843792_d334da6e-db34-4902-adca-e8b8fcb9b075 became leader
	I0729 19:50:10.334245       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-843792_d334da6e-db34-4902-adca-e8b8fcb9b075!
	I0729 19:50:10.443107       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-843792_d334da6e-db34-4902-adca-e8b8fcb9b075!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-843792 -n no-preload-843792
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-843792 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-78fcd8795b-fzt2k
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-843792 describe pod metrics-server-78fcd8795b-fzt2k
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-843792 describe pod metrics-server-78fcd8795b-fzt2k: exit status 1 (65.810906ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-78fcd8795b-fzt2k" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-843792 describe pod metrics-server-78fcd8795b-fzt2k: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (324.08s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (151.98s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.65:8443: connect: connection refused
E0729 20:01:56.610067 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/auto-184620/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.65:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.65:8443: connect: connection refused
(last message repeated 29 times)
E0729 20:02:35.028356 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/kindnet-184620/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.65:8443: connect: connection refused
(last message repeated 25 times)
E0729 20:03:00.969169 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/functional-728029/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.65:8443: connect: connection refused
(last message repeated 15 times)
E0729 20:03:16.410915 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/calico-184620/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.65:8443: connect: connection refused
(last message repeated 27 times)
E0729 20:03:44.131378 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/custom-flannel-184620/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.65:8443: connect: connection refused
(last message repeated 28 times)
start_stop_delete_test.go:287: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-021528 -n old-k8s-version-021528
start_stop_delete_test.go:287: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-021528 -n old-k8s-version-021528: exit status 2 (233.864967ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:287: status error: exit status 2 (may be ok)
start_stop_delete_test.go:287: "old-k8s-version-021528" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-021528 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-021528 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.739µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-021528 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
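For context on the repeated "connection refused" warnings above: until the 9m0s deadline expires, the helper keeps listing pods in the kubernetes-dashboard namespace by label selector against the profile's apiserver, and with the apiserver stopped every attempt fails at the TCP dial. Below is a minimal, self-contained Go sketch of that request (not part of the test suite; the URL is copied from the warning lines, and certificate verification is skipped here only to make the transport-level failure visible):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	// Endpoint taken from the WARNING lines above: list dashboard pods by label selector.
	url := "https://192.168.39.65:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard"

	client := &http.Client{
		Timeout: 5 * time.Second,
		// The real test authenticates with the profile's client certificates;
		// verification is skipped in this sketch only to demonstrate the dial failure.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}

	resp, err := client.Get(url)
	if err != nil {
		// With old-k8s-version-021528 stopped this prints a dial error such as:
		// request failed: ... dial tcp 192.168.39.65:8443: connect: connection refused
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
}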
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-021528 -n old-k8s-version-021528
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-021528 -n old-k8s-version-021528: exit status 2 (217.530403ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-021528 logs -n 25
E0729 20:04:14.564174 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/enable-default-cni-184620/client.crt: no such file or directory
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-021528 logs -n 25: (1.502549271s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-184620 sudo cat                              | bridge-184620                | jenkins | v1.33.1 | 29 Jul 24 19:36 UTC | 29 Jul 24 19:36 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p bridge-184620 sudo                                  | bridge-184620                | jenkins | v1.33.1 | 29 Jul 24 19:36 UTC | 29 Jul 24 19:36 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p bridge-184620 sudo                                  | bridge-184620                | jenkins | v1.33.1 | 29 Jul 24 19:36 UTC | 29 Jul 24 19:36 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-184620 sudo                                  | bridge-184620                | jenkins | v1.33.1 | 29 Jul 24 19:36 UTC | 29 Jul 24 19:36 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-184620 sudo find                             | bridge-184620                | jenkins | v1.33.1 | 29 Jul 24 19:36 UTC | 29 Jul 24 19:36 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-184620 sudo crio                             | bridge-184620                | jenkins | v1.33.1 | 29 Jul 24 19:36 UTC | 29 Jul 24 19:36 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-184620                                       | bridge-184620                | jenkins | v1.33.1 | 29 Jul 24 19:36 UTC | 29 Jul 24 19:36 UTC |
	| delete  | -p                                                     | disable-driver-mounts-251895 | jenkins | v1.33.1 | 29 Jul 24 19:36 UTC | 29 Jul 24 19:36 UTC |
	|         | disable-driver-mounts-251895                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-024652 | jenkins | v1.33.1 | 29 Jul 24 19:36 UTC | 29 Jul 24 19:37 UTC |
	|         | default-k8s-diff-port-024652                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-843792             | no-preload-843792            | jenkins | v1.33.1 | 29 Jul 24 19:36 UTC | 29 Jul 24 19:36 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-843792                                   | no-preload-843792            | jenkins | v1.33.1 | 29 Jul 24 19:36 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-358053            | embed-certs-358053           | jenkins | v1.33.1 | 29 Jul 24 19:36 UTC | 29 Jul 24 19:36 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-358053                                  | embed-certs-358053           | jenkins | v1.33.1 | 29 Jul 24 19:36 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-024652  | default-k8s-diff-port-024652 | jenkins | v1.33.1 | 29 Jul 24 19:37 UTC | 29 Jul 24 19:37 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-024652 | jenkins | v1.33.1 | 29 Jul 24 19:37 UTC |                     |
	|         | default-k8s-diff-port-024652                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-843792                  | no-preload-843792            | jenkins | v1.33.1 | 29 Jul 24 19:38 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-843792 --memory=2200                     | no-preload-843792            | jenkins | v1.33.1 | 29 Jul 24 19:38 UTC | 29 Jul 24 19:50 UTC |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-021528        | old-k8s-version-021528       | jenkins | v1.33.1 | 29 Jul 24 19:39 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-358053                 | embed-certs-358053           | jenkins | v1.33.1 | 29 Jul 24 19:39 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-358053                                  | embed-certs-358053           | jenkins | v1.33.1 | 29 Jul 24 19:39 UTC | 29 Jul 24 19:49 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-024652       | default-k8s-diff-port-024652 | jenkins | v1.33.1 | 29 Jul 24 19:39 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-024652 | jenkins | v1.33.1 | 29 Jul 24 19:39 UTC | 29 Jul 24 19:49 UTC |
	|         | default-k8s-diff-port-024652                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-021528                              | old-k8s-version-021528       | jenkins | v1.33.1 | 29 Jul 24 19:40 UTC | 29 Jul 24 19:40 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-021528             | old-k8s-version-021528       | jenkins | v1.33.1 | 29 Jul 24 19:40 UTC | 29 Jul 24 19:40 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-021528                              | old-k8s-version-021528       | jenkins | v1.33.1 | 29 Jul 24 19:40 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 19:40:57
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 19:40:57.978681 1120970 out.go:291] Setting OutFile to fd 1 ...
	I0729 19:40:57.978791 1120970 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 19:40:57.978802 1120970 out.go:304] Setting ErrFile to fd 2...
	I0729 19:40:57.978806 1120970 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 19:40:57.979026 1120970 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19312-1055011/.minikube/bin
	I0729 19:40:57.979596 1120970 out.go:298] Setting JSON to false
	I0729 19:40:57.980589 1120970 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":12210,"bootTime":1722269848,"procs":198,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 19:40:57.980644 1120970 start.go:139] virtualization: kvm guest
	I0729 19:40:57.982865 1120970 out.go:177] * [old-k8s-version-021528] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 19:40:57.984265 1120970 out.go:177]   - MINIKUBE_LOCATION=19312
	I0729 19:40:57.984290 1120970 notify.go:220] Checking for updates...
	I0729 19:40:57.986747 1120970 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 19:40:57.987926 1120970 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19312-1055011/kubeconfig
	I0729 19:40:57.989034 1120970 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19312-1055011/.minikube
	I0729 19:40:57.990155 1120970 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 19:40:57.991151 1120970 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 19:40:57.992788 1120970 config.go:182] Loaded profile config "old-k8s-version-021528": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0729 19:40:57.993431 1120970 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:40:57.993513 1120970 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:40:58.008423 1120970 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35781
	I0729 19:40:58.008809 1120970 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:40:58.009278 1120970 main.go:141] libmachine: Using API Version  1
	I0729 19:40:58.009298 1120970 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:40:58.009623 1120970 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:40:58.009801 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .DriverName
	I0729 19:40:58.011523 1120970 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0729 19:40:58.012638 1120970 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 19:40:58.012915 1120970 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:40:58.012949 1120970 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:40:58.027302 1120970 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38245
	I0729 19:40:58.027641 1120970 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:40:58.028112 1120970 main.go:141] libmachine: Using API Version  1
	I0729 19:40:58.028144 1120970 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:40:58.028470 1120970 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:40:58.028677 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .DriverName
	I0729 19:40:58.062833 1120970 out.go:177] * Using the kvm2 driver based on existing profile
	I0729 19:40:58.064034 1120970 start.go:297] selected driver: kvm2
	I0729 19:40:58.064048 1120970 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-021528 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-021528 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.65 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 19:40:58.064180 1120970 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 19:40:58.065210 1120970 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 19:40:58.065308 1120970 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19312-1055011/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 19:40:58.079987 1120970 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0729 19:40:58.080369 1120970 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 19:40:58.080432 1120970 cni.go:84] Creating CNI manager for ""
	I0729 19:40:58.080446 1120970 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 19:40:58.080487 1120970 start.go:340] cluster config:
	{Name:old-k8s-version-021528 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-021528 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.65 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 19:40:58.080598 1120970 iso.go:125] acquiring lock: {Name:mk0af61c0fec1fd47930e548d03010a532c687b7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 19:40:58.082281 1120970 out.go:177] * Starting "old-k8s-version-021528" primary control-plane node in "old-k8s-version-021528" cluster
	I0729 19:40:58.083538 1120970 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0729 19:40:58.083567 1120970 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0729 19:40:58.083574 1120970 cache.go:56] Caching tarball of preloaded images
	I0729 19:40:58.083648 1120970 preload.go:172] Found /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 19:40:58.083657 1120970 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0729 19:40:58.083744 1120970 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/old-k8s-version-021528/config.json ...
	I0729 19:40:58.083909 1120970 start.go:360] acquireMachinesLock for old-k8s-version-021528: {Name:mk0d8d947666df844b5fc2c0e0eebbfed69b4140 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 19:40:58.743070 1119948 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.248:22: connect: no route to host
	I0729 19:41:01.815162 1119948 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.248:22: connect: no route to host
	I0729 19:41:07.895109 1119948 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.248:22: connect: no route to host
	I0729 19:41:10.967163 1119948 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.248:22: connect: no route to host
	I0729 19:41:17.047104 1119948 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.248:22: connect: no route to host
	I0729 19:41:20.119110 1119948 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.248:22: connect: no route to host
	I0729 19:41:26.199071 1119948 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.248:22: connect: no route to host
	I0729 19:41:29.271169 1119948 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.248:22: connect: no route to host
	I0729 19:41:35.351112 1119948 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.248:22: connect: no route to host
	I0729 19:41:38.423168 1119948 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.248:22: connect: no route to host
	I0729 19:41:44.503138 1119948 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.248:22: connect: no route to host
	I0729 19:41:47.575152 1119948 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.248:22: connect: no route to host
	I0729 19:41:53.655149 1119948 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.248:22: connect: no route to host
	I0729 19:41:56.727131 1119948 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.248:22: connect: no route to host
	I0729 19:42:02.807132 1119948 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.248:22: connect: no route to host
	I0729 19:42:05.879122 1119948 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.248:22: connect: no route to host
	I0729 19:42:11.959162 1119948 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.248:22: connect: no route to host
	I0729 19:42:15.031086 1119948 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.248:22: connect: no route to host
	I0729 19:42:21.111136 1119948 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.248:22: connect: no route to host
	I0729 19:42:24.183135 1119948 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.248:22: connect: no route to host
	I0729 19:42:30.263164 1119948 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.248:22: connect: no route to host
	I0729 19:42:33.335133 1119948 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.248:22: connect: no route to host
	I0729 19:42:39.415119 1119948 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.248:22: connect: no route to host
	I0729 19:42:42.487148 1119948 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.248:22: connect: no route to host
	I0729 19:42:48.567136 1119948 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.248:22: connect: no route to host
	I0729 19:42:51.639137 1119948 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.248:22: connect: no route to host
	I0729 19:42:57.719135 1119948 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.248:22: connect: no route to host
	I0729 19:43:00.791072 1119948 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.248:22: connect: no route to host
	I0729 19:43:06.871163 1119948 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.248:22: connect: no route to host
	I0729 19:43:09.943159 1119948 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.248:22: connect: no route to host
	I0729 19:43:16.023117 1119948 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.248:22: connect: no route to host
	I0729 19:43:19.095170 1119948 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.248:22: connect: no route to host
	I0729 19:43:25.175078 1119948 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.248:22: connect: no route to host
	I0729 19:43:28.247100 1119948 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.248:22: connect: no route to host
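	The run of "no route to host" lines above is process 1119948 (the no-preload-843792 restart) repeatedly probing the VM's SSH endpoint at 192.168.50.248:22; each entry is one failed TCP dial while the guest is still down. A minimal Go sketch of that kind of dial-and-retry probe follows; the function name, per-dial timeout and retry interval are illustrative assumptions, not minikube's actual implementation.

	    package main

	    import (
	        "fmt"
	        "net"
	        "time"
	    )

	    // waitForSSH keeps dialing addr (e.g. "192.168.50.248:22") until a TCP
	    // connection succeeds or the overall deadline passes. Pattern sketch only.
	    func waitForSSH(addr string, timeout time.Duration) error {
	        deadline := time.Now().Add(timeout)
	        for time.Now().Before(deadline) {
	            conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
	            if err == nil {
	                conn.Close()
	                return nil
	            }
	            fmt.Printf("dial %s failed: %v; retrying\n", addr, err)
	            time.Sleep(3 * time.Second)
	        }
	        return fmt.Errorf("timed out waiting for %s", addr)
	    }

	    func main() {
	        if err := waitForSSH("192.168.50.248:22", 5*time.Minute); err != nil {
	            fmt.Println(err)
	        }
	    }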
	I0729 19:43:31.250338 1120280 start.go:364] duration metric: took 4m11.087175718s to acquireMachinesLock for "embed-certs-358053"
	I0729 19:43:31.250404 1120280 start.go:96] Skipping create...Using existing machine configuration
	I0729 19:43:31.250411 1120280 fix.go:54] fixHost starting: 
	I0729 19:43:31.250743 1120280 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:43:31.250772 1120280 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:43:31.266386 1120280 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36427
	I0729 19:43:31.266811 1120280 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:43:31.267264 1120280 main.go:141] libmachine: Using API Version  1
	I0729 19:43:31.267290 1120280 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:43:31.267606 1120280 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:43:31.267776 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .DriverName
	I0729 19:43:31.267930 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetState
	I0729 19:43:31.269434 1120280 fix.go:112] recreateIfNeeded on embed-certs-358053: state=Stopped err=<nil>
	I0729 19:43:31.269469 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .DriverName
	W0729 19:43:31.269649 1120280 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 19:43:31.271498 1120280 out.go:177] * Restarting existing kvm2 VM for "embed-certs-358053" ...
	I0729 19:43:31.248030 1119948 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 19:43:31.248063 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetMachineName
	I0729 19:43:31.248357 1119948 buildroot.go:166] provisioning hostname "no-preload-843792"
	I0729 19:43:31.248385 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetMachineName
	I0729 19:43:31.248542 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHHostname
	I0729 19:43:31.250201 1119948 machine.go:97] duration metric: took 4m37.426219796s to provisionDockerMachine
	I0729 19:43:31.250243 1119948 fix.go:56] duration metric: took 4m37.44720731s for fixHost
	I0729 19:43:31.250251 1119948 start.go:83] releasing machines lock for "no-preload-843792", held for 4m37.4472306s
	W0729 19:43:31.250275 1119948 start.go:714] error starting host: provision: host is not running
	W0729 19:43:31.250399 1119948 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0729 19:43:31.250411 1119948 start.go:729] Will try again in 5 seconds ...
	I0729 19:43:31.272835 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .Start
	I0729 19:43:31.272957 1120280 main.go:141] libmachine: (embed-certs-358053) Ensuring networks are active...
	I0729 19:43:31.273784 1120280 main.go:141] libmachine: (embed-certs-358053) Ensuring network default is active
	I0729 19:43:31.274173 1120280 main.go:141] libmachine: (embed-certs-358053) Ensuring network mk-embed-certs-358053 is active
	I0729 19:43:31.274533 1120280 main.go:141] libmachine: (embed-certs-358053) Getting domain xml...
	I0729 19:43:31.275353 1120280 main.go:141] libmachine: (embed-certs-358053) Creating domain...
	I0729 19:43:32.452915 1120280 main.go:141] libmachine: (embed-certs-358053) Waiting to get IP...
	I0729 19:43:32.453981 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:32.454389 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | unable to find current IP address of domain embed-certs-358053 in network mk-embed-certs-358053
	I0729 19:43:32.454483 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | I0729 19:43:32.454365 1121493 retry.go:31] will retry after 241.453693ms: waiting for machine to come up
	I0729 19:43:32.697915 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:32.698300 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | unable to find current IP address of domain embed-certs-358053 in network mk-embed-certs-358053
	I0729 19:43:32.698331 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | I0729 19:43:32.698251 1121493 retry.go:31] will retry after 239.33532ms: waiting for machine to come up
	I0729 19:43:32.939708 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:32.940293 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | unable to find current IP address of domain embed-certs-358053 in network mk-embed-certs-358053
	I0729 19:43:32.940318 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | I0729 19:43:32.940236 1121493 retry.go:31] will retry after 446.993297ms: waiting for machine to come up
	I0729 19:43:33.388724 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:33.389127 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | unable to find current IP address of domain embed-certs-358053 in network mk-embed-certs-358053
	I0729 19:43:33.389158 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | I0729 19:43:33.389070 1121493 retry.go:31] will retry after 422.446887ms: waiting for machine to come up
	I0729 19:43:33.812596 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:33.813022 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | unable to find current IP address of domain embed-certs-358053 in network mk-embed-certs-358053
	I0729 19:43:33.813051 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | I0729 19:43:33.812969 1121493 retry.go:31] will retry after 539.971993ms: waiting for machine to come up
	I0729 19:43:34.354683 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:34.355036 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | unable to find current IP address of domain embed-certs-358053 in network mk-embed-certs-358053
	I0729 19:43:34.355070 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | I0729 19:43:34.354984 1121493 retry.go:31] will retry after 804.005911ms: waiting for machine to come up
	I0729 19:43:36.252290 1119948 start.go:360] acquireMachinesLock for no-preload-843792: {Name:mk0d8d947666df844b5fc2c0e0eebbfed69b4140 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 19:43:35.161115 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:35.161468 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | unable to find current IP address of domain embed-certs-358053 in network mk-embed-certs-358053
	I0729 19:43:35.161505 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | I0729 19:43:35.161430 1121493 retry.go:31] will retry after 1.057061094s: waiting for machine to come up
	I0729 19:43:36.220062 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:36.220425 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | unable to find current IP address of domain embed-certs-358053 in network mk-embed-certs-358053
	I0729 19:43:36.220450 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | I0729 19:43:36.220375 1121493 retry.go:31] will retry after 1.460606435s: waiting for machine to come up
	I0729 19:43:37.683178 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:37.683636 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | unable to find current IP address of domain embed-certs-358053 in network mk-embed-certs-358053
	I0729 19:43:37.683655 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | I0729 19:43:37.683597 1121493 retry.go:31] will retry after 1.732527981s: waiting for machine to come up
	I0729 19:43:39.418519 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:39.418954 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | unable to find current IP address of domain embed-certs-358053 in network mk-embed-certs-358053
	I0729 19:43:39.418977 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | I0729 19:43:39.418904 1121493 retry.go:31] will retry after 2.125686576s: waiting for machine to come up
	I0729 19:43:41.547132 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:41.547733 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | unable to find current IP address of domain embed-certs-358053 in network mk-embed-certs-358053
	I0729 19:43:41.547761 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | I0729 19:43:41.547675 1121493 retry.go:31] will retry after 2.335461887s: waiting for machine to come up
	I0729 19:43:43.884901 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:43.885306 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | unable to find current IP address of domain embed-certs-358053 in network mk-embed-certs-358053
	I0729 19:43:43.885329 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | I0729 19:43:43.885251 1121493 retry.go:31] will retry after 2.493920061s: waiting for machine to come up
	I0729 19:43:46.380895 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:46.381249 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | unable to find current IP address of domain embed-certs-358053 in network mk-embed-certs-358053
	I0729 19:43:46.381283 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | I0729 19:43:46.381209 1121493 retry.go:31] will retry after 4.001159351s: waiting for machine to come up
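	The embed-certs-358053 entries above show the kvm2 driver polling for the VM's DHCP lease, with the delay between attempts growing from roughly 240ms up to about 4s. A hedged sketch of that grow-and-cap backoff pattern follows; the helper name, starting delay and cap are assumptions chosen for illustration, not the driver's real code.

	    package main

	    import (
	        "errors"
	        "fmt"
	        "time"
	    )

	    // pollWithBackoff calls check until it succeeds, roughly doubling the wait
	    // between attempts and capping it at maxWait. Pattern sketch only.
	    func pollWithBackoff(check func() error, maxWait time.Duration, attempts int) error {
	        wait := 250 * time.Millisecond
	        var err error
	        for i := 0; i < attempts; i++ {
	            if err = check(); err == nil {
	                return nil
	            }
	            fmt.Printf("attempt %d failed: %v; retrying after %s\n", i+1, err, wait)
	            time.Sleep(wait)
	            wait *= 2
	            if wait > maxWait {
	                wait = maxWait
	            }
	        }
	        return err
	    }

	    func main() {
	        err := pollWithBackoff(func() error {
	            return errors.New("unable to find current IP address")
	        }, 4*time.Second, 6)
	        fmt.Println("final result:", err)
	    }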
	I0729 19:43:51.915678 1120587 start.go:364] duration metric: took 3m55.652628622s to acquireMachinesLock for "default-k8s-diff-port-024652"
	I0729 19:43:51.915763 1120587 start.go:96] Skipping create...Using existing machine configuration
	I0729 19:43:51.915773 1120587 fix.go:54] fixHost starting: 
	I0729 19:43:51.916253 1120587 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:43:51.916303 1120587 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:43:51.933248 1120587 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36959
	I0729 19:43:51.933631 1120587 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:43:51.934146 1120587 main.go:141] libmachine: Using API Version  1
	I0729 19:43:51.934178 1120587 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:43:51.934512 1120587 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:43:51.934710 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .DriverName
	I0729 19:43:51.934882 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetState
	I0729 19:43:51.936266 1120587 fix.go:112] recreateIfNeeded on default-k8s-diff-port-024652: state=Stopped err=<nil>
	I0729 19:43:51.936294 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .DriverName
	W0729 19:43:51.936471 1120587 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 19:43:51.938542 1120587 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-024652" ...
	I0729 19:43:50.387313 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:50.387631 1120280 main.go:141] libmachine: (embed-certs-358053) Found IP for machine: 192.168.61.201
	I0729 19:43:50.387649 1120280 main.go:141] libmachine: (embed-certs-358053) Reserving static IP address...
	I0729 19:43:50.387673 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has current primary IP address 192.168.61.201 and MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:50.388059 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | found host DHCP lease matching {name: "embed-certs-358053", mac: "52:54:00:b7:9e:78", ip: "192.168.61.201"} in network mk-embed-certs-358053: {Iface:virbr3 ExpiryTime:2024-07-29 20:43:42 +0000 UTC Type:0 Mac:52:54:00:b7:9e:78 Iaid: IPaddr:192.168.61.201 Prefix:24 Hostname:embed-certs-358053 Clientid:01:52:54:00:b7:9e:78}
	I0729 19:43:50.388088 1120280 main.go:141] libmachine: (embed-certs-358053) Reserved static IP address: 192.168.61.201
	I0729 19:43:50.388122 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | skip adding static IP to network mk-embed-certs-358053 - found existing host DHCP lease matching {name: "embed-certs-358053", mac: "52:54:00:b7:9e:78", ip: "192.168.61.201"}
	I0729 19:43:50.388140 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | Getting to WaitForSSH function...
	I0729 19:43:50.388153 1120280 main.go:141] libmachine: (embed-certs-358053) Waiting for SSH to be available...
	I0729 19:43:50.389891 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:50.390221 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:9e:78", ip: ""} in network mk-embed-certs-358053: {Iface:virbr3 ExpiryTime:2024-07-29 20:43:42 +0000 UTC Type:0 Mac:52:54:00:b7:9e:78 Iaid: IPaddr:192.168.61.201 Prefix:24 Hostname:embed-certs-358053 Clientid:01:52:54:00:b7:9e:78}
	I0729 19:43:50.390251 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined IP address 192.168.61.201 and MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:50.390327 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | Using SSH client type: external
	I0729 19:43:50.390358 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | Using SSH private key: /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/embed-certs-358053/id_rsa (-rw-------)
	I0729 19:43:50.390384 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.201 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/embed-certs-358053/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 19:43:50.390394 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | About to run SSH command:
	I0729 19:43:50.390403 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | exit 0
	I0729 19:43:50.519000 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | SSH cmd err, output: <nil>: 
	I0729 19:43:50.519409 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetConfigRaw
	I0729 19:43:50.520046 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetIP
	I0729 19:43:50.522297 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:50.522663 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:9e:78", ip: ""} in network mk-embed-certs-358053: {Iface:virbr3 ExpiryTime:2024-07-29 20:43:42 +0000 UTC Type:0 Mac:52:54:00:b7:9e:78 Iaid: IPaddr:192.168.61.201 Prefix:24 Hostname:embed-certs-358053 Clientid:01:52:54:00:b7:9e:78}
	I0729 19:43:50.522692 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined IP address 192.168.61.201 and MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:50.522946 1120280 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/embed-certs-358053/config.json ...
	I0729 19:43:50.523145 1120280 machine.go:94] provisionDockerMachine start ...
	I0729 19:43:50.523164 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .DriverName
	I0729 19:43:50.523346 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHHostname
	I0729 19:43:50.525235 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:50.525608 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:9e:78", ip: ""} in network mk-embed-certs-358053: {Iface:virbr3 ExpiryTime:2024-07-29 20:43:42 +0000 UTC Type:0 Mac:52:54:00:b7:9e:78 Iaid: IPaddr:192.168.61.201 Prefix:24 Hostname:embed-certs-358053 Clientid:01:52:54:00:b7:9e:78}
	I0729 19:43:50.525625 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined IP address 192.168.61.201 and MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:50.525729 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHPort
	I0729 19:43:50.525897 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHKeyPath
	I0729 19:43:50.526188 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHKeyPath
	I0729 19:43:50.526332 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHUsername
	I0729 19:43:50.526523 1120280 main.go:141] libmachine: Using SSH client type: native
	I0729 19:43:50.526751 1120280 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.201 22 <nil> <nil>}
	I0729 19:43:50.526765 1120280 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 19:43:50.639176 1120280 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0729 19:43:50.639206 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetMachineName
	I0729 19:43:50.639463 1120280 buildroot.go:166] provisioning hostname "embed-certs-358053"
	I0729 19:43:50.639489 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetMachineName
	I0729 19:43:50.639652 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHHostname
	I0729 19:43:50.642218 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:50.642546 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:9e:78", ip: ""} in network mk-embed-certs-358053: {Iface:virbr3 ExpiryTime:2024-07-29 20:43:42 +0000 UTC Type:0 Mac:52:54:00:b7:9e:78 Iaid: IPaddr:192.168.61.201 Prefix:24 Hostname:embed-certs-358053 Clientid:01:52:54:00:b7:9e:78}
	I0729 19:43:50.642573 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined IP address 192.168.61.201 and MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:50.642704 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHPort
	I0729 19:43:50.642896 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHKeyPath
	I0729 19:43:50.643034 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHKeyPath
	I0729 19:43:50.643188 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHUsername
	I0729 19:43:50.643396 1120280 main.go:141] libmachine: Using SSH client type: native
	I0729 19:43:50.643599 1120280 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.201 22 <nil> <nil>}
	I0729 19:43:50.643615 1120280 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-358053 && echo "embed-certs-358053" | sudo tee /etc/hostname
	I0729 19:43:50.775163 1120280 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-358053
	
	I0729 19:43:50.775200 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHHostname
	I0729 19:43:50.777834 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:50.778140 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:9e:78", ip: ""} in network mk-embed-certs-358053: {Iface:virbr3 ExpiryTime:2024-07-29 20:43:42 +0000 UTC Type:0 Mac:52:54:00:b7:9e:78 Iaid: IPaddr:192.168.61.201 Prefix:24 Hostname:embed-certs-358053 Clientid:01:52:54:00:b7:9e:78}
	I0729 19:43:50.778166 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined IP address 192.168.61.201 and MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:50.778337 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHPort
	I0729 19:43:50.778536 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHKeyPath
	I0729 19:43:50.778687 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHKeyPath
	I0729 19:43:50.778818 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHUsername
	I0729 19:43:50.778984 1120280 main.go:141] libmachine: Using SSH client type: native
	I0729 19:43:50.779150 1120280 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.201 22 <nil> <nil>}
	I0729 19:43:50.779164 1120280 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-358053' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-358053/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-358053' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 19:43:50.899709 1120280 main.go:141] libmachine: SSH cmd err, output: <nil>: 
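	The shell fragment above is how provisioning makes the new hostname resolvable inside the guest: if no /etc/hosts entry already ends in embed-certs-358053, it either rewrites the existing 127.0.1.1 line or appends one. A small Go helper that assembles the same fragment for an arbitrary hostname is sketched below; the helper is hypothetical and simply mirrors the command shown in the log.

	    package main

	    import "fmt"

	    // hostsPatchCmd returns a shell snippet that maps 127.0.1.1 to hostname in
	    // /etc/hosts, mirroring the provisioning command logged above.
	    func hostsPatchCmd(hostname string) string {
	        return fmt.Sprintf(`if ! grep -xq '.*\s%[1]s' /etc/hosts; then
	  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
	    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts
	  else
	    echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts
	  fi
	fi`, hostname)
	    }

	    func main() {
	        fmt.Println(hostsPatchCmd("embed-certs-358053"))
	    }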
	I0729 19:43:50.899756 1120280 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19312-1055011/.minikube CaCertPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19312-1055011/.minikube}
	I0729 19:43:50.899791 1120280 buildroot.go:174] setting up certificates
	I0729 19:43:50.899806 1120280 provision.go:84] configureAuth start
	I0729 19:43:50.899821 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetMachineName
	I0729 19:43:50.900090 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetIP
	I0729 19:43:50.902304 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:50.902663 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:9e:78", ip: ""} in network mk-embed-certs-358053: {Iface:virbr3 ExpiryTime:2024-07-29 20:43:42 +0000 UTC Type:0 Mac:52:54:00:b7:9e:78 Iaid: IPaddr:192.168.61.201 Prefix:24 Hostname:embed-certs-358053 Clientid:01:52:54:00:b7:9e:78}
	I0729 19:43:50.902695 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined IP address 192.168.61.201 and MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:50.902787 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHHostname
	I0729 19:43:50.904815 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:50.905150 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:9e:78", ip: ""} in network mk-embed-certs-358053: {Iface:virbr3 ExpiryTime:2024-07-29 20:43:42 +0000 UTC Type:0 Mac:52:54:00:b7:9e:78 Iaid: IPaddr:192.168.61.201 Prefix:24 Hostname:embed-certs-358053 Clientid:01:52:54:00:b7:9e:78}
	I0729 19:43:50.905170 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined IP address 192.168.61.201 and MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:50.905279 1120280 provision.go:143] copyHostCerts
	I0729 19:43:50.905350 1120280 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.pem, removing ...
	I0729 19:43:50.905366 1120280 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.pem
	I0729 19:43:50.905446 1120280 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.pem (1082 bytes)
	I0729 19:43:50.905561 1120280 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-1055011/.minikube/cert.pem, removing ...
	I0729 19:43:50.905573 1120280 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-1055011/.minikube/cert.pem
	I0729 19:43:50.905626 1120280 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19312-1055011/.minikube/cert.pem (1123 bytes)
	I0729 19:43:50.905704 1120280 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-1055011/.minikube/key.pem, removing ...
	I0729 19:43:50.905713 1120280 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-1055011/.minikube/key.pem
	I0729 19:43:50.905746 1120280 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19312-1055011/.minikube/key.pem (1679 bytes)
	I0729 19:43:50.905815 1120280 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca-key.pem org=jenkins.embed-certs-358053 san=[127.0.0.1 192.168.61.201 embed-certs-358053 localhost minikube]
	I0729 19:43:51.198616 1120280 provision.go:177] copyRemoteCerts
	I0729 19:43:51.198692 1120280 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 19:43:51.198734 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHHostname
	I0729 19:43:51.201272 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:51.201527 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:9e:78", ip: ""} in network mk-embed-certs-358053: {Iface:virbr3 ExpiryTime:2024-07-29 20:43:42 +0000 UTC Type:0 Mac:52:54:00:b7:9e:78 Iaid: IPaddr:192.168.61.201 Prefix:24 Hostname:embed-certs-358053 Clientid:01:52:54:00:b7:9e:78}
	I0729 19:43:51.201556 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined IP address 192.168.61.201 and MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:51.201681 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHPort
	I0729 19:43:51.201876 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHKeyPath
	I0729 19:43:51.202054 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHUsername
	I0729 19:43:51.202170 1120280 sshutil.go:53] new ssh client: &{IP:192.168.61.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/embed-certs-358053/id_rsa Username:docker}
	I0729 19:43:51.290007 1120280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0729 19:43:51.316649 1120280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0729 19:43:51.340617 1120280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0729 19:43:51.363465 1120280 provision.go:87] duration metric: took 463.642377ms to configureAuth
	I0729 19:43:51.363495 1120280 buildroot.go:189] setting minikube options for container-runtime
	I0729 19:43:51.363700 1120280 config.go:182] Loaded profile config "embed-certs-358053": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 19:43:51.363813 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHHostname
	I0729 19:43:51.366478 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:51.366931 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:9e:78", ip: ""} in network mk-embed-certs-358053: {Iface:virbr3 ExpiryTime:2024-07-29 20:43:42 +0000 UTC Type:0 Mac:52:54:00:b7:9e:78 Iaid: IPaddr:192.168.61.201 Prefix:24 Hostname:embed-certs-358053 Clientid:01:52:54:00:b7:9e:78}
	I0729 19:43:51.366973 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined IP address 192.168.61.201 and MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:51.367080 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHPort
	I0729 19:43:51.367280 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHKeyPath
	I0729 19:43:51.367445 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHKeyPath
	I0729 19:43:51.367619 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHUsername
	I0729 19:43:51.367818 1120280 main.go:141] libmachine: Using SSH client type: native
	I0729 19:43:51.368013 1120280 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.201 22 <nil> <nil>}
	I0729 19:43:51.368034 1120280 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 19:43:51.670667 1120280 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 19:43:51.670700 1120280 machine.go:97] duration metric: took 1.147540887s to provisionDockerMachine
	I0729 19:43:51.670716 1120280 start.go:293] postStartSetup for "embed-certs-358053" (driver="kvm2")
	I0729 19:43:51.670728 1120280 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 19:43:51.670746 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .DriverName
	I0729 19:43:51.671114 1120280 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 19:43:51.671146 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHHostname
	I0729 19:43:51.673820 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:51.674154 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:9e:78", ip: ""} in network mk-embed-certs-358053: {Iface:virbr3 ExpiryTime:2024-07-29 20:43:42 +0000 UTC Type:0 Mac:52:54:00:b7:9e:78 Iaid: IPaddr:192.168.61.201 Prefix:24 Hostname:embed-certs-358053 Clientid:01:52:54:00:b7:9e:78}
	I0729 19:43:51.674218 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined IP address 192.168.61.201 and MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:51.674406 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHPort
	I0729 19:43:51.674602 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHKeyPath
	I0729 19:43:51.674761 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHUsername
	I0729 19:43:51.674918 1120280 sshutil.go:53] new ssh client: &{IP:192.168.61.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/embed-certs-358053/id_rsa Username:docker}
	I0729 19:43:51.762013 1120280 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 19:43:51.766211 1120280 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 19:43:51.766238 1120280 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-1055011/.minikube/addons for local assets ...
	I0729 19:43:51.766308 1120280 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-1055011/.minikube/files for local assets ...
	I0729 19:43:51.766408 1120280 filesync.go:149] local asset: /home/jenkins/minikube-integration/19312-1055011/.minikube/files/etc/ssl/certs/10622722.pem -> 10622722.pem in /etc/ssl/certs
	I0729 19:43:51.766506 1120280 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 19:43:51.776086 1120280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/files/etc/ssl/certs/10622722.pem --> /etc/ssl/certs/10622722.pem (1708 bytes)
	I0729 19:43:51.800248 1120280 start.go:296] duration metric: took 129.516946ms for postStartSetup
	I0729 19:43:51.800288 1120280 fix.go:56] duration metric: took 20.54987709s for fixHost
	I0729 19:43:51.800332 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHHostname
	I0729 19:43:51.802828 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:51.803134 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:9e:78", ip: ""} in network mk-embed-certs-358053: {Iface:virbr3 ExpiryTime:2024-07-29 20:43:42 +0000 UTC Type:0 Mac:52:54:00:b7:9e:78 Iaid: IPaddr:192.168.61.201 Prefix:24 Hostname:embed-certs-358053 Clientid:01:52:54:00:b7:9e:78}
	I0729 19:43:51.803155 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined IP address 192.168.61.201 and MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:51.803324 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHPort
	I0729 19:43:51.803552 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHKeyPath
	I0729 19:43:51.803729 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHKeyPath
	I0729 19:43:51.803867 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHUsername
	I0729 19:43:51.804024 1120280 main.go:141] libmachine: Using SSH client type: native
	I0729 19:43:51.804205 1120280 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.201 22 <nil> <nil>}
	I0729 19:43:51.804216 1120280 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0729 19:43:51.915515 1120280 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722282231.873780587
	
	I0729 19:43:51.915538 1120280 fix.go:216] guest clock: 1722282231.873780587
	I0729 19:43:51.915546 1120280 fix.go:229] Guest: 2024-07-29 19:43:51.873780587 +0000 UTC Remote: 2024-07-29 19:43:51.800292219 +0000 UTC m=+271.768915474 (delta=73.488368ms)
	I0729 19:43:51.915567 1120280 fix.go:200] guest clock delta is within tolerance: 73.488368ms
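	For reference, the clock comparison above is just the difference of the two timestamps: guest 1722282231.873780587 minus remote 1722282231.800292219 ≈ 0.073488 s, i.e. the 73.488368ms delta reported, which the preceding line confirms is within tolerance, so provisioning continues without adjusting the guest clock.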
	I0729 19:43:51.915573 1120280 start.go:83] releasing machines lock for "embed-certs-358053", held for 20.665188917s
	I0729 19:43:51.915605 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .DriverName
	I0729 19:43:51.915924 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetIP
	I0729 19:43:51.918637 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:51.919022 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:9e:78", ip: ""} in network mk-embed-certs-358053: {Iface:virbr3 ExpiryTime:2024-07-29 20:43:42 +0000 UTC Type:0 Mac:52:54:00:b7:9e:78 Iaid: IPaddr:192.168.61.201 Prefix:24 Hostname:embed-certs-358053 Clientid:01:52:54:00:b7:9e:78}
	I0729 19:43:51.919050 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined IP address 192.168.61.201 and MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:51.919227 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .DriverName
	I0729 19:43:51.919791 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .DriverName
	I0729 19:43:51.920007 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .DriverName
	I0729 19:43:51.920098 1120280 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 19:43:51.920165 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHHostname
	I0729 19:43:51.920246 1120280 ssh_runner.go:195] Run: cat /version.json
	I0729 19:43:51.920267 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHHostname
	I0729 19:43:51.922800 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:51.923102 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:51.923134 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:9e:78", ip: ""} in network mk-embed-certs-358053: {Iface:virbr3 ExpiryTime:2024-07-29 20:43:42 +0000 UTC Type:0 Mac:52:54:00:b7:9e:78 Iaid: IPaddr:192.168.61.201 Prefix:24 Hostname:embed-certs-358053 Clientid:01:52:54:00:b7:9e:78}
	I0729 19:43:51.923173 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined IP address 192.168.61.201 and MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:51.923250 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHPort
	I0729 19:43:51.923437 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHKeyPath
	I0729 19:43:51.923595 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:9e:78", ip: ""} in network mk-embed-certs-358053: {Iface:virbr3 ExpiryTime:2024-07-29 20:43:42 +0000 UTC Type:0 Mac:52:54:00:b7:9e:78 Iaid: IPaddr:192.168.61.201 Prefix:24 Hostname:embed-certs-358053 Clientid:01:52:54:00:b7:9e:78}
	I0729 19:43:51.923615 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined IP address 192.168.61.201 and MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:51.923720 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHUsername
	I0729 19:43:51.923798 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHPort
	I0729 19:43:51.923873 1120280 sshutil.go:53] new ssh client: &{IP:192.168.61.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/embed-certs-358053/id_rsa Username:docker}
	I0729 19:43:51.923942 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHKeyPath
	I0729 19:43:51.924064 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHUsername
	I0729 19:43:51.924215 1120280 sshutil.go:53] new ssh client: &{IP:192.168.61.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/embed-certs-358053/id_rsa Username:docker}
	I0729 19:43:52.004661 1120280 ssh_runner.go:195] Run: systemctl --version
	I0729 19:43:52.032553 1120280 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 19:43:52.185919 1120280 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 19:43:52.191975 1120280 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 19:43:52.192059 1120280 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 19:43:52.210254 1120280 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 19:43:52.210276 1120280 start.go:495] detecting cgroup driver to use...
	I0729 19:43:52.210351 1120280 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 19:43:52.225580 1120280 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 19:43:52.238434 1120280 docker.go:217] disabling cri-docker service (if available) ...
	I0729 19:43:52.238501 1120280 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 19:43:52.252395 1120280 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 19:43:52.265503 1120280 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 19:43:52.376377 1120280 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 19:43:52.561796 1120280 docker.go:233] disabling docker service ...
	I0729 19:43:52.561859 1120280 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 19:43:52.579022 1120280 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 19:43:52.594679 1120280 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 19:43:52.734891 1120280 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 19:43:52.870161 1120280 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 19:43:52.884258 1120280 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 19:43:52.903923 1120280 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0729 19:43:52.903986 1120280 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:43:52.914530 1120280 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 19:43:52.914598 1120280 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:43:52.925740 1120280 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:43:52.936722 1120280 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:43:52.947290 1120280 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 19:43:52.959757 1120280 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:43:52.971452 1120280 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:43:52.990080 1120280 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
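Taken together, the sed edits above leave the relevant keys in /etc/crio/crio.conf.d/02-crio.conf looking roughly like the fragment below (reconstructed from the commands in this log; surrounding TOML sections and unrelated keys are omitted):

	pause_image = "registry.k8s.io/pause:3.9"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]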
	I0729 19:43:53.000701 1120280 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 19:43:53.010165 1120280 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 19:43:53.010271 1120280 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 19:43:53.023594 1120280 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 19:43:53.034500 1120280 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 19:43:53.173490 1120280 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 19:43:53.327789 1120280 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 19:43:53.327894 1120280 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 19:43:53.332682 1120280 start.go:563] Will wait 60s for crictl version
	I0729 19:43:53.332738 1120280 ssh_runner.go:195] Run: which crictl
	I0729 19:43:53.337397 1120280 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 19:43:53.387722 1120280 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 19:43:53.387824 1120280 ssh_runner.go:195] Run: crio --version
	I0729 19:43:53.416029 1120280 ssh_runner.go:195] Run: crio --version
	I0729 19:43:53.447686 1120280 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0729 19:43:53.448960 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetIP
	I0729 19:43:53.451993 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:53.452334 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:9e:78", ip: ""} in network mk-embed-certs-358053: {Iface:virbr3 ExpiryTime:2024-07-29 20:43:42 +0000 UTC Type:0 Mac:52:54:00:b7:9e:78 Iaid: IPaddr:192.168.61.201 Prefix:24 Hostname:embed-certs-358053 Clientid:01:52:54:00:b7:9e:78}
	I0729 19:43:53.452360 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined IP address 192.168.61.201 and MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:43:53.452626 1120280 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0729 19:43:53.456620 1120280 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 19:43:53.469521 1120280 kubeadm.go:883] updating cluster {Name:embed-certs-358053 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-358053 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.201 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 19:43:53.469668 1120280 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 19:43:53.469726 1120280 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 19:43:53.510724 1120280 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0729 19:43:53.510793 1120280 ssh_runner.go:195] Run: which lz4
	I0729 19:43:53.515039 1120280 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0729 19:43:53.519349 1120280 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0729 19:43:53.519386 1120280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0729 19:43:54.962294 1120280 crio.go:462] duration metric: took 1.447300807s to copy over tarball
	I0729 19:43:54.962368 1120280 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0729 19:43:51.939977 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .Start
	I0729 19:43:51.940180 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Ensuring networks are active...
	I0729 19:43:51.940939 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Ensuring network default is active
	I0729 19:43:51.941232 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Ensuring network mk-default-k8s-diff-port-024652 is active
	I0729 19:43:51.941605 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Getting domain xml...
	I0729 19:43:51.942289 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Creating domain...
	I0729 19:43:53.197317 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Waiting to get IP...
	I0729 19:43:53.198285 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:43:53.198646 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | unable to find current IP address of domain default-k8s-diff-port-024652 in network mk-default-k8s-diff-port-024652
	I0729 19:43:53.198704 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | I0729 19:43:53.198613 1121645 retry.go:31] will retry after 305.319923ms: waiting for machine to come up
	I0729 19:43:53.505183 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:43:53.505680 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | unable to find current IP address of domain default-k8s-diff-port-024652 in network mk-default-k8s-diff-port-024652
	I0729 19:43:53.505711 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | I0729 19:43:53.505645 1121645 retry.go:31] will retry after 271.282913ms: waiting for machine to come up
	I0729 19:43:53.778388 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:43:53.778870 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | unable to find current IP address of domain default-k8s-diff-port-024652 in network mk-default-k8s-diff-port-024652
	I0729 19:43:53.778902 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | I0729 19:43:53.778815 1121645 retry.go:31] will retry after 407.395474ms: waiting for machine to come up
	I0729 19:43:54.187668 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:43:54.188110 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | unable to find current IP address of domain default-k8s-diff-port-024652 in network mk-default-k8s-diff-port-024652
	I0729 19:43:54.188135 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | I0729 19:43:54.188063 1121645 retry.go:31] will retry after 515.272845ms: waiting for machine to come up
	I0729 19:43:54.704843 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:43:54.705358 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | unable to find current IP address of domain default-k8s-diff-port-024652 in network mk-default-k8s-diff-port-024652
	I0729 19:43:54.705386 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | I0729 19:43:54.705310 1121645 retry.go:31] will retry after 509.684919ms: waiting for machine to come up
	I0729 19:43:55.217156 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:43:55.217667 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | unable to find current IP address of domain default-k8s-diff-port-024652 in network mk-default-k8s-diff-port-024652
	I0729 19:43:55.217698 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | I0729 19:43:55.217604 1121645 retry.go:31] will retry after 728.323851ms: waiting for machine to come up
	I0729 19:43:55.947597 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:43:55.948121 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | unable to find current IP address of domain default-k8s-diff-port-024652 in network mk-default-k8s-diff-port-024652
	I0729 19:43:55.948155 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | I0729 19:43:55.948059 1121645 retry.go:31] will retry after 957.165998ms: waiting for machine to come up
	I0729 19:43:57.178620 1120280 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.216195072s)
	I0729 19:43:57.178653 1120280 crio.go:469] duration metric: took 2.216329763s to extract the tarball
	I0729 19:43:57.178660 1120280 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0729 19:43:57.216574 1120280 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 19:43:57.258341 1120280 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 19:43:57.258366 1120280 cache_images.go:84] Images are preloaded, skipping loading
	I0729 19:43:57.258376 1120280 kubeadm.go:934] updating node { 192.168.61.201 8443 v1.30.3 crio true true} ...
	I0729 19:43:57.258500 1120280 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-358053 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.201
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:embed-certs-358053 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 19:43:57.258563 1120280 ssh_runner.go:195] Run: crio config
	I0729 19:43:57.304755 1120280 cni.go:84] Creating CNI manager for ""
	I0729 19:43:57.304779 1120280 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 19:43:57.304793 1120280 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 19:43:57.304818 1120280 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.201 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-358053 NodeName:embed-certs-358053 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.201"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.201 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 19:43:57.304975 1120280 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.201
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-358053"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.201
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.201"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0729 19:43:57.305058 1120280 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0729 19:43:57.314803 1120280 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 19:43:57.314914 1120280 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 19:43:57.324133 1120280 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0729 19:43:57.339975 1120280 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 19:43:57.355571 1120280 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0729 19:43:57.371806 1120280 ssh_runner.go:195] Run: grep 192.168.61.201	control-plane.minikube.internal$ /etc/hosts
	I0729 19:43:57.375459 1120280 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.201	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 19:43:57.386809 1120280 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 19:43:57.520182 1120280 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 19:43:57.536218 1120280 certs.go:68] Setting up /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/embed-certs-358053 for IP: 192.168.61.201
	I0729 19:43:57.536243 1120280 certs.go:194] generating shared ca certs ...
	I0729 19:43:57.536266 1120280 certs.go:226] acquiring lock for ca certs: {Name:mkd1f0b3d7e82ac23e713dd6b75409e103935b02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 19:43:57.536463 1120280 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.key
	I0729 19:43:57.536525 1120280 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/proxy-client-ca.key
	I0729 19:43:57.536539 1120280 certs.go:256] generating profile certs ...
	I0729 19:43:57.536702 1120280 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/embed-certs-358053/client.key
	I0729 19:43:57.536777 1120280 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/embed-certs-358053/apiserver.key.05ccddd9
	I0729 19:43:57.536836 1120280 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/embed-certs-358053/proxy-client.key
	I0729 19:43:57.537011 1120280 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/1062272.pem (1338 bytes)
	W0729 19:43:57.537060 1120280 certs.go:480] ignoring /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/1062272_empty.pem, impossibly tiny 0 bytes
	I0729 19:43:57.537074 1120280 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 19:43:57.537109 1120280 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem (1082 bytes)
	I0729 19:43:57.537147 1120280 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/cert.pem (1123 bytes)
	I0729 19:43:57.537184 1120280 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/key.pem (1679 bytes)
	I0729 19:43:57.537257 1120280 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/files/etc/ssl/certs/10622722.pem (1708 bytes)
	I0729 19:43:57.538120 1120280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 19:43:57.579679 1120280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0729 19:43:57.610390 1120280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 19:43:57.646234 1120280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0729 19:43:57.680120 1120280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/embed-certs-358053/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0729 19:43:57.709780 1120280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/embed-certs-358053/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0729 19:43:57.737251 1120280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/embed-certs-358053/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 19:43:57.760519 1120280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/embed-certs-358053/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0729 19:43:57.782760 1120280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/1062272.pem --> /usr/share/ca-certificates/1062272.pem (1338 bytes)
	I0729 19:43:57.806628 1120280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/files/etc/ssl/certs/10622722.pem --> /usr/share/ca-certificates/10622722.pem (1708 bytes)
	I0729 19:43:57.831360 1120280 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 19:43:57.855485 1120280 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 19:43:57.873493 1120280 ssh_runner.go:195] Run: openssl version
	I0729 19:43:57.879376 1120280 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 19:43:57.891126 1120280 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 19:43:57.895458 1120280 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 18:17 /usr/share/ca-certificates/minikubeCA.pem
	I0729 19:43:57.895501 1120280 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 19:43:57.901015 1120280 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 19:43:57.911165 1120280 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1062272.pem && ln -fs /usr/share/ca-certificates/1062272.pem /etc/ssl/certs/1062272.pem"
	I0729 19:43:57.921336 1120280 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1062272.pem
	I0729 19:43:57.925539 1120280 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 18:30 /usr/share/ca-certificates/1062272.pem
	I0729 19:43:57.925601 1120280 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1062272.pem
	I0729 19:43:57.930932 1120280 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1062272.pem /etc/ssl/certs/51391683.0"
	I0729 19:43:57.941138 1120280 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10622722.pem && ln -fs /usr/share/ca-certificates/10622722.pem /etc/ssl/certs/10622722.pem"
	I0729 19:43:57.951312 1120280 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10622722.pem
	I0729 19:43:57.955655 1120280 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 18:30 /usr/share/ca-certificates/10622722.pem
	I0729 19:43:57.955699 1120280 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10622722.pem
	I0729 19:43:57.961057 1120280 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10622722.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 19:43:57.972742 1120280 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 19:43:57.977115 1120280 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 19:43:57.982921 1120280 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 19:43:57.988708 1120280 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 19:43:57.994618 1120280 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 19:43:58.000330 1120280 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 19:43:58.006024 1120280 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
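The series of "openssl x509 -checkend 86400" commands above verifies that each reused control-plane certificate remains valid for at least another 24 hours. A rough Go equivalent of one such check; the path used in main and the helper name are illustrative:

	// Sketch only: reports whether a PEM-encoded certificate expires within the
	// given window, mirroring what "openssl x509 -checkend" does in the log above.
	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	func expiresWithin(path string, window time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM data in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(window).After(cert.NotAfter), nil
	}

	func main() {
		// Path is illustrative; the log checks several certs under /var/lib/minikube/certs.
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Println("check failed:", err)
			return
		}
		if soon {
			fmt.Println("certificate expires within 24h; would regenerate")
		} else {
			fmt.Println("certificate is valid for at least another 24h")
		}
	}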
	I0729 19:43:58.011547 1120280 kubeadm.go:392] StartCluster: {Name:embed-certs-358053 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-358053 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.201 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 19:43:58.011676 1120280 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 19:43:58.011740 1120280 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 19:43:58.053520 1120280 cri.go:89] found id: ""
	I0729 19:43:58.053606 1120280 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 19:43:58.063799 1120280 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0729 19:43:58.063820 1120280 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0729 19:43:58.063881 1120280 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0729 19:43:58.073374 1120280 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0729 19:43:58.074705 1120280 kubeconfig.go:125] found "embed-certs-358053" server: "https://192.168.61.201:8443"
	I0729 19:43:58.077590 1120280 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0729 19:43:58.086714 1120280 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.201
	I0729 19:43:58.086751 1120280 kubeadm.go:1160] stopping kube-system containers ...
	I0729 19:43:58.086761 1120280 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0729 19:43:58.086809 1120280 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 19:43:58.119740 1120280 cri.go:89] found id: ""
	I0729 19:43:58.119800 1120280 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0729 19:43:58.136919 1120280 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 19:43:58.146634 1120280 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 19:43:58.146655 1120280 kubeadm.go:157] found existing configuration files:
	
	I0729 19:43:58.146732 1120280 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 19:43:58.155526 1120280 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 19:43:58.155590 1120280 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 19:43:58.165016 1120280 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 19:43:58.173988 1120280 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 19:43:58.174042 1120280 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 19:43:58.183138 1120280 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 19:43:58.191680 1120280 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 19:43:58.191733 1120280 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 19:43:58.200557 1120280 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 19:43:58.209338 1120280 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 19:43:58.209390 1120280 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 19:43:58.218439 1120280 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 19:43:58.227653 1120280 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 19:43:58.340033 1120280 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 19:43:59.181947 1120280 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0729 19:43:59.381372 1120280 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 19:43:59.452293 1120280 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0729 19:43:59.570731 1120280 api_server.go:52] waiting for apiserver process to appear ...
	I0729 19:43:59.570823 1120280 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:43:56.907408 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:43:56.907923 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | unable to find current IP address of domain default-k8s-diff-port-024652 in network mk-default-k8s-diff-port-024652
	I0729 19:43:56.907953 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | I0729 19:43:56.907850 1121645 retry.go:31] will retry after 1.254959813s: waiting for machine to come up
	I0729 19:43:58.163969 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:43:58.164402 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | unable to find current IP address of domain default-k8s-diff-port-024652 in network mk-default-k8s-diff-port-024652
	I0729 19:43:58.164435 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | I0729 19:43:58.164335 1121645 retry.go:31] will retry after 1.194411522s: waiting for machine to come up
	I0729 19:43:59.360034 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:43:59.360409 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | unable to find current IP address of domain default-k8s-diff-port-024652 in network mk-default-k8s-diff-port-024652
	I0729 19:43:59.360444 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | I0729 19:43:59.360350 1121645 retry.go:31] will retry after 1.691293374s: waiting for machine to come up
	I0729 19:44:01.054480 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:44:01.054922 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | unable to find current IP address of domain default-k8s-diff-port-024652 in network mk-default-k8s-diff-port-024652
	I0729 19:44:01.054993 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | I0729 19:44:01.054899 1121645 retry.go:31] will retry after 2.655959151s: waiting for machine to come up
	I0729 19:44:00.071291 1120280 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:00.571192 1120280 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:01.071004 1120280 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:01.086646 1120280 api_server.go:72] duration metric: took 1.515912855s to wait for apiserver process to appear ...
	I0729 19:44:01.086683 1120280 api_server.go:88] waiting for apiserver healthz status ...
	I0729 19:44:01.086713 1120280 api_server.go:253] Checking apiserver healthz at https://192.168.61.201:8443/healthz ...
	I0729 19:44:01.087274 1120280 api_server.go:269] stopped: https://192.168.61.201:8443/healthz: Get "https://192.168.61.201:8443/healthz": dial tcp 192.168.61.201:8443: connect: connection refused
	I0729 19:44:01.587598 1120280 api_server.go:253] Checking apiserver healthz at https://192.168.61.201:8443/healthz ...
	I0729 19:44:03.986744 1120280 api_server.go:279] https://192.168.61.201:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 19:44:03.986799 1120280 api_server.go:103] status: https://192.168.61.201:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 19:44:03.986814 1120280 api_server.go:253] Checking apiserver healthz at https://192.168.61.201:8443/healthz ...
	I0729 19:44:04.029552 1120280 api_server.go:279] https://192.168.61.201:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 19:44:04.029601 1120280 api_server.go:103] status: https://192.168.61.201:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 19:44:04.087847 1120280 api_server.go:253] Checking apiserver healthz at https://192.168.61.201:8443/healthz ...
	I0729 19:44:04.093457 1120280 api_server.go:279] https://192.168.61.201:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 19:44:04.093489 1120280 api_server.go:103] status: https://192.168.61.201:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 19:44:04.586941 1120280 api_server.go:253] Checking apiserver healthz at https://192.168.61.201:8443/healthz ...
	I0729 19:44:04.609655 1120280 api_server.go:279] https://192.168.61.201:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 19:44:04.609700 1120280 api_server.go:103] status: https://192.168.61.201:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 19:44:05.087081 1120280 api_server.go:253] Checking apiserver healthz at https://192.168.61.201:8443/healthz ...
	I0729 19:44:05.095282 1120280 api_server.go:279] https://192.168.61.201:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 19:44:05.095311 1120280 api_server.go:103] status: https://192.168.61.201:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 19:44:05.587782 1120280 api_server.go:253] Checking apiserver healthz at https://192.168.61.201:8443/healthz ...
	I0729 19:44:05.593073 1120280 api_server.go:279] https://192.168.61.201:8443/healthz returned 200:
	ok
	I0729 19:44:05.599042 1120280 api_server.go:141] control plane version: v1.30.3
	I0729 19:44:05.599067 1120280 api_server.go:131] duration metric: took 4.512376511s to wait for apiserver health ...
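	(Aside: the repeated 500s above are the apiserver's verbose /healthz output; the wait loop ends once it flips to 200. A hand-run equivalent of that probe, for reference only — the address comes from the log, while the client-cert paths are assumptions about the guest layout:)
	    # Probe the same healthz endpoint verbosely from inside the VM (illustrative sketch).
	    curl -sk "https://192.168.61.201:8443/healthz?verbose" \
	      --cert /var/lib/minikube/certs/apiserver-kubelet-client.crt \
	      --key  /var/lib/minikube/certs/apiserver-kubelet-client.key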
	I0729 19:44:05.599076 1120280 cni.go:84] Creating CNI manager for ""
	I0729 19:44:05.599082 1120280 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 19:44:05.600932 1120280 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 19:44:03.713856 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:44:03.714306 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | unable to find current IP address of domain default-k8s-diff-port-024652 in network mk-default-k8s-diff-port-024652
	I0729 19:44:03.714363 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | I0729 19:44:03.714249 1121645 retry.go:31] will retry after 2.793831058s: waiting for machine to come up
	I0729 19:44:05.602066 1120280 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 19:44:05.612274 1120280 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
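	(Aside: the 496-byte conflist copied above is the bridge CNI configuration referenced by the "Configuring bridge CNI" step. A minimal sketch of that file's shape — the field values are illustrative assumptions, not the exact payload minikube writes:)
	    # Write a minimal bridge conflist of the kind referenced above (values are assumptions).
	    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
	    {
	      "cniVersion": "0.3.1",
	      "name": "bridge",
	      "plugins": [
	        {"type": "bridge", "bridge": "bridge", "isGateway": true, "ipMasq": true,
	         "ipam": {"type": "host-local", "subnet": "10.244.0.0/16"}},
	        {"type": "portmap", "capabilities": {"portMappings": true}}
	      ]
	    }
	    EOF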
	I0729 19:44:05.633293 1120280 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 19:44:05.646103 1120280 system_pods.go:59] 8 kube-system pods found
	I0729 19:44:05.646143 1120280 system_pods.go:61] "coredns-7db6d8ff4d-q6jm9" [a0770baf-766d-4903-a21f-6a4c1b74fb9e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0729 19:44:05.646153 1120280 system_pods.go:61] "etcd-embed-certs-358053" [cc03bfb3-c1d6-480a-b169-599b7599a5d1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0729 19:44:05.646163 1120280 system_pods.go:61] "kube-apiserver-embed-certs-358053" [8c45c66a-c954-4a84-9639-68210ad51a53] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0729 19:44:05.646174 1120280 system_pods.go:61] "kube-controller-manager-embed-certs-358053" [70266c42-fa7c-4936-b256-1eea65c57669] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0729 19:44:05.646181 1120280 system_pods.go:61] "kube-proxy-lb7hb" [e542b623-3db2-4be0-adf1-669932e6ac3d] Running
	I0729 19:44:05.646193 1120280 system_pods.go:61] "kube-scheduler-embed-certs-358053" [be79c03d-1e5a-46f5-a43a-671c37dea7d0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0729 19:44:05.646201 1120280 system_pods.go:61] "metrics-server-569cc877fc-jsvnd" [0494cc85-12fa-4afa-ab39-5c1fafcc45f8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 19:44:05.646209 1120280 system_pods.go:61] "storage-provisioner" [493de5d9-e761-49cb-b5f0-17d116b1a985] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0729 19:44:05.646221 1120280 system_pods.go:74] duration metric: took 12.906683ms to wait for pod list to return data ...
	I0729 19:44:05.646231 1120280 node_conditions.go:102] verifying NodePressure condition ...
	I0729 19:44:05.653103 1120280 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 19:44:05.653131 1120280 node_conditions.go:123] node cpu capacity is 2
	I0729 19:44:05.653161 1120280 node_conditions.go:105] duration metric: took 6.923325ms to run NodePressure ...
	I0729 19:44:05.653187 1120280 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 19:44:05.916138 1120280 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0729 19:44:05.920383 1120280 kubeadm.go:739] kubelet initialised
	I0729 19:44:05.920402 1120280 kubeadm.go:740] duration metric: took 4.239377ms waiting for restarted kubelet to initialise ...
	I0729 19:44:05.920410 1120280 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 19:44:05.925752 1120280 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-q6jm9" in "kube-system" namespace to be "Ready" ...
	I0729 19:44:07.932667 1120280 pod_ready.go:102] pod "coredns-7db6d8ff4d-q6jm9" in "kube-system" namespace has status "Ready":"False"
	I0729 19:44:06.511186 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:44:06.511552 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | unable to find current IP address of domain default-k8s-diff-port-024652 in network mk-default-k8s-diff-port-024652
	I0729 19:44:06.511583 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | I0729 19:44:06.511497 1121645 retry.go:31] will retry after 3.610819354s: waiting for machine to come up
	I0729 19:44:10.126488 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:44:10.126889 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Found IP for machine: 192.168.72.100
	I0729 19:44:10.126914 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Reserving static IP address...
	I0729 19:44:10.126927 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has current primary IP address 192.168.72.100 and MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:44:10.127289 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Reserved static IP address: 192.168.72.100
	I0729 19:44:10.127313 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Waiting for SSH to be available...
	I0729 19:44:10.127342 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-024652", mac: "52:54:00:4c:73:cb", ip: "192.168.72.100"} in network mk-default-k8s-diff-port-024652: {Iface:virbr4 ExpiryTime:2024-07-29 20:44:03 +0000 UTC Type:0 Mac:52:54:00:4c:73:cb Iaid: IPaddr:192.168.72.100 Prefix:24 Hostname:default-k8s-diff-port-024652 Clientid:01:52:54:00:4c:73:cb}
	I0729 19:44:10.127390 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | skip adding static IP to network mk-default-k8s-diff-port-024652 - found existing host DHCP lease matching {name: "default-k8s-diff-port-024652", mac: "52:54:00:4c:73:cb", ip: "192.168.72.100"}
	I0729 19:44:10.127406 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | Getting to WaitForSSH function...
	I0729 19:44:10.129180 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:44:10.129499 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:73:cb", ip: ""} in network mk-default-k8s-diff-port-024652: {Iface:virbr4 ExpiryTime:2024-07-29 20:44:03 +0000 UTC Type:0 Mac:52:54:00:4c:73:cb Iaid: IPaddr:192.168.72.100 Prefix:24 Hostname:default-k8s-diff-port-024652 Clientid:01:52:54:00:4c:73:cb}
	I0729 19:44:10.129528 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined IP address 192.168.72.100 and MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:44:10.129613 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | Using SSH client type: external
	I0729 19:44:10.129633 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | Using SSH private key: /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/default-k8s-diff-port-024652/id_rsa (-rw-------)
	I0729 19:44:10.129676 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.100 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/default-k8s-diff-port-024652/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 19:44:10.129688 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | About to run SSH command:
	I0729 19:44:10.129700 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | exit 0
	I0729 19:44:10.254662 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | SSH cmd err, output: <nil>: 
	I0729 19:44:10.255021 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetConfigRaw
	I0729 19:44:10.255656 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetIP
	I0729 19:44:10.257855 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:44:10.258219 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:73:cb", ip: ""} in network mk-default-k8s-diff-port-024652: {Iface:virbr4 ExpiryTime:2024-07-29 20:44:03 +0000 UTC Type:0 Mac:52:54:00:4c:73:cb Iaid: IPaddr:192.168.72.100 Prefix:24 Hostname:default-k8s-diff-port-024652 Clientid:01:52:54:00:4c:73:cb}
	I0729 19:44:10.258251 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined IP address 192.168.72.100 and MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:44:10.258526 1120587 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/default-k8s-diff-port-024652/config.json ...
	I0729 19:44:10.258713 1120587 machine.go:94] provisionDockerMachine start ...
	I0729 19:44:10.258733 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .DriverName
	I0729 19:44:10.258968 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHHostname
	I0729 19:44:10.260864 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:44:10.261120 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:73:cb", ip: ""} in network mk-default-k8s-diff-port-024652: {Iface:virbr4 ExpiryTime:2024-07-29 20:44:03 +0000 UTC Type:0 Mac:52:54:00:4c:73:cb Iaid: IPaddr:192.168.72.100 Prefix:24 Hostname:default-k8s-diff-port-024652 Clientid:01:52:54:00:4c:73:cb}
	I0729 19:44:10.261149 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined IP address 192.168.72.100 and MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:44:10.261275 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHPort
	I0729 19:44:10.261460 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHKeyPath
	I0729 19:44:10.261635 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHKeyPath
	I0729 19:44:10.261778 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHUsername
	I0729 19:44:10.261932 1120587 main.go:141] libmachine: Using SSH client type: native
	I0729 19:44:10.262111 1120587 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.100 22 <nil> <nil>}
	I0729 19:44:10.262121 1120587 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 19:44:10.371225 1120587 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0729 19:44:10.371261 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetMachineName
	I0729 19:44:10.371516 1120587 buildroot.go:166] provisioning hostname "default-k8s-diff-port-024652"
	I0729 19:44:10.371545 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetMachineName
	I0729 19:44:10.371756 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHHostname
	I0729 19:44:10.374071 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:44:10.374356 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:73:cb", ip: ""} in network mk-default-k8s-diff-port-024652: {Iface:virbr4 ExpiryTime:2024-07-29 20:44:03 +0000 UTC Type:0 Mac:52:54:00:4c:73:cb Iaid: IPaddr:192.168.72.100 Prefix:24 Hostname:default-k8s-diff-port-024652 Clientid:01:52:54:00:4c:73:cb}
	I0729 19:44:10.374391 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined IP address 192.168.72.100 and MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:44:10.374479 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHPort
	I0729 19:44:10.374654 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHKeyPath
	I0729 19:44:10.374808 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHKeyPath
	I0729 19:44:10.374933 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHUsername
	I0729 19:44:10.375126 1120587 main.go:141] libmachine: Using SSH client type: native
	I0729 19:44:10.375324 1120587 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.100 22 <nil> <nil>}
	I0729 19:44:10.375338 1120587 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-024652 && echo "default-k8s-diff-port-024652" | sudo tee /etc/hostname
	I0729 19:44:10.499041 1120587 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-024652
	
	I0729 19:44:10.499075 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHHostname
	I0729 19:44:10.501635 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:44:10.501943 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:73:cb", ip: ""} in network mk-default-k8s-diff-port-024652: {Iface:virbr4 ExpiryTime:2024-07-29 20:44:03 +0000 UTC Type:0 Mac:52:54:00:4c:73:cb Iaid: IPaddr:192.168.72.100 Prefix:24 Hostname:default-k8s-diff-port-024652 Clientid:01:52:54:00:4c:73:cb}
	I0729 19:44:10.501973 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined IP address 192.168.72.100 and MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:44:10.502136 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHPort
	I0729 19:44:10.502318 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHKeyPath
	I0729 19:44:10.502494 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHKeyPath
	I0729 19:44:10.502669 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHUsername
	I0729 19:44:10.502826 1120587 main.go:141] libmachine: Using SSH client type: native
	I0729 19:44:10.503019 1120587 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.100 22 <nil> <nil>}
	I0729 19:44:10.503042 1120587 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-024652' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-024652/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-024652' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 19:44:10.619637 1120587 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 19:44:10.619673 1120587 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19312-1055011/.minikube CaCertPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19312-1055011/.minikube}
	I0729 19:44:10.619708 1120587 buildroot.go:174] setting up certificates
	I0729 19:44:10.619719 1120587 provision.go:84] configureAuth start
	I0729 19:44:10.619728 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetMachineName
	I0729 19:44:10.620036 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetIP
	I0729 19:44:10.622502 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:44:10.622810 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:73:cb", ip: ""} in network mk-default-k8s-diff-port-024652: {Iface:virbr4 ExpiryTime:2024-07-29 20:44:03 +0000 UTC Type:0 Mac:52:54:00:4c:73:cb Iaid: IPaddr:192.168.72.100 Prefix:24 Hostname:default-k8s-diff-port-024652 Clientid:01:52:54:00:4c:73:cb}
	I0729 19:44:10.622841 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined IP address 192.168.72.100 and MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:44:10.622932 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHHostname
	I0729 19:44:10.625181 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:44:10.625508 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:73:cb", ip: ""} in network mk-default-k8s-diff-port-024652: {Iface:virbr4 ExpiryTime:2024-07-29 20:44:03 +0000 UTC Type:0 Mac:52:54:00:4c:73:cb Iaid: IPaddr:192.168.72.100 Prefix:24 Hostname:default-k8s-diff-port-024652 Clientid:01:52:54:00:4c:73:cb}
	I0729 19:44:10.625531 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined IP address 192.168.72.100 and MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:44:10.625681 1120587 provision.go:143] copyHostCerts
	I0729 19:44:10.625743 1120587 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.pem, removing ...
	I0729 19:44:10.625755 1120587 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.pem
	I0729 19:44:10.625825 1120587 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.pem (1082 bytes)
	I0729 19:44:10.625929 1120587 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-1055011/.minikube/cert.pem, removing ...
	I0729 19:44:10.625937 1120587 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-1055011/.minikube/cert.pem
	I0729 19:44:10.625960 1120587 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19312-1055011/.minikube/cert.pem (1123 bytes)
	I0729 19:44:10.626015 1120587 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-1055011/.minikube/key.pem, removing ...
	I0729 19:44:10.626021 1120587 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-1055011/.minikube/key.pem
	I0729 19:44:10.626042 1120587 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19312-1055011/.minikube/key.pem (1679 bytes)
	I0729 19:44:10.626089 1120587 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-024652 san=[127.0.0.1 192.168.72.100 default-k8s-diff-port-024652 localhost minikube]
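	(Aside: the server cert above is generated in Go by provision.go and signed by the profile CA with the listed SANs. A rough openssl rendition of the same idea, purely for illustration — file names, key size, and validity period are assumptions:)
	    # Illustrative openssl equivalent of the server-cert generation logged above.
	    openssl req -new -newkey rsa:2048 -nodes -keyout server-key.pem -out server.csr \
	      -subj "/O=jenkins.default-k8s-diff-port-024652/CN=minikube"
	    openssl x509 -req -in server.csr -days 365 \
	      -CA ca.pem -CAkey ca-key.pem -CAcreateserial -out server.pem \
	      -extfile <(printf "subjectAltName=IP:127.0.0.1,IP:192.168.72.100,DNS:default-k8s-diff-port-024652,DNS:localhost,DNS:minikube")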
	I0729 19:44:10.750576 1120587 provision.go:177] copyRemoteCerts
	I0729 19:44:10.750651 1120587 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 19:44:10.750713 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHHostname
	I0729 19:44:10.753390 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:44:10.753745 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:73:cb", ip: ""} in network mk-default-k8s-diff-port-024652: {Iface:virbr4 ExpiryTime:2024-07-29 20:44:03 +0000 UTC Type:0 Mac:52:54:00:4c:73:cb Iaid: IPaddr:192.168.72.100 Prefix:24 Hostname:default-k8s-diff-port-024652 Clientid:01:52:54:00:4c:73:cb}
	I0729 19:44:10.753791 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined IP address 192.168.72.100 and MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:44:10.753942 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHPort
	I0729 19:44:10.754149 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHKeyPath
	I0729 19:44:10.754330 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHUsername
	I0729 19:44:10.754514 1120587 sshutil.go:53] new ssh client: &{IP:192.168.72.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/default-k8s-diff-port-024652/id_rsa Username:docker}
	I0729 19:44:10.836524 1120587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0729 19:44:10.861913 1120587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0729 19:44:10.885539 1120587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0729 19:44:10.909851 1120587 provision.go:87] duration metric: took 290.118473ms to configureAuth
	I0729 19:44:10.909880 1120587 buildroot.go:189] setting minikube options for container-runtime
	I0729 19:44:10.910051 1120587 config.go:182] Loaded profile config "default-k8s-diff-port-024652": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 19:44:10.910127 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHHostname
	I0729 19:44:10.912662 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:44:10.912962 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:73:cb", ip: ""} in network mk-default-k8s-diff-port-024652: {Iface:virbr4 ExpiryTime:2024-07-29 20:44:03 +0000 UTC Type:0 Mac:52:54:00:4c:73:cb Iaid: IPaddr:192.168.72.100 Prefix:24 Hostname:default-k8s-diff-port-024652 Clientid:01:52:54:00:4c:73:cb}
	I0729 19:44:10.912993 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined IP address 192.168.72.100 and MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:44:10.913224 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHPort
	I0729 19:44:10.913429 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHKeyPath
	I0729 19:44:10.913601 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHKeyPath
	I0729 19:44:10.913744 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHUsername
	I0729 19:44:10.913882 1120587 main.go:141] libmachine: Using SSH client type: native
	I0729 19:44:10.914096 1120587 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.100 22 <nil> <nil>}
	I0729 19:44:10.914112 1120587 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 19:44:11.419483 1120970 start.go:364] duration metric: took 3m13.335541366s to acquireMachinesLock for "old-k8s-version-021528"
	I0729 19:44:11.419549 1120970 start.go:96] Skipping create...Using existing machine configuration
	I0729 19:44:11.419560 1120970 fix.go:54] fixHost starting: 
	I0729 19:44:11.419981 1120970 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:44:11.420020 1120970 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:44:11.437552 1120970 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44419
	I0729 19:44:11.437927 1120970 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:44:11.438424 1120970 main.go:141] libmachine: Using API Version  1
	I0729 19:44:11.438449 1120970 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:44:11.438787 1120970 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:44:11.438995 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .DriverName
	I0729 19:44:11.439201 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetState
	I0729 19:44:11.440476 1120970 fix.go:112] recreateIfNeeded on old-k8s-version-021528: state=Stopped err=<nil>
	I0729 19:44:11.440514 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .DriverName
	W0729 19:44:11.440692 1120970 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 19:44:11.442528 1120970 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-021528" ...
	I0729 19:44:11.181850 1120587 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 19:44:11.181877 1120587 machine.go:97] duration metric: took 923.15162ms to provisionDockerMachine
	I0729 19:44:11.181889 1120587 start.go:293] postStartSetup for "default-k8s-diff-port-024652" (driver="kvm2")
	I0729 19:44:11.181899 1120587 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 19:44:11.181914 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .DriverName
	I0729 19:44:11.182289 1120587 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 19:44:11.182322 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHHostname
	I0729 19:44:11.185275 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:44:11.185761 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:73:cb", ip: ""} in network mk-default-k8s-diff-port-024652: {Iface:virbr4 ExpiryTime:2024-07-29 20:44:03 +0000 UTC Type:0 Mac:52:54:00:4c:73:cb Iaid: IPaddr:192.168.72.100 Prefix:24 Hostname:default-k8s-diff-port-024652 Clientid:01:52:54:00:4c:73:cb}
	I0729 19:44:11.185791 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined IP address 192.168.72.100 and MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:44:11.186002 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHPort
	I0729 19:44:11.186282 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHKeyPath
	I0729 19:44:11.186467 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHUsername
	I0729 19:44:11.186620 1120587 sshutil.go:53] new ssh client: &{IP:192.168.72.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/default-k8s-diff-port-024652/id_rsa Username:docker}
	I0729 19:44:11.268993 1120587 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 19:44:11.273072 1120587 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 19:44:11.273093 1120587 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-1055011/.minikube/addons for local assets ...
	I0729 19:44:11.273161 1120587 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-1055011/.minikube/files for local assets ...
	I0729 19:44:11.273244 1120587 filesync.go:149] local asset: /home/jenkins/minikube-integration/19312-1055011/.minikube/files/etc/ssl/certs/10622722.pem -> 10622722.pem in /etc/ssl/certs
	I0729 19:44:11.273353 1120587 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 19:44:11.282258 1120587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/files/etc/ssl/certs/10622722.pem --> /etc/ssl/certs/10622722.pem (1708 bytes)
	I0729 19:44:11.305957 1120587 start.go:296] duration metric: took 124.053991ms for postStartSetup
	I0729 19:44:11.305998 1120587 fix.go:56] duration metric: took 19.39022657s for fixHost
	I0729 19:44:11.306024 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHHostname
	I0729 19:44:11.308452 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:44:11.308881 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:73:cb", ip: ""} in network mk-default-k8s-diff-port-024652: {Iface:virbr4 ExpiryTime:2024-07-29 20:44:03 +0000 UTC Type:0 Mac:52:54:00:4c:73:cb Iaid: IPaddr:192.168.72.100 Prefix:24 Hostname:default-k8s-diff-port-024652 Clientid:01:52:54:00:4c:73:cb}
	I0729 19:44:11.308902 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined IP address 192.168.72.100 and MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:44:11.309099 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHPort
	I0729 19:44:11.309321 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHKeyPath
	I0729 19:44:11.309507 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHKeyPath
	I0729 19:44:11.309646 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHUsername
	I0729 19:44:11.309836 1120587 main.go:141] libmachine: Using SSH client type: native
	I0729 19:44:11.310009 1120587 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.100 22 <nil> <nil>}
	I0729 19:44:11.310021 1120587 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0729 19:44:11.419338 1120587 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722282251.371238734
	
	I0729 19:44:11.419359 1120587 fix.go:216] guest clock: 1722282251.371238734
	I0729 19:44:11.419366 1120587 fix.go:229] Guest: 2024-07-29 19:44:11.371238734 +0000 UTC Remote: 2024-07-29 19:44:11.306004097 +0000 UTC m=+255.178971379 (delta=65.234637ms)
	I0729 19:44:11.419386 1120587 fix.go:200] guest clock delta is within tolerance: 65.234637ms
	I0729 19:44:11.419394 1120587 start.go:83] releasing machines lock for "default-k8s-diff-port-024652", held for 19.503660828s
	I0729 19:44:11.419418 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .DriverName
	I0729 19:44:11.419749 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetIP
	I0729 19:44:11.422054 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:44:11.422377 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:73:cb", ip: ""} in network mk-default-k8s-diff-port-024652: {Iface:virbr4 ExpiryTime:2024-07-29 20:44:03 +0000 UTC Type:0 Mac:52:54:00:4c:73:cb Iaid: IPaddr:192.168.72.100 Prefix:24 Hostname:default-k8s-diff-port-024652 Clientid:01:52:54:00:4c:73:cb}
	I0729 19:44:11.422421 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined IP address 192.168.72.100 and MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:44:11.422552 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .DriverName
	I0729 19:44:11.423087 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .DriverName
	I0729 19:44:11.423284 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .DriverName
	I0729 19:44:11.423358 1120587 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 19:44:11.423410 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHHostname
	I0729 19:44:11.423511 1120587 ssh_runner.go:195] Run: cat /version.json
	I0729 19:44:11.423540 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHHostname
	I0729 19:44:11.426070 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:44:11.426323 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:44:11.426440 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:73:cb", ip: ""} in network mk-default-k8s-diff-port-024652: {Iface:virbr4 ExpiryTime:2024-07-29 20:44:03 +0000 UTC Type:0 Mac:52:54:00:4c:73:cb Iaid: IPaddr:192.168.72.100 Prefix:24 Hostname:default-k8s-diff-port-024652 Clientid:01:52:54:00:4c:73:cb}
	I0729 19:44:11.426471 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined IP address 192.168.72.100 and MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:44:11.426579 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHPort
	I0729 19:44:11.426774 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHKeyPath
	I0729 19:44:11.426918 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHUsername
	I0729 19:44:11.426957 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:73:cb", ip: ""} in network mk-default-k8s-diff-port-024652: {Iface:virbr4 ExpiryTime:2024-07-29 20:44:03 +0000 UTC Type:0 Mac:52:54:00:4c:73:cb Iaid: IPaddr:192.168.72.100 Prefix:24 Hostname:default-k8s-diff-port-024652 Clientid:01:52:54:00:4c:73:cb}
	I0729 19:44:11.426981 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined IP address 192.168.72.100 and MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:44:11.427069 1120587 sshutil.go:53] new ssh client: &{IP:192.168.72.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/default-k8s-diff-port-024652/id_rsa Username:docker}
	I0729 19:44:11.427176 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHPort
	I0729 19:44:11.427343 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHKeyPath
	I0729 19:44:11.427534 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHUsername
	I0729 19:44:11.427700 1120587 sshutil.go:53] new ssh client: &{IP:192.168.72.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/default-k8s-diff-port-024652/id_rsa Username:docker}
	I0729 19:44:11.536440 1120587 ssh_runner.go:195] Run: systemctl --version
	I0729 19:44:11.542493 1120587 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 19:44:11.688795 1120587 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 19:44:11.696783 1120587 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 19:44:11.696855 1120587 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 19:44:11.717067 1120587 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 19:44:11.717091 1120587 start.go:495] detecting cgroup driver to use...
	I0729 19:44:11.717157 1120587 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 19:44:11.735056 1120587 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 19:44:11.748999 1120587 docker.go:217] disabling cri-docker service (if available) ...
	I0729 19:44:11.749061 1120587 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 19:44:11.764244 1120587 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 19:44:11.778072 1120587 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 19:44:11.893008 1120587 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 19:44:12.053939 1120587 docker.go:233] disabling docker service ...
	I0729 19:44:12.054035 1120587 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 19:44:12.068666 1120587 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 19:44:12.085766 1120587 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 19:44:12.232278 1120587 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 19:44:12.356403 1120587 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 19:44:12.370085 1120587 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 19:44:12.388817 1120587 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0729 19:44:12.388879 1120587 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:44:12.399945 1120587 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 19:44:12.400017 1120587 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:44:12.410117 1120587 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:44:12.422162 1120587 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:44:12.433170 1120587 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 19:44:12.444386 1120587 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:44:12.455009 1120587 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:44:12.472279 1120587 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:44:12.482431 1120587 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 19:44:12.492028 1120587 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 19:44:12.492097 1120587 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 19:44:12.505966 1120587 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 19:44:12.515505 1120587 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 19:44:12.639691 1120587 ssh_runner.go:195] Run: sudo systemctl restart crio
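	(Aside: taken together, the sed edits above leave the CRI-O drop-in looking roughly like this — reconstructed from the commands in the log, not a dump of the real file; the section headers are assumptions:)
	    # /etc/crio/crio.conf.d/02-crio.conf -- approximate contents after the edits above
	    [crio.image]
	    pause_image = "registry.k8s.io/pause:3.9"
	    [crio.runtime]
	    cgroup_manager = "cgroupfs"
	    conmon_cgroup = "pod"
	    default_sysctls = [
	      "net.ipv4.ip_unprivileged_port_start=0",
	    ]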
	I0729 19:44:12.781358 1120587 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 19:44:12.781427 1120587 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 19:44:12.786218 1120587 start.go:563] Will wait 60s for crictl version
	I0729 19:44:12.786312 1120587 ssh_runner.go:195] Run: which crictl
	I0729 19:44:12.790056 1120587 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 19:44:12.830355 1120587 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 19:44:12.830451 1120587 ssh_runner.go:195] Run: crio --version
	I0729 19:44:12.859119 1120587 ssh_runner.go:195] Run: crio --version
	I0729 19:44:12.892473 1120587 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0729 19:44:11.443772 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .Start
	I0729 19:44:11.443926 1120970 main.go:141] libmachine: (old-k8s-version-021528) Ensuring networks are active...
	I0729 19:44:11.444570 1120970 main.go:141] libmachine: (old-k8s-version-021528) Ensuring network default is active
	I0729 19:44:11.444890 1120970 main.go:141] libmachine: (old-k8s-version-021528) Ensuring network mk-old-k8s-version-021528 is active
	I0729 19:44:11.445234 1120970 main.go:141] libmachine: (old-k8s-version-021528) Getting domain xml...
	I0729 19:44:11.445994 1120970 main.go:141] libmachine: (old-k8s-version-021528) Creating domain...
	I0729 19:44:12.696734 1120970 main.go:141] libmachine: (old-k8s-version-021528) Waiting to get IP...
	I0729 19:44:12.697599 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:12.697967 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | unable to find current IP address of domain old-k8s-version-021528 in network mk-old-k8s-version-021528
	I0729 19:44:12.698075 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | I0729 19:44:12.697953 1121841 retry.go:31] will retry after 228.228482ms: waiting for machine to come up
	I0729 19:44:12.927713 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:12.928250 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | unable to find current IP address of domain old-k8s-version-021528 in network mk-old-k8s-version-021528
	I0729 19:44:12.928280 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | I0729 19:44:12.928204 1121841 retry.go:31] will retry after 241.659418ms: waiting for machine to come up
	I0729 19:44:10.432255 1120280 pod_ready.go:102] pod "coredns-7db6d8ff4d-q6jm9" in "kube-system" namespace has status "Ready":"False"
	I0729 19:44:12.932761 1120280 pod_ready.go:102] pod "coredns-7db6d8ff4d-q6jm9" in "kube-system" namespace has status "Ready":"False"
	I0729 19:44:14.934282 1120280 pod_ready.go:102] pod "coredns-7db6d8ff4d-q6jm9" in "kube-system" namespace has status "Ready":"False"
	I0729 19:44:12.893725 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetIP
	I0729 19:44:12.897014 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:44:12.897401 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:73:cb", ip: ""} in network mk-default-k8s-diff-port-024652: {Iface:virbr4 ExpiryTime:2024-07-29 20:44:03 +0000 UTC Type:0 Mac:52:54:00:4c:73:cb Iaid: IPaddr:192.168.72.100 Prefix:24 Hostname:default-k8s-diff-port-024652 Clientid:01:52:54:00:4c:73:cb}
	I0729 19:44:12.897431 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined IP address 192.168.72.100 and MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:44:12.897621 1120587 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0729 19:44:12.902155 1120587 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 19:44:12.915460 1120587 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-024652 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-024652 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.100 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 19:44:12.915581 1120587 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 19:44:12.915631 1120587 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 19:44:12.956377 1120587 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0729 19:44:12.956444 1120587 ssh_runner.go:195] Run: which lz4
	I0729 19:44:12.960415 1120587 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0729 19:44:12.964785 1120587 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0729 19:44:12.964819 1120587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0729 19:44:14.422427 1120587 crio.go:462] duration metric: took 1.462052598s to copy over tarball
	I0729 19:44:14.422514 1120587 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0729 19:44:13.171713 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:13.172206 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | unable to find current IP address of domain old-k8s-version-021528 in network mk-old-k8s-version-021528
	I0729 19:44:13.172234 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | I0729 19:44:13.172165 1121841 retry.go:31] will retry after 475.69466ms: waiting for machine to come up
	I0729 19:44:13.649741 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:13.650180 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | unable to find current IP address of domain old-k8s-version-021528 in network mk-old-k8s-version-021528
	I0729 19:44:13.650210 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | I0729 19:44:13.650126 1121841 retry.go:31] will retry after 556.03832ms: waiting for machine to come up
	I0729 19:44:14.207549 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:14.208045 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | unable to find current IP address of domain old-k8s-version-021528 in network mk-old-k8s-version-021528
	I0729 19:44:14.208080 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | I0729 19:44:14.207996 1121841 retry.go:31] will retry after 699.802636ms: waiting for machine to come up
	I0729 19:44:14.909153 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:14.909708 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | unable to find current IP address of domain old-k8s-version-021528 in network mk-old-k8s-version-021528
	I0729 19:44:14.909736 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | I0729 19:44:14.909677 1121841 retry.go:31] will retry after 756.053302ms: waiting for machine to come up
	I0729 19:44:15.667015 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:15.667487 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | unable to find current IP address of domain old-k8s-version-021528 in network mk-old-k8s-version-021528
	I0729 19:44:15.667518 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | I0729 19:44:15.667434 1121841 retry.go:31] will retry after 729.442111ms: waiting for machine to come up
	I0729 19:44:16.398540 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:16.399139 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | unable to find current IP address of domain old-k8s-version-021528 in network mk-old-k8s-version-021528
	I0729 19:44:16.399191 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | I0729 19:44:16.399060 1121841 retry.go:31] will retry after 1.131574034s: waiting for machine to come up
	I0729 19:44:17.531966 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:17.532448 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | unable to find current IP address of domain old-k8s-version-021528 in network mk-old-k8s-version-021528
	I0729 19:44:17.532480 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | I0729 19:44:17.532380 1121841 retry.go:31] will retry after 1.546547994s: waiting for machine to come up
	I0729 19:44:15.433310 1120280 pod_ready.go:92] pod "coredns-7db6d8ff4d-q6jm9" in "kube-system" namespace has status "Ready":"True"
	I0729 19:44:15.433336 1120280 pod_ready.go:81] duration metric: took 9.507558167s for pod "coredns-7db6d8ff4d-q6jm9" in "kube-system" namespace to be "Ready" ...
	I0729 19:44:15.433353 1120280 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-358053" in "kube-system" namespace to be "Ready" ...
	I0729 19:44:15.438725 1120280 pod_ready.go:92] pod "etcd-embed-certs-358053" in "kube-system" namespace has status "Ready":"True"
	I0729 19:44:15.438747 1120280 pod_ready.go:81] duration metric: took 5.385786ms for pod "etcd-embed-certs-358053" in "kube-system" namespace to be "Ready" ...
	I0729 19:44:15.438758 1120280 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-358053" in "kube-system" namespace to be "Ready" ...
	I0729 19:44:15.444196 1120280 pod_ready.go:92] pod "kube-apiserver-embed-certs-358053" in "kube-system" namespace has status "Ready":"True"
	I0729 19:44:15.444214 1120280 pod_ready.go:81] duration metric: took 5.447798ms for pod "kube-apiserver-embed-certs-358053" in "kube-system" namespace to be "Ready" ...
	I0729 19:44:15.444228 1120280 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-358053" in "kube-system" namespace to be "Ready" ...
	I0729 19:44:16.452748 1120280 pod_ready.go:92] pod "kube-controller-manager-embed-certs-358053" in "kube-system" namespace has status "Ready":"True"
	I0729 19:44:16.452772 1120280 pod_ready.go:81] duration metric: took 1.00853566s for pod "kube-controller-manager-embed-certs-358053" in "kube-system" namespace to be "Ready" ...
	I0729 19:44:16.452784 1120280 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-lb7hb" in "kube-system" namespace to be "Ready" ...
	I0729 19:44:16.458635 1120280 pod_ready.go:92] pod "kube-proxy-lb7hb" in "kube-system" namespace has status "Ready":"True"
	I0729 19:44:16.458653 1120280 pod_ready.go:81] duration metric: took 5.862242ms for pod "kube-proxy-lb7hb" in "kube-system" namespace to be "Ready" ...
	I0729 19:44:16.458662 1120280 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-358053" in "kube-system" namespace to be "Ready" ...
	I0729 19:44:16.631200 1120280 pod_ready.go:92] pod "kube-scheduler-embed-certs-358053" in "kube-system" namespace has status "Ready":"True"
	I0729 19:44:16.631229 1120280 pod_ready.go:81] duration metric: took 172.559322ms for pod "kube-scheduler-embed-certs-358053" in "kube-system" namespace to be "Ready" ...
	I0729 19:44:16.631242 1120280 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace to be "Ready" ...
	I0729 19:44:18.638680 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
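The pod_ready lines above are minikube polling each control-plane pod's Ready condition. The same check can be reproduced with client-go by polling a pod until its PodReady condition reports True; a rough sketch, assuming access through the default kubeconfig and reusing one pod name from the log purely as an example:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Poll every 2s, for up to 4 minutes, until the pod reports Ready
	// (4m0s matches the per-pod timeout shown in the log above).
	err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 4*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pod, getErr := cs.CoreV1().Pods("kube-system").Get(ctx, "etcd-embed-certs-358053", metav1.GetOptions{})
			if getErr != nil {
				return false, nil // treat lookup errors as "not ready yet"
			}
			return podReady(pod), nil
		})
	fmt.Println("ready:", err == nil)
}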
	I0729 19:44:16.739626 1120587 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.317075688s)
	I0729 19:44:16.739689 1120587 crio.go:469] duration metric: took 2.317215237s to extract the tarball
	I0729 19:44:16.739702 1120587 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0729 19:44:16.777698 1120587 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 19:44:16.825740 1120587 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 19:44:16.825768 1120587 cache_images.go:84] Images are preloaded, skipping loading
	I0729 19:44:16.825777 1120587 kubeadm.go:934] updating node { 192.168.72.100 8444 v1.30.3 crio true true} ...
	I0729 19:44:16.825933 1120587 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-024652 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.100
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-024652 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 19:44:16.826030 1120587 ssh_runner.go:195] Run: crio config
	I0729 19:44:16.873727 1120587 cni.go:84] Creating CNI manager for ""
	I0729 19:44:16.873752 1120587 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 19:44:16.873764 1120587 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 19:44:16.873791 1120587 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.100 APIServerPort:8444 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-024652 NodeName:default-k8s-diff-port-024652 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.100"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.100 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 19:44:16.873929 1120587 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.100
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-024652"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.100
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.100"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0729 19:44:16.873990 1120587 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0729 19:44:16.884036 1120587 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 19:44:16.884126 1120587 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 19:44:16.893332 1120587 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0729 19:44:16.911950 1120587 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 19:44:16.930305 1120587 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0729 19:44:16.948353 1120587 ssh_runner.go:195] Run: grep 192.168.72.100	control-plane.minikube.internal$ /etc/hosts
	I0729 19:44:16.952431 1120587 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.100	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 19:44:16.964743 1120587 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 19:44:17.072244 1120587 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 19:44:17.088224 1120587 certs.go:68] Setting up /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/default-k8s-diff-port-024652 for IP: 192.168.72.100
	I0729 19:44:17.088256 1120587 certs.go:194] generating shared ca certs ...
	I0729 19:44:17.088280 1120587 certs.go:226] acquiring lock for ca certs: {Name:mkd1f0b3d7e82ac23e713dd6b75409e103935b02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 19:44:17.088482 1120587 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.key
	I0729 19:44:17.088563 1120587 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/proxy-client-ca.key
	I0729 19:44:17.088579 1120587 certs.go:256] generating profile certs ...
	I0729 19:44:17.088738 1120587 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/default-k8s-diff-port-024652/client.key
	I0729 19:44:17.088823 1120587 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/default-k8s-diff-port-024652/apiserver.key.4c9c937f
	I0729 19:44:17.088876 1120587 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/default-k8s-diff-port-024652/proxy-client.key
	I0729 19:44:17.089049 1120587 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/1062272.pem (1338 bytes)
	W0729 19:44:17.089093 1120587 certs.go:480] ignoring /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/1062272_empty.pem, impossibly tiny 0 bytes
	I0729 19:44:17.089109 1120587 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 19:44:17.089135 1120587 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem (1082 bytes)
	I0729 19:44:17.089156 1120587 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/cert.pem (1123 bytes)
	I0729 19:44:17.089180 1120587 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/key.pem (1679 bytes)
	I0729 19:44:17.089218 1120587 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/files/etc/ssl/certs/10622722.pem (1708 bytes)
	I0729 19:44:17.089954 1120587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 19:44:17.144094 1120587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0729 19:44:17.191515 1120587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 19:44:17.220210 1120587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0729 19:44:17.252381 1120587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/default-k8s-diff-port-024652/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0729 19:44:17.291881 1120587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/default-k8s-diff-port-024652/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0729 19:44:17.334114 1120587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/default-k8s-diff-port-024652/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 19:44:17.363726 1120587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/default-k8s-diff-port-024652/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0729 19:44:17.389190 1120587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 19:44:17.413683 1120587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/1062272.pem --> /usr/share/ca-certificates/1062272.pem (1338 bytes)
	I0729 19:44:17.441739 1120587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/files/etc/ssl/certs/10622722.pem --> /usr/share/ca-certificates/10622722.pem (1708 bytes)
	I0729 19:44:17.472609 1120587 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 19:44:17.489059 1120587 ssh_runner.go:195] Run: openssl version
	I0729 19:44:17.495020 1120587 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 19:44:17.507133 1120587 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 19:44:17.511759 1120587 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 18:17 /usr/share/ca-certificates/minikubeCA.pem
	I0729 19:44:17.511850 1120587 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 19:44:17.518120 1120587 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 19:44:17.528867 1120587 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1062272.pem && ln -fs /usr/share/ca-certificates/1062272.pem /etc/ssl/certs/1062272.pem"
	I0729 19:44:17.539695 1120587 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1062272.pem
	I0729 19:44:17.544063 1120587 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 18:30 /usr/share/ca-certificates/1062272.pem
	I0729 19:44:17.544113 1120587 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1062272.pem
	I0729 19:44:17.549785 1120587 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1062272.pem /etc/ssl/certs/51391683.0"
	I0729 19:44:17.560562 1120587 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10622722.pem && ln -fs /usr/share/ca-certificates/10622722.pem /etc/ssl/certs/10622722.pem"
	I0729 19:44:17.573597 1120587 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10622722.pem
	I0729 19:44:17.578089 1120587 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 18:30 /usr/share/ca-certificates/10622722.pem
	I0729 19:44:17.578137 1120587 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10622722.pem
	I0729 19:44:17.583614 1120587 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10622722.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 19:44:17.594903 1120587 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 19:44:17.599449 1120587 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 19:44:17.605325 1120587 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 19:44:17.611495 1120587 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 19:44:17.617663 1120587 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 19:44:17.623715 1120587 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 19:44:17.629845 1120587 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
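The openssl runs above use `-checkend 86400` to ask whether each certificate expires within the next 24 hours; a non-zero exit would trigger regeneration. The equivalent check in Go's standard library, as a small sketch with one path taken from the log:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires
// within d, mirroring `openssl x509 -checkend <seconds>`.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	// Path copied from the log above; adjust for your environment.
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		panic(err)
	}
	fmt.Println("expires within 24h:", soon)
}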
	I0729 19:44:17.637607 1120587 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-024652 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-024652 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.100 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 19:44:17.637725 1120587 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 19:44:17.637778 1120587 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 19:44:17.685777 1120587 cri.go:89] found id: ""
	I0729 19:44:17.685877 1120587 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 19:44:17.703296 1120587 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0729 19:44:17.703320 1120587 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0729 19:44:17.703387 1120587 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0729 19:44:17.715928 1120587 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0729 19:44:17.717371 1120587 kubeconfig.go:125] found "default-k8s-diff-port-024652" server: "https://192.168.72.100:8444"
	I0729 19:44:17.720536 1120587 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0729 19:44:17.732125 1120587 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.100
	I0729 19:44:17.732165 1120587 kubeadm.go:1160] stopping kube-system containers ...
	I0729 19:44:17.732207 1120587 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0729 19:44:17.732284 1120587 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 19:44:17.786419 1120587 cri.go:89] found id: ""
	I0729 19:44:17.786502 1120587 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0729 19:44:17.804866 1120587 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 19:44:17.815092 1120587 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 19:44:17.815113 1120587 kubeadm.go:157] found existing configuration files:
	
	I0729 19:44:17.815189 1120587 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0729 19:44:17.824963 1120587 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 19:44:17.825020 1120587 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 19:44:17.835349 1120587 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0729 19:44:17.846227 1120587 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 19:44:17.846290 1120587 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 19:44:17.859231 1120587 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0729 19:44:17.870794 1120587 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 19:44:17.870883 1120587 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 19:44:17.882317 1120587 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0729 19:44:17.891702 1120587 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 19:44:17.891757 1120587 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 19:44:17.901153 1120587 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 19:44:17.911253 1120587 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 19:44:18.040695 1120587 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 19:44:19.054689 1120587 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.013955991s)
	I0729 19:44:19.054724 1120587 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0729 19:44:19.255112 1120587 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 19:44:19.346186 1120587 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0729 19:44:19.462795 1120587 api_server.go:52] waiting for apiserver process to appear ...
	I0729 19:44:19.462938 1120587 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:19.963927 1120587 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:20.463691 1120587 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:20.504478 1120587 api_server.go:72] duration metric: took 1.041683096s to wait for apiserver process to appear ...
	I0729 19:44:20.504523 1120587 api_server.go:88] waiting for apiserver healthz status ...
	I0729 19:44:20.504552 1120587 api_server.go:253] Checking apiserver healthz at https://192.168.72.100:8444/healthz ...
	I0729 19:44:20.505202 1120587 api_server.go:269] stopped: https://192.168.72.100:8444/healthz: Get "https://192.168.72.100:8444/healthz": dial tcp 192.168.72.100:8444: connect: connection refused
	I0729 19:44:21.004771 1120587 api_server.go:253] Checking apiserver healthz at https://192.168.72.100:8444/healthz ...
	I0729 19:44:19.081196 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:19.081719 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | unable to find current IP address of domain old-k8s-version-021528 in network mk-old-k8s-version-021528
	I0729 19:44:19.081749 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | I0729 19:44:19.081668 1121841 retry.go:31] will retry after 2.079913941s: waiting for machine to come up
	I0729 19:44:21.163461 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:21.163980 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | unable to find current IP address of domain old-k8s-version-021528 in network mk-old-k8s-version-021528
	I0729 19:44:21.164066 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | I0729 19:44:21.163965 1121841 retry.go:31] will retry after 2.355802923s: waiting for machine to come up
	I0729 19:44:20.638745 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:44:22.638835 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:44:23.789983 1120587 api_server.go:279] https://192.168.72.100:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 19:44:23.790018 1120587 api_server.go:103] status: https://192.168.72.100:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 19:44:23.790033 1120587 api_server.go:253] Checking apiserver healthz at https://192.168.72.100:8444/healthz ...
	I0729 19:44:23.843047 1120587 api_server.go:279] https://192.168.72.100:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 19:44:23.843090 1120587 api_server.go:103] status: https://192.168.72.100:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 19:44:24.005370 1120587 api_server.go:253] Checking apiserver healthz at https://192.168.72.100:8444/healthz ...
	I0729 19:44:24.009941 1120587 api_server.go:279] https://192.168.72.100:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 19:44:24.009973 1120587 api_server.go:103] status: https://192.168.72.100:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 19:44:24.505118 1120587 api_server.go:253] Checking apiserver healthz at https://192.168.72.100:8444/healthz ...
	I0729 19:44:24.512838 1120587 api_server.go:279] https://192.168.72.100:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 19:44:24.512874 1120587 api_server.go:103] status: https://192.168.72.100:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 19:44:25.005014 1120587 api_server.go:253] Checking apiserver healthz at https://192.168.72.100:8444/healthz ...
	I0729 19:44:25.023222 1120587 api_server.go:279] https://192.168.72.100:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 19:44:25.023264 1120587 api_server.go:103] status: https://192.168.72.100:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 19:44:25.504748 1120587 api_server.go:253] Checking apiserver healthz at https://192.168.72.100:8444/healthz ...
	I0729 19:44:25.511449 1120587 api_server.go:279] https://192.168.72.100:8444/healthz returned 200:
	ok
	I0729 19:44:25.521987 1120587 api_server.go:141] control plane version: v1.30.3
	I0729 19:44:25.522018 1120587 api_server.go:131] duration metric: took 5.017487159s to wait for apiserver health ...
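The sequence above is the usual wait-for-apiserver loop: anonymous probes of /healthz first get 403 Forbidden, then 500 while post-start hooks such as rbac/bootstrap-roles finish, and finally 200 ok. A stripped-down poller in the same spirit; the URL is taken from the log, and certificate verification is skipped here only because the probe is anonymous against a throwaway test cluster:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns HTTP 200 or the deadline passes.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// The apiserver presents a self-signed cert to anonymous probes.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("%s never became healthy within %v", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.72.100:8444/healthz", 2*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("apiserver healthy")
}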
	I0729 19:44:25.522029 1120587 cni.go:84] Creating CNI manager for ""
	I0729 19:44:25.522038 1120587 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 19:44:25.523778 1120587 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 19:44:25.524925 1120587 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 19:44:25.541108 1120587 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0729 19:44:25.564225 1120587 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 19:44:25.574600 1120587 system_pods.go:59] 8 kube-system pods found
	I0729 19:44:25.574643 1120587 system_pods.go:61] "coredns-7db6d8ff4d-8mccr" [ce2eb102-1016-4a2d-8dee-561920c01b5a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0729 19:44:25.574664 1120587 system_pods.go:61] "etcd-default-k8s-diff-port-024652" [f3c68e2f-7cef-4afc-bd26-3705afd16f01] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0729 19:44:25.574676 1120587 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-024652" [656786e6-4ca6-45dc-9274-89ca8540c707] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0729 19:44:25.574697 1120587 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-024652" [10b805dd-238a-49a8-8c3f-1c31004d56dd] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0729 19:44:25.574710 1120587 system_pods.go:61] "kube-proxy-l4g78" [c24c5bc0-131b-4d02-a0f1-d398723292eb] Running
	I0729 19:44:25.574717 1120587 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-024652" [5bb2daf3-9a22-4f80-95b6-ded3c31e872e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0729 19:44:25.574725 1120587 system_pods.go:61] "metrics-server-569cc877fc-bvkv6" [247c5a96-5bb3-4174-9219-a96591f53cbb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 19:44:25.574734 1120587 system_pods.go:61] "storage-provisioner" [a4f216b0-055a-4305-a93f-910a9a10e725] Running
	I0729 19:44:25.574744 1120587 system_pods.go:74] duration metric: took 10.494475ms to wait for pod list to return data ...
	I0729 19:44:25.574757 1120587 node_conditions.go:102] verifying NodePressure condition ...
	I0729 19:44:25.577735 1120587 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 19:44:25.577757 1120587 node_conditions.go:123] node cpu capacity is 2
	I0729 19:44:25.577778 1120587 node_conditions.go:105] duration metric: took 3.012688ms to run NodePressure ...
	I0729 19:44:25.577795 1120587 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 19:44:25.851094 1120587 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0729 19:44:25.860023 1120587 kubeadm.go:739] kubelet initialised
	I0729 19:44:25.860050 1120587 kubeadm.go:740] duration metric: took 8.921765ms waiting for restarted kubelet to initialise ...
	I0729 19:44:25.860062 1120587 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 19:44:25.867130 1120587 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-8mccr" in "kube-system" namespace to be "Ready" ...
	I0729 19:44:23.523186 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:23.523741 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | unable to find current IP address of domain old-k8s-version-021528 in network mk-old-k8s-version-021528
	I0729 19:44:23.523783 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | I0729 19:44:23.523684 1121841 retry.go:31] will retry after 2.899059572s: waiting for machine to come up
	I0729 19:44:26.426805 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:26.427211 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | unable to find current IP address of domain old-k8s-version-021528 in network mk-old-k8s-version-021528
	I0729 19:44:26.427267 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | I0729 19:44:26.427152 1121841 retry.go:31] will retry after 3.723478189s: waiting for machine to come up
	I0729 19:44:25.138056 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:44:27.139419 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:44:29.638107 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:44:27.872221 1120587 pod_ready.go:102] pod "coredns-7db6d8ff4d-8mccr" in "kube-system" namespace has status "Ready":"False"
	I0729 19:44:29.873611 1120587 pod_ready.go:102] pod "coredns-7db6d8ff4d-8mccr" in "kube-system" namespace has status "Ready":"False"
	I0729 19:44:31.571895 1119948 start.go:364] duration metric: took 55.319517148s to acquireMachinesLock for "no-preload-843792"
	I0729 19:44:31.571969 1119948 start.go:96] Skipping create...Using existing machine configuration
	I0729 19:44:31.571988 1119948 fix.go:54] fixHost starting: 
	I0729 19:44:31.572421 1119948 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:44:31.572460 1119948 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:44:31.589868 1119948 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43017
	I0729 19:44:31.590253 1119948 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:44:31.590725 1119948 main.go:141] libmachine: Using API Version  1
	I0729 19:44:31.590752 1119948 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:44:31.591088 1119948 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:44:31.591274 1119948 main.go:141] libmachine: (no-preload-843792) Calling .DriverName
	I0729 19:44:31.591398 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetState
	I0729 19:44:31.592878 1119948 fix.go:112] recreateIfNeeded on no-preload-843792: state=Stopped err=<nil>
	I0729 19:44:31.592905 1119948 main.go:141] libmachine: (no-preload-843792) Calling .DriverName
	W0729 19:44:31.593054 1119948 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 19:44:31.594713 1119948 out.go:177] * Restarting existing kvm2 VM for "no-preload-843792" ...
	I0729 19:44:30.152545 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:30.153061 1120970 main.go:141] libmachine: (old-k8s-version-021528) Found IP for machine: 192.168.39.65
	I0729 19:44:30.153088 1120970 main.go:141] libmachine: (old-k8s-version-021528) Reserving static IP address...
	I0729 19:44:30.153101 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has current primary IP address 192.168.39.65 and MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:30.153518 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | found host DHCP lease matching {name: "old-k8s-version-021528", mac: "52:54:00:12:c7:d2", ip: "192.168.39.65"} in network mk-old-k8s-version-021528: {Iface:virbr1 ExpiryTime:2024-07-29 20:44:23 +0000 UTC Type:0 Mac:52:54:00:12:c7:d2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:old-k8s-version-021528 Clientid:01:52:54:00:12:c7:d2}
	I0729 19:44:30.153547 1120970 main.go:141] libmachine: (old-k8s-version-021528) Reserved static IP address: 192.168.39.65
	I0729 19:44:30.153567 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | skip adding static IP to network mk-old-k8s-version-021528 - found existing host DHCP lease matching {name: "old-k8s-version-021528", mac: "52:54:00:12:c7:d2", ip: "192.168.39.65"}
	I0729 19:44:30.153606 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | Getting to WaitForSSH function...
	I0729 19:44:30.153646 1120970 main.go:141] libmachine: (old-k8s-version-021528) Waiting for SSH to be available...
	I0729 19:44:30.155687 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:30.155938 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:c7:d2", ip: ""} in network mk-old-k8s-version-021528: {Iface:virbr1 ExpiryTime:2024-07-29 20:44:23 +0000 UTC Type:0 Mac:52:54:00:12:c7:d2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:old-k8s-version-021528 Clientid:01:52:54:00:12:c7:d2}
	I0729 19:44:30.155968 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined IP address 192.168.39.65 and MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:30.156104 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | Using SSH client type: external
	I0729 19:44:30.156126 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | Using SSH private key: /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/old-k8s-version-021528/id_rsa (-rw-------)
	I0729 19:44:30.156157 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.65 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/old-k8s-version-021528/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 19:44:30.156170 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | About to run SSH command:
	I0729 19:44:30.156179 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | exit 0
	I0729 19:44:30.286787 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | SSH cmd err, output: <nil>: 
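The WaitForSSH step above shells out to the system ssh client and runs `exit 0` until it succeeds. An equivalent probe can be written with golang.org/x/crypto/ssh; a sketch using the address, user and key path from the log, noting that libmachine itself drives an external ssh process, so this is only an approximation:

package main

import (
	"fmt"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

// sshReady dials addr with the given private key and runs "exit 0",
// mirroring the driver's WaitForSSH probe.
func sshReady(addr, user, keyPath string) error {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VM
		Timeout:         10 * time.Second,
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return err
	}
	defer client.Close()
	session, err := client.NewSession()
	if err != nil {
		return err
	}
	defer session.Close()
	return session.Run("exit 0")
}

func main() {
	err := sshReady("192.168.39.65:22", "docker",
		"/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/old-k8s-version-021528/id_rsa")
	fmt.Println("ssh ready:", err == nil)
}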
	I0729 19:44:30.287161 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetConfigRaw
	I0729 19:44:30.287816 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetIP
	I0729 19:44:30.290268 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:30.290614 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:c7:d2", ip: ""} in network mk-old-k8s-version-021528: {Iface:virbr1 ExpiryTime:2024-07-29 20:44:23 +0000 UTC Type:0 Mac:52:54:00:12:c7:d2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:old-k8s-version-021528 Clientid:01:52:54:00:12:c7:d2}
	I0729 19:44:30.290645 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined IP address 192.168.39.65 and MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:30.290866 1120970 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/old-k8s-version-021528/config.json ...
	I0729 19:44:30.291054 1120970 machine.go:94] provisionDockerMachine start ...
	I0729 19:44:30.291074 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .DriverName
	I0729 19:44:30.291307 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHHostname
	I0729 19:44:30.293399 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:30.293729 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:c7:d2", ip: ""} in network mk-old-k8s-version-021528: {Iface:virbr1 ExpiryTime:2024-07-29 20:44:23 +0000 UTC Type:0 Mac:52:54:00:12:c7:d2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:old-k8s-version-021528 Clientid:01:52:54:00:12:c7:d2}
	I0729 19:44:30.293759 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined IP address 192.168.39.65 and MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:30.293872 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHPort
	I0729 19:44:30.294064 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHKeyPath
	I0729 19:44:30.294228 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHKeyPath
	I0729 19:44:30.294362 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHUsername
	I0729 19:44:30.294510 1120970 main.go:141] libmachine: Using SSH client type: native
	I0729 19:44:30.294729 1120970 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.65 22 <nil> <nil>}
	I0729 19:44:30.294741 1120970 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 19:44:30.406918 1120970 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0729 19:44:30.406947 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetMachineName
	I0729 19:44:30.407214 1120970 buildroot.go:166] provisioning hostname "old-k8s-version-021528"
	I0729 19:44:30.407256 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetMachineName
	I0729 19:44:30.407478 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHHostname
	I0729 19:44:30.410077 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:30.410396 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:c7:d2", ip: ""} in network mk-old-k8s-version-021528: {Iface:virbr1 ExpiryTime:2024-07-29 20:44:23 +0000 UTC Type:0 Mac:52:54:00:12:c7:d2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:old-k8s-version-021528 Clientid:01:52:54:00:12:c7:d2}
	I0729 19:44:30.410421 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined IP address 192.168.39.65 and MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:30.410586 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHPort
	I0729 19:44:30.410766 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHKeyPath
	I0729 19:44:30.410932 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHKeyPath
	I0729 19:44:30.411068 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHUsername
	I0729 19:44:30.411245 1120970 main.go:141] libmachine: Using SSH client type: native
	I0729 19:44:30.411488 1120970 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.65 22 <nil> <nil>}
	I0729 19:44:30.411503 1120970 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-021528 && echo "old-k8s-version-021528" | sudo tee /etc/hostname
	I0729 19:44:30.541004 1120970 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-021528
	
	I0729 19:44:30.541037 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHHostname
	I0729 19:44:30.543946 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:30.544343 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:c7:d2", ip: ""} in network mk-old-k8s-version-021528: {Iface:virbr1 ExpiryTime:2024-07-29 20:44:23 +0000 UTC Type:0 Mac:52:54:00:12:c7:d2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:old-k8s-version-021528 Clientid:01:52:54:00:12:c7:d2}
	I0729 19:44:30.544372 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined IP address 192.168.39.65 and MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:30.544503 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHPort
	I0729 19:44:30.544694 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHKeyPath
	I0729 19:44:30.544856 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHKeyPath
	I0729 19:44:30.545032 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHUsername
	I0729 19:44:30.545233 1120970 main.go:141] libmachine: Using SSH client type: native
	I0729 19:44:30.545409 1120970 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.65 22 <nil> <nil>}
	I0729 19:44:30.545424 1120970 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-021528' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-021528/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-021528' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 19:44:30.665246 1120970 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 19:44:30.665281 1120970 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19312-1055011/.minikube CaCertPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19312-1055011/.minikube}
	I0729 19:44:30.665317 1120970 buildroot.go:174] setting up certificates
	I0729 19:44:30.665328 1120970 provision.go:84] configureAuth start
	I0729 19:44:30.665339 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetMachineName
	I0729 19:44:30.665621 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetIP
	I0729 19:44:30.668162 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:30.668540 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:c7:d2", ip: ""} in network mk-old-k8s-version-021528: {Iface:virbr1 ExpiryTime:2024-07-29 20:44:23 +0000 UTC Type:0 Mac:52:54:00:12:c7:d2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:old-k8s-version-021528 Clientid:01:52:54:00:12:c7:d2}
	I0729 19:44:30.668568 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined IP address 192.168.39.65 and MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:30.668743 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHHostname
	I0729 19:44:30.670898 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:30.671447 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:c7:d2", ip: ""} in network mk-old-k8s-version-021528: {Iface:virbr1 ExpiryTime:2024-07-29 20:44:23 +0000 UTC Type:0 Mac:52:54:00:12:c7:d2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:old-k8s-version-021528 Clientid:01:52:54:00:12:c7:d2}
	I0729 19:44:30.671471 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined IP address 192.168.39.65 and MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:30.671618 1120970 provision.go:143] copyHostCerts
	I0729 19:44:30.671691 1120970 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-1055011/.minikube/cert.pem, removing ...
	I0729 19:44:30.671710 1120970 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-1055011/.minikube/cert.pem
	I0729 19:44:30.671790 1120970 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19312-1055011/.minikube/cert.pem (1123 bytes)
	I0729 19:44:30.671907 1120970 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-1055011/.minikube/key.pem, removing ...
	I0729 19:44:30.671917 1120970 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-1055011/.minikube/key.pem
	I0729 19:44:30.671953 1120970 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19312-1055011/.minikube/key.pem (1679 bytes)
	I0729 19:44:30.672043 1120970 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.pem, removing ...
	I0729 19:44:30.672052 1120970 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.pem
	I0729 19:44:30.672085 1120970 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.pem (1082 bytes)
	I0729 19:44:30.672166 1120970 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-021528 san=[127.0.0.1 192.168.39.65 localhost minikube old-k8s-version-021528]
	I0729 19:44:30.888016 1120970 provision.go:177] copyRemoteCerts
	I0729 19:44:30.888072 1120970 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 19:44:30.888115 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHHostname
	I0729 19:44:30.890739 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:30.891115 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:c7:d2", ip: ""} in network mk-old-k8s-version-021528: {Iface:virbr1 ExpiryTime:2024-07-29 20:44:23 +0000 UTC Type:0 Mac:52:54:00:12:c7:d2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:old-k8s-version-021528 Clientid:01:52:54:00:12:c7:d2}
	I0729 19:44:30.891148 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined IP address 192.168.39.65 and MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:30.891288 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHPort
	I0729 19:44:30.891499 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHKeyPath
	I0729 19:44:30.891689 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHUsername
	I0729 19:44:30.891862 1120970 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/old-k8s-version-021528/id_rsa Username:docker}
	I0729 19:44:30.976898 1120970 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0729 19:44:31.000793 1120970 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0729 19:44:31.024837 1120970 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0729 19:44:31.048325 1120970 provision.go:87] duration metric: took 382.981006ms to configureAuth
	I0729 19:44:31.048358 1120970 buildroot.go:189] setting minikube options for container-runtime
	I0729 19:44:31.048560 1120970 config.go:182] Loaded profile config "old-k8s-version-021528": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0729 19:44:31.048640 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHHostname
	I0729 19:44:31.051230 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:31.051576 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:c7:d2", ip: ""} in network mk-old-k8s-version-021528: {Iface:virbr1 ExpiryTime:2024-07-29 20:44:23 +0000 UTC Type:0 Mac:52:54:00:12:c7:d2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:old-k8s-version-021528 Clientid:01:52:54:00:12:c7:d2}
	I0729 19:44:31.051605 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined IP address 192.168.39.65 and MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:31.051754 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHPort
	I0729 19:44:31.051994 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHKeyPath
	I0729 19:44:31.052191 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHKeyPath
	I0729 19:44:31.052368 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHUsername
	I0729 19:44:31.052568 1120970 main.go:141] libmachine: Using SSH client type: native
	I0729 19:44:31.052828 1120970 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.65 22 <nil> <nil>}
	I0729 19:44:31.052853 1120970 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 19:44:31.320227 1120970 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 19:44:31.320259 1120970 machine.go:97] duration metric: took 1.0291903s to provisionDockerMachine
	I0729 19:44:31.320276 1120970 start.go:293] postStartSetup for "old-k8s-version-021528" (driver="kvm2")
	I0729 19:44:31.320297 1120970 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 19:44:31.320335 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .DriverName
	I0729 19:44:31.320669 1120970 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 19:44:31.320702 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHHostname
	I0729 19:44:31.323379 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:31.323774 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:c7:d2", ip: ""} in network mk-old-k8s-version-021528: {Iface:virbr1 ExpiryTime:2024-07-29 20:44:23 +0000 UTC Type:0 Mac:52:54:00:12:c7:d2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:old-k8s-version-021528 Clientid:01:52:54:00:12:c7:d2}
	I0729 19:44:31.323807 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined IP address 192.168.39.65 and MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:31.323903 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHPort
	I0729 19:44:31.324112 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHKeyPath
	I0729 19:44:31.324291 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHUsername
	I0729 19:44:31.324431 1120970 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/old-k8s-version-021528/id_rsa Username:docker}
	I0729 19:44:31.415208 1120970 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 19:44:31.419884 1120970 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 19:44:31.419911 1120970 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-1055011/.minikube/addons for local assets ...
	I0729 19:44:31.419981 1120970 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-1055011/.minikube/files for local assets ...
	I0729 19:44:31.420093 1120970 filesync.go:149] local asset: /home/jenkins/minikube-integration/19312-1055011/.minikube/files/etc/ssl/certs/10622722.pem -> 10622722.pem in /etc/ssl/certs
	I0729 19:44:31.420214 1120970 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 19:44:31.431055 1120970 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/files/etc/ssl/certs/10622722.pem --> /etc/ssl/certs/10622722.pem (1708 bytes)
	I0729 19:44:31.454082 1120970 start.go:296] duration metric: took 133.793908ms for postStartSetup
	I0729 19:44:31.454120 1120970 fix.go:56] duration metric: took 20.034560069s for fixHost
	I0729 19:44:31.454147 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHHostname
	I0729 19:44:31.456757 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:31.457097 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:c7:d2", ip: ""} in network mk-old-k8s-version-021528: {Iface:virbr1 ExpiryTime:2024-07-29 20:44:23 +0000 UTC Type:0 Mac:52:54:00:12:c7:d2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:old-k8s-version-021528 Clientid:01:52:54:00:12:c7:d2}
	I0729 19:44:31.457130 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined IP address 192.168.39.65 and MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:31.457284 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHPort
	I0729 19:44:31.457528 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHKeyPath
	I0729 19:44:31.457737 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHKeyPath
	I0729 19:44:31.457853 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHUsername
	I0729 19:44:31.458027 1120970 main.go:141] libmachine: Using SSH client type: native
	I0729 19:44:31.458189 1120970 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.65 22 <nil> <nil>}
	I0729 19:44:31.458199 1120970 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0729 19:44:31.571713 1120970 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722282271.544930204
	
	I0729 19:44:31.571744 1120970 fix.go:216] guest clock: 1722282271.544930204
	I0729 19:44:31.571758 1120970 fix.go:229] Guest: 2024-07-29 19:44:31.544930204 +0000 UTC Remote: 2024-07-29 19:44:31.454125155 +0000 UTC m=+213.509073295 (delta=90.805049ms)
	I0729 19:44:31.571785 1120970 fix.go:200] guest clock delta is within tolerance: 90.805049ms
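	(For reference, the delta reported above is simply Guest minus Remote: 19:44:31.544930204 − 19:44:31.454125155 ≈ 0.090805 s, i.e. the 90.805049ms shown, which falls within the allowed clock-skew tolerance, so no guest clock adjustment is made.)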
	I0729 19:44:31.571791 1120970 start.go:83] releasing machines lock for "old-k8s-version-021528", held for 20.152267504s
	I0729 19:44:31.571817 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .DriverName
	I0729 19:44:31.572102 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetIP
	I0729 19:44:31.575385 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:31.575790 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:c7:d2", ip: ""} in network mk-old-k8s-version-021528: {Iface:virbr1 ExpiryTime:2024-07-29 20:44:23 +0000 UTC Type:0 Mac:52:54:00:12:c7:d2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:old-k8s-version-021528 Clientid:01:52:54:00:12:c7:d2}
	I0729 19:44:31.575815 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined IP address 192.168.39.65 and MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:31.576012 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .DriverName
	I0729 19:44:31.576508 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .DriverName
	I0729 19:44:31.576692 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .DriverName
	I0729 19:44:31.576786 1120970 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 19:44:31.576828 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHHostname
	I0729 19:44:31.576918 1120970 ssh_runner.go:195] Run: cat /version.json
	I0729 19:44:31.576940 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHHostname
	I0729 19:44:31.579737 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:31.579994 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:31.580091 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:c7:d2", ip: ""} in network mk-old-k8s-version-021528: {Iface:virbr1 ExpiryTime:2024-07-29 20:44:23 +0000 UTC Type:0 Mac:52:54:00:12:c7:d2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:old-k8s-version-021528 Clientid:01:52:54:00:12:c7:d2}
	I0729 19:44:31.580130 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined IP address 192.168.39.65 and MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:31.580379 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:c7:d2", ip: ""} in network mk-old-k8s-version-021528: {Iface:virbr1 ExpiryTime:2024-07-29 20:44:23 +0000 UTC Type:0 Mac:52:54:00:12:c7:d2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:old-k8s-version-021528 Clientid:01:52:54:00:12:c7:d2}
	I0729 19:44:31.580409 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined IP address 192.168.39.65 and MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:31.580418 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHPort
	I0729 19:44:31.580577 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHPort
	I0729 19:44:31.580661 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHKeyPath
	I0729 19:44:31.580838 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHKeyPath
	I0729 19:44:31.580879 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHUsername
	I0729 19:44:31.581025 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetSSHUsername
	I0729 19:44:31.581021 1120970 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/old-k8s-version-021528/id_rsa Username:docker}
	I0729 19:44:31.581164 1120970 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/old-k8s-version-021528/id_rsa Username:docker}
	I0729 19:44:31.682902 1120970 ssh_runner.go:195] Run: systemctl --version
	I0729 19:44:31.688675 1120970 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 19:44:31.836374 1120970 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 19:44:31.844215 1120970 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 19:44:31.844275 1120970 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 19:44:31.864647 1120970 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 19:44:31.864671 1120970 start.go:495] detecting cgroup driver to use...
	I0729 19:44:31.864744 1120970 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 19:44:31.881197 1120970 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 19:44:31.895022 1120970 docker.go:217] disabling cri-docker service (if available) ...
	I0729 19:44:31.895085 1120970 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 19:44:31.908584 1120970 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 19:44:31.922321 1120970 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 19:44:32.039427 1120970 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 19:44:32.203236 1120970 docker.go:233] disabling docker service ...
	I0729 19:44:32.203335 1120970 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 19:44:32.217523 1120970 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 19:44:32.236065 1120970 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 19:44:32.355769 1120970 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 19:44:32.473160 1120970 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 19:44:32.486314 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 19:44:32.504270 1120970 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0729 19:44:32.504359 1120970 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:44:32.514928 1120970 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 19:44:32.514995 1120970 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:44:32.528822 1120970 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:44:32.543599 1120970 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
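	(For reference, after the three sed edits above the CRI-O drop-in /etc/crio/crio.conf.d/02-crio.conf should end up containing roughly the following, reconstructed from the commands rather than captured from the actual file:
		pause_image = "registry.k8s.io/pause:3.2"
		cgroup_manager = "cgroupfs"
		conmon_cgroup = "pod"
	i.e. the pause image is pinned to the version expected by Kubernetes v1.20.0 and the cgroup driver is switched to cgroupfs with conmon in the pod cgroup.)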
	I0729 19:44:32.555853 1120970 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 19:44:32.568184 1120970 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 19:44:32.577443 1120970 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 19:44:32.577580 1120970 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 19:44:32.590636 1120970 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 19:44:32.600995 1120970 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 19:44:32.739544 1120970 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 19:44:32.886433 1120970 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 19:44:32.886507 1120970 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 19:44:32.892072 1120970 start.go:563] Will wait 60s for crictl version
	I0729 19:44:32.892137 1120970 ssh_runner.go:195] Run: which crictl
	I0729 19:44:32.896003 1120970 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 19:44:32.939843 1120970 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 19:44:32.939934 1120970 ssh_runner.go:195] Run: crio --version
	I0729 19:44:32.968301 1120970 ssh_runner.go:195] Run: crio --version
	I0729 19:44:32.995612 1120970 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0729 19:44:31.595855 1119948 main.go:141] libmachine: (no-preload-843792) Calling .Start
	I0729 19:44:31.596024 1119948 main.go:141] libmachine: (no-preload-843792) Ensuring networks are active...
	I0729 19:44:31.596802 1119948 main.go:141] libmachine: (no-preload-843792) Ensuring network default is active
	I0729 19:44:31.597159 1119948 main.go:141] libmachine: (no-preload-843792) Ensuring network mk-no-preload-843792 is active
	I0729 19:44:31.597570 1119948 main.go:141] libmachine: (no-preload-843792) Getting domain xml...
	I0729 19:44:31.598244 1119948 main.go:141] libmachine: (no-preload-843792) Creating domain...
	I0729 19:44:32.903649 1119948 main.go:141] libmachine: (no-preload-843792) Waiting to get IP...
	I0729 19:44:32.904535 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:32.905024 1119948 main.go:141] libmachine: (no-preload-843792) DBG | unable to find current IP address of domain no-preload-843792 in network mk-no-preload-843792
	I0729 19:44:32.905113 1119948 main.go:141] libmachine: (no-preload-843792) DBG | I0729 19:44:32.904992 1122027 retry.go:31] will retry after 213.578895ms: waiting for machine to come up
	I0729 19:44:33.120474 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:33.120922 1119948 main.go:141] libmachine: (no-preload-843792) DBG | unable to find current IP address of domain no-preload-843792 in network mk-no-preload-843792
	I0729 19:44:33.121007 1119948 main.go:141] libmachine: (no-preload-843792) DBG | I0729 19:44:33.120907 1122027 retry.go:31] will retry after 265.999253ms: waiting for machine to come up
	I0729 19:44:33.388577 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:33.389007 1119948 main.go:141] libmachine: (no-preload-843792) DBG | unable to find current IP address of domain no-preload-843792 in network mk-no-preload-843792
	I0729 19:44:33.389026 1119948 main.go:141] libmachine: (no-preload-843792) DBG | I0729 19:44:33.388967 1122027 retry.go:31] will retry after 393.491378ms: waiting for machine to come up
	I0729 19:44:31.639857 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:44:34.139327 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:44:31.874661 1120587 pod_ready.go:102] pod "coredns-7db6d8ff4d-8mccr" in "kube-system" namespace has status "Ready":"False"
	I0729 19:44:33.875758 1120587 pod_ready.go:102] pod "coredns-7db6d8ff4d-8mccr" in "kube-system" namespace has status "Ready":"False"
	I0729 19:44:35.875952 1120587 pod_ready.go:102] pod "coredns-7db6d8ff4d-8mccr" in "kube-system" namespace has status "Ready":"False"
	I0729 19:44:32.996971 1120970 main.go:141] libmachine: (old-k8s-version-021528) Calling .GetIP
	I0729 19:44:33.000232 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:33.000668 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:c7:d2", ip: ""} in network mk-old-k8s-version-021528: {Iface:virbr1 ExpiryTime:2024-07-29 20:44:23 +0000 UTC Type:0 Mac:52:54:00:12:c7:d2 Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:old-k8s-version-021528 Clientid:01:52:54:00:12:c7:d2}
	I0729 19:44:33.000694 1120970 main.go:141] libmachine: (old-k8s-version-021528) DBG | domain old-k8s-version-021528 has defined IP address 192.168.39.65 and MAC address 52:54:00:12:c7:d2 in network mk-old-k8s-version-021528
	I0729 19:44:33.000856 1120970 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0729 19:44:33.005258 1120970 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 19:44:33.018698 1120970 kubeadm.go:883] updating cluster {Name:old-k8s-version-021528 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.20.0 ClusterName:old-k8s-version-021528 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.65 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fal
se MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 19:44:33.018840 1120970 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0729 19:44:33.018934 1120970 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 19:44:33.089122 1120970 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0729 19:44:33.089197 1120970 ssh_runner.go:195] Run: which lz4
	I0729 19:44:33.093346 1120970 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0729 19:44:33.097766 1120970 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0729 19:44:33.097802 1120970 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0729 19:44:34.739542 1120970 crio.go:462] duration metric: took 1.646235601s to copy over tarball
	I0729 19:44:34.739647 1120970 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0729 19:44:37.734665 1120970 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.994963407s)
	I0729 19:44:37.734702 1120970 crio.go:469] duration metric: took 2.995126134s to extract the tarball
	I0729 19:44:37.734712 1120970 ssh_runner.go:146] rm: /preloaded.tar.lz4
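	(For scale: the preloaded tarball is 473237281 bytes ≈ 451 MiB; per the duration metrics above it was copied to the guest in ≈1.65 s (≈274 MiB/s) and extracted with lz4 in ≈3.0 s before being removed.)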
	I0729 19:44:37.781443 1120970 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 19:44:37.820392 1120970 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0729 19:44:37.820426 1120970 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0729 19:44:37.820531 1120970 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 19:44:37.820610 1120970 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0729 19:44:37.820708 1120970 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0729 19:44:37.820536 1120970 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 19:44:37.820560 1120970 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0729 19:44:37.820541 1120970 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0729 19:44:37.820573 1120970 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0729 19:44:37.820587 1120970 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0729 19:44:37.822301 1120970 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 19:44:37.822309 1120970 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0729 19:44:37.822313 1120970 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0729 19:44:37.822326 1120970 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0729 19:44:37.822397 1120970 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0729 19:44:37.822432 1120970 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0729 19:44:37.822438 1120970 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0729 19:44:37.822301 1120970 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 19:44:33.785078 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:33.785626 1119948 main.go:141] libmachine: (no-preload-843792) DBG | unable to find current IP address of domain no-preload-843792 in network mk-no-preload-843792
	I0729 19:44:33.785654 1119948 main.go:141] libmachine: (no-preload-843792) DBG | I0729 19:44:33.785530 1122027 retry.go:31] will retry after 411.274676ms: waiting for machine to come up
	I0729 19:44:34.198884 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:34.199471 1119948 main.go:141] libmachine: (no-preload-843792) DBG | unable to find current IP address of domain no-preload-843792 in network mk-no-preload-843792
	I0729 19:44:34.199512 1119948 main.go:141] libmachine: (no-preload-843792) DBG | I0729 19:44:34.199421 1122027 retry.go:31] will retry after 600.076128ms: waiting for machine to come up
	I0729 19:44:34.801378 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:34.801839 1119948 main.go:141] libmachine: (no-preload-843792) DBG | unable to find current IP address of domain no-preload-843792 in network mk-no-preload-843792
	I0729 19:44:34.801869 1119948 main.go:141] libmachine: (no-preload-843792) DBG | I0729 19:44:34.801792 1122027 retry.go:31] will retry after 948.350912ms: waiting for machine to come up
	I0729 19:44:35.751533 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:35.752085 1119948 main.go:141] libmachine: (no-preload-843792) DBG | unable to find current IP address of domain no-preload-843792 in network mk-no-preload-843792
	I0729 19:44:35.752110 1119948 main.go:141] libmachine: (no-preload-843792) DBG | I0729 19:44:35.752021 1122027 retry.go:31] will retry after 1.166250352s: waiting for machine to come up
	I0729 19:44:36.919771 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:36.920240 1119948 main.go:141] libmachine: (no-preload-843792) DBG | unable to find current IP address of domain no-preload-843792 in network mk-no-preload-843792
	I0729 19:44:36.920271 1119948 main.go:141] libmachine: (no-preload-843792) DBG | I0729 19:44:36.920184 1122027 retry.go:31] will retry after 1.061620812s: waiting for machine to come up
	I0729 19:44:37.983051 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:37.983501 1119948 main.go:141] libmachine: (no-preload-843792) DBG | unable to find current IP address of domain no-preload-843792 in network mk-no-preload-843792
	I0729 19:44:37.983528 1119948 main.go:141] libmachine: (no-preload-843792) DBG | I0729 19:44:37.983453 1122027 retry.go:31] will retry after 1.814167152s: waiting for machine to come up
	I0729 19:44:36.140059 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:44:38.642436 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:44:37.873768 1120587 pod_ready.go:92] pod "coredns-7db6d8ff4d-8mccr" in "kube-system" namespace has status "Ready":"True"
	I0729 19:44:37.873792 1120587 pod_ready.go:81] duration metric: took 12.006637701s for pod "coredns-7db6d8ff4d-8mccr" in "kube-system" namespace to be "Ready" ...
	I0729 19:44:37.873804 1120587 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-024652" in "kube-system" namespace to be "Ready" ...
	I0729 19:44:37.879758 1120587 pod_ready.go:92] pod "etcd-default-k8s-diff-port-024652" in "kube-system" namespace has status "Ready":"True"
	I0729 19:44:37.879787 1120587 pod_ready.go:81] duration metric: took 5.974837ms for pod "etcd-default-k8s-diff-port-024652" in "kube-system" namespace to be "Ready" ...
	I0729 19:44:37.879799 1120587 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-024652" in "kube-system" namespace to be "Ready" ...
	I0729 19:44:37.885027 1120587 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-024652" in "kube-system" namespace has status "Ready":"True"
	I0729 19:44:37.885051 1120587 pod_ready.go:81] duration metric: took 5.244169ms for pod "kube-apiserver-default-k8s-diff-port-024652" in "kube-system" namespace to be "Ready" ...
	I0729 19:44:37.885064 1120587 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-024652" in "kube-system" namespace to be "Ready" ...
	I0729 19:44:37.890208 1120587 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-024652" in "kube-system" namespace has status "Ready":"True"
	I0729 19:44:37.890224 1120587 pod_ready.go:81] duration metric: took 5.152571ms for pod "kube-controller-manager-default-k8s-diff-port-024652" in "kube-system" namespace to be "Ready" ...
	I0729 19:44:37.890232 1120587 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-l4g78" in "kube-system" namespace to be "Ready" ...
	I0729 19:44:37.894663 1120587 pod_ready.go:92] pod "kube-proxy-l4g78" in "kube-system" namespace has status "Ready":"True"
	I0729 19:44:37.894682 1120587 pod_ready.go:81] duration metric: took 4.444758ms for pod "kube-proxy-l4g78" in "kube-system" namespace to be "Ready" ...
	I0729 19:44:37.894691 1120587 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-024652" in "kube-system" namespace to be "Ready" ...
	I0729 19:44:38.272098 1120587 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-024652" in "kube-system" namespace has status "Ready":"True"
	I0729 19:44:38.272127 1120587 pod_ready.go:81] duration metric: took 377.428879ms for pod "kube-scheduler-default-k8s-diff-port-024652" in "kube-system" namespace to be "Ready" ...
	I0729 19:44:38.272141 1120587 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace to be "Ready" ...
	I0729 19:44:40.279623 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
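	(Aside: the pod_ready.go lines above poll each kube-system pod until its Ready condition is True, with a 4m0s budget per pod. A rough client-go sketch of that single readiness check — an illustration under assumed names, not minikube's actual code — is:

		// isPodReady reports whether the named pod currently has Ready=True.
		package podready

		import (
			"context"

			corev1 "k8s.io/api/core/v1"
			metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
			"k8s.io/client-go/kubernetes"
		)

		func isPodReady(ctx context.Context, c kubernetes.Interface, namespace, name string) (bool, error) {
			pod, err := c.CoreV1().Pods(namespace).Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, err
			}
			for _, cond := range pod.Status.Conditions {
				if cond.Type == corev1.PodReady {
					return cond.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		}

	In the log this check is repeated with a delay until the pod becomes Ready ("Ready":"True") or the per-pod timeout expires, which is why the metrics-server pod keeps logging "Ready":"False" above.)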
	I0729 19:44:37.982782 1120970 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 19:44:37.994565 1120970 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0729 19:44:37.997227 1120970 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0729 19:44:37.997536 1120970 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0729 19:44:38.011221 1120970 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0729 19:44:38.028869 1120970 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0729 19:44:38.031221 1120970 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0729 19:44:38.054537 1120970 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0729 19:44:38.054599 1120970 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 19:44:38.054660 1120970 ssh_runner.go:195] Run: which crictl
	I0729 19:44:38.104843 1120970 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 19:44:38.182008 1120970 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0729 19:44:38.182064 1120970 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0729 19:44:38.182063 1120970 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0729 19:44:38.182113 1120970 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0729 19:44:38.182118 1120970 ssh_runner.go:195] Run: which crictl
	I0729 19:44:38.182161 1120970 ssh_runner.go:195] Run: which crictl
	I0729 19:44:38.190604 1120970 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0729 19:44:38.190629 1120970 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0729 19:44:38.190652 1120970 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0729 19:44:38.190663 1120970 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0729 19:44:38.190703 1120970 ssh_runner.go:195] Run: which crictl
	I0729 19:44:38.190710 1120970 ssh_runner.go:195] Run: which crictl
	I0729 19:44:38.197293 1120970 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0729 19:44:38.197328 1120970 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0729 19:44:38.197364 1120970 ssh_runner.go:195] Run: which crictl
	I0729 19:44:38.226035 1120970 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 19:44:38.228343 1120970 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0729 19:44:38.228420 1120970 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0729 19:44:38.228467 1120970 ssh_runner.go:195] Run: which crictl
	I0729 19:44:38.335524 1120970 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0729 19:44:38.335607 1120970 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0729 19:44:38.335627 1120970 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0729 19:44:38.335696 1120970 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0729 19:44:38.335705 1120970 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0729 19:44:38.335790 1120970 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 19:44:38.335866 1120970 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0729 19:44:38.483885 1120970 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0729 19:44:38.483976 1120970 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0729 19:44:38.483926 1120970 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0729 19:44:38.484028 1120970 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0729 19:44:38.487155 1120970 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0729 19:44:38.487223 1120970 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 19:44:38.487241 1120970 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0729 19:44:38.635433 1120970 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0729 19:44:38.649661 1120970 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0729 19:44:38.649751 1120970 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0729 19:44:38.649769 1120970 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0729 19:44:38.649831 1120970 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0729 19:44:38.649921 1120970 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0729 19:44:38.649958 1120970 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0729 19:44:38.783607 1120970 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0729 19:44:38.783694 1120970 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0729 19:44:38.783605 1120970 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0729 19:44:38.791756 1120970 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0729 19:44:38.791863 1120970 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0729 19:44:38.791892 1120970 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0729 19:44:38.791939 1120970 cache_images.go:92] duration metric: took 971.499203ms to LoadCachedImages
	W0729 19:44:38.792037 1120970 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
	I0729 19:44:38.792054 1120970 kubeadm.go:934] updating node { 192.168.39.65 8443 v1.20.0 crio true true} ...
	I0729 19:44:38.792200 1120970 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-021528 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.65
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-021528 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 19:44:38.792313 1120970 ssh_runner.go:195] Run: crio config
	I0729 19:44:38.841459 1120970 cni.go:84] Creating CNI manager for ""
	I0729 19:44:38.841484 1120970 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 19:44:38.841496 1120970 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 19:44:38.841515 1120970 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.65 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-021528 NodeName:old-k8s-version-021528 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.65"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.65 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt Stati
cPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0729 19:44:38.841678 1120970 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.65
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-021528"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.65
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.65"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0729 19:44:38.841743 1120970 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0729 19:44:38.852338 1120970 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 19:44:38.852412 1120970 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 19:44:38.862150 1120970 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0729 19:44:38.881108 1120970 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 19:44:38.899034 1120970 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0729 19:44:38.917965 1120970 ssh_runner.go:195] Run: grep 192.168.39.65	control-plane.minikube.internal$ /etc/hosts
	I0729 19:44:38.922064 1120970 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.65	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 19:44:38.935009 1120970 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 19:44:39.058886 1120970 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 19:44:39.078830 1120970 certs.go:68] Setting up /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/old-k8s-version-021528 for IP: 192.168.39.65
	I0729 19:44:39.078902 1120970 certs.go:194] generating shared ca certs ...
	I0729 19:44:39.078943 1120970 certs.go:226] acquiring lock for ca certs: {Name:mkd1f0b3d7e82ac23e713dd6b75409e103935b02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 19:44:39.079139 1120970 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.key
	I0729 19:44:39.079228 1120970 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/proxy-client-ca.key
	I0729 19:44:39.079243 1120970 certs.go:256] generating profile certs ...
	I0729 19:44:39.079418 1120970 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/old-k8s-version-021528/client.key
	I0729 19:44:39.079517 1120970 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/old-k8s-version-021528/apiserver.key.1bfec4c5
	I0729 19:44:39.079603 1120970 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/old-k8s-version-021528/proxy-client.key
	I0729 19:44:39.079814 1120970 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/1062272.pem (1338 bytes)
	W0729 19:44:39.079899 1120970 certs.go:480] ignoring /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/1062272_empty.pem, impossibly tiny 0 bytes
	I0729 19:44:39.079924 1120970 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 19:44:39.079974 1120970 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem (1082 bytes)
	I0729 19:44:39.080079 1120970 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/cert.pem (1123 bytes)
	I0729 19:44:39.080137 1120970 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/key.pem (1679 bytes)
	I0729 19:44:39.080230 1120970 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/files/etc/ssl/certs/10622722.pem (1708 bytes)
	I0729 19:44:39.081417 1120970 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 19:44:39.117623 1120970 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0729 19:44:39.163823 1120970 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 19:44:39.198978 1120970 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0729 19:44:39.229583 1120970 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/old-k8s-version-021528/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0729 19:44:39.270285 1120970 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/old-k8s-version-021528/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0729 19:44:39.320906 1120970 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/old-k8s-version-021528/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 19:44:39.358597 1120970 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/old-k8s-version-021528/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0729 19:44:39.384152 1120970 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/1062272.pem --> /usr/share/ca-certificates/1062272.pem (1338 bytes)
	I0729 19:44:39.409176 1120970 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/files/etc/ssl/certs/10622722.pem --> /usr/share/ca-certificates/10622722.pem (1708 bytes)
	I0729 19:44:39.434095 1120970 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 19:44:39.473901 1120970 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 19:44:39.493117 1120970 ssh_runner.go:195] Run: openssl version
	I0729 19:44:39.499390 1120970 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1062272.pem && ln -fs /usr/share/ca-certificates/1062272.pem /etc/ssl/certs/1062272.pem"
	I0729 19:44:39.513884 1120970 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1062272.pem
	I0729 19:44:39.519775 1120970 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 18:30 /usr/share/ca-certificates/1062272.pem
	I0729 19:44:39.519841 1120970 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1062272.pem
	I0729 19:44:39.526146 1120970 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1062272.pem /etc/ssl/certs/51391683.0"
	I0729 19:44:39.538303 1120970 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10622722.pem && ln -fs /usr/share/ca-certificates/10622722.pem /etc/ssl/certs/10622722.pem"
	I0729 19:44:39.549569 1120970 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10622722.pem
	I0729 19:44:39.554063 1120970 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 18:30 /usr/share/ca-certificates/10622722.pem
	I0729 19:44:39.554125 1120970 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10622722.pem
	I0729 19:44:39.560167 1120970 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10622722.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 19:44:39.572332 1120970 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 19:44:39.583635 1120970 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 19:44:39.588045 1120970 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 18:17 /usr/share/ca-certificates/minikubeCA.pem
	I0729 19:44:39.588126 1120970 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 19:44:39.594105 1120970 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 19:44:39.605557 1120970 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 19:44:39.610321 1120970 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 19:44:39.616786 1120970 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 19:44:39.622941 1120970 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 19:44:39.629109 1120970 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 19:44:39.636558 1120970 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 19:44:39.643073 1120970 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0729 19:44:39.648878 1120970 kubeadm.go:392] StartCluster: {Name:old-k8s-version-021528 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.20.0 ClusterName:old-k8s-version-021528 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.65 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false
MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 19:44:39.648982 1120970 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 19:44:39.649027 1120970 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 19:44:39.690983 1120970 cri.go:89] found id: ""
	I0729 19:44:39.691075 1120970 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 19:44:39.701985 1120970 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0729 19:44:39.702004 1120970 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0729 19:44:39.702052 1120970 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0729 19:44:39.712284 1120970 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0729 19:44:39.713416 1120970 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-021528" does not appear in /home/jenkins/minikube-integration/19312-1055011/kubeconfig
	I0729 19:44:39.714247 1120970 kubeconfig.go:62] /home/jenkins/minikube-integration/19312-1055011/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-021528" cluster setting kubeconfig missing "old-k8s-version-021528" context setting]
	I0729 19:44:39.715298 1120970 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-1055011/kubeconfig: {Name:mkf834b33d9b214f3561db5b8f8958d26700afbb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 19:44:39.762122 1120970 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0729 19:44:39.773851 1120970 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.65
	I0729 19:44:39.773894 1120970 kubeadm.go:1160] stopping kube-system containers ...
	I0729 19:44:39.773910 1120970 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0729 19:44:39.773968 1120970 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 19:44:39.820190 1120970 cri.go:89] found id: ""
	I0729 19:44:39.820273 1120970 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0729 19:44:39.838497 1120970 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 19:44:39.849060 1120970 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 19:44:39.849087 1120970 kubeadm.go:157] found existing configuration files:
	
	I0729 19:44:39.849142 1120970 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 19:44:39.858834 1120970 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 19:44:39.858920 1120970 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 19:44:39.869962 1120970 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 19:44:39.879690 1120970 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 19:44:39.879754 1120970 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 19:44:39.889334 1120970 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 19:44:39.900671 1120970 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 19:44:39.900789 1120970 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 19:44:39.910365 1120970 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 19:44:39.920056 1120970 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 19:44:39.920119 1120970 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 19:44:39.929792 1120970 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 19:44:39.939719 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 19:44:40.078003 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 19:44:40.827477 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0729 19:44:41.064614 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 19:44:41.168296 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0729 19:44:41.280875 1120970 api_server.go:52] waiting for apiserver process to appear ...
	I0729 19:44:41.280964 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:41.781878 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:42.281683 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:42.781105 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:39.799833 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:39.800226 1119948 main.go:141] libmachine: (no-preload-843792) DBG | unable to find current IP address of domain no-preload-843792 in network mk-no-preload-843792
	I0729 19:44:39.800256 1119948 main.go:141] libmachine: (no-preload-843792) DBG | I0729 19:44:39.800187 1122027 retry.go:31] will retry after 1.661406441s: waiting for machine to come up
	I0729 19:44:41.464164 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:41.464664 1119948 main.go:141] libmachine: (no-preload-843792) DBG | unable to find current IP address of domain no-preload-843792 in network mk-no-preload-843792
	I0729 19:44:41.464704 1119948 main.go:141] libmachine: (no-preload-843792) DBG | I0729 19:44:41.464586 1122027 retry.go:31] will retry after 2.292148862s: waiting for machine to come up
	I0729 19:44:41.139627 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:44:43.640525 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:44:42.780035 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:44:45.278957 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:44:43.281753 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:43.781580 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:44.281856 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:44.781202 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:45.281035 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:45.781637 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:46.281414 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:46.781327 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:47.281665 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:47.782033 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:43.759566 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:43.760021 1119948 main.go:141] libmachine: (no-preload-843792) DBG | unable to find current IP address of domain no-preload-843792 in network mk-no-preload-843792
	I0729 19:44:43.760080 1119948 main.go:141] libmachine: (no-preload-843792) DBG | I0729 19:44:43.759994 1122027 retry.go:31] will retry after 3.005985721s: waiting for machine to come up
	I0729 19:44:46.767337 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:46.767822 1119948 main.go:141] libmachine: (no-preload-843792) DBG | unable to find current IP address of domain no-preload-843792 in network mk-no-preload-843792
	I0729 19:44:46.767852 1119948 main.go:141] libmachine: (no-preload-843792) DBG | I0729 19:44:46.767767 1122027 retry.go:31] will retry after 3.516453969s: waiting for machine to come up
	I0729 19:44:46.138988 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:44:48.637828 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:44:47.778809 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:44:50.278817 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:44:48.281371 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:48.781991 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:49.281260 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:49.782025 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:50.281498 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:50.781863 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:51.281653 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:51.781015 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:52.281638 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:52.782023 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:50.287884 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:50.288381 1119948 main.go:141] libmachine: (no-preload-843792) Found IP for machine: 192.168.50.248
	I0729 19:44:50.288402 1119948 main.go:141] libmachine: (no-preload-843792) Reserving static IP address...
	I0729 19:44:50.288417 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has current primary IP address 192.168.50.248 and MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:50.288858 1119948 main.go:141] libmachine: (no-preload-843792) DBG | found host DHCP lease matching {name: "no-preload-843792", mac: "52:54:00:ae:0e:8c", ip: "192.168.50.248"} in network mk-no-preload-843792: {Iface:virbr2 ExpiryTime:2024-07-29 20:44:43 +0000 UTC Type:0 Mac:52:54:00:ae:0e:8c Iaid: IPaddr:192.168.50.248 Prefix:24 Hostname:no-preload-843792 Clientid:01:52:54:00:ae:0e:8c}
	I0729 19:44:50.288891 1119948 main.go:141] libmachine: (no-preload-843792) DBG | skip adding static IP to network mk-no-preload-843792 - found existing host DHCP lease matching {name: "no-preload-843792", mac: "52:54:00:ae:0e:8c", ip: "192.168.50.248"}
	I0729 19:44:50.288905 1119948 main.go:141] libmachine: (no-preload-843792) Reserved static IP address: 192.168.50.248
	I0729 19:44:50.288921 1119948 main.go:141] libmachine: (no-preload-843792) Waiting for SSH to be available...
	I0729 19:44:50.288937 1119948 main.go:141] libmachine: (no-preload-843792) DBG | Getting to WaitForSSH function...
	I0729 19:44:50.291447 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:50.291802 1119948 main.go:141] libmachine: (no-preload-843792) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:0e:8c", ip: ""} in network mk-no-preload-843792: {Iface:virbr2 ExpiryTime:2024-07-29 20:44:43 +0000 UTC Type:0 Mac:52:54:00:ae:0e:8c Iaid: IPaddr:192.168.50.248 Prefix:24 Hostname:no-preload-843792 Clientid:01:52:54:00:ae:0e:8c}
	I0729 19:44:50.291831 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined IP address 192.168.50.248 and MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:50.291992 1119948 main.go:141] libmachine: (no-preload-843792) DBG | Using SSH client type: external
	I0729 19:44:50.292026 1119948 main.go:141] libmachine: (no-preload-843792) DBG | Using SSH private key: /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/no-preload-843792/id_rsa (-rw-------)
	I0729 19:44:50.292056 1119948 main.go:141] libmachine: (no-preload-843792) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.248 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/no-preload-843792/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 19:44:50.292075 1119948 main.go:141] libmachine: (no-preload-843792) DBG | About to run SSH command:
	I0729 19:44:50.292089 1119948 main.go:141] libmachine: (no-preload-843792) DBG | exit 0
	I0729 19:44:50.419030 1119948 main.go:141] libmachine: (no-preload-843792) DBG | SSH cmd err, output: <nil>: 
	I0729 19:44:50.419420 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetConfigRaw
	I0729 19:44:50.420149 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetIP
	I0729 19:44:50.422461 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:50.422860 1119948 main.go:141] libmachine: (no-preload-843792) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:0e:8c", ip: ""} in network mk-no-preload-843792: {Iface:virbr2 ExpiryTime:2024-07-29 20:44:43 +0000 UTC Type:0 Mac:52:54:00:ae:0e:8c Iaid: IPaddr:192.168.50.248 Prefix:24 Hostname:no-preload-843792 Clientid:01:52:54:00:ae:0e:8c}
	I0729 19:44:50.422897 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined IP address 192.168.50.248 and MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:50.423068 1119948 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/no-preload-843792/config.json ...
	I0729 19:44:50.423254 1119948 machine.go:94] provisionDockerMachine start ...
	I0729 19:44:50.423273 1119948 main.go:141] libmachine: (no-preload-843792) Calling .DriverName
	I0729 19:44:50.423513 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHHostname
	I0729 19:44:50.425759 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:50.425996 1119948 main.go:141] libmachine: (no-preload-843792) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:0e:8c", ip: ""} in network mk-no-preload-843792: {Iface:virbr2 ExpiryTime:2024-07-29 20:44:43 +0000 UTC Type:0 Mac:52:54:00:ae:0e:8c Iaid: IPaddr:192.168.50.248 Prefix:24 Hostname:no-preload-843792 Clientid:01:52:54:00:ae:0e:8c}
	I0729 19:44:50.426033 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined IP address 192.168.50.248 and MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:50.426136 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHPort
	I0729 19:44:50.426323 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHKeyPath
	I0729 19:44:50.426493 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHKeyPath
	I0729 19:44:50.426682 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHUsername
	I0729 19:44:50.426889 1119948 main.go:141] libmachine: Using SSH client type: native
	I0729 19:44:50.427107 1119948 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.248 22 <nil> <nil>}
	I0729 19:44:50.427119 1119948 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 19:44:50.539215 1119948 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0729 19:44:50.539250 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetMachineName
	I0729 19:44:50.539523 1119948 buildroot.go:166] provisioning hostname "no-preload-843792"
	I0729 19:44:50.539553 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetMachineName
	I0729 19:44:50.539755 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHHostname
	I0729 19:44:50.542621 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:50.543007 1119948 main.go:141] libmachine: (no-preload-843792) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:0e:8c", ip: ""} in network mk-no-preload-843792: {Iface:virbr2 ExpiryTime:2024-07-29 20:44:43 +0000 UTC Type:0 Mac:52:54:00:ae:0e:8c Iaid: IPaddr:192.168.50.248 Prefix:24 Hostname:no-preload-843792 Clientid:01:52:54:00:ae:0e:8c}
	I0729 19:44:50.543036 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined IP address 192.168.50.248 and MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:50.543188 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHPort
	I0729 19:44:50.543365 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHKeyPath
	I0729 19:44:50.543574 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHKeyPath
	I0729 19:44:50.543751 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHUsername
	I0729 19:44:50.543900 1119948 main.go:141] libmachine: Using SSH client type: native
	I0729 19:44:50.544060 1119948 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.248 22 <nil> <nil>}
	I0729 19:44:50.544072 1119948 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-843792 && echo "no-preload-843792" | sudo tee /etc/hostname
	I0729 19:44:50.669012 1119948 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-843792
	
	I0729 19:44:50.669054 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHHostname
	I0729 19:44:50.671768 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:50.672075 1119948 main.go:141] libmachine: (no-preload-843792) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:0e:8c", ip: ""} in network mk-no-preload-843792: {Iface:virbr2 ExpiryTime:2024-07-29 20:44:43 +0000 UTC Type:0 Mac:52:54:00:ae:0e:8c Iaid: IPaddr:192.168.50.248 Prefix:24 Hostname:no-preload-843792 Clientid:01:52:54:00:ae:0e:8c}
	I0729 19:44:50.672105 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined IP address 192.168.50.248 and MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:50.672278 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHPort
	I0729 19:44:50.672481 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHKeyPath
	I0729 19:44:50.672647 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHKeyPath
	I0729 19:44:50.672734 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHUsername
	I0729 19:44:50.672904 1119948 main.go:141] libmachine: Using SSH client type: native
	I0729 19:44:50.673077 1119948 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.248 22 <nil> <nil>}
	I0729 19:44:50.673091 1119948 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-843792' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-843792/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-843792' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 19:44:50.796568 1119948 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 19:44:50.796605 1119948 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19312-1055011/.minikube CaCertPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19312-1055011/.minikube}
	I0729 19:44:50.796625 1119948 buildroot.go:174] setting up certificates
	I0729 19:44:50.796639 1119948 provision.go:84] configureAuth start
	I0729 19:44:50.796648 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetMachineName
	I0729 19:44:50.796934 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetIP
	I0729 19:44:50.799731 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:50.800044 1119948 main.go:141] libmachine: (no-preload-843792) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:0e:8c", ip: ""} in network mk-no-preload-843792: {Iface:virbr2 ExpiryTime:2024-07-29 20:44:43 +0000 UTC Type:0 Mac:52:54:00:ae:0e:8c Iaid: IPaddr:192.168.50.248 Prefix:24 Hostname:no-preload-843792 Clientid:01:52:54:00:ae:0e:8c}
	I0729 19:44:50.800071 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined IP address 192.168.50.248 and MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:50.800263 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHHostname
	I0729 19:44:50.802572 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:50.802922 1119948 main.go:141] libmachine: (no-preload-843792) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:0e:8c", ip: ""} in network mk-no-preload-843792: {Iface:virbr2 ExpiryTime:2024-07-29 20:44:43 +0000 UTC Type:0 Mac:52:54:00:ae:0e:8c Iaid: IPaddr:192.168.50.248 Prefix:24 Hostname:no-preload-843792 Clientid:01:52:54:00:ae:0e:8c}
	I0729 19:44:50.802955 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined IP address 192.168.50.248 and MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:50.803085 1119948 provision.go:143] copyHostCerts
	I0729 19:44:50.803156 1119948 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.pem, removing ...
	I0729 19:44:50.803170 1119948 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.pem
	I0729 19:44:50.803225 1119948 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.pem (1082 bytes)
	I0729 19:44:50.803347 1119948 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-1055011/.minikube/cert.pem, removing ...
	I0729 19:44:50.803355 1119948 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-1055011/.minikube/cert.pem
	I0729 19:44:50.803379 1119948 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19312-1055011/.minikube/cert.pem (1123 bytes)
	I0729 19:44:50.803438 1119948 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-1055011/.minikube/key.pem, removing ...
	I0729 19:44:50.803445 1119948 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-1055011/.minikube/key.pem
	I0729 19:44:50.803461 1119948 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19312-1055011/.minikube/key.pem (1679 bytes)
	I0729 19:44:50.803524 1119948 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca-key.pem org=jenkins.no-preload-843792 san=[127.0.0.1 192.168.50.248 localhost minikube no-preload-843792]
	I0729 19:44:51.214202 1119948 provision.go:177] copyRemoteCerts
	I0729 19:44:51.214287 1119948 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 19:44:51.214320 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHHostname
	I0729 19:44:51.216944 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:51.217214 1119948 main.go:141] libmachine: (no-preload-843792) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:0e:8c", ip: ""} in network mk-no-preload-843792: {Iface:virbr2 ExpiryTime:2024-07-29 20:44:43 +0000 UTC Type:0 Mac:52:54:00:ae:0e:8c Iaid: IPaddr:192.168.50.248 Prefix:24 Hostname:no-preload-843792 Clientid:01:52:54:00:ae:0e:8c}
	I0729 19:44:51.217237 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined IP address 192.168.50.248 and MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:51.217360 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHPort
	I0729 19:44:51.217563 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHKeyPath
	I0729 19:44:51.217732 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHUsername
	I0729 19:44:51.217891 1119948 sshutil.go:53] new ssh client: &{IP:192.168.50.248 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/no-preload-843792/id_rsa Username:docker}
	I0729 19:44:51.301968 1119948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0729 19:44:51.328160 1119948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0729 19:44:51.353256 1119948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0729 19:44:51.378426 1119948 provision.go:87] duration metric: took 581.77356ms to configureAuth
	I0729 19:44:51.378457 1119948 buildroot.go:189] setting minikube options for container-runtime
	I0729 19:44:51.378660 1119948 config.go:182] Loaded profile config "no-preload-843792": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0729 19:44:51.378746 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHHostname
	I0729 19:44:51.381760 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:51.382286 1119948 main.go:141] libmachine: (no-preload-843792) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:0e:8c", ip: ""} in network mk-no-preload-843792: {Iface:virbr2 ExpiryTime:2024-07-29 20:44:43 +0000 UTC Type:0 Mac:52:54:00:ae:0e:8c Iaid: IPaddr:192.168.50.248 Prefix:24 Hostname:no-preload-843792 Clientid:01:52:54:00:ae:0e:8c}
	I0729 19:44:51.382308 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined IP address 192.168.50.248 and MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:51.382555 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHPort
	I0729 19:44:51.382787 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHKeyPath
	I0729 19:44:51.383071 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHKeyPath
	I0729 19:44:51.383230 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHUsername
	I0729 19:44:51.383438 1119948 main.go:141] libmachine: Using SSH client type: native
	I0729 19:44:51.383649 1119948 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.248 22 <nil> <nil>}
	I0729 19:44:51.383673 1119948 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 19:44:51.650635 1119948 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 19:44:51.650669 1119948 machine.go:97] duration metric: took 1.227400866s to provisionDockerMachine
	I0729 19:44:51.650686 1119948 start.go:293] postStartSetup for "no-preload-843792" (driver="kvm2")
	I0729 19:44:51.650704 1119948 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 19:44:51.650733 1119948 main.go:141] libmachine: (no-preload-843792) Calling .DriverName
	I0729 19:44:51.651068 1119948 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 19:44:51.651098 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHHostname
	I0729 19:44:51.653656 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:51.654044 1119948 main.go:141] libmachine: (no-preload-843792) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:0e:8c", ip: ""} in network mk-no-preload-843792: {Iface:virbr2 ExpiryTime:2024-07-29 20:44:43 +0000 UTC Type:0 Mac:52:54:00:ae:0e:8c Iaid: IPaddr:192.168.50.248 Prefix:24 Hostname:no-preload-843792 Clientid:01:52:54:00:ae:0e:8c}
	I0729 19:44:51.654075 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined IP address 192.168.50.248 and MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:51.654215 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHPort
	I0729 19:44:51.654414 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHKeyPath
	I0729 19:44:51.654603 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHUsername
	I0729 19:44:51.654783 1119948 sshutil.go:53] new ssh client: &{IP:192.168.50.248 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/no-preload-843792/id_rsa Username:docker}
	I0729 19:44:51.738250 1119948 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 19:44:51.742463 1119948 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 19:44:51.742494 1119948 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-1055011/.minikube/addons for local assets ...
	I0729 19:44:51.742575 1119948 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-1055011/.minikube/files for local assets ...
	I0729 19:44:51.742670 1119948 filesync.go:149] local asset: /home/jenkins/minikube-integration/19312-1055011/.minikube/files/etc/ssl/certs/10622722.pem -> 10622722.pem in /etc/ssl/certs
	I0729 19:44:51.742762 1119948 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 19:44:51.752428 1119948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/files/etc/ssl/certs/10622722.pem --> /etc/ssl/certs/10622722.pem (1708 bytes)
	I0729 19:44:51.778026 1119948 start.go:296] duration metric: took 127.323599ms for postStartSetup
	I0729 19:44:51.778070 1119948 fix.go:56] duration metric: took 20.206081869s for fixHost
	I0729 19:44:51.778101 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHHostname
	I0729 19:44:51.780831 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:51.781222 1119948 main.go:141] libmachine: (no-preload-843792) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:0e:8c", ip: ""} in network mk-no-preload-843792: {Iface:virbr2 ExpiryTime:2024-07-29 20:44:43 +0000 UTC Type:0 Mac:52:54:00:ae:0e:8c Iaid: IPaddr:192.168.50.248 Prefix:24 Hostname:no-preload-843792 Clientid:01:52:54:00:ae:0e:8c}
	I0729 19:44:51.781264 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined IP address 192.168.50.248 and MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:51.781433 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHPort
	I0729 19:44:51.781634 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHKeyPath
	I0729 19:44:51.781807 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHKeyPath
	I0729 19:44:51.781978 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHUsername
	I0729 19:44:51.782165 1119948 main.go:141] libmachine: Using SSH client type: native
	I0729 19:44:51.782343 1119948 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.248 22 <nil> <nil>}
	I0729 19:44:51.782354 1119948 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0729 19:44:51.891547 1119948 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722282291.842464810
	
	I0729 19:44:51.891577 1119948 fix.go:216] guest clock: 1722282291.842464810
	I0729 19:44:51.891585 1119948 fix.go:229] Guest: 2024-07-29 19:44:51.84246481 +0000 UTC Remote: 2024-07-29 19:44:51.778076789 +0000 UTC m=+358.114888914 (delta=64.388021ms)
	I0729 19:44:51.891637 1119948 fix.go:200] guest clock delta is within tolerance: 64.388021ms
	I0729 19:44:51.891648 1119948 start.go:83] releasing machines lock for "no-preload-843792", held for 20.319710656s
	I0729 19:44:51.891677 1119948 main.go:141] libmachine: (no-preload-843792) Calling .DriverName
	I0729 19:44:51.891952 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetIP
	I0729 19:44:51.894800 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:51.895181 1119948 main.go:141] libmachine: (no-preload-843792) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:0e:8c", ip: ""} in network mk-no-preload-843792: {Iface:virbr2 ExpiryTime:2024-07-29 20:44:43 +0000 UTC Type:0 Mac:52:54:00:ae:0e:8c Iaid: IPaddr:192.168.50.248 Prefix:24 Hostname:no-preload-843792 Clientid:01:52:54:00:ae:0e:8c}
	I0729 19:44:51.895216 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined IP address 192.168.50.248 and MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:51.895390 1119948 main.go:141] libmachine: (no-preload-843792) Calling .DriverName
	I0729 19:44:51.895840 1119948 main.go:141] libmachine: (no-preload-843792) Calling .DriverName
	I0729 19:44:51.896042 1119948 main.go:141] libmachine: (no-preload-843792) Calling .DriverName
	I0729 19:44:51.896139 1119948 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 19:44:51.896192 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHHostname
	I0729 19:44:51.896258 1119948 ssh_runner.go:195] Run: cat /version.json
	I0729 19:44:51.896287 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHHostname
	I0729 19:44:51.898856 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:51.899180 1119948 main.go:141] libmachine: (no-preload-843792) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:0e:8c", ip: ""} in network mk-no-preload-843792: {Iface:virbr2 ExpiryTime:2024-07-29 20:44:43 +0000 UTC Type:0 Mac:52:54:00:ae:0e:8c Iaid: IPaddr:192.168.50.248 Prefix:24 Hostname:no-preload-843792 Clientid:01:52:54:00:ae:0e:8c}
	I0729 19:44:51.899208 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined IP address 192.168.50.248 and MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:51.899261 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:51.899313 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHPort
	I0729 19:44:51.899474 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHKeyPath
	I0729 19:44:51.899638 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHUsername
	I0729 19:44:51.899716 1119948 main.go:141] libmachine: (no-preload-843792) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:0e:8c", ip: ""} in network mk-no-preload-843792: {Iface:virbr2 ExpiryTime:2024-07-29 20:44:43 +0000 UTC Type:0 Mac:52:54:00:ae:0e:8c Iaid: IPaddr:192.168.50.248 Prefix:24 Hostname:no-preload-843792 Clientid:01:52:54:00:ae:0e:8c}
	I0729 19:44:51.899742 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined IP address 192.168.50.248 and MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:51.899815 1119948 sshutil.go:53] new ssh client: &{IP:192.168.50.248 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/no-preload-843792/id_rsa Username:docker}
	I0729 19:44:51.899865 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHPort
	I0729 19:44:51.900009 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHKeyPath
	I0729 19:44:51.900149 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHUsername
	I0729 19:44:51.900317 1119948 sshutil.go:53] new ssh client: &{IP:192.168.50.248 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/no-preload-843792/id_rsa Username:docker}
	I0729 19:44:51.979915 1119948 ssh_runner.go:195] Run: systemctl --version
	I0729 19:44:52.002705 1119948 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 19:44:52.146695 1119948 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 19:44:52.152507 1119948 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 19:44:52.152566 1119948 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 19:44:52.169058 1119948 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 19:44:52.169085 1119948 start.go:495] detecting cgroup driver to use...
	I0729 19:44:52.169148 1119948 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 19:44:52.185675 1119948 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 19:44:52.204654 1119948 docker.go:217] disabling cri-docker service (if available) ...
	I0729 19:44:52.204719 1119948 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 19:44:52.221485 1119948 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 19:44:52.235452 1119948 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 19:44:52.353806 1119948 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 19:44:52.504237 1119948 docker.go:233] disabling docker service ...
	I0729 19:44:52.504314 1119948 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 19:44:52.520145 1119948 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 19:44:52.533007 1119948 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 19:44:52.662886 1119948 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 19:44:52.795773 1119948 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 19:44:52.810135 1119948 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 19:44:52.829290 1119948 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0729 19:44:52.829356 1119948 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:44:52.840657 1119948 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 19:44:52.840718 1119948 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:44:52.851174 1119948 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:44:52.861565 1119948 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:44:52.871901 1119948 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 19:44:52.882929 1119948 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:44:52.893517 1119948 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:44:52.910321 1119948 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
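The sed commands above edit /etc/crio/crio.conf.d/02-crio.conf in place: they set the pause image, switch cgroup_manager to cgroupfs, reset conmon_cgroup, and make sure default_sysctls opens port 0 for unprivileged binds. A rough Go sketch of that kind of key rewriting, assuming the same file path; it covers only the two simple key/value edits and is not minikube's actual code:

    // Rewrite pause_image and cgroup_manager in the CRI-O drop-in config,
    // roughly what the two sed invocations in the log do.
    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    // setKey replaces an existing `key = ...` line or appends one if missing.
    func setKey(lines []string, key, value string) []string {
        out := make([]string, 0, len(lines))
        found := false
        for _, l := range lines {
            if strings.Contains(l, key+" = ") {
                out = append(out, fmt.Sprintf("%s = %q", key, value))
                found = true
                continue
            }
            out = append(out, l)
        }
        if !found {
            out = append(out, fmt.Sprintf("%s = %q", key, value))
        }
        return out
    }

    func main() {
        const path = "/etc/crio/crio.conf.d/02-crio.conf"
        f, err := os.Open(path)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            return
        }
        var lines []string
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            lines = append(lines, sc.Text())
        }
        f.Close()

        lines = setKey(lines, "pause_image", "registry.k8s.io/pause:3.10")
        lines = setKey(lines, "cgroup_manager", "cgroupfs")

        if err := os.WriteFile(path, []byte(strings.Join(lines, "\n")+"\n"), 0o644); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }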
	I0729 19:44:52.920773 1119948 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 19:44:52.930425 1119948 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 19:44:52.930467 1119948 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 19:44:52.943382 1119948 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
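The netfilter check above fails because br_netfilter is not loaded yet, which is expected on a fresh VM; the next two commands load the module and enable IPv4 forwarding. The same two operations expressed as a small Go sketch (requires root; purely illustrative):

    // Load br_netfilter and enable IPv4 forwarding, like the two Run: lines above.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        // Equivalent of `sudo modprobe br_netfilter`.
        if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
            fmt.Fprintf(os.Stderr, "modprobe: %v: %s\n", err, out)
        }
        // Equivalent of `echo 1 > /proc/sys/net/ipv4/ip_forward`.
        if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0o644); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }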
	I0729 19:44:52.953528 1119948 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 19:44:53.086573 1119948 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 19:44:53.222264 1119948 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 19:44:53.222358 1119948 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
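After restarting CRI-O, minikube waits up to 60s for /var/run/crio/crio.sock to appear before probing crictl. A minimal sketch of such a wait loop (timeout and path from the log; the helper name is illustrative):

    // Poll for a path until it exists or the deadline passes.
    package main

    import (
        "fmt"
        "os"
        "time"
    )

    func waitForPath(path string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if _, err := os.Stat(path); err == nil {
                return nil
            }
            time.Sleep(250 * time.Millisecond)
        }
        return fmt.Errorf("timed out waiting for %s", path)
    }

    func main() {
        if err := waitForPath("/var/run/crio/crio.sock", 60*time.Second); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Println("crio socket is ready")
    }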
	I0729 19:44:53.227019 1119948 start.go:563] Will wait 60s for crictl version
	I0729 19:44:53.227079 1119948 ssh_runner.go:195] Run: which crictl
	I0729 19:44:53.230920 1119948 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 19:44:53.271242 1119948 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 19:44:53.271338 1119948 ssh_runner.go:195] Run: crio --version
	I0729 19:44:53.301110 1119948 ssh_runner.go:195] Run: crio --version
	I0729 19:44:53.333725 1119948 out.go:177] * Preparing Kubernetes v1.31.0-beta.0 on CRI-O 1.29.1 ...
	I0729 19:44:53.334659 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetIP
	I0729 19:44:53.337115 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:53.337559 1119948 main.go:141] libmachine: (no-preload-843792) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:0e:8c", ip: ""} in network mk-no-preload-843792: {Iface:virbr2 ExpiryTime:2024-07-29 20:44:43 +0000 UTC Type:0 Mac:52:54:00:ae:0e:8c Iaid: IPaddr:192.168.50.248 Prefix:24 Hostname:no-preload-843792 Clientid:01:52:54:00:ae:0e:8c}
	I0729 19:44:53.337593 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined IP address 192.168.50.248 and MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:44:53.337844 1119948 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0729 19:44:53.341989 1119948 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
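The bash one-liner above refreshes the host.minikube.internal entry in /etc/hosts: it drops any stale line and appends the current gateway IP. A sketch of the same update in Go, assuming the IP and hostname shown in the log (not the helper minikube itself uses):

    // Ensure /etc/hosts maps host.minikube.internal to the expected IP.
    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        const hostsPath = "/etc/hosts"
        const entry = "192.168.50.1\thost.minikube.internal"

        data, err := os.ReadFile(hostsPath)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            return
        }
        var kept []string
        for _, line := range strings.Split(string(data), "\n") {
            // Drop the old entry, mirroring the `grep -v` in the log.
            if strings.HasSuffix(line, "\thost.minikube.internal") {
                continue
            }
            kept = append(kept, line)
        }
        // Append the fresh entry and write the file back.
        out := strings.TrimRight(strings.Join(kept, "\n"), "\n") + "\n" + entry + "\n"
        if err := os.WriteFile(hostsPath, []byte(out), 0o644); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }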
	I0729 19:44:53.355060 1119948 kubeadm.go:883] updating cluster {Name:no-preload-843792 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.0-beta.0 ClusterName:no-preload-843792 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.248 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2628
0h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 19:44:53.355229 1119948 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0729 19:44:53.355288 1119948 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 19:44:53.388980 1119948 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0-beta.0". assuming images are not preloaded.
	I0729 19:44:53.389006 1119948 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0-beta.0 registry.k8s.io/kube-controller-manager:v1.31.0-beta.0 registry.k8s.io/kube-scheduler:v1.31.0-beta.0 registry.k8s.io/kube-proxy:v1.31.0-beta.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.14-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0729 19:44:53.389048 1119948 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 19:44:53.389101 1119948 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0729 19:44:53.389112 1119948 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0729 19:44:53.389137 1119948 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.14-0
	I0729 19:44:53.389119 1119948 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0729 19:44:53.389271 1119948 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0729 19:44:53.389350 1119948 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0729 19:44:53.389605 1119948 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0729 19:44:53.390514 1119948 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0729 19:44:53.390570 1119948 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0729 19:44:53.390602 1119948 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0729 19:44:53.390527 1119948 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 19:44:53.390706 1119948 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0729 19:44:53.390732 1119948 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0729 19:44:53.390767 1119948 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.14-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.14-0
	I0729 19:44:53.391084 1119948 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0729 19:44:53.549235 1119948 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0729 19:44:53.572353 1119948 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0729 19:44:53.579226 1119948 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0729 19:44:53.596966 1119948 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0729 19:44:53.609083 1119948 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0729 19:44:53.616167 1119948 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.14-0
	I0729 19:44:53.618946 1119948 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" does not exist at hash "63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5" in container runtime
	I0729 19:44:53.618985 1119948 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0729 19:44:53.619029 1119948 ssh_runner.go:195] Run: which crictl
	I0729 19:44:53.635187 1119948 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0729 19:44:53.670750 1119948 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0729 19:44:53.670796 1119948 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0729 19:44:53.670859 1119948 ssh_runner.go:195] Run: which crictl
	I0729 19:44:53.672585 1119948 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" does not exist at hash "d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b" in container runtime
	I0729 19:44:53.672626 1119948 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0729 19:44:53.672669 1119948 ssh_runner.go:195] Run: which crictl
	I0729 19:44:53.695596 1119948 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" does not exist at hash "f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938" in container runtime
	I0729 19:44:53.695640 1119948 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0729 19:44:53.695685 1119948 ssh_runner.go:195] Run: which crictl
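Each "needs transfer" line above means the image is either missing from the runtime or present under a different ID than the cache expects, so it has to be loaded from the local tarball. A sketch of that check using the same `podman image inspect --format {{.Id}}` command visible in the log (the expected ID below is just an example value copied from the log line):

    // Decide whether a cached image must be transferred into the runtime.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // imageNeedsTransfer reports whether the image is absent or its local ID
    // differs from the expected one.
    func imageNeedsTransfer(image, expectedID string) bool {
        out, err := exec.Command("sudo", "podman", "image", "inspect",
            "--format", "{{.Id}}", image).Output()
        if err != nil {
            return true // not present in the runtime at all
        }
        return strings.TrimSpace(string(out)) != expectedID
    }

    func main() {
        img := "registry.k8s.io/kube-apiserver:v1.31.0-beta.0"
        // Example ID only; the real value comes from the cached image manifest.
        want := "f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938"
        fmt.Printf("%s needs transfer: %v\n", img, imageNeedsTransfer(img, want))
    }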
	I0729 19:44:51.138015 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:44:53.638298 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:44:52.279881 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:44:54.778657 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
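The recurring pod_ready lines above are another test profile polling a metrics-server pod until its Ready condition turns True (it never does in these failing runs, hence the repeated "Ready":"False"). A minimal client-go sketch of such a readiness loop, assuming a reachable kubeconfig; the kubeconfig path is illustrative and this is not the helper the tests use:

    // Poll a pod in kube-system until its Ready condition is True or we give up.
    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func podReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // illustrative path
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        name := "metrics-server-569cc877fc-jsvnd" // pod name from the log
        for i := 0; i < 120; i++ {
            pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), name, metav1.GetOptions{})
            if err == nil && podReady(pod) {
                fmt.Println("pod is Ready")
                return
            }
            time.Sleep(2 * time.Second)
        }
        fmt.Println("gave up waiting: status stayed \"Ready\":\"False\"")
    }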
	I0729 19:44:53.281345 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:53.781221 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:54.281939 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:54.781091 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:55.281282 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:55.781375 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:56.282072 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:56.781207 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:57.281436 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:57.781372 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:53.720675 1119948 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 19:44:53.840593 1119948 cache_images.go:116] "registry.k8s.io/etcd:3.5.14-0" needs transfer: "registry.k8s.io/etcd:3.5.14-0" does not exist at hash "cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa" in container runtime
	I0729 19:44:53.840643 1119948 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.14-0
	I0729 19:44:53.840672 1119948 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0729 19:44:53.840687 1119948 ssh_runner.go:195] Run: which crictl
	I0729 19:44:53.840775 1119948 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0-beta.0" does not exist at hash "c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899" in container runtime
	I0729 19:44:53.840812 1119948 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0729 19:44:53.840821 1119948 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0729 19:44:53.840857 1119948 ssh_runner.go:195] Run: which crictl
	I0729 19:44:53.840879 1119948 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0729 19:44:53.840923 1119948 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0729 19:44:53.840940 1119948 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 19:44:53.840957 1119948 ssh_runner.go:195] Run: which crictl
	I0729 19:44:53.840924 1119948 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0729 19:44:53.918733 1119948 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0729 19:44:53.918808 1119948 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.14-0
	I0729 19:44:53.918822 1119948 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0729 19:44:53.918738 1119948 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0729 19:44:53.918756 1119948 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 19:44:53.934123 1119948 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0729 19:44:53.934149 1119948 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0729 19:44:54.071240 1119948 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.14-0
	I0729 19:44:54.071240 1119948 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0729 19:44:54.071338 1119948 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0729 19:44:54.071326 1119948 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0729 19:44:54.071427 1119948 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 19:44:54.093839 1119948 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0729 19:44:54.093863 1119948 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0729 19:44:54.210655 1119948 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0
	I0729 19:44:54.210775 1119948 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0729 19:44:54.212134 1119948 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.14-0
	I0729 19:44:54.217809 1119948 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0
	I0729 19:44:54.217912 1119948 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 19:44:54.217935 1119948 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0729 19:44:54.218206 1119948 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0
	I0729 19:44:54.218301 1119948 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0729 19:44:54.260623 1119948 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0 (exists)
	I0729 19:44:54.260652 1119948 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0729 19:44:54.260652 1119948 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0729 19:44:54.260686 1119948 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0729 19:44:54.260778 1119948 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0729 19:44:54.260865 1119948 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0729 19:44:54.306379 1119948 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0729 19:44:54.306385 1119948 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0 (exists)
	I0729 19:44:54.306392 1119948 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0 (exists)
	I0729 19:44:54.306493 1119948 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0729 19:44:54.306689 1119948 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0
	I0729 19:44:54.306778 1119948 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.14-0
	I0729 19:44:56.574611 1119948 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0: (2.313899996s)
	I0729 19:44:56.574645 1119948 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 from cache
	I0729 19:44:56.574650 1119948 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1: (2.313771552s)
	I0729 19:44:56.574670 1119948 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0729 19:44:56.574611 1119948 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0-beta.0: (2.313935705s)
	I0729 19:44:56.574683 1119948 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0729 19:44:56.574705 1119948 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (2.268197753s)
	I0729 19:44:56.574716 1119948 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0
	I0729 19:44:56.574719 1119948 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0729 19:44:56.574722 1119948 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0729 19:44:56.574739 1119948 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.14-0: (2.267948475s)
	I0729 19:44:56.574750 1119948 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.14-0 (exists)
	I0729 19:44:56.574796 1119948 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0729 19:44:58.641782 1119948 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0: (2.067036887s)
	I0729 19:44:58.641818 1119948 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 from cache
	I0729 19:44:58.641845 1119948 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0729 19:44:58.641846 1119948 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0: (2.0670173s)
	I0729 19:44:58.641878 1119948 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0 (exists)
	I0729 19:44:58.641896 1119948 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0729 19:44:56.140488 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:44:58.637284 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:44:57.279852 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:44:59.777891 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:44:58.281852 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:58.781637 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:59.281892 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:44:59.781645 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:00.281405 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:00.782060 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:01.281396 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:01.781327 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:02.281709 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:02.781786 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:00.096431 1119948 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0: (1.454505335s)
	I0729 19:45:00.096482 1119948 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 from cache
	I0729 19:45:00.096522 1119948 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0729 19:45:00.096568 1119948 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0729 19:45:01.962972 1119948 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.866379068s)
	I0729 19:45:01.963000 1119948 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0729 19:45:01.963026 1119948 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0729 19:45:01.963078 1119948 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0729 19:45:02.916627 1119948 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0729 19:45:02.916678 1119948 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.14-0
	I0729 19:45:02.916735 1119948 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0
	I0729 19:45:00.638676 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:03.137885 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:01.779615 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:04.279431 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:03.281567 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:03.781335 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:04.281681 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:04.781803 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:05.281115 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:05.781161 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:06.281699 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:06.781869 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:07.281182 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:07.781016 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:06.397189 1119948 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0: (3.480421154s)
	I0729 19:45:06.397236 1119948 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0 from cache
	I0729 19:45:06.397280 1119948 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0729 19:45:06.397357 1119948 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0729 19:45:08.272053 1119948 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0: (1.874662469s)
	I0729 19:45:08.272086 1119948 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 from cache
	I0729 19:45:08.272116 1119948 cache_images.go:123] Successfully loaded all cached images
	I0729 19:45:08.272123 1119948 cache_images.go:92] duration metric: took 14.883104578s to LoadCachedImages
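The LoadCachedImages phase above streams each cached tarball under /var/lib/minikube/images into the runtime with `sudo podman load -i`, one image at a time, which is why it takes around 15s here. A sketch of that loop (image list and directory copied from the log; error handling kept minimal, not minikube's actual cache_images.go):

    // Load the cached image tarballs into the container runtime via podman.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
    )

    func main() {
        dir := "/var/lib/minikube/images"
        tars := []string{
            "kube-controller-manager_v1.31.0-beta.0",
            "kube-apiserver_v1.31.0-beta.0",
            "kube-scheduler_v1.31.0-beta.0",
            "coredns_v1.11.1",
            "storage-provisioner_v5",
            "etcd_3.5.14-0",
            "kube-proxy_v1.31.0-beta.0",
        }
        for _, t := range tars {
            path := filepath.Join(dir, t)
            cmd := exec.Command("sudo", "podman", "load", "-i", path)
            cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
            if err := cmd.Run(); err != nil {
                fmt.Fprintf(os.Stderr, "load %s: %v\n", path, err)
            }
        }
    }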
	I0729 19:45:08.272135 1119948 kubeadm.go:934] updating node { 192.168.50.248 8443 v1.31.0-beta.0 crio true true} ...
	I0729 19:45:08.272293 1119948 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-843792 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.248
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-843792 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 19:45:08.272378 1119948 ssh_runner.go:195] Run: crio config
	I0729 19:45:08.340838 1119948 cni.go:84] Creating CNI manager for ""
	I0729 19:45:08.340864 1119948 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 19:45:08.340876 1119948 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 19:45:08.340905 1119948 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.248 APIServerPort:8443 KubernetesVersion:v1.31.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-843792 NodeName:no-preload-843792 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.248"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.248 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt Stati
cPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 19:45:08.341094 1119948 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.248
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-843792"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.248
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.248"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
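The kubeadm config above is rendered from the cluster parameters listed at kubeadm.go:181 (node IP, node name, CRI socket, pod and service CIDRs, and so on). As a toy illustration of how those values flow into the YAML, here is a small text/template sketch that renders just the InitConfiguration fragment; the field names mirror the log, but the template itself is illustrative and not minikube's actual generator:

    // Render a minimal InitConfiguration from node parameters.
    package main

    import (
        "os"
        "text/template"
    )

    type nodeParams struct {
        AdvertiseAddress string
        BindPort         int
        NodeName         string
        CRISocket        string
    }

    const initConfigTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.AdvertiseAddress}}
      bindPort: {{.BindPort}}
    nodeRegistration:
      criSocket: {{.CRISocket}}
      name: "{{.NodeName}}"
      taints: []
    `

    func main() {
        p := nodeParams{
            AdvertiseAddress: "192.168.50.248",
            BindPort:         8443,
            NodeName:         "no-preload-843792",
            CRISocket:        "unix:///var/run/crio/crio.sock",
        }
        tmpl := template.Must(template.New("init").Parse(initConfigTmpl))
        if err := tmpl.Execute(os.Stdout, p); err != nil {
            panic(err)
        }
    }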
	
	I0729 19:45:08.341175 1119948 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0-beta.0
	I0729 19:45:08.353738 1119948 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 19:45:08.353819 1119948 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 19:45:08.365340 1119948 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (324 bytes)
	I0729 19:45:08.383516 1119948 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I0729 19:45:08.401060 1119948 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2168 bytes)
	I0729 19:45:08.419420 1119948 ssh_runner.go:195] Run: grep 192.168.50.248	control-plane.minikube.internal$ /etc/hosts
	I0729 19:45:08.423355 1119948 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.248	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 19:45:08.437286 1119948 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 19:45:08.569176 1119948 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 19:45:08.586925 1119948 certs.go:68] Setting up /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/no-preload-843792 for IP: 192.168.50.248
	I0729 19:45:08.586949 1119948 certs.go:194] generating shared ca certs ...
	I0729 19:45:08.586969 1119948 certs.go:226] acquiring lock for ca certs: {Name:mkd1f0b3d7e82ac23e713dd6b75409e103935b02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 19:45:08.587196 1119948 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.key
	I0729 19:45:08.587277 1119948 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/proxy-client-ca.key
	I0729 19:45:08.587294 1119948 certs.go:256] generating profile certs ...
	I0729 19:45:08.587388 1119948 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/no-preload-843792/client.key
	I0729 19:45:08.587476 1119948 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/no-preload-843792/apiserver.key.f52ec7e5
	I0729 19:45:08.587520 1119948 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/no-preload-843792/proxy-client.key
	I0729 19:45:08.587686 1119948 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/1062272.pem (1338 bytes)
	W0729 19:45:08.587731 1119948 certs.go:480] ignoring /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/1062272_empty.pem, impossibly tiny 0 bytes
	I0729 19:45:08.587741 1119948 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 19:45:08.587764 1119948 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/ca.pem (1082 bytes)
	I0729 19:45:08.587788 1119948 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/cert.pem (1123 bytes)
	I0729 19:45:08.587807 1119948 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/key.pem (1679 bytes)
	I0729 19:45:08.587842 1119948 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1055011/.minikube/files/etc/ssl/certs/10622722.pem (1708 bytes)
	I0729 19:45:08.588560 1119948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 19:45:08.618457 1119948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0729 19:45:08.664632 1119948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 19:45:08.696094 1119948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0729 19:45:05.639914 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:08.138498 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:06.779766 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:08.781373 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:10.782303 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:08.281476 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:08.781100 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:09.281248 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:09.781661 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:10.281141 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:10.781357 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:11.281922 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:11.781751 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:12.281024 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:12.781942 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:08.732476 1119948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/no-preload-843792/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0729 19:45:08.761190 1119948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/no-preload-843792/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0729 19:45:08.792866 1119948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/no-preload-843792/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 19:45:08.819753 1119948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/no-preload-843792/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0729 19:45:08.844891 1119948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/certs/1062272.pem --> /usr/share/ca-certificates/1062272.pem (1338 bytes)
	I0729 19:45:08.868688 1119948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/files/etc/ssl/certs/10622722.pem --> /usr/share/ca-certificates/10622722.pem (1708 bytes)
	I0729 19:45:08.893523 1119948 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 19:45:08.917663 1119948 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 19:45:08.935488 1119948 ssh_runner.go:195] Run: openssl version
	I0729 19:45:08.941415 1119948 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1062272.pem && ln -fs /usr/share/ca-certificates/1062272.pem /etc/ssl/certs/1062272.pem"
	I0729 19:45:08.952713 1119948 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1062272.pem
	I0729 19:45:08.957226 1119948 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 18:30 /usr/share/ca-certificates/1062272.pem
	I0729 19:45:08.957288 1119948 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1062272.pem
	I0729 19:45:08.963014 1119948 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1062272.pem /etc/ssl/certs/51391683.0"
	I0729 19:45:08.974542 1119948 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10622722.pem && ln -fs /usr/share/ca-certificates/10622722.pem /etc/ssl/certs/10622722.pem"
	I0729 19:45:08.985605 1119948 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10622722.pem
	I0729 19:45:08.990121 1119948 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 18:30 /usr/share/ca-certificates/10622722.pem
	I0729 19:45:08.990170 1119948 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10622722.pem
	I0729 19:45:08.995715 1119948 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10622722.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 19:45:09.006949 1119948 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 19:45:09.018222 1119948 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 19:45:09.023160 1119948 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 18:17 /usr/share/ca-certificates/minikubeCA.pem
	I0729 19:45:09.023225 1119948 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 19:45:09.028770 1119948 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 19:45:09.039653 1119948 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 19:45:09.044577 1119948 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 19:45:09.050692 1119948 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 19:45:09.057177 1119948 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 19:45:09.063464 1119948 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 19:45:09.069732 1119948 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 19:45:09.075998 1119948 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
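The openssl `-checkend 86400` runs above verify that each control-plane certificate remains valid for at least another day before reusing it. The equivalent check in Go with crypto/x509, shown for one of the paths from the log (a sketch, not minikube's certs.go):

    // Report whether a PEM certificate expires within the given duration.
    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    func expiresWithin(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("%s: no PEM data", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        if soon {
            fmt.Println("certificate expires within 24h: regenerate")
        } else {
            fmt.Println("certificate is valid for at least another day")
        }
    }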
	I0729 19:45:09.081759 1119948 kubeadm.go:392] StartCluster: {Name:no-preload-843792 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.0-beta.0 ClusterName:no-preload-843792 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.248 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0
m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 19:45:09.081855 1119948 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 19:45:09.081922 1119948 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 19:45:09.121153 1119948 cri.go:89] found id: ""
	I0729 19:45:09.121242 1119948 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 19:45:09.131866 1119948 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0729 19:45:09.131892 1119948 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0729 19:45:09.131951 1119948 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0729 19:45:09.142306 1119948 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0729 19:45:09.143769 1119948 kubeconfig.go:125] found "no-preload-843792" server: "https://192.168.50.248:8443"
	I0729 19:45:09.146733 1119948 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0729 19:45:09.156058 1119948 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.248
	I0729 19:45:09.156096 1119948 kubeadm.go:1160] stopping kube-system containers ...
	I0729 19:45:09.156113 1119948 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0729 19:45:09.156171 1119948 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 19:45:09.204791 1119948 cri.go:89] found id: ""
	I0729 19:45:09.204881 1119948 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0729 19:45:09.222988 1119948 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 19:45:09.234800 1119948 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 19:45:09.234825 1119948 kubeadm.go:157] found existing configuration files:
	
	I0729 19:45:09.234898 1119948 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 19:45:09.244868 1119948 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 19:45:09.244931 1119948 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 19:45:09.255368 1119948 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 19:45:09.265442 1119948 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 19:45:09.265515 1119948 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 19:45:09.276827 1119948 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 19:45:09.287989 1119948 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 19:45:09.288057 1119948 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 19:45:09.297736 1119948 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 19:45:09.307856 1119948 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 19:45:09.307923 1119948 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 19:45:09.318101 1119948 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 19:45:09.328189 1119948 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 19:45:09.441974 1119948 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 19:45:10.593961 1119948 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.151939649s)
	I0729 19:45:10.594045 1119948 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0729 19:45:10.807397 1119948 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 19:45:10.880145 1119948 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0729 19:45:10.962104 1119948 api_server.go:52] waiting for apiserver process to appear ...
	I0729 19:45:10.962209 1119948 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:11.462937 1119948 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:11.962909 1119948 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:12.006882 1119948 api_server.go:72] duration metric: took 1.044780287s to wait for apiserver process to appear ...
	I0729 19:45:12.006918 1119948 api_server.go:88] waiting for apiserver healthz status ...
	I0729 19:45:12.006945 1119948 api_server.go:253] Checking apiserver healthz at https://192.168.50.248:8443/healthz ...
	I0729 19:45:12.007577 1119948 api_server.go:269] stopped: https://192.168.50.248:8443/healthz: Get "https://192.168.50.248:8443/healthz": dial tcp 192.168.50.248:8443: connect: connection refused
	I0729 19:45:12.507374 1119948 api_server.go:253] Checking apiserver healthz at https://192.168.50.248:8443/healthz ...
	I0729 19:45:10.637684 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:12.638011 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:14.638569 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:13.278494 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:15.778675 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:15.042675 1119948 api_server.go:279] https://192.168.50.248:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 19:45:15.042710 1119948 api_server.go:103] status: https://192.168.50.248:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 19:45:15.042731 1119948 api_server.go:253] Checking apiserver healthz at https://192.168.50.248:8443/healthz ...
	I0729 19:45:15.090118 1119948 api_server.go:279] https://192.168.50.248:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 19:45:15.090151 1119948 api_server.go:103] status: https://192.168.50.248:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 19:45:15.507702 1119948 api_server.go:253] Checking apiserver healthz at https://192.168.50.248:8443/healthz ...
	I0729 19:45:15.512794 1119948 api_server.go:279] https://192.168.50.248:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 19:45:15.512822 1119948 api_server.go:103] status: https://192.168.50.248:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 19:45:16.008064 1119948 api_server.go:253] Checking apiserver healthz at https://192.168.50.248:8443/healthz ...
	I0729 19:45:16.018543 1119948 api_server.go:279] https://192.168.50.248:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 19:45:16.018578 1119948 api_server.go:103] status: https://192.168.50.248:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 19:45:16.508055 1119948 api_server.go:253] Checking apiserver healthz at https://192.168.50.248:8443/healthz ...
	I0729 19:45:16.519925 1119948 api_server.go:279] https://192.168.50.248:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 19:45:16.519954 1119948 api_server.go:103] status: https://192.168.50.248:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 19:45:17.007959 1119948 api_server.go:253] Checking apiserver healthz at https://192.168.50.248:8443/healthz ...
	I0729 19:45:17.013159 1119948 api_server.go:279] https://192.168.50.248:8443/healthz returned 200:
	ok
	I0729 19:45:17.022691 1119948 api_server.go:141] control plane version: v1.31.0-beta.0
	I0729 19:45:17.022726 1119948 api_server.go:131] duration metric: took 5.015799715s to wait for apiserver health ...
	I0729 19:45:17.022737 1119948 cni.go:84] Creating CNI manager for ""
	I0729 19:45:17.022746 1119948 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 19:45:17.024618 1119948 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
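
The healthz progression above — connection refused while the apiserver restarts, then 403 for the anonymous probe before RBAC bootstrap completes, then 500 while individual poststarthook checks still fail, and finally 200 "ok" — is the normal readiness sequence for a restarted control plane. The sketch below is a minimal, illustrative poll of the same endpoint; it assumes a plain HTTPS client with certificate verification disabled and is not minikube's actual api_server.go code.

// Minimal sketch (not minikube's implementation): poll /healthz until the
// kube-apiserver reports healthy, treating connection errors, 403 and 500
// responses as "not ready yet". Endpoint and timeout are assumptions.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitForHealthz(endpoint string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Anonymous probe against the apiserver's self-signed certificate.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(endpoint + "/healthz")
		if err != nil {
			time.Sleep(500 * time.Millisecond) // e.g. "connection refused" during restart
			continue
		}
		body, _ := io.ReadAll(resp.Body)
		resp.Body.Close()
		if resp.StatusCode == http.StatusOK {
			return nil // body is typically just "ok"
		}
		// 403 (anonymous user before RBAC bootstrap) and 500 (failing
		// poststarthook checks) both mean the apiserver is still starting.
		fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver %s not healthy within %s", endpoint, timeout)
}

func main() {
	// Address taken from the log above; usage is illustrative only.
	if err := waitForHealthz("https://192.168.50.248:8443", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}
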
	I0729 19:45:13.281834 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:13.781128 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:14.281372 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:14.781037 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:15.281715 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:15.781353 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:16.281845 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:16.781224 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:17.281710 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:17.781353 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:17.025951 1119948 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 19:45:17.037020 1119948 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0729 19:45:17.075438 1119948 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 19:45:17.098501 1119948 system_pods.go:59] 8 kube-system pods found
	I0729 19:45:17.098541 1119948 system_pods.go:61] "coredns-5cfdc65f69-j6m2k" [1fb28c80-116d-46b7-a939-6ff4ffa80883] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0729 19:45:17.098549 1119948 system_pods.go:61] "etcd-no-preload-843792" [68470ab3-9513-4504-9d1e-dbb896b8ae6b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0729 19:45:17.098557 1119948 system_pods.go:61] "kube-apiserver-no-preload-843792" [6cc37d70-bc14-4a06-987d-320a2a11b533] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0729 19:45:17.098563 1119948 system_pods.go:61] "kube-controller-manager-no-preload-843792" [5c115624-c9e9-4019-9783-35cc825fb1df] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0729 19:45:17.098570 1119948 system_pods.go:61] "kube-proxy-6kzvz" [4f0006c3-1172-48b6-8631-643090032c58] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0729 19:45:17.098579 1119948 system_pods.go:61] "kube-scheduler-no-preload-843792" [5c2a4c59-a525-4246-9d11-50fddef53815] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0729 19:45:17.098584 1119948 system_pods.go:61] "metrics-server-78fcd8795b-pcx9w" [7d138038-71ad-4279-9562-f3864d5a0024] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 19:45:17.098591 1119948 system_pods.go:61] "storage-provisioner" [289822fa-8ed4-4abe-970e-8b6d9a9fa51e] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0729 19:45:17.098598 1119948 system_pods.go:74] duration metric: took 23.126612ms to wait for pod list to return data ...
	I0729 19:45:17.098610 1119948 node_conditions.go:102] verifying NodePressure condition ...
	I0729 19:45:17.125364 1119948 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 19:45:17.125395 1119948 node_conditions.go:123] node cpu capacity is 2
	I0729 19:45:17.125405 1119948 node_conditions.go:105] duration metric: took 26.790642ms to run NodePressure ...
	I0729 19:45:17.125425 1119948 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 19:45:17.467261 1119948 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0729 19:45:17.478831 1119948 kubeadm.go:739] kubelet initialised
	I0729 19:45:17.478871 1119948 kubeadm.go:740] duration metric: took 11.576985ms waiting for restarted kubelet to initialise ...
	I0729 19:45:17.478883 1119948 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 19:45:17.483948 1119948 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5cfdc65f69-j6m2k" in "kube-system" namespace to be "Ready" ...
	I0729 19:45:16.639536 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:18.641996 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:18.279857 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:20.779054 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:18.281504 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:18.781826 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:19.281901 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:19.782011 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:20.281384 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:20.781352 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:21.281834 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:21.781603 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:22.281152 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:22.781351 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:19.493011 1119948 pod_ready.go:102] pod "coredns-5cfdc65f69-j6m2k" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:21.992979 1119948 pod_ready.go:102] pod "coredns-5cfdc65f69-j6m2k" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:21.139438 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:23.636771 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:22.779640 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:24.780814 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:23.281111 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:23.781931 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:24.281455 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:24.781346 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:25.281633 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:25.781092 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:26.281145 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:26.781235 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:27.281327 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:27.781099 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:24.491231 1119948 pod_ready.go:102] pod "coredns-5cfdc65f69-j6m2k" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:26.991237 1119948 pod_ready.go:102] pod "coredns-5cfdc65f69-j6m2k" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:28.490384 1119948 pod_ready.go:92] pod "coredns-5cfdc65f69-j6m2k" in "kube-system" namespace has status "Ready":"True"
	I0729 19:45:28.490413 1119948 pod_ready.go:81] duration metric: took 11.006435855s for pod "coredns-5cfdc65f69-j6m2k" in "kube-system" namespace to be "Ready" ...
	I0729 19:45:28.490425 1119948 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-843792" in "kube-system" namespace to be "Ready" ...
	I0729 19:45:28.495144 1119948 pod_ready.go:92] pod "etcd-no-preload-843792" in "kube-system" namespace has status "Ready":"True"
	I0729 19:45:28.495168 1119948 pod_ready.go:81] duration metric: took 4.736893ms for pod "etcd-no-preload-843792" in "kube-system" namespace to be "Ready" ...
	I0729 19:45:28.495177 1119948 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-843792" in "kube-system" namespace to be "Ready" ...
	I0729 19:45:28.499249 1119948 pod_ready.go:92] pod "kube-apiserver-no-preload-843792" in "kube-system" namespace has status "Ready":"True"
	I0729 19:45:28.499272 1119948 pod_ready.go:81] duration metric: took 4.089379ms for pod "kube-apiserver-no-preload-843792" in "kube-system" namespace to be "Ready" ...
	I0729 19:45:28.499280 1119948 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-843792" in "kube-system" namespace to be "Ready" ...
	I0729 19:45:25.637886 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:28.138043 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:27.279850 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:29.778397 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:28.281600 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:28.781033 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:29.281086 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:29.781358 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:30.281478 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:30.781094 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:31.281816 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:31.781092 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:32.281012 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:32.781266 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:29.505726 1119948 pod_ready.go:92] pod "kube-controller-manager-no-preload-843792" in "kube-system" namespace has status "Ready":"True"
	I0729 19:45:29.505752 1119948 pod_ready.go:81] duration metric: took 1.0064644s for pod "kube-controller-manager-no-preload-843792" in "kube-system" namespace to be "Ready" ...
	I0729 19:45:29.505764 1119948 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-6kzvz" in "kube-system" namespace to be "Ready" ...
	I0729 19:45:29.510705 1119948 pod_ready.go:92] pod "kube-proxy-6kzvz" in "kube-system" namespace has status "Ready":"True"
	I0729 19:45:29.510725 1119948 pod_ready.go:81] duration metric: took 4.953497ms for pod "kube-proxy-6kzvz" in "kube-system" namespace to be "Ready" ...
	I0729 19:45:29.510735 1119948 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-843792" in "kube-system" namespace to be "Ready" ...
	I0729 19:45:29.688555 1119948 pod_ready.go:92] pod "kube-scheduler-no-preload-843792" in "kube-system" namespace has status "Ready":"True"
	I0729 19:45:29.688579 1119948 pod_ready.go:81] duration metric: took 177.837031ms for pod "kube-scheduler-no-preload-843792" in "kube-system" namespace to be "Ready" ...
	I0729 19:45:29.688593 1119948 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace to be "Ready" ...
	I0729 19:45:31.695505 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
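
The pod_ready lines above repeatedly check whether each system pod (and, in the failing cases, the metrics-server pod) has reached the Ready condition. A minimal client-go sketch of such a wait is shown below; the kubeconfig path, namespace, pod name, and poll interval are placeholders for illustration, not minikube's pod_ready.go logic.

// Minimal sketch (not minikube's pod_ready.go): poll a pod until its
// PodReady condition is True or a timeout expires.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitForPodReady(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil {
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
			fmt.Printf("pod %q has status \"Ready\":\"False\"\n", name)
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pod %q in %q not Ready within %s", name, ns, timeout)
}

func main() {
	// Kubeconfig path and pod name are assumptions for illustration.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	_ = waitForPodReady(cs, "kube-system", "metrics-server-78fcd8795b-pcx9w", 4*time.Minute)
}
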
	I0729 19:45:30.637213 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:32.638747 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:31.778641 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:34.277964 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:33.281410 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:33.781923 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:34.281471 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:34.781303 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:35.281404 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:35.781727 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:36.281960 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:36.781632 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:37.281624 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:37.781232 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:34.196033 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:36.697003 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:35.137135 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:37.137857 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:39.138563 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:36.278607 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:38.278960 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:40.280428 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:38.281103 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:38.781134 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:39.281907 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:39.781863 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:40.281104 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:40.781928 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:41.281757 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:45:41.281864 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:45:41.322903 1120970 cri.go:89] found id: ""
	I0729 19:45:41.322929 1120970 logs.go:276] 0 containers: []
	W0729 19:45:41.322938 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:45:41.322945 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:45:41.323016 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:45:41.359651 1120970 cri.go:89] found id: ""
	I0729 19:45:41.359679 1120970 logs.go:276] 0 containers: []
	W0729 19:45:41.359687 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:45:41.359692 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:45:41.359744 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:45:41.402317 1120970 cri.go:89] found id: ""
	I0729 19:45:41.402358 1120970 logs.go:276] 0 containers: []
	W0729 19:45:41.402370 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:45:41.402380 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:45:41.402454 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:45:41.438796 1120970 cri.go:89] found id: ""
	I0729 19:45:41.438823 1120970 logs.go:276] 0 containers: []
	W0729 19:45:41.438833 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:45:41.438839 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:45:41.438931 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:45:41.477648 1120970 cri.go:89] found id: ""
	I0729 19:45:41.477677 1120970 logs.go:276] 0 containers: []
	W0729 19:45:41.477685 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:45:41.477692 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:45:41.477761 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:45:41.517603 1120970 cri.go:89] found id: ""
	I0729 19:45:41.517635 1120970 logs.go:276] 0 containers: []
	W0729 19:45:41.517646 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:45:41.517654 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:45:41.517727 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:45:41.553106 1120970 cri.go:89] found id: ""
	I0729 19:45:41.553140 1120970 logs.go:276] 0 containers: []
	W0729 19:45:41.553151 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:45:41.553158 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:45:41.553226 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:45:41.595007 1120970 cri.go:89] found id: ""
	I0729 19:45:41.595035 1120970 logs.go:276] 0 containers: []
	W0729 19:45:41.595044 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:45:41.595054 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:45:41.595069 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:45:41.634927 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:45:41.634966 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:45:41.685871 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:45:41.685906 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:45:41.700701 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:45:41.700735 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:45:41.816575 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:45:41.816598 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:45:41.816611 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
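
Because no kube-apiserver process is found by pgrep, the loop above falls back on each pass to listing CRI containers and gathering kubelet, dmesg, describe-nodes, and CRI-O logs. The crictl invocation it runs is shown verbatim in the log; the sketch below simply wraps that same command via os/exec and treats empty output as zero matching containers. It is an illustration only, not minikube's cri.go code, and it requires crictl plus root privileges on the node being inspected.

// Minimal sketch (not minikube's cri.go): list CRI container IDs matching a
// name filter using the same crictl invocation seen in the log, and report
// how many were found.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func listContainers(name string) ([]string, error) {
	// Same flags as in the log: all states, IDs only, filtered by name.
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	// crictl prints one container ID per line; empty output means none found.
	return strings.Fields(string(out)), nil
}

func main() {
	for _, name := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
		ids, err := listContainers(name)
		if err != nil {
			fmt.Printf("listing %q failed: %v\n", name, err)
			continue
		}
		fmt.Printf("%s: %d containers: %v\n", name, len(ids), ids)
	}
}
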
	I0729 19:45:39.199863 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:41.200303 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:43.695592 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:41.637651 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:44.138141 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:42.778550 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:44.779186 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:44.396592 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:44.410567 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:45:44.410644 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:45:44.447450 1120970 cri.go:89] found id: ""
	I0729 19:45:44.447487 1120970 logs.go:276] 0 containers: []
	W0729 19:45:44.447499 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:45:44.447507 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:45:44.447579 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:45:44.487679 1120970 cri.go:89] found id: ""
	I0729 19:45:44.487714 1120970 logs.go:276] 0 containers: []
	W0729 19:45:44.487725 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:45:44.487732 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:45:44.487806 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:45:44.527170 1120970 cri.go:89] found id: ""
	I0729 19:45:44.527211 1120970 logs.go:276] 0 containers: []
	W0729 19:45:44.527219 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:45:44.527226 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:45:44.527282 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:45:44.567585 1120970 cri.go:89] found id: ""
	I0729 19:45:44.567613 1120970 logs.go:276] 0 containers: []
	W0729 19:45:44.567622 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:45:44.567629 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:45:44.567680 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:45:44.605003 1120970 cri.go:89] found id: ""
	I0729 19:45:44.605031 1120970 logs.go:276] 0 containers: []
	W0729 19:45:44.605041 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:45:44.605049 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:45:44.605121 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:45:44.643862 1120970 cri.go:89] found id: ""
	I0729 19:45:44.643887 1120970 logs.go:276] 0 containers: []
	W0729 19:45:44.643894 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:45:44.643901 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:45:44.643950 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:45:44.679814 1120970 cri.go:89] found id: ""
	I0729 19:45:44.679845 1120970 logs.go:276] 0 containers: []
	W0729 19:45:44.679855 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:45:44.679862 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:45:44.679926 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:45:44.714679 1120970 cri.go:89] found id: ""
	I0729 19:45:44.714709 1120970 logs.go:276] 0 containers: []
	W0729 19:45:44.714719 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:45:44.714729 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:45:44.714747 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:45:44.766381 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:45:44.766424 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:45:44.782337 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:45:44.782369 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:45:44.854487 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:45:44.854509 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:45:44.854522 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:45:44.935043 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:45:44.935082 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:45:47.481158 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:47.496559 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:45:47.496649 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:45:47.531949 1120970 cri.go:89] found id: ""
	I0729 19:45:47.531981 1120970 logs.go:276] 0 containers: []
	W0729 19:45:47.531990 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:45:47.531996 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:45:47.532050 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:45:47.571424 1120970 cri.go:89] found id: ""
	I0729 19:45:47.571451 1120970 logs.go:276] 0 containers: []
	W0729 19:45:47.571459 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:45:47.571465 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:45:47.571517 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:45:47.610439 1120970 cri.go:89] found id: ""
	I0729 19:45:47.610474 1120970 logs.go:276] 0 containers: []
	W0729 19:45:47.610485 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:45:47.610494 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:45:47.610561 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:45:47.648351 1120970 cri.go:89] found id: ""
	I0729 19:45:47.648380 1120970 logs.go:276] 0 containers: []
	W0729 19:45:47.648388 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:45:47.648395 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:45:47.648458 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:45:47.686610 1120970 cri.go:89] found id: ""
	I0729 19:45:47.686646 1120970 logs.go:276] 0 containers: []
	W0729 19:45:47.686658 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:45:47.686667 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:45:47.686739 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:45:47.722870 1120970 cri.go:89] found id: ""
	I0729 19:45:47.722901 1120970 logs.go:276] 0 containers: []
	W0729 19:45:47.722909 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:45:47.722916 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:45:47.722978 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:45:47.757651 1120970 cri.go:89] found id: ""
	I0729 19:45:47.757690 1120970 logs.go:276] 0 containers: []
	W0729 19:45:47.757700 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:45:47.757709 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:45:47.757787 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:45:47.792737 1120970 cri.go:89] found id: ""
	I0729 19:45:47.792767 1120970 logs.go:276] 0 containers: []
	W0729 19:45:47.792776 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:45:47.792786 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:45:47.792799 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:45:47.867707 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:45:47.867734 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:45:47.867751 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:45:47.949876 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:45:47.949918 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:45:45.696302 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:48.194324 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:46.637438 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:48.637749 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:47.279986 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:49.778293 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:47.991014 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:45:47.991053 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:45:48.041713 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:45:48.041752 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:45:50.557028 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:50.571918 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:45:50.572012 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:45:50.608752 1120970 cri.go:89] found id: ""
	I0729 19:45:50.608783 1120970 logs.go:276] 0 containers: []
	W0729 19:45:50.608791 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:45:50.608798 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:45:50.608851 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:45:50.644225 1120970 cri.go:89] found id: ""
	I0729 19:45:50.644251 1120970 logs.go:276] 0 containers: []
	W0729 19:45:50.644261 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:45:50.644269 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:45:50.644357 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:45:50.680364 1120970 cri.go:89] found id: ""
	I0729 19:45:50.680400 1120970 logs.go:276] 0 containers: []
	W0729 19:45:50.680412 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:45:50.680420 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:45:50.680487 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:45:50.724418 1120970 cri.go:89] found id: ""
	I0729 19:45:50.724443 1120970 logs.go:276] 0 containers: []
	W0729 19:45:50.724451 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:45:50.724457 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:45:50.724513 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:45:50.768891 1120970 cri.go:89] found id: ""
	I0729 19:45:50.768924 1120970 logs.go:276] 0 containers: []
	W0729 19:45:50.768935 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:45:50.768943 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:45:50.769011 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:45:50.815814 1120970 cri.go:89] found id: ""
	I0729 19:45:50.815847 1120970 logs.go:276] 0 containers: []
	W0729 19:45:50.815858 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:45:50.815866 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:45:50.815935 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:45:50.856823 1120970 cri.go:89] found id: ""
	I0729 19:45:50.856856 1120970 logs.go:276] 0 containers: []
	W0729 19:45:50.856865 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:45:50.856871 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:45:50.856935 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:45:50.890567 1120970 cri.go:89] found id: ""
	I0729 19:45:50.890618 1120970 logs.go:276] 0 containers: []
	W0729 19:45:50.890631 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:45:50.890646 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:45:50.890662 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:45:50.944060 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:45:50.944095 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:45:50.957881 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:45:50.957912 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:45:51.036005 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:45:51.036033 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:45:51.036051 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:45:51.117269 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:45:51.117311 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:45:50.195926 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:52.197099 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:50.639185 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:53.138398 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:52.278704 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:54.279094 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:53.657518 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:53.671405 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:45:53.671499 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:45:53.713703 1120970 cri.go:89] found id: ""
	I0729 19:45:53.713734 1120970 logs.go:276] 0 containers: []
	W0729 19:45:53.713747 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:45:53.713755 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:45:53.713820 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:45:53.752821 1120970 cri.go:89] found id: ""
	I0729 19:45:53.752856 1120970 logs.go:276] 0 containers: []
	W0729 19:45:53.752867 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:45:53.752875 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:45:53.752930 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:45:53.792144 1120970 cri.go:89] found id: ""
	I0729 19:45:53.792172 1120970 logs.go:276] 0 containers: []
	W0729 19:45:53.792198 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:45:53.792204 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:45:53.792264 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:45:53.831123 1120970 cri.go:89] found id: ""
	I0729 19:45:53.831151 1120970 logs.go:276] 0 containers: []
	W0729 19:45:53.831161 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:45:53.831168 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:45:53.831223 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:45:53.870716 1120970 cri.go:89] found id: ""
	I0729 19:45:53.870747 1120970 logs.go:276] 0 containers: []
	W0729 19:45:53.870758 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:45:53.870766 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:45:53.870831 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:45:53.909567 1120970 cri.go:89] found id: ""
	I0729 19:45:53.909602 1120970 logs.go:276] 0 containers: []
	W0729 19:45:53.909611 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:45:53.909619 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:45:53.909679 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:45:53.944134 1120970 cri.go:89] found id: ""
	I0729 19:45:53.944167 1120970 logs.go:276] 0 containers: []
	W0729 19:45:53.944179 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:45:53.944188 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:45:53.944249 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:45:53.979274 1120970 cri.go:89] found id: ""
	I0729 19:45:53.979307 1120970 logs.go:276] 0 containers: []
	W0729 19:45:53.979319 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:45:53.979330 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:45:53.979347 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:45:54.027783 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:45:54.027822 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:45:54.079319 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:45:54.079368 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:45:54.094387 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:45:54.094420 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:45:54.170700 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:45:54.170723 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:45:54.170737 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:45:56.756947 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:56.775456 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:45:56.775539 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:45:56.830999 1120970 cri.go:89] found id: ""
	I0729 19:45:56.831035 1120970 logs.go:276] 0 containers: []
	W0729 19:45:56.831046 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:45:56.831054 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:45:56.831144 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:45:56.868006 1120970 cri.go:89] found id: ""
	I0729 19:45:56.868039 1120970 logs.go:276] 0 containers: []
	W0729 19:45:56.868057 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:45:56.868065 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:45:56.868145 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:45:56.905275 1120970 cri.go:89] found id: ""
	I0729 19:45:56.905311 1120970 logs.go:276] 0 containers: []
	W0729 19:45:56.905322 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:45:56.905330 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:45:56.905401 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:45:56.938507 1120970 cri.go:89] found id: ""
	I0729 19:45:56.938537 1120970 logs.go:276] 0 containers: []
	W0729 19:45:56.938546 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:45:56.938553 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:45:56.938607 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:45:56.974424 1120970 cri.go:89] found id: ""
	I0729 19:45:56.974456 1120970 logs.go:276] 0 containers: []
	W0729 19:45:56.974468 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:45:56.974476 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:45:56.974543 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:45:57.008152 1120970 cri.go:89] found id: ""
	I0729 19:45:57.008191 1120970 logs.go:276] 0 containers: []
	W0729 19:45:57.008203 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:45:57.008211 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:45:57.008281 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:45:57.043904 1120970 cri.go:89] found id: ""
	I0729 19:45:57.043942 1120970 logs.go:276] 0 containers: []
	W0729 19:45:57.043953 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:45:57.043961 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:45:57.044038 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:45:57.078239 1120970 cri.go:89] found id: ""
	I0729 19:45:57.078268 1120970 logs.go:276] 0 containers: []
	W0729 19:45:57.078277 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:45:57.078286 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:45:57.078299 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:45:57.125135 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:45:57.125170 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:45:57.177926 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:45:57.177968 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:45:57.192316 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:45:57.192354 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:45:57.267034 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:45:57.267059 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:45:57.267078 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:45:54.213977 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:56.695532 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:55.637424 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:58.137534 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:56.780087 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:59.278164 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:45:59.849254 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:45:59.863328 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:45:59.863437 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:45:59.900024 1120970 cri.go:89] found id: ""
	I0729 19:45:59.900051 1120970 logs.go:276] 0 containers: []
	W0729 19:45:59.900060 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:45:59.900067 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:45:59.900128 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:45:59.935272 1120970 cri.go:89] found id: ""
	I0729 19:45:59.935308 1120970 logs.go:276] 0 containers: []
	W0729 19:45:59.935319 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:45:59.935328 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:45:59.935404 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:45:59.967684 1120970 cri.go:89] found id: ""
	I0729 19:45:59.967712 1120970 logs.go:276] 0 containers: []
	W0729 19:45:59.967725 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:45:59.967733 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:45:59.967791 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:46:00.003354 1120970 cri.go:89] found id: ""
	I0729 19:46:00.003386 1120970 logs.go:276] 0 containers: []
	W0729 19:46:00.003397 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:46:00.003404 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:46:00.003479 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:46:00.042266 1120970 cri.go:89] found id: ""
	I0729 19:46:00.042311 1120970 logs.go:276] 0 containers: []
	W0729 19:46:00.042330 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:46:00.042344 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:46:00.042419 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:46:00.081056 1120970 cri.go:89] found id: ""
	I0729 19:46:00.081085 1120970 logs.go:276] 0 containers: []
	W0729 19:46:00.081095 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:46:00.081102 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:46:00.081179 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:46:00.114102 1120970 cri.go:89] found id: ""
	I0729 19:46:00.114138 1120970 logs.go:276] 0 containers: []
	W0729 19:46:00.114153 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:46:00.114161 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:46:00.114229 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:46:00.152891 1120970 cri.go:89] found id: ""
	I0729 19:46:00.152919 1120970 logs.go:276] 0 containers: []
	W0729 19:46:00.152930 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:46:00.152942 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:46:00.152961 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:46:00.225895 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:46:00.225926 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:46:00.225944 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:46:00.306359 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:46:00.306397 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:46:00.348266 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:46:00.348305 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:46:00.401402 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:46:00.401452 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:46:02.917392 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:46:02.931221 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:46:02.931308 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:46:02.965808 1120970 cri.go:89] found id: ""
	I0729 19:46:02.965839 1120970 logs.go:276] 0 containers: []
	W0729 19:46:02.965850 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:46:02.965857 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:46:02.965924 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:45:59.195460 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:01.195742 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:03.196310 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:00.138417 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:02.637927 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:01.278771 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:03.279480 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:05.778549 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:03.003125 1120970 cri.go:89] found id: ""
	I0729 19:46:03.003152 1120970 logs.go:276] 0 containers: []
	W0729 19:46:03.003161 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:46:03.003168 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:46:03.003222 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:46:03.042782 1120970 cri.go:89] found id: ""
	I0729 19:46:03.042816 1120970 logs.go:276] 0 containers: []
	W0729 19:46:03.042827 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:46:03.042835 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:46:03.042922 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:46:03.082857 1120970 cri.go:89] found id: ""
	I0729 19:46:03.082891 1120970 logs.go:276] 0 containers: []
	W0729 19:46:03.082910 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:46:03.082918 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:46:03.082975 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:46:03.118096 1120970 cri.go:89] found id: ""
	I0729 19:46:03.118127 1120970 logs.go:276] 0 containers: []
	W0729 19:46:03.118147 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:46:03.118156 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:46:03.118228 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:46:03.155950 1120970 cri.go:89] found id: ""
	I0729 19:46:03.155983 1120970 logs.go:276] 0 containers: []
	W0729 19:46:03.155995 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:46:03.156003 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:46:03.156076 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:46:03.192698 1120970 cri.go:89] found id: ""
	I0729 19:46:03.192729 1120970 logs.go:276] 0 containers: []
	W0729 19:46:03.192741 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:46:03.192749 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:46:03.192822 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:46:03.230228 1120970 cri.go:89] found id: ""
	I0729 19:46:03.230261 1120970 logs.go:276] 0 containers: []
	W0729 19:46:03.230275 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:46:03.230292 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:46:03.230310 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:46:03.269169 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:46:03.269204 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:46:03.325724 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:46:03.325765 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:46:03.339955 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:46:03.339986 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:46:03.415795 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:46:03.415823 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:46:03.415839 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:46:06.002947 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:46:06.017334 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:46:06.017422 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:46:06.051132 1120970 cri.go:89] found id: ""
	I0729 19:46:06.051161 1120970 logs.go:276] 0 containers: []
	W0729 19:46:06.051169 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:46:06.051182 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:46:06.051248 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:46:06.085156 1120970 cri.go:89] found id: ""
	I0729 19:46:06.085185 1120970 logs.go:276] 0 containers: []
	W0729 19:46:06.085194 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:46:06.085200 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:46:06.085252 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:46:06.122263 1120970 cri.go:89] found id: ""
	I0729 19:46:06.122296 1120970 logs.go:276] 0 containers: []
	W0729 19:46:06.122303 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:46:06.122309 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:46:06.122377 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:46:06.158066 1120970 cri.go:89] found id: ""
	I0729 19:46:06.158093 1120970 logs.go:276] 0 containers: []
	W0729 19:46:06.158102 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:46:06.158109 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:46:06.158161 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:46:06.193082 1120970 cri.go:89] found id: ""
	I0729 19:46:06.193109 1120970 logs.go:276] 0 containers: []
	W0729 19:46:06.193117 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:46:06.193125 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:46:06.193188 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:46:06.226239 1120970 cri.go:89] found id: ""
	I0729 19:46:06.226276 1120970 logs.go:276] 0 containers: []
	W0729 19:46:06.226285 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:46:06.226292 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:46:06.226346 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:46:06.262648 1120970 cri.go:89] found id: ""
	I0729 19:46:06.262686 1120970 logs.go:276] 0 containers: []
	W0729 19:46:06.262697 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:46:06.262703 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:46:06.262769 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:46:06.304018 1120970 cri.go:89] found id: ""
	I0729 19:46:06.304047 1120970 logs.go:276] 0 containers: []
	W0729 19:46:06.304056 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:46:06.304066 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:46:06.304078 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:46:06.345240 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:46:06.345269 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:46:06.399728 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:46:06.399768 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:46:06.415271 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:46:06.415312 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:46:06.492320 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:46:06.492342 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:46:06.492361 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:46:05.695149 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:08.196040 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:05.136979 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:07.137588 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:09.140728 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:08.278537 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:10.278751 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:09.070966 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:46:09.084876 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:46:09.084957 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:46:09.123177 1120970 cri.go:89] found id: ""
	I0729 19:46:09.123209 1120970 logs.go:276] 0 containers: []
	W0729 19:46:09.123220 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:46:09.123227 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:46:09.123300 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:46:09.162546 1120970 cri.go:89] found id: ""
	I0729 19:46:09.162593 1120970 logs.go:276] 0 containers: []
	W0729 19:46:09.162605 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:46:09.162614 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:46:09.162682 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:46:09.198047 1120970 cri.go:89] found id: ""
	I0729 19:46:09.198075 1120970 logs.go:276] 0 containers: []
	W0729 19:46:09.198084 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:46:09.198091 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:46:09.198165 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:46:09.231929 1120970 cri.go:89] found id: ""
	I0729 19:46:09.231962 1120970 logs.go:276] 0 containers: []
	W0729 19:46:09.231973 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:46:09.231982 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:46:09.232051 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:46:09.269543 1120970 cri.go:89] found id: ""
	I0729 19:46:09.269574 1120970 logs.go:276] 0 containers: []
	W0729 19:46:09.269585 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:46:09.269593 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:46:09.269665 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:46:09.304012 1120970 cri.go:89] found id: ""
	I0729 19:46:09.304042 1120970 logs.go:276] 0 containers: []
	W0729 19:46:09.304051 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:46:09.304057 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:46:09.304110 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:46:09.340266 1120970 cri.go:89] found id: ""
	I0729 19:46:09.340302 1120970 logs.go:276] 0 containers: []
	W0729 19:46:09.340315 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:46:09.340323 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:46:09.340402 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:46:09.373855 1120970 cri.go:89] found id: ""
	I0729 19:46:09.373884 1120970 logs.go:276] 0 containers: []
	W0729 19:46:09.373892 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:46:09.373902 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:46:09.373916 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:46:09.434007 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:46:09.434047 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:46:09.448138 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:46:09.448168 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:46:09.523836 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:46:09.523866 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:46:09.523884 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:46:09.605562 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:46:09.605602 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:46:12.147573 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:46:12.162219 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:46:12.162307 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:46:12.197420 1120970 cri.go:89] found id: ""
	I0729 19:46:12.197446 1120970 logs.go:276] 0 containers: []
	W0729 19:46:12.197454 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:46:12.197460 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:46:12.197511 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:46:12.236008 1120970 cri.go:89] found id: ""
	I0729 19:46:12.236042 1120970 logs.go:276] 0 containers: []
	W0729 19:46:12.236052 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:46:12.236058 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:46:12.236125 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:46:12.279184 1120970 cri.go:89] found id: ""
	I0729 19:46:12.279208 1120970 logs.go:276] 0 containers: []
	W0729 19:46:12.279216 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:46:12.279222 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:46:12.279273 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:46:12.319020 1120970 cri.go:89] found id: ""
	I0729 19:46:12.319061 1120970 logs.go:276] 0 containers: []
	W0729 19:46:12.319072 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:46:12.319083 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:46:12.319140 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:46:12.354552 1120970 cri.go:89] found id: ""
	I0729 19:46:12.354591 1120970 logs.go:276] 0 containers: []
	W0729 19:46:12.354600 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:46:12.354606 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:46:12.354664 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:46:12.389196 1120970 cri.go:89] found id: ""
	I0729 19:46:12.389232 1120970 logs.go:276] 0 containers: []
	W0729 19:46:12.389242 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:46:12.389251 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:46:12.389351 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:46:12.425713 1120970 cri.go:89] found id: ""
	I0729 19:46:12.425751 1120970 logs.go:276] 0 containers: []
	W0729 19:46:12.425767 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:46:12.425776 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:46:12.425851 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:46:12.461092 1120970 cri.go:89] found id: ""
	I0729 19:46:12.461123 1120970 logs.go:276] 0 containers: []
	W0729 19:46:12.461132 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:46:12.461142 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:46:12.461162 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:46:12.537550 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:46:12.537594 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:46:12.578558 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:46:12.578597 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:46:12.629269 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:46:12.629310 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:46:12.644176 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:46:12.644202 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:46:12.717070 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:46:10.695776 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:12.696260 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:11.637812 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:14.137356 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:12.778309 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:15.278853 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:15.218239 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:46:15.232163 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:46:15.232236 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:46:15.268490 1120970 cri.go:89] found id: ""
	I0729 19:46:15.268520 1120970 logs.go:276] 0 containers: []
	W0729 19:46:15.268532 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:46:15.268539 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:46:15.268621 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:46:15.303437 1120970 cri.go:89] found id: ""
	I0729 19:46:15.303473 1120970 logs.go:276] 0 containers: []
	W0729 19:46:15.303485 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:46:15.303493 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:46:15.303557 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:46:15.340676 1120970 cri.go:89] found id: ""
	I0729 19:46:15.340706 1120970 logs.go:276] 0 containers: []
	W0729 19:46:15.340717 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:46:15.340725 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:46:15.340798 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:46:15.376731 1120970 cri.go:89] found id: ""
	I0729 19:46:15.376764 1120970 logs.go:276] 0 containers: []
	W0729 19:46:15.376775 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:46:15.376783 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:46:15.376854 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:46:15.412493 1120970 cri.go:89] found id: ""
	I0729 19:46:15.412524 1120970 logs.go:276] 0 containers: []
	W0729 19:46:15.412533 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:46:15.412541 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:46:15.412614 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:46:15.448795 1120970 cri.go:89] found id: ""
	I0729 19:46:15.448830 1120970 logs.go:276] 0 containers: []
	W0729 19:46:15.448842 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:46:15.448850 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:46:15.448923 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:46:15.484048 1120970 cri.go:89] found id: ""
	I0729 19:46:15.484082 1120970 logs.go:276] 0 containers: []
	W0729 19:46:15.484100 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:46:15.484108 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:46:15.484172 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:46:15.520340 1120970 cri.go:89] found id: ""
	I0729 19:46:15.520370 1120970 logs.go:276] 0 containers: []
	W0729 19:46:15.520380 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:46:15.520389 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:46:15.520408 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:46:15.568837 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:46:15.568877 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:46:15.582958 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:46:15.582993 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:46:15.653880 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:46:15.653901 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:46:15.653920 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:46:15.732652 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:46:15.732691 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:46:15.194855 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:17.196069 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:16.137961 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:18.139896 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:17.779000 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:19.779635 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:18.273795 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:46:18.288991 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:46:18.289066 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:46:18.327583 1120970 cri.go:89] found id: ""
	I0729 19:46:18.327619 1120970 logs.go:276] 0 containers: []
	W0729 19:46:18.327631 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:46:18.327639 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:46:18.327716 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:46:18.361476 1120970 cri.go:89] found id: ""
	I0729 19:46:18.361504 1120970 logs.go:276] 0 containers: []
	W0729 19:46:18.361515 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:46:18.361523 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:46:18.361590 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:46:18.401842 1120970 cri.go:89] found id: ""
	I0729 19:46:18.401873 1120970 logs.go:276] 0 containers: []
	W0729 19:46:18.401884 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:46:18.401892 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:46:18.401965 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:46:18.439870 1120970 cri.go:89] found id: ""
	I0729 19:46:18.439905 1120970 logs.go:276] 0 containers: []
	W0729 19:46:18.439920 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:46:18.439929 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:46:18.440015 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:46:18.474916 1120970 cri.go:89] found id: ""
	I0729 19:46:18.474944 1120970 logs.go:276] 0 containers: []
	W0729 19:46:18.474953 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:46:18.474960 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:46:18.475033 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:46:18.509957 1120970 cri.go:89] found id: ""
	I0729 19:46:18.509984 1120970 logs.go:276] 0 containers: []
	W0729 19:46:18.509993 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:46:18.509999 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:46:18.510064 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:46:18.545521 1120970 cri.go:89] found id: ""
	I0729 19:46:18.545551 1120970 logs.go:276] 0 containers: []
	W0729 19:46:18.545564 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:46:18.545573 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:46:18.545646 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:46:18.579041 1120970 cri.go:89] found id: ""
	I0729 19:46:18.579072 1120970 logs.go:276] 0 containers: []
	W0729 19:46:18.579080 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:46:18.579091 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:46:18.579104 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:46:18.653041 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:46:18.653063 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:46:18.653077 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:46:18.732969 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:46:18.733035 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:46:18.773700 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:46:18.773735 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:46:18.826511 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:46:18.826553 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:46:21.340974 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:46:21.354608 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:46:21.354671 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:46:21.388765 1120970 cri.go:89] found id: ""
	I0729 19:46:21.388795 1120970 logs.go:276] 0 containers: []
	W0729 19:46:21.388806 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:46:21.388814 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:46:21.388909 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:46:21.426734 1120970 cri.go:89] found id: ""
	I0729 19:46:21.426764 1120970 logs.go:276] 0 containers: []
	W0729 19:46:21.426776 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:46:21.426784 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:46:21.426861 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:46:21.462965 1120970 cri.go:89] found id: ""
	I0729 19:46:21.462999 1120970 logs.go:276] 0 containers: []
	W0729 19:46:21.463010 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:46:21.463018 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:46:21.463087 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:46:21.496933 1120970 cri.go:89] found id: ""
	I0729 19:46:21.496961 1120970 logs.go:276] 0 containers: []
	W0729 19:46:21.496972 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:46:21.496980 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:46:21.497043 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:46:21.532648 1120970 cri.go:89] found id: ""
	I0729 19:46:21.532682 1120970 logs.go:276] 0 containers: []
	W0729 19:46:21.532695 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:46:21.532703 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:46:21.532777 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:46:21.566507 1120970 cri.go:89] found id: ""
	I0729 19:46:21.566545 1120970 logs.go:276] 0 containers: []
	W0729 19:46:21.566556 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:46:21.566567 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:46:21.566652 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:46:21.605591 1120970 cri.go:89] found id: ""
	I0729 19:46:21.605624 1120970 logs.go:276] 0 containers: []
	W0729 19:46:21.605635 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:46:21.605644 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:46:21.605711 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:46:21.639979 1120970 cri.go:89] found id: ""
	I0729 19:46:21.640004 1120970 logs.go:276] 0 containers: []
	W0729 19:46:21.640012 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:46:21.640020 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:46:21.640035 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:46:21.694405 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:46:21.694450 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:46:21.708616 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:46:21.708647 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:46:21.778528 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:46:21.778567 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:46:21.778583 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:46:21.859626 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:46:21.859661 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:46:19.696385 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:22.195265 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:20.638331 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:23.138907 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:21.779848 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:24.278815 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:24.397520 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:46:24.412579 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:46:24.412673 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:46:24.452586 1120970 cri.go:89] found id: ""
	I0729 19:46:24.452621 1120970 logs.go:276] 0 containers: []
	W0729 19:46:24.452633 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:46:24.452640 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:46:24.452856 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:46:24.487706 1120970 cri.go:89] found id: ""
	I0729 19:46:24.487739 1120970 logs.go:276] 0 containers: []
	W0729 19:46:24.487750 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:46:24.487758 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:46:24.487828 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:46:24.528798 1120970 cri.go:89] found id: ""
	I0729 19:46:24.528832 1120970 logs.go:276] 0 containers: []
	W0729 19:46:24.528844 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:46:24.528852 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:46:24.528926 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:46:24.566429 1120970 cri.go:89] found id: ""
	I0729 19:46:24.566464 1120970 logs.go:276] 0 containers: []
	W0729 19:46:24.566484 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:46:24.566497 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:46:24.566561 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:46:24.601216 1120970 cri.go:89] found id: ""
	I0729 19:46:24.601242 1120970 logs.go:276] 0 containers: []
	W0729 19:46:24.601249 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:46:24.601255 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:46:24.601318 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:46:24.635591 1120970 cri.go:89] found id: ""
	I0729 19:46:24.635636 1120970 logs.go:276] 0 containers: []
	W0729 19:46:24.635648 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:46:24.635655 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:46:24.635723 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:46:24.670674 1120970 cri.go:89] found id: ""
	I0729 19:46:24.670705 1120970 logs.go:276] 0 containers: []
	W0729 19:46:24.670717 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:46:24.670724 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:46:24.670795 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:46:24.704820 1120970 cri.go:89] found id: ""
	I0729 19:46:24.704850 1120970 logs.go:276] 0 containers: []
	W0729 19:46:24.704861 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:46:24.704873 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:46:24.704889 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:46:24.787954 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:46:24.787989 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:46:24.849396 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:46:24.849433 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:46:24.900920 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:46:24.900956 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:46:24.915540 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:46:24.915572 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:46:24.986146 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:46:27.487069 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:46:27.500718 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:46:27.500802 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:46:27.535156 1120970 cri.go:89] found id: ""
	I0729 19:46:27.535188 1120970 logs.go:276] 0 containers: []
	W0729 19:46:27.535199 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:46:27.535206 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:46:27.535272 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:46:27.570613 1120970 cri.go:89] found id: ""
	I0729 19:46:27.570647 1120970 logs.go:276] 0 containers: []
	W0729 19:46:27.570658 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:46:27.570666 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:46:27.570726 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:46:27.605503 1120970 cri.go:89] found id: ""
	I0729 19:46:27.605540 1120970 logs.go:276] 0 containers: []
	W0729 19:46:27.605552 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:46:27.605560 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:46:27.605628 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:46:27.638179 1120970 cri.go:89] found id: ""
	I0729 19:46:27.638202 1120970 logs.go:276] 0 containers: []
	W0729 19:46:27.638209 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:46:27.638215 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:46:27.638265 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:46:27.671019 1120970 cri.go:89] found id: ""
	I0729 19:46:27.671049 1120970 logs.go:276] 0 containers: []
	W0729 19:46:27.671059 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:46:27.671067 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:46:27.671133 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:46:27.704126 1120970 cri.go:89] found id: ""
	I0729 19:46:27.704148 1120970 logs.go:276] 0 containers: []
	W0729 19:46:27.704155 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:46:27.704161 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:46:27.704217 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:46:27.736106 1120970 cri.go:89] found id: ""
	I0729 19:46:27.736137 1120970 logs.go:276] 0 containers: []
	W0729 19:46:27.736148 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:46:27.736162 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:46:27.736234 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:46:27.775615 1120970 cri.go:89] found id: ""
	I0729 19:46:27.775644 1120970 logs.go:276] 0 containers: []
	W0729 19:46:27.775655 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:46:27.775666 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:46:27.775681 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:46:27.817852 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:46:27.817882 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:46:27.867280 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:46:27.867319 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:46:27.880533 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:46:27.880558 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:46:27.952098 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:46:27.952120 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:46:27.952138 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:46:24.195374 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:26.696327 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:25.637615 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:28.138222 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:26.779021 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:29.279227 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:30.534052 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:46:30.560617 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:46:30.560704 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:46:30.594317 1120970 cri.go:89] found id: ""
	I0729 19:46:30.594354 1120970 logs.go:276] 0 containers: []
	W0729 19:46:30.594365 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:46:30.594372 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:46:30.594438 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:46:30.629175 1120970 cri.go:89] found id: ""
	I0729 19:46:30.629202 1120970 logs.go:276] 0 containers: []
	W0729 19:46:30.629213 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:46:30.629278 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:46:30.629358 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:46:30.663173 1120970 cri.go:89] found id: ""
	I0729 19:46:30.663199 1120970 logs.go:276] 0 containers: []
	W0729 19:46:30.663207 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:46:30.663212 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:46:30.663271 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:46:30.695709 1120970 cri.go:89] found id: ""
	I0729 19:46:30.695729 1120970 logs.go:276] 0 containers: []
	W0729 19:46:30.695738 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:46:30.695745 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:46:30.695808 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:46:30.726555 1120970 cri.go:89] found id: ""
	I0729 19:46:30.726582 1120970 logs.go:276] 0 containers: []
	W0729 19:46:30.726589 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:46:30.726597 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:46:30.726658 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:46:30.759818 1120970 cri.go:89] found id: ""
	I0729 19:46:30.759847 1120970 logs.go:276] 0 containers: []
	W0729 19:46:30.759859 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:46:30.759865 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:46:30.759928 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:46:30.794006 1120970 cri.go:89] found id: ""
	I0729 19:46:30.794038 1120970 logs.go:276] 0 containers: []
	W0729 19:46:30.794051 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:46:30.794058 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:46:30.794127 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:46:30.825707 1120970 cri.go:89] found id: ""
	I0729 19:46:30.825735 1120970 logs.go:276] 0 containers: []
	W0729 19:46:30.825744 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:46:30.825753 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:46:30.825767 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:46:30.877517 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:46:30.877553 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:46:30.890777 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:46:30.890811 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:46:30.956702 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:46:30.956732 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:46:30.956747 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:46:31.039080 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:46:31.039118 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:46:29.195305 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:31.694814 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:33.696603 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:30.638472 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:33.138085 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:31.279889 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:33.779333 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:33.580120 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:46:33.595087 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:46:33.595152 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:46:33.636347 1120970 cri.go:89] found id: ""
	I0729 19:46:33.636374 1120970 logs.go:276] 0 containers: []
	W0729 19:46:33.636385 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:46:33.636392 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:46:33.636451 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:46:33.674180 1120970 cri.go:89] found id: ""
	I0729 19:46:33.674207 1120970 logs.go:276] 0 containers: []
	W0729 19:46:33.674215 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:46:33.674222 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:46:33.674281 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:46:33.709549 1120970 cri.go:89] found id: ""
	I0729 19:46:33.709572 1120970 logs.go:276] 0 containers: []
	W0729 19:46:33.709581 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:46:33.709593 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:46:33.709651 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:46:33.742803 1120970 cri.go:89] found id: ""
	I0729 19:46:33.742833 1120970 logs.go:276] 0 containers: []
	W0729 19:46:33.742854 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:46:33.742863 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:46:33.742931 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:46:33.776301 1120970 cri.go:89] found id: ""
	I0729 19:46:33.776329 1120970 logs.go:276] 0 containers: []
	W0729 19:46:33.776336 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:46:33.776342 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:46:33.776412 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:46:33.818972 1120970 cri.go:89] found id: ""
	I0729 19:46:33.819001 1120970 logs.go:276] 0 containers: []
	W0729 19:46:33.819009 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:46:33.819016 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:46:33.819084 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:46:33.857970 1120970 cri.go:89] found id: ""
	I0729 19:46:33.858002 1120970 logs.go:276] 0 containers: []
	W0729 19:46:33.858022 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:46:33.858028 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:46:33.858113 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:46:33.896207 1120970 cri.go:89] found id: ""
	I0729 19:46:33.896237 1120970 logs.go:276] 0 containers: []
	W0729 19:46:33.896248 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:46:33.896261 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:46:33.896276 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:46:33.976843 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:46:33.976879 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:46:34.015642 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:46:34.015671 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:46:34.066095 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:46:34.066133 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:46:34.079616 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:46:34.079649 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:46:34.150666 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:46:36.651722 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:46:36.665599 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:46:36.665673 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:46:36.702807 1120970 cri.go:89] found id: ""
	I0729 19:46:36.702872 1120970 logs.go:276] 0 containers: []
	W0729 19:46:36.702897 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:46:36.702907 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:46:36.702978 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:46:36.739552 1120970 cri.go:89] found id: ""
	I0729 19:46:36.739576 1120970 logs.go:276] 0 containers: []
	W0729 19:46:36.739585 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:46:36.739591 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:46:36.739643 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:46:36.774989 1120970 cri.go:89] found id: ""
	I0729 19:46:36.775017 1120970 logs.go:276] 0 containers: []
	W0729 19:46:36.775028 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:46:36.775036 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:46:36.775108 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:46:36.814984 1120970 cri.go:89] found id: ""
	I0729 19:46:36.815017 1120970 logs.go:276] 0 containers: []
	W0729 19:46:36.815034 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:46:36.815044 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:46:36.815113 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:46:36.848075 1120970 cri.go:89] found id: ""
	I0729 19:46:36.848116 1120970 logs.go:276] 0 containers: []
	W0729 19:46:36.848127 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:46:36.848136 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:46:36.848206 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:46:36.880504 1120970 cri.go:89] found id: ""
	I0729 19:46:36.880535 1120970 logs.go:276] 0 containers: []
	W0729 19:46:36.880544 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:46:36.880557 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:46:36.880615 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:46:36.914716 1120970 cri.go:89] found id: ""
	I0729 19:46:36.914744 1120970 logs.go:276] 0 containers: []
	W0729 19:46:36.914755 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:46:36.914763 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:46:36.914831 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:46:36.958975 1120970 cri.go:89] found id: ""
	I0729 19:46:36.959005 1120970 logs.go:276] 0 containers: []
	W0729 19:46:36.959016 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:46:36.959029 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:46:36.959046 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:46:37.018208 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:46:37.018244 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:46:37.042496 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:46:37.042537 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:46:37.112833 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:46:37.112861 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:46:37.112877 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:46:37.191572 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:46:37.191616 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:46:36.195356 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:38.694730 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:35.637513 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:38.137458 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:36.278153 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:38.778586 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:39.736044 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:46:39.749645 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:46:39.749719 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:46:39.786131 1120970 cri.go:89] found id: ""
	I0729 19:46:39.786155 1120970 logs.go:276] 0 containers: []
	W0729 19:46:39.786166 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:46:39.786174 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:46:39.786237 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:46:39.820470 1120970 cri.go:89] found id: ""
	I0729 19:46:39.820499 1120970 logs.go:276] 0 containers: []
	W0729 19:46:39.820509 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:46:39.820516 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:46:39.820583 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:46:39.854119 1120970 cri.go:89] found id: ""
	I0729 19:46:39.854148 1120970 logs.go:276] 0 containers: []
	W0729 19:46:39.854157 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:46:39.854163 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:46:39.854218 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:46:39.894676 1120970 cri.go:89] found id: ""
	I0729 19:46:39.894707 1120970 logs.go:276] 0 containers: []
	W0729 19:46:39.894714 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:46:39.894721 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:46:39.894789 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:46:39.932651 1120970 cri.go:89] found id: ""
	I0729 19:46:39.932685 1120970 logs.go:276] 0 containers: []
	W0729 19:46:39.932697 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:46:39.932705 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:46:39.932776 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:46:39.968119 1120970 cri.go:89] found id: ""
	I0729 19:46:39.968153 1120970 logs.go:276] 0 containers: []
	W0729 19:46:39.968165 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:46:39.968174 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:46:39.968242 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:46:40.004137 1120970 cri.go:89] found id: ""
	I0729 19:46:40.004167 1120970 logs.go:276] 0 containers: []
	W0729 19:46:40.004175 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:46:40.004181 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:46:40.004252 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:46:40.042519 1120970 cri.go:89] found id: ""
	I0729 19:46:40.042552 1120970 logs.go:276] 0 containers: []
	W0729 19:46:40.042563 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:46:40.042577 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:46:40.042601 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:46:40.118691 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:46:40.118720 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:46:40.118733 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:46:40.198249 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:46:40.198279 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:46:40.236828 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:46:40.236861 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:46:40.290890 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:46:40.290920 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:46:42.804834 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:46:42.818516 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:46:42.818608 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:46:42.855519 1120970 cri.go:89] found id: ""
	I0729 19:46:42.855553 1120970 logs.go:276] 0 containers: []
	W0729 19:46:42.855565 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:46:42.855573 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:46:42.855634 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:46:42.891795 1120970 cri.go:89] found id: ""
	I0729 19:46:42.891827 1120970 logs.go:276] 0 containers: []
	W0729 19:46:42.891837 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:46:42.891845 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:46:42.891912 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:46:42.925308 1120970 cri.go:89] found id: ""
	I0729 19:46:42.925341 1120970 logs.go:276] 0 containers: []
	W0729 19:46:42.925352 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:46:42.925359 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:46:42.925428 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:46:42.961943 1120970 cri.go:89] found id: ""
	I0729 19:46:42.961968 1120970 logs.go:276] 0 containers: []
	W0729 19:46:42.961976 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:46:42.961984 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:46:42.962034 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:46:41.194992 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:43.195814 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:40.138881 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:42.637095 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:44.637746 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:41.278451 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:43.279669 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:45.778954 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:42.994246 1120970 cri.go:89] found id: ""
	I0729 19:46:42.994276 1120970 logs.go:276] 0 containers: []
	W0729 19:46:42.994284 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:46:42.994290 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:46:42.994406 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:46:43.027914 1120970 cri.go:89] found id: ""
	I0729 19:46:43.027943 1120970 logs.go:276] 0 containers: []
	W0729 19:46:43.027953 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:46:43.027962 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:46:43.028029 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:46:43.064274 1120970 cri.go:89] found id: ""
	I0729 19:46:43.064308 1120970 logs.go:276] 0 containers: []
	W0729 19:46:43.064319 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:46:43.064328 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:46:43.064402 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:46:43.104273 1120970 cri.go:89] found id: ""
	I0729 19:46:43.104303 1120970 logs.go:276] 0 containers: []
	W0729 19:46:43.104313 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:46:43.104324 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:46:43.104342 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:46:43.175951 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:46:43.175978 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:46:43.175995 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:46:43.253386 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:46:43.253421 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:46:43.293276 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:46:43.293304 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:46:43.345865 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:46:43.345896 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:46:45.861099 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:46:45.875854 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:46:45.875925 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:46:45.914780 1120970 cri.go:89] found id: ""
	I0729 19:46:45.914815 1120970 logs.go:276] 0 containers: []
	W0729 19:46:45.914827 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:46:45.914837 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:46:45.914925 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:46:45.952575 1120970 cri.go:89] found id: ""
	I0729 19:46:45.952607 1120970 logs.go:276] 0 containers: []
	W0729 19:46:45.952616 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:46:45.952622 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:46:45.952676 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:46:45.993298 1120970 cri.go:89] found id: ""
	I0729 19:46:45.993331 1120970 logs.go:276] 0 containers: []
	W0729 19:46:45.993338 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:46:45.993344 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:46:45.993400 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:46:46.033190 1120970 cri.go:89] found id: ""
	I0729 19:46:46.033216 1120970 logs.go:276] 0 containers: []
	W0729 19:46:46.033225 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:46:46.033230 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:46:46.033283 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:46:46.068694 1120970 cri.go:89] found id: ""
	I0729 19:46:46.068728 1120970 logs.go:276] 0 containers: []
	W0729 19:46:46.068737 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:46:46.068743 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:46:46.068796 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:46:46.101678 1120970 cri.go:89] found id: ""
	I0729 19:46:46.101716 1120970 logs.go:276] 0 containers: []
	W0729 19:46:46.101726 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:46:46.101733 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:46:46.101788 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:46:46.141669 1120970 cri.go:89] found id: ""
	I0729 19:46:46.141702 1120970 logs.go:276] 0 containers: []
	W0729 19:46:46.141713 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:46:46.141721 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:46:46.141780 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:46:46.173182 1120970 cri.go:89] found id: ""
	I0729 19:46:46.173213 1120970 logs.go:276] 0 containers: []
	W0729 19:46:46.173224 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:46:46.173235 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:46:46.173252 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:46:46.224615 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:46:46.224660 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:46:46.237889 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:46:46.237915 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:46:46.312446 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:46:46.312473 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:46:46.312489 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:46:46.389168 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:46:46.389206 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:46:45.196687 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:47.694428 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:46.638398 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:48.639437 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:48.277740 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:50.278638 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:48.930620 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:46:48.944038 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:46:48.944101 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:46:48.979672 1120970 cri.go:89] found id: ""
	I0729 19:46:48.979710 1120970 logs.go:276] 0 containers: []
	W0729 19:46:48.979722 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:46:48.979730 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:46:48.979804 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:46:49.014931 1120970 cri.go:89] found id: ""
	I0729 19:46:49.014967 1120970 logs.go:276] 0 containers: []
	W0729 19:46:49.014980 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:46:49.015006 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:46:49.015078 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:46:49.050867 1120970 cri.go:89] found id: ""
	I0729 19:46:49.050903 1120970 logs.go:276] 0 containers: []
	W0729 19:46:49.050916 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:46:49.050924 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:46:49.050992 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:46:49.085479 1120970 cri.go:89] found id: ""
	I0729 19:46:49.085514 1120970 logs.go:276] 0 containers: []
	W0729 19:46:49.085521 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:46:49.085529 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:46:49.085604 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:46:49.118570 1120970 cri.go:89] found id: ""
	I0729 19:46:49.118597 1120970 logs.go:276] 0 containers: []
	W0729 19:46:49.118605 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:46:49.118611 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:46:49.118664 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:46:49.153581 1120970 cri.go:89] found id: ""
	I0729 19:46:49.153612 1120970 logs.go:276] 0 containers: []
	W0729 19:46:49.153624 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:46:49.153632 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:46:49.153702 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:46:49.187178 1120970 cri.go:89] found id: ""
	I0729 19:46:49.187207 1120970 logs.go:276] 0 containers: []
	W0729 19:46:49.187215 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:46:49.187221 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:46:49.187280 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:46:49.223132 1120970 cri.go:89] found id: ""
	I0729 19:46:49.223163 1120970 logs.go:276] 0 containers: []
	W0729 19:46:49.223173 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:46:49.223185 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:46:49.223200 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:46:49.274160 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:46:49.274189 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:46:49.288399 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:46:49.288431 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:46:49.358452 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:46:49.358478 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:46:49.358496 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:46:49.436711 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:46:49.436745 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:46:51.977377 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:46:51.991042 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:46:51.991102 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:46:52.031425 1120970 cri.go:89] found id: ""
	I0729 19:46:52.031467 1120970 logs.go:276] 0 containers: []
	W0729 19:46:52.031477 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:46:52.031482 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:46:52.031557 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:46:52.069014 1120970 cri.go:89] found id: ""
	I0729 19:46:52.069045 1120970 logs.go:276] 0 containers: []
	W0729 19:46:52.069056 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:46:52.069064 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:46:52.069137 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:46:52.101974 1120970 cri.go:89] found id: ""
	I0729 19:46:52.102000 1120970 logs.go:276] 0 containers: []
	W0729 19:46:52.102008 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:46:52.102014 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:46:52.102079 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:46:52.136232 1120970 cri.go:89] found id: ""
	I0729 19:46:52.136261 1120970 logs.go:276] 0 containers: []
	W0729 19:46:52.136271 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:46:52.136280 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:46:52.136344 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:46:52.173555 1120970 cri.go:89] found id: ""
	I0729 19:46:52.173585 1120970 logs.go:276] 0 containers: []
	W0729 19:46:52.173602 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:46:52.173611 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:46:52.173675 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:46:52.208764 1120970 cri.go:89] found id: ""
	I0729 19:46:52.208791 1120970 logs.go:276] 0 containers: []
	W0729 19:46:52.208799 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:46:52.208805 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:46:52.208863 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:46:52.241514 1120970 cri.go:89] found id: ""
	I0729 19:46:52.241541 1120970 logs.go:276] 0 containers: []
	W0729 19:46:52.241557 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:46:52.241564 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:46:52.241639 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:46:52.277726 1120970 cri.go:89] found id: ""
	I0729 19:46:52.277753 1120970 logs.go:276] 0 containers: []
	W0729 19:46:52.277764 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:46:52.277775 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:46:52.277789 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:46:52.344894 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:46:52.344916 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:46:52.344931 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:46:52.421492 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:46:52.421527 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:46:52.460896 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:46:52.460934 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:46:52.509921 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:46:52.509960 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:46:49.695616 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:51.696510 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:51.138012 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:53.138676 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:52.280019 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:54.778157 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:55.024046 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:46:55.037609 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:46:55.037681 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:46:55.071059 1120970 cri.go:89] found id: ""
	I0729 19:46:55.071086 1120970 logs.go:276] 0 containers: []
	W0729 19:46:55.071094 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:46:55.071102 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:46:55.071162 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:46:55.106634 1120970 cri.go:89] found id: ""
	I0729 19:46:55.106660 1120970 logs.go:276] 0 containers: []
	W0729 19:46:55.106669 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:46:55.106675 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:46:55.106737 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:46:55.138821 1120970 cri.go:89] found id: ""
	I0729 19:46:55.138858 1120970 logs.go:276] 0 containers: []
	W0729 19:46:55.138870 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:46:55.138878 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:46:55.138941 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:46:55.173846 1120970 cri.go:89] found id: ""
	I0729 19:46:55.173893 1120970 logs.go:276] 0 containers: []
	W0729 19:46:55.173904 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:46:55.173913 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:46:55.173978 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:46:55.211853 1120970 cri.go:89] found id: ""
	I0729 19:46:55.211878 1120970 logs.go:276] 0 containers: []
	W0729 19:46:55.211885 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:46:55.211891 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:46:55.211941 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:46:55.245432 1120970 cri.go:89] found id: ""
	I0729 19:46:55.245470 1120970 logs.go:276] 0 containers: []
	W0729 19:46:55.245481 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:46:55.245489 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:46:55.245557 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:46:55.286752 1120970 cri.go:89] found id: ""
	I0729 19:46:55.286777 1120970 logs.go:276] 0 containers: []
	W0729 19:46:55.286785 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:46:55.286791 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:46:55.286841 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:46:55.328070 1120970 cri.go:89] found id: ""
	I0729 19:46:55.328100 1120970 logs.go:276] 0 containers: []
	W0729 19:46:55.328119 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:46:55.328133 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:46:55.328151 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:46:55.341257 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:46:55.341285 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:46:55.410966 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:46:55.410989 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:46:55.411008 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:46:55.486615 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:46:55.486653 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:46:55.523615 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:46:55.523653 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:46:54.195887 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:56.703055 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:55.138951 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:57.638887 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:56.778215 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:59.278483 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:46:58.074596 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:46:58.088302 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:46:58.088396 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:46:58.124557 1120970 cri.go:89] found id: ""
	I0729 19:46:58.124589 1120970 logs.go:276] 0 containers: []
	W0729 19:46:58.124600 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:46:58.124608 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:46:58.124680 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:46:58.160107 1120970 cri.go:89] found id: ""
	I0729 19:46:58.160142 1120970 logs.go:276] 0 containers: []
	W0729 19:46:58.160151 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:46:58.160157 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:46:58.160214 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:46:58.195522 1120970 cri.go:89] found id: ""
	I0729 19:46:58.195553 1120970 logs.go:276] 0 containers: []
	W0729 19:46:58.195564 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:46:58.195572 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:46:58.195637 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:46:58.232307 1120970 cri.go:89] found id: ""
	I0729 19:46:58.232338 1120970 logs.go:276] 0 containers: []
	W0729 19:46:58.232348 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:46:58.232355 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:46:58.232419 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:46:58.271551 1120970 cri.go:89] found id: ""
	I0729 19:46:58.271602 1120970 logs.go:276] 0 containers: []
	W0729 19:46:58.271614 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:46:58.271622 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:46:58.271701 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:46:58.307833 1120970 cri.go:89] found id: ""
	I0729 19:46:58.307864 1120970 logs.go:276] 0 containers: []
	W0729 19:46:58.307875 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:46:58.307884 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:46:58.307951 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:46:58.341961 1120970 cri.go:89] found id: ""
	I0729 19:46:58.341989 1120970 logs.go:276] 0 containers: []
	W0729 19:46:58.341998 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:46:58.342004 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:46:58.342058 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:46:58.379923 1120970 cri.go:89] found id: ""
	I0729 19:46:58.379962 1120970 logs.go:276] 0 containers: []
	W0729 19:46:58.379972 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:46:58.379982 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:46:58.379997 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:46:58.423276 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:46:58.423310 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:46:58.479021 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:46:58.479063 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:46:58.493544 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:46:58.493578 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:46:58.562634 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:46:58.562663 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:46:58.562684 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:47:01.145327 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:47:01.158997 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:47:01.159060 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:47:01.196272 1120970 cri.go:89] found id: ""
	I0729 19:47:01.196298 1120970 logs.go:276] 0 containers: []
	W0729 19:47:01.196306 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:47:01.196312 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:47:01.196364 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:47:01.238138 1120970 cri.go:89] found id: ""
	I0729 19:47:01.238167 1120970 logs.go:276] 0 containers: []
	W0729 19:47:01.238177 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:47:01.238185 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:47:01.238249 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:47:01.276497 1120970 cri.go:89] found id: ""
	I0729 19:47:01.276525 1120970 logs.go:276] 0 containers: []
	W0729 19:47:01.276535 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:47:01.276543 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:47:01.276607 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:47:01.309092 1120970 cri.go:89] found id: ""
	I0729 19:47:01.309121 1120970 logs.go:276] 0 containers: []
	W0729 19:47:01.309130 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:47:01.309135 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:47:01.309189 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:47:01.340172 1120970 cri.go:89] found id: ""
	I0729 19:47:01.340202 1120970 logs.go:276] 0 containers: []
	W0729 19:47:01.340211 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:47:01.340217 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:47:01.340277 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:47:01.377905 1120970 cri.go:89] found id: ""
	I0729 19:47:01.377941 1120970 logs.go:276] 0 containers: []
	W0729 19:47:01.377953 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:47:01.377961 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:47:01.378034 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:47:01.414735 1120970 cri.go:89] found id: ""
	I0729 19:47:01.414767 1120970 logs.go:276] 0 containers: []
	W0729 19:47:01.414780 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:47:01.414789 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:47:01.414880 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:47:01.455743 1120970 cri.go:89] found id: ""
	I0729 19:47:01.455768 1120970 logs.go:276] 0 containers: []
	W0729 19:47:01.455776 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:47:01.455786 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:47:01.455799 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:47:01.507105 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:47:01.507141 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:47:01.520437 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:47:01.520465 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:47:01.590724 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:47:01.590746 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:47:01.590763 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:47:01.675343 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:47:01.675378 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:46:59.195744 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:01.695905 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:00.138760 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:02.139418 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:04.637243 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:01.278715 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:03.279321 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:05.778276 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:04.219800 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:47:04.234604 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:47:04.234684 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:47:04.267782 1120970 cri.go:89] found id: ""
	I0729 19:47:04.267810 1120970 logs.go:276] 0 containers: []
	W0729 19:47:04.267822 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:47:04.267830 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:47:04.267897 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:47:04.302373 1120970 cri.go:89] found id: ""
	I0729 19:47:04.302402 1120970 logs.go:276] 0 containers: []
	W0729 19:47:04.302413 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:47:04.302420 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:47:04.302484 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:47:04.334998 1120970 cri.go:89] found id: ""
	I0729 19:47:04.335030 1120970 logs.go:276] 0 containers: []
	W0729 19:47:04.335041 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:47:04.335049 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:47:04.335105 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:47:04.370596 1120970 cri.go:89] found id: ""
	I0729 19:47:04.370624 1120970 logs.go:276] 0 containers: []
	W0729 19:47:04.370631 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:47:04.370638 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:47:04.370695 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:47:04.405912 1120970 cri.go:89] found id: ""
	I0729 19:47:04.405945 1120970 logs.go:276] 0 containers: []
	W0729 19:47:04.405957 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:47:04.405966 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:47:04.406044 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:47:04.439856 1120970 cri.go:89] found id: ""
	I0729 19:47:04.439881 1120970 logs.go:276] 0 containers: []
	W0729 19:47:04.439898 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:47:04.439905 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:47:04.439976 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:47:04.473561 1120970 cri.go:89] found id: ""
	I0729 19:47:04.473587 1120970 logs.go:276] 0 containers: []
	W0729 19:47:04.473595 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:47:04.473601 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:47:04.473662 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:47:04.510181 1120970 cri.go:89] found id: ""
	I0729 19:47:04.510207 1120970 logs.go:276] 0 containers: []
	W0729 19:47:04.510217 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:47:04.510226 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:47:04.510239 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:47:04.559448 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:47:04.559485 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:47:04.573752 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:47:04.573782 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:47:04.641008 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:47:04.641030 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:47:04.641046 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:47:04.725252 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:47:04.725293 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:47:07.266379 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:47:07.280725 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:47:07.280801 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:47:07.321856 1120970 cri.go:89] found id: ""
	I0729 19:47:07.321886 1120970 logs.go:276] 0 containers: []
	W0729 19:47:07.321894 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:47:07.321900 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:47:07.321986 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:47:07.355102 1120970 cri.go:89] found id: ""
	I0729 19:47:07.355130 1120970 logs.go:276] 0 containers: []
	W0729 19:47:07.355138 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:47:07.355144 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:47:07.355203 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:47:07.394720 1120970 cri.go:89] found id: ""
	I0729 19:47:07.394749 1120970 logs.go:276] 0 containers: []
	W0729 19:47:07.394762 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:47:07.394771 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:47:07.394829 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:47:07.431002 1120970 cri.go:89] found id: ""
	I0729 19:47:07.431042 1120970 logs.go:276] 0 containers: []
	W0729 19:47:07.431055 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:47:07.431063 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:47:07.431132 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:47:07.467818 1120970 cri.go:89] found id: ""
	I0729 19:47:07.467855 1120970 logs.go:276] 0 containers: []
	W0729 19:47:07.467864 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:47:07.467873 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:47:07.467942 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:47:07.504285 1120970 cri.go:89] found id: ""
	I0729 19:47:07.504316 1120970 logs.go:276] 0 containers: []
	W0729 19:47:07.504327 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:47:07.504335 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:47:07.504411 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:47:07.538246 1120970 cri.go:89] found id: ""
	I0729 19:47:07.538276 1120970 logs.go:276] 0 containers: []
	W0729 19:47:07.538284 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:47:07.538291 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:47:07.538351 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:47:07.573911 1120970 cri.go:89] found id: ""
	I0729 19:47:07.573939 1120970 logs.go:276] 0 containers: []
	W0729 19:47:07.573948 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:47:07.573957 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:47:07.573970 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:47:07.588083 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:47:07.588129 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:47:07.656169 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:47:07.656198 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:47:07.656216 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:47:07.740230 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:47:07.740264 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:47:07.780822 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:47:07.780856 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:47:04.195230 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:06.695090 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:06.637479 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:08.638410 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:08.278522 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:10.782193 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:10.336208 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:47:10.350233 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:47:10.350307 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:47:10.389155 1120970 cri.go:89] found id: ""
	I0729 19:47:10.389190 1120970 logs.go:276] 0 containers: []
	W0729 19:47:10.389202 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:47:10.389210 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:47:10.389277 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:47:10.421432 1120970 cri.go:89] found id: ""
	I0729 19:47:10.421466 1120970 logs.go:276] 0 containers: []
	W0729 19:47:10.421476 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:47:10.421482 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:47:10.421552 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:47:10.462530 1120970 cri.go:89] found id: ""
	I0729 19:47:10.462563 1120970 logs.go:276] 0 containers: []
	W0729 19:47:10.462572 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:47:10.462577 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:47:10.462640 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:47:10.499899 1120970 cri.go:89] found id: ""
	I0729 19:47:10.499927 1120970 logs.go:276] 0 containers: []
	W0729 19:47:10.499935 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:47:10.499945 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:47:10.500007 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:47:10.534022 1120970 cri.go:89] found id: ""
	I0729 19:47:10.534051 1120970 logs.go:276] 0 containers: []
	W0729 19:47:10.534060 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:47:10.534066 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:47:10.534119 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:47:10.568136 1120970 cri.go:89] found id: ""
	I0729 19:47:10.568166 1120970 logs.go:276] 0 containers: []
	W0729 19:47:10.568174 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:47:10.568181 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:47:10.568246 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:47:10.603887 1120970 cri.go:89] found id: ""
	I0729 19:47:10.603919 1120970 logs.go:276] 0 containers: []
	W0729 19:47:10.603930 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:47:10.603938 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:47:10.604005 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:47:10.639947 1120970 cri.go:89] found id: ""
	I0729 19:47:10.639974 1120970 logs.go:276] 0 containers: []
	W0729 19:47:10.639981 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:47:10.639989 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:47:10.640001 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:47:10.693113 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:47:10.693146 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:47:10.708099 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:47:10.708138 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:47:10.777587 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:47:10.777618 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:47:10.777634 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:47:10.872453 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:47:10.872499 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:47:09.195301 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:11.695021 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:13.697025 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:11.137420 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:13.137553 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:13.278601 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:15.779974 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:13.412398 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:47:13.426246 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:47:13.426308 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:47:13.463170 1120970 cri.go:89] found id: ""
	I0729 19:47:13.463202 1120970 logs.go:276] 0 containers: []
	W0729 19:47:13.463213 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:47:13.463220 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:47:13.463287 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:47:13.499102 1120970 cri.go:89] found id: ""
	I0729 19:47:13.499137 1120970 logs.go:276] 0 containers: []
	W0729 19:47:13.499146 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:47:13.499151 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:47:13.499235 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:47:13.531462 1120970 cri.go:89] found id: ""
	I0729 19:47:13.531514 1120970 logs.go:276] 0 containers: []
	W0729 19:47:13.531526 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:47:13.531534 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:47:13.531606 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:47:13.564632 1120970 cri.go:89] found id: ""
	I0729 19:47:13.564670 1120970 logs.go:276] 0 containers: []
	W0729 19:47:13.564681 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:47:13.564689 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:47:13.564745 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:47:13.596564 1120970 cri.go:89] found id: ""
	I0729 19:47:13.596591 1120970 logs.go:276] 0 containers: []
	W0729 19:47:13.596602 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:47:13.596610 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:47:13.596686 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:47:13.629682 1120970 cri.go:89] found id: ""
	I0729 19:47:13.629711 1120970 logs.go:276] 0 containers: []
	W0729 19:47:13.629721 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:47:13.629729 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:47:13.629791 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:47:13.664666 1120970 cri.go:89] found id: ""
	I0729 19:47:13.664693 1120970 logs.go:276] 0 containers: []
	W0729 19:47:13.664701 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:47:13.664708 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:47:13.664777 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:47:13.699238 1120970 cri.go:89] found id: ""
	I0729 19:47:13.699267 1120970 logs.go:276] 0 containers: []
	W0729 19:47:13.699277 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:47:13.699289 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:47:13.699304 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:47:13.751555 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:47:13.751588 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:47:13.766769 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:47:13.766801 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:47:13.834898 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:47:13.834918 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:47:13.834932 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:47:13.913907 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:47:13.913944 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:47:16.457229 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:47:16.470138 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:47:16.470222 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:47:16.504643 1120970 cri.go:89] found id: ""
	I0729 19:47:16.504679 1120970 logs.go:276] 0 containers: []
	W0729 19:47:16.504688 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:47:16.504693 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:47:16.504763 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:47:16.539328 1120970 cri.go:89] found id: ""
	I0729 19:47:16.539368 1120970 logs.go:276] 0 containers: []
	W0729 19:47:16.539379 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:47:16.539385 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:47:16.539446 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:47:16.597867 1120970 cri.go:89] found id: ""
	I0729 19:47:16.597893 1120970 logs.go:276] 0 containers: []
	W0729 19:47:16.597904 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:47:16.597911 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:47:16.597976 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:47:16.631728 1120970 cri.go:89] found id: ""
	I0729 19:47:16.631755 1120970 logs.go:276] 0 containers: []
	W0729 19:47:16.631768 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:47:16.631780 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:47:16.631842 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:47:16.668337 1120970 cri.go:89] found id: ""
	I0729 19:47:16.668377 1120970 logs.go:276] 0 containers: []
	W0729 19:47:16.668387 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:47:16.668395 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:47:16.668461 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:47:16.704808 1120970 cri.go:89] found id: ""
	I0729 19:47:16.704834 1120970 logs.go:276] 0 containers: []
	W0729 19:47:16.704844 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:47:16.704851 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:47:16.704911 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:47:16.743919 1120970 cri.go:89] found id: ""
	I0729 19:47:16.743948 1120970 logs.go:276] 0 containers: []
	W0729 19:47:16.743955 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:47:16.743961 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:47:16.744022 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:47:16.785240 1120970 cri.go:89] found id: ""
	I0729 19:47:16.785268 1120970 logs.go:276] 0 containers: []
	W0729 19:47:16.785279 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:47:16.785290 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:47:16.785306 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:47:16.838247 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:47:16.838288 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:47:16.851766 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:47:16.851797 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:47:16.928960 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:47:16.928986 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:47:16.929002 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:47:17.008260 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:47:17.008296 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:47:16.194957 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:18.196333 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:15.138916 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:17.637392 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:19.638484 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:17.781105 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:20.279439 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:19.555108 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:47:19.569838 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:47:19.569917 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:47:19.608358 1120970 cri.go:89] found id: ""
	I0729 19:47:19.608393 1120970 logs.go:276] 0 containers: []
	W0729 19:47:19.608405 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:47:19.608414 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:47:19.608475 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:47:19.644144 1120970 cri.go:89] found id: ""
	I0729 19:47:19.644173 1120970 logs.go:276] 0 containers: []
	W0729 19:47:19.644183 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:47:19.644191 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:47:19.644259 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:47:19.686316 1120970 cri.go:89] found id: ""
	I0729 19:47:19.686342 1120970 logs.go:276] 0 containers: []
	W0729 19:47:19.686353 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:47:19.686359 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:47:19.686419 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:47:19.722006 1120970 cri.go:89] found id: ""
	I0729 19:47:19.722034 1120970 logs.go:276] 0 containers: []
	W0729 19:47:19.722044 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:47:19.722052 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:47:19.722127 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:47:19.762767 1120970 cri.go:89] found id: ""
	I0729 19:47:19.762799 1120970 logs.go:276] 0 containers: []
	W0729 19:47:19.762811 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:47:19.762818 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:47:19.762904 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:47:19.802185 1120970 cri.go:89] found id: ""
	I0729 19:47:19.802217 1120970 logs.go:276] 0 containers: []
	W0729 19:47:19.802228 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:47:19.802238 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:47:19.802311 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:47:19.840001 1120970 cri.go:89] found id: ""
	I0729 19:47:19.840036 1120970 logs.go:276] 0 containers: []
	W0729 19:47:19.840048 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:47:19.840056 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:47:19.840117 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:47:19.877627 1120970 cri.go:89] found id: ""
	I0729 19:47:19.877657 1120970 logs.go:276] 0 containers: []
	W0729 19:47:19.877668 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:47:19.877681 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:47:19.877698 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:47:19.920673 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:47:19.920708 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:47:19.980004 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:47:19.980045 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:47:19.994679 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:47:19.994714 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:47:20.064864 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:47:20.064892 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:47:20.064910 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:47:22.650763 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:47:22.664998 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:47:22.665079 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:47:22.701576 1120970 cri.go:89] found id: ""
	I0729 19:47:22.701611 1120970 logs.go:276] 0 containers: []
	W0729 19:47:22.701620 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:47:22.701630 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:47:22.701689 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:47:22.744238 1120970 cri.go:89] found id: ""
	I0729 19:47:22.744268 1120970 logs.go:276] 0 containers: []
	W0729 19:47:22.744275 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:47:22.744287 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:47:22.744358 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:47:22.785947 1120970 cri.go:89] found id: ""
	I0729 19:47:22.785974 1120970 logs.go:276] 0 containers: []
	W0729 19:47:22.785982 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:47:22.785988 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:47:22.786047 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:47:22.823352 1120970 cri.go:89] found id: ""
	I0729 19:47:22.823379 1120970 logs.go:276] 0 containers: []
	W0729 19:47:22.823387 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:47:22.823394 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:47:22.823462 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:47:22.855676 1120970 cri.go:89] found id: ""
	I0729 19:47:22.855704 1120970 logs.go:276] 0 containers: []
	W0729 19:47:22.855710 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:47:22.855716 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:47:22.855773 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:47:22.891910 1120970 cri.go:89] found id: ""
	I0729 19:47:22.891943 1120970 logs.go:276] 0 containers: []
	W0729 19:47:22.891956 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:47:22.891964 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:47:22.892025 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:47:22.928605 1120970 cri.go:89] found id: ""
	I0729 19:47:22.928638 1120970 logs.go:276] 0 containers: []
	W0729 19:47:22.928648 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:47:22.928658 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:47:22.928728 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:47:20.196567 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:22.694908 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:22.137177 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:24.137629 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:22.778638 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:25.279261 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:22.985022 1120970 cri.go:89] found id: ""
	I0729 19:47:22.985059 1120970 logs.go:276] 0 containers: []
	W0729 19:47:22.985068 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:47:22.985088 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:47:22.985101 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:47:23.073062 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:47:23.073098 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:47:23.114995 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:47:23.115024 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:47:23.171536 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:47:23.171570 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:47:23.185192 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:47:23.185219 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:47:23.259355 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:47:25.760046 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:47:25.774159 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:47:25.774245 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:47:25.808374 1120970 cri.go:89] found id: ""
	I0729 19:47:25.808406 1120970 logs.go:276] 0 containers: []
	W0729 19:47:25.808417 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:47:25.808424 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:47:25.808486 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:47:25.843623 1120970 cri.go:89] found id: ""
	I0729 19:47:25.843655 1120970 logs.go:276] 0 containers: []
	W0729 19:47:25.843666 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:47:25.843673 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:47:25.843774 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:47:25.880200 1120970 cri.go:89] found id: ""
	I0729 19:47:25.880233 1120970 logs.go:276] 0 containers: []
	W0729 19:47:25.880243 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:47:25.880250 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:47:25.880323 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:47:25.915349 1120970 cri.go:89] found id: ""
	I0729 19:47:25.915374 1120970 logs.go:276] 0 containers: []
	W0729 19:47:25.915381 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:47:25.915391 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:47:25.915444 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:47:25.948092 1120970 cri.go:89] found id: ""
	I0729 19:47:25.948134 1120970 logs.go:276] 0 containers: []
	W0729 19:47:25.948145 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:47:25.948153 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:47:25.948220 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:47:25.981836 1120970 cri.go:89] found id: ""
	I0729 19:47:25.981864 1120970 logs.go:276] 0 containers: []
	W0729 19:47:25.981874 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:47:25.981882 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:47:25.981967 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:47:26.014464 1120970 cri.go:89] found id: ""
	I0729 19:47:26.014494 1120970 logs.go:276] 0 containers: []
	W0729 19:47:26.014502 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:47:26.014515 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:47:26.014580 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:47:26.048607 1120970 cri.go:89] found id: ""
	I0729 19:47:26.048635 1120970 logs.go:276] 0 containers: []
	W0729 19:47:26.048646 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:47:26.048667 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:47:26.048683 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:47:26.100962 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:47:26.101002 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:47:26.116404 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:47:26.116434 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:47:26.183714 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:47:26.183734 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:47:26.183747 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:47:26.260308 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:47:26.260347 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:47:24.695393 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:27.195561 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:26.137714 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:28.637781 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:27.778603 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:30.278476 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:28.802593 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:47:28.815317 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:47:28.815380 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:47:28.849448 1120970 cri.go:89] found id: ""
	I0729 19:47:28.849473 1120970 logs.go:276] 0 containers: []
	W0729 19:47:28.849480 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:47:28.849486 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:47:28.849544 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:47:28.888305 1120970 cri.go:89] found id: ""
	I0729 19:47:28.888342 1120970 logs.go:276] 0 containers: []
	W0729 19:47:28.888353 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:47:28.888360 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:47:28.888421 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:47:28.921000 1120970 cri.go:89] found id: ""
	I0729 19:47:28.921034 1120970 logs.go:276] 0 containers: []
	W0729 19:47:28.921045 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:47:28.921054 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:47:28.921116 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:47:28.953546 1120970 cri.go:89] found id: ""
	I0729 19:47:28.953574 1120970 logs.go:276] 0 containers: []
	W0729 19:47:28.953583 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:47:28.953589 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:47:28.953652 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:47:28.991203 1120970 cri.go:89] found id: ""
	I0729 19:47:28.991236 1120970 logs.go:276] 0 containers: []
	W0729 19:47:28.991248 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:47:28.991256 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:47:28.991329 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:47:29.026151 1120970 cri.go:89] found id: ""
	I0729 19:47:29.026183 1120970 logs.go:276] 0 containers: []
	W0729 19:47:29.026195 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:47:29.026203 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:47:29.026271 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:47:29.059654 1120970 cri.go:89] found id: ""
	I0729 19:47:29.059687 1120970 logs.go:276] 0 containers: []
	W0729 19:47:29.059695 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:47:29.059702 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:47:29.059756 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:47:29.091952 1120970 cri.go:89] found id: ""
	I0729 19:47:29.092001 1120970 logs.go:276] 0 containers: []
	W0729 19:47:29.092012 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:47:29.092024 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:47:29.092043 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:47:29.143511 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:47:29.143543 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:47:29.157752 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:47:29.157781 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:47:29.225599 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:47:29.225621 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:47:29.225634 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:47:29.311329 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:47:29.311370 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:47:31.850921 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:47:31.864594 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:47:31.864675 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:47:31.898580 1120970 cri.go:89] found id: ""
	I0729 19:47:31.898622 1120970 logs.go:276] 0 containers: []
	W0729 19:47:31.898631 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:47:31.898638 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:47:31.898693 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:47:31.932481 1120970 cri.go:89] found id: ""
	I0729 19:47:31.932514 1120970 logs.go:276] 0 containers: []
	W0729 19:47:31.932525 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:47:31.932533 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:47:31.932595 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:47:31.964820 1120970 cri.go:89] found id: ""
	I0729 19:47:31.964857 1120970 logs.go:276] 0 containers: []
	W0729 19:47:31.964868 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:47:31.964876 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:47:31.964957 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:47:31.996854 1120970 cri.go:89] found id: ""
	I0729 19:47:31.996889 1120970 logs.go:276] 0 containers: []
	W0729 19:47:31.996900 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:47:31.996908 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:47:31.996975 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:47:32.031808 1120970 cri.go:89] found id: ""
	I0729 19:47:32.031843 1120970 logs.go:276] 0 containers: []
	W0729 19:47:32.031854 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:47:32.031864 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:47:32.031934 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:47:32.064563 1120970 cri.go:89] found id: ""
	I0729 19:47:32.064593 1120970 logs.go:276] 0 containers: []
	W0729 19:47:32.064608 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:47:32.064615 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:47:32.064677 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:47:32.102811 1120970 cri.go:89] found id: ""
	I0729 19:47:32.102859 1120970 logs.go:276] 0 containers: []
	W0729 19:47:32.102871 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:47:32.102879 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:47:32.102952 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:47:32.136770 1120970 cri.go:89] found id: ""
	I0729 19:47:32.136798 1120970 logs.go:276] 0 containers: []
	W0729 19:47:32.136808 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:47:32.136819 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:47:32.136838 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:47:32.189334 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:47:32.189371 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:47:32.204039 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:47:32.204076 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:47:32.274139 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:47:32.274172 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:47:32.274187 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:47:32.350191 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:47:32.350228 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:47:29.196922 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:31.200725 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:33.695374 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:30.637898 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:32.638342 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:34.639225 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:32.279116 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:34.780505 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:34.889718 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:47:34.903796 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:47:34.903877 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:47:34.938860 1120970 cri.go:89] found id: ""
	I0729 19:47:34.938893 1120970 logs.go:276] 0 containers: []
	W0729 19:47:34.938904 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:47:34.938912 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:47:34.938980 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:47:34.970501 1120970 cri.go:89] found id: ""
	I0729 19:47:34.970544 1120970 logs.go:276] 0 containers: []
	W0729 19:47:34.970553 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:47:34.970559 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:47:34.970626 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:47:35.006915 1120970 cri.go:89] found id: ""
	I0729 19:47:35.006943 1120970 logs.go:276] 0 containers: []
	W0729 19:47:35.006950 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:47:35.006957 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:47:35.007020 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:47:35.040827 1120970 cri.go:89] found id: ""
	I0729 19:47:35.040855 1120970 logs.go:276] 0 containers: []
	W0729 19:47:35.040862 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:47:35.040869 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:47:35.040918 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:47:35.075497 1120970 cri.go:89] found id: ""
	I0729 19:47:35.075527 1120970 logs.go:276] 0 containers: []
	W0729 19:47:35.075537 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:47:35.075544 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:47:35.075598 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:47:35.111265 1120970 cri.go:89] found id: ""
	I0729 19:47:35.111293 1120970 logs.go:276] 0 containers: []
	W0729 19:47:35.111302 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:47:35.111308 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:47:35.111363 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:47:35.145728 1120970 cri.go:89] found id: ""
	I0729 19:47:35.145756 1120970 logs.go:276] 0 containers: []
	W0729 19:47:35.145763 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:47:35.145769 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:47:35.145821 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:47:35.185050 1120970 cri.go:89] found id: ""
	I0729 19:47:35.185078 1120970 logs.go:276] 0 containers: []
	W0729 19:47:35.185088 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:47:35.185100 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:47:35.185117 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:47:35.236835 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:47:35.236867 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:47:35.251263 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:47:35.251290 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:47:35.325888 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:47:35.325912 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:47:35.325925 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:47:35.404779 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:47:35.404819 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:47:37.944941 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:47:37.960885 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:47:37.960954 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:47:35.695786 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:37.696015 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:37.136815 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:39.137763 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:37.278790 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:39.779285 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:38.007612 1120970 cri.go:89] found id: ""
	I0729 19:47:38.007639 1120970 logs.go:276] 0 containers: []
	W0729 19:47:38.007648 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:47:38.007655 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:47:38.007721 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:47:38.044568 1120970 cri.go:89] found id: ""
	I0729 19:47:38.044610 1120970 logs.go:276] 0 containers: []
	W0729 19:47:38.044621 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:47:38.044628 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:47:38.044698 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:47:38.085186 1120970 cri.go:89] found id: ""
	I0729 19:47:38.085217 1120970 logs.go:276] 0 containers: []
	W0729 19:47:38.085227 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:47:38.085235 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:47:38.085303 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:47:38.123039 1120970 cri.go:89] found id: ""
	I0729 19:47:38.123070 1120970 logs.go:276] 0 containers: []
	W0729 19:47:38.123082 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:47:38.123090 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:47:38.123158 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:47:38.166191 1120970 cri.go:89] found id: ""
	I0729 19:47:38.166220 1120970 logs.go:276] 0 containers: []
	W0729 19:47:38.166229 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:47:38.166237 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:47:38.166301 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:47:38.204138 1120970 cri.go:89] found id: ""
	I0729 19:47:38.204170 1120970 logs.go:276] 0 containers: []
	W0729 19:47:38.204179 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:47:38.204186 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:47:38.204286 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:47:38.241599 1120970 cri.go:89] found id: ""
	I0729 19:47:38.241629 1120970 logs.go:276] 0 containers: []
	W0729 19:47:38.241638 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:47:38.241643 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:47:38.241695 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:47:38.276986 1120970 cri.go:89] found id: ""
	I0729 19:47:38.277013 1120970 logs.go:276] 0 containers: []
	W0729 19:47:38.277021 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:47:38.277030 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:47:38.277042 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:47:38.330925 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:47:38.330971 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:47:38.345416 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:47:38.345455 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:47:38.420010 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:47:38.420041 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:47:38.420059 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:47:38.506198 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:47:38.506243 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:47:41.048957 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:47:41.062950 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:47:41.063027 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:47:41.108956 1120970 cri.go:89] found id: ""
	I0729 19:47:41.108987 1120970 logs.go:276] 0 containers: []
	W0729 19:47:41.108995 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:47:41.109002 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:47:41.109068 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:47:41.146952 1120970 cri.go:89] found id: ""
	I0729 19:47:41.146984 1120970 logs.go:276] 0 containers: []
	W0729 19:47:41.146994 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:47:41.147002 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:47:41.147068 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:47:41.190277 1120970 cri.go:89] found id: ""
	I0729 19:47:41.190310 1120970 logs.go:276] 0 containers: []
	W0729 19:47:41.190321 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:47:41.190329 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:47:41.190410 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:47:41.226733 1120970 cri.go:89] found id: ""
	I0729 19:47:41.226762 1120970 logs.go:276] 0 containers: []
	W0729 19:47:41.226770 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:47:41.226777 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:47:41.226862 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:47:41.260761 1120970 cri.go:89] found id: ""
	I0729 19:47:41.260790 1120970 logs.go:276] 0 containers: []
	W0729 19:47:41.260798 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:47:41.260804 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:47:41.260871 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:47:41.296325 1120970 cri.go:89] found id: ""
	I0729 19:47:41.296356 1120970 logs.go:276] 0 containers: []
	W0729 19:47:41.296367 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:47:41.296376 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:47:41.296435 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:47:41.329613 1120970 cri.go:89] found id: ""
	I0729 19:47:41.329642 1120970 logs.go:276] 0 containers: []
	W0729 19:47:41.329651 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:47:41.329657 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:47:41.329717 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:47:41.365182 1120970 cri.go:89] found id: ""
	I0729 19:47:41.365212 1120970 logs.go:276] 0 containers: []
	W0729 19:47:41.365220 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:47:41.365229 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:47:41.365243 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:47:41.416107 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:47:41.416143 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:47:41.429529 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:47:41.429562 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:47:41.499546 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:47:41.499568 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:47:41.499582 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:47:41.582010 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:47:41.582049 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:47:40.195271 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:42.698072 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:41.142911 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:43.637826 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:42.278481 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:44.278595 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:44.122162 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:47:44.136767 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:47:44.136850 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:47:44.171574 1120970 cri.go:89] found id: ""
	I0729 19:47:44.171610 1120970 logs.go:276] 0 containers: []
	W0729 19:47:44.171621 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:47:44.171629 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:47:44.171699 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:47:44.206974 1120970 cri.go:89] found id: ""
	I0729 19:47:44.207004 1120970 logs.go:276] 0 containers: []
	W0729 19:47:44.207013 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:47:44.207019 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:47:44.207068 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:47:44.240412 1120970 cri.go:89] found id: ""
	I0729 19:47:44.240438 1120970 logs.go:276] 0 containers: []
	W0729 19:47:44.240449 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:47:44.240457 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:47:44.240521 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:47:44.274434 1120970 cri.go:89] found id: ""
	I0729 19:47:44.274464 1120970 logs.go:276] 0 containers: []
	W0729 19:47:44.274475 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:47:44.274482 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:47:44.274553 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:47:44.313302 1120970 cri.go:89] found id: ""
	I0729 19:47:44.313330 1120970 logs.go:276] 0 containers: []
	W0729 19:47:44.313339 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:47:44.313354 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:47:44.313426 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:47:44.344853 1120970 cri.go:89] found id: ""
	I0729 19:47:44.344885 1120970 logs.go:276] 0 containers: []
	W0729 19:47:44.344895 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:47:44.344903 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:47:44.344970 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:47:44.378055 1120970 cri.go:89] found id: ""
	I0729 19:47:44.378089 1120970 logs.go:276] 0 containers: []
	W0729 19:47:44.378101 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:47:44.378109 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:47:44.378176 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:47:44.412734 1120970 cri.go:89] found id: ""
	I0729 19:47:44.412762 1120970 logs.go:276] 0 containers: []
	W0729 19:47:44.412772 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:47:44.412782 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:47:44.412795 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:47:44.468125 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:47:44.468157 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:47:44.482896 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:47:44.482923 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:47:44.551222 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:47:44.551249 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:47:44.551270 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:47:44.630413 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:47:44.630455 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:47:47.172322 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:47:47.186383 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:47:47.186463 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:47:47.221577 1120970 cri.go:89] found id: ""
	I0729 19:47:47.221610 1120970 logs.go:276] 0 containers: []
	W0729 19:47:47.221617 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:47:47.221623 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:47:47.221686 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:47:47.260164 1120970 cri.go:89] found id: ""
	I0729 19:47:47.260207 1120970 logs.go:276] 0 containers: []
	W0729 19:47:47.260227 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:47:47.260235 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:47:47.260303 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:47:47.297101 1120970 cri.go:89] found id: ""
	I0729 19:47:47.297130 1120970 logs.go:276] 0 containers: []
	W0729 19:47:47.297139 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:47:47.297148 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:47:47.297211 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:47:47.332429 1120970 cri.go:89] found id: ""
	I0729 19:47:47.332464 1120970 logs.go:276] 0 containers: []
	W0729 19:47:47.332474 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:47:47.332484 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:47:47.332538 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:47:47.366021 1120970 cri.go:89] found id: ""
	I0729 19:47:47.366055 1120970 logs.go:276] 0 containers: []
	W0729 19:47:47.366065 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:47:47.366074 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:47:47.366144 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:47:47.401278 1120970 cri.go:89] found id: ""
	I0729 19:47:47.401307 1120970 logs.go:276] 0 containers: []
	W0729 19:47:47.401315 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:47:47.401321 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:47:47.401395 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:47:47.435717 1120970 cri.go:89] found id: ""
	I0729 19:47:47.435748 1120970 logs.go:276] 0 containers: []
	W0729 19:47:47.435756 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:47:47.435770 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:47:47.435835 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:47:47.472120 1120970 cri.go:89] found id: ""
	I0729 19:47:47.472149 1120970 logs.go:276] 0 containers: []
	W0729 19:47:47.472157 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:47:47.472167 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:47:47.472181 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:47:47.529466 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:47:47.529503 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:47:47.544072 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:47:47.544102 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:47:47.614456 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:47:47.614478 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:47:47.614499 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:47:47.693271 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:47:47.693305 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:47:45.195129 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:47.196302 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:45.638102 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:47.639278 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:46.778610 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:48.778746 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:50.232417 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:47:50.246080 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:47:50.246154 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:47:50.285256 1120970 cri.go:89] found id: ""
	I0729 19:47:50.285284 1120970 logs.go:276] 0 containers: []
	W0729 19:47:50.285294 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:47:50.285302 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:47:50.285364 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:47:50.319443 1120970 cri.go:89] found id: ""
	I0729 19:47:50.319469 1120970 logs.go:276] 0 containers: []
	W0729 19:47:50.319476 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:47:50.319482 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:47:50.319555 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:47:50.356465 1120970 cri.go:89] found id: ""
	I0729 19:47:50.356495 1120970 logs.go:276] 0 containers: []
	W0729 19:47:50.356505 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:47:50.356512 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:47:50.356578 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:47:50.393920 1120970 cri.go:89] found id: ""
	I0729 19:47:50.393954 1120970 logs.go:276] 0 containers: []
	W0729 19:47:50.393965 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:47:50.393973 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:47:50.394052 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:47:50.430287 1120970 cri.go:89] found id: ""
	I0729 19:47:50.430320 1120970 logs.go:276] 0 containers: []
	W0729 19:47:50.430333 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:47:50.430341 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:47:50.430411 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:47:50.465501 1120970 cri.go:89] found id: ""
	I0729 19:47:50.465528 1120970 logs.go:276] 0 containers: []
	W0729 19:47:50.465536 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:47:50.465542 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:47:50.465595 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:47:50.504012 1120970 cri.go:89] found id: ""
	I0729 19:47:50.504042 1120970 logs.go:276] 0 containers: []
	W0729 19:47:50.504051 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:47:50.504063 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:47:50.504122 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:47:50.545117 1120970 cri.go:89] found id: ""
	I0729 19:47:50.545151 1120970 logs.go:276] 0 containers: []
	W0729 19:47:50.545163 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:47:50.545175 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:47:50.545198 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:47:50.618183 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:47:50.618213 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:47:50.618232 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:47:50.697577 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:47:50.697611 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:47:50.745910 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:47:50.745949 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:47:50.797458 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:47:50.797501 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:47:49.694395 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:51.697714 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:50.138539 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:52.143316 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:54.637975 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:51.279127 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:53.779610 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:53.311907 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:47:53.326666 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:47:53.326734 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:47:53.361564 1120970 cri.go:89] found id: ""
	I0729 19:47:53.361596 1120970 logs.go:276] 0 containers: []
	W0729 19:47:53.361614 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:47:53.361621 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:47:53.361685 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:47:53.397867 1120970 cri.go:89] found id: ""
	I0729 19:47:53.397899 1120970 logs.go:276] 0 containers: []
	W0729 19:47:53.397910 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:47:53.397918 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:47:53.398023 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:47:53.438721 1120970 cri.go:89] found id: ""
	I0729 19:47:53.438752 1120970 logs.go:276] 0 containers: []
	W0729 19:47:53.438764 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:47:53.438771 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:47:53.438840 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:47:53.477746 1120970 cri.go:89] found id: ""
	I0729 19:47:53.477776 1120970 logs.go:276] 0 containers: []
	W0729 19:47:53.477787 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:47:53.477794 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:47:53.477863 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:47:53.510899 1120970 cri.go:89] found id: ""
	I0729 19:47:53.510928 1120970 logs.go:276] 0 containers: []
	W0729 19:47:53.510936 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:47:53.510941 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:47:53.510994 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:47:53.545749 1120970 cri.go:89] found id: ""
	I0729 19:47:53.545786 1120970 logs.go:276] 0 containers: []
	W0729 19:47:53.545799 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:47:53.545807 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:47:53.545883 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:47:53.585542 1120970 cri.go:89] found id: ""
	I0729 19:47:53.585575 1120970 logs.go:276] 0 containers: []
	W0729 19:47:53.585586 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:47:53.585593 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:47:53.585666 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:47:53.617974 1120970 cri.go:89] found id: ""
	I0729 19:47:53.618006 1120970 logs.go:276] 0 containers: []
	W0729 19:47:53.618014 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:47:53.618024 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:47:53.618036 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:47:53.670860 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:47:53.670897 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:47:53.685089 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:47:53.685120 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:47:53.760570 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:47:53.760598 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:47:53.760611 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:47:53.848973 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:47:53.849018 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:47:56.394206 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:47:56.409087 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:47:56.409167 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:47:56.447553 1120970 cri.go:89] found id: ""
	I0729 19:47:56.447589 1120970 logs.go:276] 0 containers: []
	W0729 19:47:56.447607 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:47:56.447615 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:47:56.447694 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:47:56.485948 1120970 cri.go:89] found id: ""
	I0729 19:47:56.485978 1120970 logs.go:276] 0 containers: []
	W0729 19:47:56.485986 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:47:56.485992 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:47:56.486061 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:47:56.521722 1120970 cri.go:89] found id: ""
	I0729 19:47:56.521762 1120970 logs.go:276] 0 containers: []
	W0729 19:47:56.521784 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:47:56.521792 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:47:56.521855 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:47:56.557379 1120970 cri.go:89] found id: ""
	I0729 19:47:56.557414 1120970 logs.go:276] 0 containers: []
	W0729 19:47:56.557425 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:47:56.557433 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:47:56.557488 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:47:56.595198 1120970 cri.go:89] found id: ""
	I0729 19:47:56.595225 1120970 logs.go:276] 0 containers: []
	W0729 19:47:56.595233 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:47:56.595240 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:47:56.595306 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:47:56.629298 1120970 cri.go:89] found id: ""
	I0729 19:47:56.629330 1120970 logs.go:276] 0 containers: []
	W0729 19:47:56.629337 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:47:56.629344 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:47:56.629410 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:47:56.663401 1120970 cri.go:89] found id: ""
	I0729 19:47:56.663434 1120970 logs.go:276] 0 containers: []
	W0729 19:47:56.663445 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:47:56.663453 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:47:56.663519 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:47:56.699622 1120970 cri.go:89] found id: ""
	I0729 19:47:56.699651 1120970 logs.go:276] 0 containers: []
	W0729 19:47:56.699661 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:47:56.699672 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:47:56.699688 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:47:56.739680 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:47:56.739713 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:47:56.794605 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:47:56.794647 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:47:56.824479 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:47:56.824510 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:47:56.889186 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:47:56.889209 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:47:56.889224 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:47:54.196350 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:56.696572 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:57.137366 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:59.638403 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:56.278603 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:58.280193 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:00.778204 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:47:59.472943 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:47:59.488574 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:47:59.488657 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:47:59.528870 1120970 cri.go:89] found id: ""
	I0729 19:47:59.528910 1120970 logs.go:276] 0 containers: []
	W0729 19:47:59.528921 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:47:59.528930 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:47:59.529001 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:47:59.565299 1120970 cri.go:89] found id: ""
	I0729 19:47:59.565331 1120970 logs.go:276] 0 containers: []
	W0729 19:47:59.565343 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:47:59.565351 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:47:59.565419 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:47:59.604951 1120970 cri.go:89] found id: ""
	I0729 19:47:59.604985 1120970 logs.go:276] 0 containers: []
	W0729 19:47:59.604996 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:47:59.605005 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:47:59.605076 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:47:59.639094 1120970 cri.go:89] found id: ""
	I0729 19:47:59.639121 1120970 logs.go:276] 0 containers: []
	W0729 19:47:59.639130 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:47:59.639138 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:47:59.639205 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:47:59.674360 1120970 cri.go:89] found id: ""
	I0729 19:47:59.674392 1120970 logs.go:276] 0 containers: []
	W0729 19:47:59.674401 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:47:59.674407 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:47:59.674462 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:47:59.712926 1120970 cri.go:89] found id: ""
	I0729 19:47:59.712950 1120970 logs.go:276] 0 containers: []
	W0729 19:47:59.712959 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:47:59.712965 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:47:59.713026 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:47:59.750493 1120970 cri.go:89] found id: ""
	I0729 19:47:59.750524 1120970 logs.go:276] 0 containers: []
	W0729 19:47:59.750532 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:47:59.750539 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:47:59.750603 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:47:59.790635 1120970 cri.go:89] found id: ""
	I0729 19:47:59.790663 1120970 logs.go:276] 0 containers: []
	W0729 19:47:59.790672 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:47:59.790687 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:47:59.790703 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:47:59.844160 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:47:59.844194 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:47:59.858123 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:47:59.858152 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:47:59.931561 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:47:59.931592 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:47:59.931609 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:48:00.014902 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:48:00.014947 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:48:02.555856 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:48:02.572781 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:48:02.572852 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:48:02.611005 1120970 cri.go:89] found id: ""
	I0729 19:48:02.611033 1120970 logs.go:276] 0 containers: []
	W0729 19:48:02.611043 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:48:02.611049 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:48:02.611101 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:48:02.652844 1120970 cri.go:89] found id: ""
	I0729 19:48:02.652870 1120970 logs.go:276] 0 containers: []
	W0729 19:48:02.652876 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:48:02.652883 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:48:02.652937 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:48:02.694690 1120970 cri.go:89] found id: ""
	I0729 19:48:02.694719 1120970 logs.go:276] 0 containers: []
	W0729 19:48:02.694729 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:48:02.694738 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:48:02.694799 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:48:02.729527 1120970 cri.go:89] found id: ""
	I0729 19:48:02.729558 1120970 logs.go:276] 0 containers: []
	W0729 19:48:02.729569 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:48:02.729576 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:48:02.729649 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:48:02.763460 1120970 cri.go:89] found id: ""
	I0729 19:48:02.763488 1120970 logs.go:276] 0 containers: []
	W0729 19:48:02.763497 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:48:02.763503 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:48:02.763556 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:48:02.798268 1120970 cri.go:89] found id: ""
	I0729 19:48:02.798294 1120970 logs.go:276] 0 containers: []
	W0729 19:48:02.798302 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:48:02.798309 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:48:02.798360 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:48:02.837540 1120970 cri.go:89] found id: ""
	I0729 19:48:02.837579 1120970 logs.go:276] 0 containers: []
	W0729 19:48:02.837591 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:48:02.837605 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:48:02.837672 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:48:02.873574 1120970 cri.go:89] found id: ""
	I0729 19:48:02.873612 1120970 logs.go:276] 0 containers: []
	W0729 19:48:02.873624 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:48:02.873646 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:48:02.873663 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:48:02.926260 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:48:02.926296 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:48:02.940593 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:48:02.940618 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 19:47:59.195148 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:01.195230 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:03.196163 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:02.139034 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:04.637691 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:02.778540 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:04.781529 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	W0729 19:48:03.015778 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:48:03.015800 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:48:03.015818 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:48:03.099824 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:48:03.099859 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:48:05.639291 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:48:05.652370 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:48:05.652431 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:48:05.686594 1120970 cri.go:89] found id: ""
	I0729 19:48:05.686624 1120970 logs.go:276] 0 containers: []
	W0729 19:48:05.686633 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:48:05.686640 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:48:05.686701 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:48:05.722162 1120970 cri.go:89] found id: ""
	I0729 19:48:05.722192 1120970 logs.go:276] 0 containers: []
	W0729 19:48:05.722209 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:48:05.722216 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:48:05.722284 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:48:05.754309 1120970 cri.go:89] found id: ""
	I0729 19:48:05.754338 1120970 logs.go:276] 0 containers: []
	W0729 19:48:05.754349 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:48:05.754357 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:48:05.754449 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:48:05.786934 1120970 cri.go:89] found id: ""
	I0729 19:48:05.786962 1120970 logs.go:276] 0 containers: []
	W0729 19:48:05.786968 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:48:05.786974 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:48:05.787032 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:48:05.821454 1120970 cri.go:89] found id: ""
	I0729 19:48:05.821487 1120970 logs.go:276] 0 containers: []
	W0729 19:48:05.821498 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:48:05.821506 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:48:05.821575 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:48:05.855436 1120970 cri.go:89] found id: ""
	I0729 19:48:05.855467 1120970 logs.go:276] 0 containers: []
	W0729 19:48:05.855478 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:48:05.855486 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:48:05.855551 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:48:05.887414 1120970 cri.go:89] found id: ""
	I0729 19:48:05.887447 1120970 logs.go:276] 0 containers: []
	W0729 19:48:05.887466 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:48:05.887477 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:48:05.887549 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:48:05.924173 1120970 cri.go:89] found id: ""
	I0729 19:48:05.924200 1120970 logs.go:276] 0 containers: []
	W0729 19:48:05.924208 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:48:05.924218 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:48:05.924231 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:48:05.977839 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:48:05.977872 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:48:05.991324 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:48:05.991359 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:48:06.065904 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:48:06.065931 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:48:06.065949 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:48:06.149225 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:48:06.149258 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:48:05.196530 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:07.695302 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:06.640464 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:09.137577 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:07.277286 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:09.278994 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:08.689901 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:48:08.705008 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:48:08.705073 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:48:08.746191 1120970 cri.go:89] found id: ""
	I0729 19:48:08.746222 1120970 logs.go:276] 0 containers: []
	W0729 19:48:08.746232 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:48:08.746240 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:48:08.746306 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:48:08.792092 1120970 cri.go:89] found id: ""
	I0729 19:48:08.792120 1120970 logs.go:276] 0 containers: []
	W0729 19:48:08.792130 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:48:08.792137 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:48:08.792196 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:48:08.831535 1120970 cri.go:89] found id: ""
	I0729 19:48:08.831567 1120970 logs.go:276] 0 containers: []
	W0729 19:48:08.831577 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:48:08.831585 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:48:08.831650 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:48:08.871544 1120970 cri.go:89] found id: ""
	I0729 19:48:08.871576 1120970 logs.go:276] 0 containers: []
	W0729 19:48:08.871587 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:48:08.871594 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:48:08.871661 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:48:08.909562 1120970 cri.go:89] found id: ""
	I0729 19:48:08.909594 1120970 logs.go:276] 0 containers: []
	W0729 19:48:08.909611 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:48:08.909621 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:48:08.909698 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:48:08.953074 1120970 cri.go:89] found id: ""
	I0729 19:48:08.953109 1120970 logs.go:276] 0 containers: []
	W0729 19:48:08.953121 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:48:08.953130 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:48:08.953202 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:48:08.992361 1120970 cri.go:89] found id: ""
	I0729 19:48:08.992400 1120970 logs.go:276] 0 containers: []
	W0729 19:48:08.992412 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:48:08.992421 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:48:08.992488 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:48:09.046065 1120970 cri.go:89] found id: ""
	I0729 19:48:09.046093 1120970 logs.go:276] 0 containers: []
	W0729 19:48:09.046101 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:48:09.046113 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:48:09.046134 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:48:09.103453 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:48:09.103494 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:48:09.117220 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:48:09.117245 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:48:09.188222 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:48:09.188252 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:48:09.188270 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:48:09.271640 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:48:09.271677 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:48:11.812430 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:48:11.827291 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:48:11.827387 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:48:11.865062 1120970 cri.go:89] found id: ""
	I0729 19:48:11.865099 1120970 logs.go:276] 0 containers: []
	W0729 19:48:11.865111 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:48:11.865120 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:48:11.865212 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:48:11.899431 1120970 cri.go:89] found id: ""
	I0729 19:48:11.899465 1120970 logs.go:276] 0 containers: []
	W0729 19:48:11.899475 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:48:11.899483 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:48:11.899547 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:48:11.933796 1120970 cri.go:89] found id: ""
	I0729 19:48:11.933831 1120970 logs.go:276] 0 containers: []
	W0729 19:48:11.933843 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:48:11.933851 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:48:11.933920 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:48:11.976911 1120970 cri.go:89] found id: ""
	I0729 19:48:11.976941 1120970 logs.go:276] 0 containers: []
	W0729 19:48:11.976951 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:48:11.976958 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:48:11.977020 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:48:12.012692 1120970 cri.go:89] found id: ""
	I0729 19:48:12.012723 1120970 logs.go:276] 0 containers: []
	W0729 19:48:12.012732 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:48:12.012738 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:48:12.012801 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:48:12.049648 1120970 cri.go:89] found id: ""
	I0729 19:48:12.049684 1120970 logs.go:276] 0 containers: []
	W0729 19:48:12.049695 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:48:12.049704 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:48:12.049771 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:48:12.093629 1120970 cri.go:89] found id: ""
	I0729 19:48:12.093662 1120970 logs.go:276] 0 containers: []
	W0729 19:48:12.093673 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:48:12.093682 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:48:12.093752 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:48:12.130835 1120970 cri.go:89] found id: ""
	I0729 19:48:12.130887 1120970 logs.go:276] 0 containers: []
	W0729 19:48:12.130899 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:48:12.130912 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:48:12.130930 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:48:12.168464 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:48:12.168494 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:48:12.224722 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:48:12.224767 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:48:12.238454 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:48:12.238491 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:48:12.309122 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:48:12.309156 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:48:12.309171 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:48:10.195555 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:12.196093 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:11.638217 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:14.137267 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:11.778922 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:13.779268 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:14.892160 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:48:14.906036 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:48:14.906105 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:48:14.939106 1120970 cri.go:89] found id: ""
	I0729 19:48:14.939136 1120970 logs.go:276] 0 containers: []
	W0729 19:48:14.939144 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:48:14.939151 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:48:14.939218 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:48:14.973776 1120970 cri.go:89] found id: ""
	I0729 19:48:14.973806 1120970 logs.go:276] 0 containers: []
	W0729 19:48:14.973817 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:48:14.973825 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:48:14.973887 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:48:15.004448 1120970 cri.go:89] found id: ""
	I0729 19:48:15.004475 1120970 logs.go:276] 0 containers: []
	W0729 19:48:15.004483 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:48:15.004489 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:48:15.004556 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:48:15.038066 1120970 cri.go:89] found id: ""
	I0729 19:48:15.038093 1120970 logs.go:276] 0 containers: []
	W0729 19:48:15.038101 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:48:15.038110 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:48:15.038174 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:48:15.070539 1120970 cri.go:89] found id: ""
	I0729 19:48:15.070568 1120970 logs.go:276] 0 containers: []
	W0729 19:48:15.070577 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:48:15.070585 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:48:15.070646 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:48:15.103880 1120970 cri.go:89] found id: ""
	I0729 19:48:15.103922 1120970 logs.go:276] 0 containers: []
	W0729 19:48:15.103934 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:48:15.103943 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:48:15.104013 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:48:15.140762 1120970 cri.go:89] found id: ""
	I0729 19:48:15.140785 1120970 logs.go:276] 0 containers: []
	W0729 19:48:15.140792 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:48:15.140798 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:48:15.140850 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:48:15.174376 1120970 cri.go:89] found id: ""
	I0729 19:48:15.174411 1120970 logs.go:276] 0 containers: []
	W0729 19:48:15.174422 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:48:15.174434 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:48:15.174457 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:48:15.231283 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:48:15.231319 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:48:15.245103 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:48:15.245131 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:48:15.317664 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:48:15.317685 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:48:15.317701 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:48:15.404545 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:48:15.404600 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:48:17.949406 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:48:17.963001 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:48:17.963084 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:48:14.697767 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:17.194300 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:16.137773 1120280 pod_ready.go:102] pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:16.632390 1120280 pod_ready.go:81] duration metric: took 4m0.001130574s for pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace to be "Ready" ...
	E0729 19:48:16.632416 1120280 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-jsvnd" in "kube-system" namespace to be "Ready" (will not retry!)
	I0729 19:48:16.632439 1120280 pod_ready.go:38] duration metric: took 4m10.712020611s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 19:48:16.632469 1120280 kubeadm.go:597] duration metric: took 4m18.568642855s to restartPrimaryControlPlane
	W0729 19:48:16.632566 1120280 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0729 19:48:16.632597 1120280 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0729 19:48:16.279567 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:18.280676 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:20.779399 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:18.003227 1120970 cri.go:89] found id: ""
	I0729 19:48:18.003263 1120970 logs.go:276] 0 containers: []
	W0729 19:48:18.003274 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:48:18.003284 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:48:18.003363 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:48:18.037680 1120970 cri.go:89] found id: ""
	I0729 19:48:18.037716 1120970 logs.go:276] 0 containers: []
	W0729 19:48:18.037727 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:48:18.037736 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:48:18.037804 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:48:18.081360 1120970 cri.go:89] found id: ""
	I0729 19:48:18.081393 1120970 logs.go:276] 0 containers: []
	W0729 19:48:18.081403 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:48:18.081412 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:48:18.081479 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:48:18.115582 1120970 cri.go:89] found id: ""
	I0729 19:48:18.115619 1120970 logs.go:276] 0 containers: []
	W0729 19:48:18.115630 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:48:18.115639 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:48:18.115708 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:48:18.159771 1120970 cri.go:89] found id: ""
	I0729 19:48:18.159807 1120970 logs.go:276] 0 containers: []
	W0729 19:48:18.159818 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:48:18.159826 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:48:18.159899 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:48:18.206073 1120970 cri.go:89] found id: ""
	I0729 19:48:18.206100 1120970 logs.go:276] 0 containers: []
	W0729 19:48:18.206107 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:48:18.206113 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:48:18.206173 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:48:18.241841 1120970 cri.go:89] found id: ""
	I0729 19:48:18.241880 1120970 logs.go:276] 0 containers: []
	W0729 19:48:18.241892 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:48:18.241900 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:48:18.241969 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:48:18.280068 1120970 cri.go:89] found id: ""
	I0729 19:48:18.280099 1120970 logs.go:276] 0 containers: []
	W0729 19:48:18.280110 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:48:18.280123 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:48:18.280143 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:48:18.360236 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:48:18.360268 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:48:18.360285 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:48:18.447648 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:48:18.447693 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:48:18.489625 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:48:18.489663 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:48:18.543428 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:48:18.543476 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:48:21.058220 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:48:21.073079 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:48:21.073168 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:48:21.111334 1120970 cri.go:89] found id: ""
	I0729 19:48:21.111377 1120970 logs.go:276] 0 containers: []
	W0729 19:48:21.111389 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:48:21.111398 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:48:21.111462 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:48:21.144757 1120970 cri.go:89] found id: ""
	I0729 19:48:21.144788 1120970 logs.go:276] 0 containers: []
	W0729 19:48:21.144798 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:48:21.144806 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:48:21.144872 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:48:21.178887 1120970 cri.go:89] found id: ""
	I0729 19:48:21.178919 1120970 logs.go:276] 0 containers: []
	W0729 19:48:21.178927 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:48:21.178934 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:48:21.179000 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:48:21.216561 1120970 cri.go:89] found id: ""
	I0729 19:48:21.216589 1120970 logs.go:276] 0 containers: []
	W0729 19:48:21.216605 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:48:21.216612 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:48:21.216679 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:48:21.252564 1120970 cri.go:89] found id: ""
	I0729 19:48:21.252601 1120970 logs.go:276] 0 containers: []
	W0729 19:48:21.252612 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:48:21.252621 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:48:21.252692 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:48:21.287372 1120970 cri.go:89] found id: ""
	I0729 19:48:21.287399 1120970 logs.go:276] 0 containers: []
	W0729 19:48:21.287410 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:48:21.287418 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:48:21.287482 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:48:21.325121 1120970 cri.go:89] found id: ""
	I0729 19:48:21.325159 1120970 logs.go:276] 0 containers: []
	W0729 19:48:21.325169 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:48:21.325177 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:48:21.325248 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:48:21.359113 1120970 cri.go:89] found id: ""
	I0729 19:48:21.359145 1120970 logs.go:276] 0 containers: []
	W0729 19:48:21.359156 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:48:21.359169 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:48:21.359185 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:48:21.416196 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:48:21.416233 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:48:21.430635 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:48:21.430668 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:48:21.498436 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:48:21.498461 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:48:21.498478 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:48:21.578602 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:48:21.578643 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:48:19.195857 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:21.202391 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:23.696778 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:23.278313 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:25.279270 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:24.117802 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:48:24.132716 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:48:24.132796 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:48:24.168658 1120970 cri.go:89] found id: ""
	I0729 19:48:24.168689 1120970 logs.go:276] 0 containers: []
	W0729 19:48:24.168698 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:48:24.168703 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:48:24.168763 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:48:24.211499 1120970 cri.go:89] found id: ""
	I0729 19:48:24.211533 1120970 logs.go:276] 0 containers: []
	W0729 19:48:24.211543 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:48:24.211551 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:48:24.211622 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:48:24.244579 1120970 cri.go:89] found id: ""
	I0729 19:48:24.244607 1120970 logs.go:276] 0 containers: []
	W0729 19:48:24.244616 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:48:24.244622 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:48:24.244680 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:48:24.278356 1120970 cri.go:89] found id: ""
	I0729 19:48:24.278386 1120970 logs.go:276] 0 containers: []
	W0729 19:48:24.278396 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:48:24.278404 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:48:24.278469 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:48:24.314725 1120970 cri.go:89] found id: ""
	I0729 19:48:24.314760 1120970 logs.go:276] 0 containers: []
	W0729 19:48:24.314771 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:48:24.314779 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:48:24.314870 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:48:24.349743 1120970 cri.go:89] found id: ""
	I0729 19:48:24.349772 1120970 logs.go:276] 0 containers: []
	W0729 19:48:24.349781 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:48:24.349788 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:48:24.349863 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:48:24.382484 1120970 cri.go:89] found id: ""
	I0729 19:48:24.382511 1120970 logs.go:276] 0 containers: []
	W0729 19:48:24.382521 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:48:24.382529 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:48:24.382606 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:48:24.418986 1120970 cri.go:89] found id: ""
	I0729 19:48:24.419013 1120970 logs.go:276] 0 containers: []
	W0729 19:48:24.419020 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:48:24.419030 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:48:24.419052 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:48:24.456725 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:48:24.456762 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:48:24.508592 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:48:24.508628 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:48:24.521610 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:48:24.521642 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:48:24.591015 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:48:24.591041 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:48:24.591058 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:48:27.170099 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:48:27.183543 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:48:27.183619 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:48:27.218044 1120970 cri.go:89] found id: ""
	I0729 19:48:27.218075 1120970 logs.go:276] 0 containers: []
	W0729 19:48:27.218083 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:48:27.218090 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:48:27.218154 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:48:27.251613 1120970 cri.go:89] found id: ""
	I0729 19:48:27.251638 1120970 logs.go:276] 0 containers: []
	W0729 19:48:27.251646 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:48:27.251651 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:48:27.251707 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:48:27.291540 1120970 cri.go:89] found id: ""
	I0729 19:48:27.291569 1120970 logs.go:276] 0 containers: []
	W0729 19:48:27.291578 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:48:27.291586 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:48:27.291650 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:48:27.322921 1120970 cri.go:89] found id: ""
	I0729 19:48:27.322956 1120970 logs.go:276] 0 containers: []
	W0729 19:48:27.322965 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:48:27.322973 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:48:27.323042 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:48:27.360337 1120970 cri.go:89] found id: ""
	I0729 19:48:27.360370 1120970 logs.go:276] 0 containers: []
	W0729 19:48:27.360381 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:48:27.360389 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:48:27.360448 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:48:27.398445 1120970 cri.go:89] found id: ""
	I0729 19:48:27.398490 1120970 logs.go:276] 0 containers: []
	W0729 19:48:27.398502 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:48:27.398510 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:48:27.398577 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:48:27.432147 1120970 cri.go:89] found id: ""
	I0729 19:48:27.432176 1120970 logs.go:276] 0 containers: []
	W0729 19:48:27.432184 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:48:27.432191 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:48:27.432260 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:48:27.471347 1120970 cri.go:89] found id: ""
	I0729 19:48:27.471380 1120970 logs.go:276] 0 containers: []
	W0729 19:48:27.471392 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:48:27.471404 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:48:27.471421 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:48:27.526997 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:48:27.527032 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:48:27.541189 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:48:27.541219 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:48:27.612270 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:48:27.612293 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:48:27.612310 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:48:27.688940 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:48:27.688979 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
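The block above is one iteration of minikube's log-gathering loop: it probes each control-plane component by container name with crictl, then collects kubelet, dmesg, CRI-O, and container-status output. Condensed from the Run: lines above into a standalone sketch (a restatement for readability, not additional log output):

	# Probe a component by container name; an empty result is what the
	# 'found id: ""' entries above report.
	sudo crictl ps -a --quiet --name=kube-apiserver
	# Supporting logs gathered by the same loop:
	sudo journalctl -u kubelet -n 400
	sudo journalctl -u crio -n 400
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	# Simplified form of the "container status" step:
	sudo crictl ps -a || sudo docker ps -a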
	I0729 19:48:26.195903 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:28.696936 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:27.778151 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:30.278900 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:30.228578 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:48:30.241827 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:48:30.241896 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:48:30.275201 1120970 cri.go:89] found id: ""
	I0729 19:48:30.275230 1120970 logs.go:276] 0 containers: []
	W0729 19:48:30.275241 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:48:30.275249 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:48:30.275305 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:48:30.313499 1120970 cri.go:89] found id: ""
	I0729 19:48:30.313526 1120970 logs.go:276] 0 containers: []
	W0729 19:48:30.313534 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:48:30.313540 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:48:30.313593 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:48:30.348036 1120970 cri.go:89] found id: ""
	I0729 19:48:30.348063 1120970 logs.go:276] 0 containers: []
	W0729 19:48:30.348072 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:48:30.348078 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:48:30.348148 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:48:30.383104 1120970 cri.go:89] found id: ""
	I0729 19:48:30.383135 1120970 logs.go:276] 0 containers: []
	W0729 19:48:30.383147 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:48:30.383155 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:48:30.383244 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:48:30.421367 1120970 cri.go:89] found id: ""
	I0729 19:48:30.421395 1120970 logs.go:276] 0 containers: []
	W0729 19:48:30.421404 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:48:30.421418 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:48:30.421484 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:48:30.460712 1120970 cri.go:89] found id: ""
	I0729 19:48:30.460746 1120970 logs.go:276] 0 containers: []
	W0729 19:48:30.460758 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:48:30.460767 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:48:30.460832 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:48:30.503728 1120970 cri.go:89] found id: ""
	I0729 19:48:30.503757 1120970 logs.go:276] 0 containers: []
	W0729 19:48:30.503769 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:48:30.503777 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:48:30.503842 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:48:30.544605 1120970 cri.go:89] found id: ""
	I0729 19:48:30.544639 1120970 logs.go:276] 0 containers: []
	W0729 19:48:30.544651 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:48:30.544663 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:48:30.544680 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:48:30.559616 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:48:30.559652 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:48:30.634554 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:48:30.634578 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:48:30.634599 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:48:30.717930 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:48:30.717968 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:48:30.759109 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:48:30.759140 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:48:31.194967 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:33.195033 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:32.777218 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:34.777917 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:33.313550 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:48:33.327425 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:48:33.327483 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:48:33.369009 1120970 cri.go:89] found id: ""
	I0729 19:48:33.369037 1120970 logs.go:276] 0 containers: []
	W0729 19:48:33.369047 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:48:33.369054 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:48:33.369121 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:48:33.406459 1120970 cri.go:89] found id: ""
	I0729 19:48:33.406491 1120970 logs.go:276] 0 containers: []
	W0729 19:48:33.406501 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:48:33.406509 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:48:33.406579 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:48:33.444176 1120970 cri.go:89] found id: ""
	I0729 19:48:33.444210 1120970 logs.go:276] 0 containers: []
	W0729 19:48:33.444222 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:48:33.444230 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:48:33.444297 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:48:33.482882 1120970 cri.go:89] found id: ""
	I0729 19:48:33.482977 1120970 logs.go:276] 0 containers: []
	W0729 19:48:33.482994 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:48:33.483002 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:48:33.483070 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:48:33.516972 1120970 cri.go:89] found id: ""
	I0729 19:48:33.516999 1120970 logs.go:276] 0 containers: []
	W0729 19:48:33.517009 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:48:33.517015 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:48:33.517077 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:48:33.557559 1120970 cri.go:89] found id: ""
	I0729 19:48:33.557598 1120970 logs.go:276] 0 containers: []
	W0729 19:48:33.557620 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:48:33.557629 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:48:33.557699 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:48:33.592756 1120970 cri.go:89] found id: ""
	I0729 19:48:33.592786 1120970 logs.go:276] 0 containers: []
	W0729 19:48:33.592793 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:48:33.592799 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:48:33.592858 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:48:33.626104 1120970 cri.go:89] found id: ""
	I0729 19:48:33.626136 1120970 logs.go:276] 0 containers: []
	W0729 19:48:33.626147 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:48:33.626158 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:48:33.626175 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:48:33.680456 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:48:33.680498 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:48:33.694700 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:48:33.694732 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:48:33.770833 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:48:33.770863 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:48:33.770881 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:48:33.847537 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:48:33.847571 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:48:36.390251 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:48:36.403265 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:48:36.403377 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:48:36.437189 1120970 cri.go:89] found id: ""
	I0729 19:48:36.437216 1120970 logs.go:276] 0 containers: []
	W0729 19:48:36.437227 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:48:36.437235 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:48:36.437296 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:48:36.471025 1120970 cri.go:89] found id: ""
	I0729 19:48:36.471056 1120970 logs.go:276] 0 containers: []
	W0729 19:48:36.471067 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:48:36.471083 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:48:36.471143 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:48:36.504736 1120970 cri.go:89] found id: ""
	I0729 19:48:36.504767 1120970 logs.go:276] 0 containers: []
	W0729 19:48:36.504779 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:48:36.504787 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:48:36.504852 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:48:36.537866 1120970 cri.go:89] found id: ""
	I0729 19:48:36.537893 1120970 logs.go:276] 0 containers: []
	W0729 19:48:36.537903 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:48:36.537911 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:48:36.537974 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:48:36.574083 1120970 cri.go:89] found id: ""
	I0729 19:48:36.574116 1120970 logs.go:276] 0 containers: []
	W0729 19:48:36.574127 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:48:36.574136 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:48:36.574199 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:48:36.613130 1120970 cri.go:89] found id: ""
	I0729 19:48:36.613160 1120970 logs.go:276] 0 containers: []
	W0729 19:48:36.613172 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:48:36.613179 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:48:36.613244 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:48:36.649617 1120970 cri.go:89] found id: ""
	I0729 19:48:36.649644 1120970 logs.go:276] 0 containers: []
	W0729 19:48:36.649655 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:48:36.649663 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:48:36.649731 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:48:36.688729 1120970 cri.go:89] found id: ""
	I0729 19:48:36.688765 1120970 logs.go:276] 0 containers: []
	W0729 19:48:36.688777 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:48:36.688790 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:48:36.688807 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:48:36.741483 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:48:36.741524 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:48:36.759730 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:48:36.759777 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:48:36.847102 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:48:36.847129 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:48:36.847148 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:48:36.928364 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:48:36.928403 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:48:35.695788 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:38.195691 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:36.780250 1120587 pod_ready.go:102] pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:38.272543 1120587 pod_ready.go:81] duration metric: took 4m0.000382733s for pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace to be "Ready" ...
	E0729 19:48:38.272574 1120587 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-bvkv6" in "kube-system" namespace to be "Ready" (will not retry!)
	I0729 19:48:38.272595 1120587 pod_ready.go:38] duration metric: took 4m12.412522427s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 19:48:38.272622 1120587 kubeadm.go:597] duration metric: took 4m20.569295588s to restartPrimaryControlPlane
	W0729 19:48:38.272693 1120587 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0729 19:48:38.272722 1120587 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
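At this point the 4m0s wait for metrics-server-569cc877fc-bvkv6 has expired, so minikube gives up restarting the existing control plane and falls back to a full reset before re-initializing. The reset invocation from the Run: line above, reflowed for readability (same command, /bin/bash -c wrapper dropped):

	sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" \
	  kubeadm reset --cri-socket /var/run/crio/crio.sock --force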
	I0729 19:48:39.468501 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:48:39.482102 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:48:39.482180 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:48:39.522722 1120970 cri.go:89] found id: ""
	I0729 19:48:39.522754 1120970 logs.go:276] 0 containers: []
	W0729 19:48:39.522763 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:48:39.522769 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:48:39.522824 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:48:39.561057 1120970 cri.go:89] found id: ""
	I0729 19:48:39.561088 1120970 logs.go:276] 0 containers: []
	W0729 19:48:39.561098 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:48:39.561106 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:48:39.561185 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:48:39.599802 1120970 cri.go:89] found id: ""
	I0729 19:48:39.599831 1120970 logs.go:276] 0 containers: []
	W0729 19:48:39.599840 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:48:39.599848 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:48:39.599920 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:48:39.634935 1120970 cri.go:89] found id: ""
	I0729 19:48:39.634966 1120970 logs.go:276] 0 containers: []
	W0729 19:48:39.634978 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:48:39.634986 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:48:39.635054 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:48:39.670682 1120970 cri.go:89] found id: ""
	I0729 19:48:39.670713 1120970 logs.go:276] 0 containers: []
	W0729 19:48:39.670721 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:48:39.670728 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:48:39.670798 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:48:39.705988 1120970 cri.go:89] found id: ""
	I0729 19:48:39.706024 1120970 logs.go:276] 0 containers: []
	W0729 19:48:39.706034 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:48:39.706042 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:48:39.706112 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:48:39.743886 1120970 cri.go:89] found id: ""
	I0729 19:48:39.743919 1120970 logs.go:276] 0 containers: []
	W0729 19:48:39.743931 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:48:39.743938 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:48:39.744007 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:48:39.781966 1120970 cri.go:89] found id: ""
	I0729 19:48:39.782000 1120970 logs.go:276] 0 containers: []
	W0729 19:48:39.782011 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:48:39.782023 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:48:39.782040 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:48:39.836034 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:48:39.836074 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:48:39.849330 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:48:39.849365 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:48:39.922803 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:48:39.922832 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:48:39.922860 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:48:40.006015 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:48:40.006061 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 19:48:42.556277 1120970 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:48:42.569657 1120970 kubeadm.go:597] duration metric: took 4m2.867642237s to restartPrimaryControlPlane
	W0729 19:48:42.569742 1120970 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0729 19:48:42.569773 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0729 19:48:40.695917 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:43.195442 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:43.033878 1120970 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 19:48:43.048499 1120970 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 19:48:43.058936 1120970 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 19:48:43.070746 1120970 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 19:48:43.070766 1120970 kubeadm.go:157] found existing configuration files:
	
	I0729 19:48:43.070814 1120970 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 19:48:43.079568 1120970 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 19:48:43.079631 1120970 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 19:48:43.088576 1120970 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 19:48:43.097654 1120970 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 19:48:43.097723 1120970 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 19:48:43.107155 1120970 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 19:48:43.117105 1120970 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 19:48:43.117152 1120970 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 19:48:43.126933 1120970 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 19:48:43.136114 1120970 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 19:48:43.136162 1120970 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
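The sequence above is minikube's stale-kubeconfig cleanup: each file under /etc/kubernetes is grepped for the control-plane endpoint and removed when the check fails (here grep exits with status 2 because the files are already absent). Condensed into a loop for readability; this is a sketch of the logged per-file commands, not the actual implementation:

	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/$f \
	    || sudo rm -f /etc/kubernetes/$f
	done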
	I0729 19:48:43.145196 1120970 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 19:48:43.365894 1120970 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
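The kubeadm init started above is a single long line in the log; reflowed here for readability (same command, /bin/bash -c wrapper dropped), followed by the fix the preflight warning suggests:

	sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init \
	  --config /var/tmp/minikube/kubeadm.yaml \
	  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem
	# Suggested by the [WARNING Service-Kubelet] line above:
	sudo systemctl enable kubelet.service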
	I0729 19:48:45.695643 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:47.696055 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:48.051556 1120280 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.418935975s)
	I0729 19:48:48.051634 1120280 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 19:48:48.066832 1120280 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 19:48:48.076768 1120280 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 19:48:48.086203 1120280 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 19:48:48.086224 1120280 kubeadm.go:157] found existing configuration files:
	
	I0729 19:48:48.086269 1120280 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 19:48:48.095286 1120280 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 19:48:48.095344 1120280 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 19:48:48.104238 1120280 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 19:48:48.113232 1120280 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 19:48:48.113287 1120280 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 19:48:48.122679 1120280 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 19:48:48.131511 1120280 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 19:48:48.131565 1120280 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 19:48:48.140110 1120280 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 19:48:48.148601 1120280 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 19:48:48.148650 1120280 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 19:48:48.157410 1120280 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 19:48:48.352715 1120280 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 19:48:50.195418 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:52.696285 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:56.332520 1120280 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0729 19:48:56.332571 1120280 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 19:48:56.332675 1120280 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 19:48:56.332770 1120280 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 19:48:56.332853 1120280 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0729 19:48:56.332967 1120280 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 19:48:56.334322 1120280 out.go:204]   - Generating certificates and keys ...
	I0729 19:48:56.334409 1120280 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 19:48:56.334490 1120280 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 19:48:56.334605 1120280 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0729 19:48:56.334688 1120280 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0729 19:48:56.334798 1120280 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0729 19:48:56.334897 1120280 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0729 19:48:56.334984 1120280 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0729 19:48:56.335060 1120280 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0729 19:48:56.335161 1120280 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0729 19:48:56.335270 1120280 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0729 19:48:56.335324 1120280 kubeadm.go:310] [certs] Using the existing "sa" key
	I0729 19:48:56.335374 1120280 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 19:48:56.335423 1120280 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 19:48:56.335473 1120280 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0729 19:48:56.335532 1120280 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 19:48:56.335614 1120280 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 19:48:56.335675 1120280 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 19:48:56.335785 1120280 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 19:48:56.335884 1120280 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 19:48:56.336979 1120280 out.go:204]   - Booting up control plane ...
	I0729 19:48:56.337065 1120280 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 19:48:56.337133 1120280 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 19:48:56.337201 1120280 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 19:48:56.337326 1120280 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 19:48:56.337427 1120280 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 19:48:56.337498 1120280 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 19:48:56.337647 1120280 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0729 19:48:56.337714 1120280 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0729 19:48:56.337762 1120280 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.952649ms
	I0729 19:48:56.337821 1120280 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0729 19:48:56.337868 1120280 kubeadm.go:310] [api-check] The API server is healthy after 5.002178003s
	I0729 19:48:56.337955 1120280 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0729 19:48:56.338084 1120280 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0729 19:48:56.338139 1120280 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0729 19:48:56.338289 1120280 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-358053 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0729 19:48:56.338342 1120280 kubeadm.go:310] [bootstrap-token] Using token: 4fomec.1511vtef88eg64ao
	I0729 19:48:56.339522 1120280 out.go:204]   - Configuring RBAC rules ...
	I0729 19:48:56.339612 1120280 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0729 19:48:56.339681 1120280 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0729 19:48:56.339857 1120280 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0729 19:48:56.339995 1120280 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0729 19:48:56.340156 1120280 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0729 19:48:56.340283 1120280 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0729 19:48:56.340438 1120280 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0729 19:48:56.340511 1120280 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0729 19:48:56.340575 1120280 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0729 19:48:56.340585 1120280 kubeadm.go:310] 
	I0729 19:48:56.340671 1120280 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0729 19:48:56.340681 1120280 kubeadm.go:310] 
	I0729 19:48:56.340762 1120280 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0729 19:48:56.340781 1120280 kubeadm.go:310] 
	I0729 19:48:56.340812 1120280 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0729 19:48:56.340861 1120280 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0729 19:48:56.340904 1120280 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0729 19:48:56.340907 1120280 kubeadm.go:310] 
	I0729 19:48:56.340972 1120280 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0729 19:48:56.340978 1120280 kubeadm.go:310] 
	I0729 19:48:56.341034 1120280 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0729 19:48:56.341038 1120280 kubeadm.go:310] 
	I0729 19:48:56.341083 1120280 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0729 19:48:56.341151 1120280 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0729 19:48:56.341209 1120280 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0729 19:48:56.341219 1120280 kubeadm.go:310] 
	I0729 19:48:56.341285 1120280 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0729 19:48:56.341369 1120280 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0729 19:48:56.341376 1120280 kubeadm.go:310] 
	I0729 19:48:56.341454 1120280 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 4fomec.1511vtef88eg64ao \
	I0729 19:48:56.341602 1120280 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:53e118302d1159bf0acd8cfe005edb508199276288ff17d6630ff7f7307f10b1 \
	I0729 19:48:56.341636 1120280 kubeadm.go:310] 	--control-plane 
	I0729 19:48:56.341642 1120280 kubeadm.go:310] 
	I0729 19:48:56.341752 1120280 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0729 19:48:56.341769 1120280 kubeadm.go:310] 
	I0729 19:48:56.341886 1120280 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 4fomec.1511vtef88eg64ao \
	I0729 19:48:56.342018 1120280 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:53e118302d1159bf0acd8cfe005edb508199276288ff17d6630ff7f7307f10b1 
	I0729 19:48:56.342034 1120280 cni.go:84] Creating CNI manager for ""
	I0729 19:48:56.342044 1120280 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 19:48:56.343241 1120280 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 19:48:55.195151 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:57.195200 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:48:56.344247 1120280 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 19:48:56.355941 1120280 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
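With the kvm2 driver and the crio runtime, minikube configures the bridge CNI and copies a conflist into /etc/cni/net.d (the 496-byte scp above). The exact file contents are not included in the log; a generic bridge conflist of the same kind is shown below for illustration only (the subnet and field values are assumptions, not taken from this run):

	sudo tee /etc/cni/net.d/1-k8s.conflist <<'EOF'
	{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}
	EOF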
	I0729 19:48:56.377835 1120280 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0729 19:48:56.377932 1120280 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:48:56.377958 1120280 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-358053 minikube.k8s.io/updated_at=2024_07_29T19_48_56_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=ff26f50543a99741e8a988a34136ffea2b41dbf0 minikube.k8s.io/name=embed-certs-358053 minikube.k8s.io/primary=true
	I0729 19:48:56.394308 1120280 ops.go:34] apiserver oom_adj: -16
	I0729 19:48:56.575183 1120280 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:48:57.076094 1120280 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:48:57.575985 1120280 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:48:58.075805 1120280 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:48:58.576183 1120280 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:48:59.075390 1120280 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:48:59.576159 1120280 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:48:59.195343 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:49:01.696180 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:49:00.075628 1120280 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:00.575675 1120280 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:01.075529 1120280 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:01.576070 1120280 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:02.076065 1120280 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:02.575283 1120280 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:03.076139 1120280 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:03.575717 1120280 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:04.076142 1120280 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:04.575998 1120280 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:04.194697 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:49:06.195094 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:49:08.695788 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:49:05.075222 1120280 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:05.575723 1120280 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:06.075652 1120280 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:06.575680 1120280 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:07.075645 1120280 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:07.575900 1120280 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:08.075951 1120280 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:08.576178 1120280 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:09.076094 1120280 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:09.575480 1120280 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:10.075954 1120280 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:10.185328 1120280 kubeadm.go:1113] duration metric: took 13.807462033s to wait for elevateKubeSystemPrivileges
	I0729 19:49:10.185372 1120280 kubeadm.go:394] duration metric: took 5m12.173830361s to StartCluster
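The run of "kubectl get sa default" entries above is minikube polling for the default ServiceAccount; once it exists, the elevateKubeSystemPrivileges and StartCluster durations are recorded and cluster setup continues. A condensed sketch of that polling loop (the real code retries on a timer; the ~500ms interval is inferred from the timestamps above):

	until sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default \
	      --kubeconfig=/var/lib/minikube/kubeconfig; do
	  sleep 0.5
	done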
	I0729 19:49:10.185408 1120280 settings.go:142] acquiring lock: {Name:mk8657322241b3b1f65443d6cee1b2ccb99f315e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 19:49:10.185614 1120280 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19312-1055011/kubeconfig
	I0729 19:49:10.188419 1120280 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-1055011/kubeconfig: {Name:mkf834b33d9b214f3561db5b8f8958d26700afbb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 19:49:10.188761 1120280 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.201 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 19:49:10.188839 1120280 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0729 19:49:10.188929 1120280 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-358053"
	I0729 19:49:10.188939 1120280 config.go:182] Loaded profile config "embed-certs-358053": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 19:49:10.188968 1120280 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-358053"
	I0729 19:49:10.188957 1120280 addons.go:69] Setting default-storageclass=true in profile "embed-certs-358053"
	W0729 19:49:10.188978 1120280 addons.go:243] addon storage-provisioner should already be in state true
	I0729 19:49:10.188967 1120280 addons.go:69] Setting metrics-server=true in profile "embed-certs-358053"
	I0729 19:49:10.189017 1120280 addons.go:234] Setting addon metrics-server=true in "embed-certs-358053"
	I0729 19:49:10.189016 1120280 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-358053"
	I0729 19:49:10.189023 1120280 host.go:66] Checking if "embed-certs-358053" exists ...
	W0729 19:49:10.189026 1120280 addons.go:243] addon metrics-server should already be in state true
	I0729 19:49:10.189059 1120280 host.go:66] Checking if "embed-certs-358053" exists ...
	I0729 19:49:10.189460 1120280 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:49:10.189461 1120280 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:49:10.189493 1120280 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:49:10.189464 1120280 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:49:10.189513 1120280 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:49:10.189539 1120280 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:49:10.192359 1120280 out.go:177] * Verifying Kubernetes components...
	I0729 19:49:10.193480 1120280 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 19:49:10.210772 1120280 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43059
	I0729 19:49:10.210789 1120280 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37187
	I0729 19:49:10.210777 1120280 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43007
	I0729 19:49:10.211410 1120280 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:49:10.211444 1120280 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:49:10.211415 1120280 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:49:10.211943 1120280 main.go:141] libmachine: Using API Version  1
	I0729 19:49:10.211961 1120280 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:49:10.212067 1120280 main.go:141] libmachine: Using API Version  1
	I0729 19:49:10.212082 1120280 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:49:10.212104 1120280 main.go:141] libmachine: Using API Version  1
	I0729 19:49:10.212129 1120280 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:49:10.212485 1120280 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:49:10.212490 1120280 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:49:10.212517 1120280 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:49:10.213028 1120280 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:49:10.213061 1120280 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:49:10.213275 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetState
	I0729 19:49:10.213666 1120280 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:49:10.213693 1120280 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:49:10.217668 1120280 addons.go:234] Setting addon default-storageclass=true in "embed-certs-358053"
	W0729 19:49:10.217694 1120280 addons.go:243] addon default-storageclass should already be in state true
	I0729 19:49:10.217729 1120280 host.go:66] Checking if "embed-certs-358053" exists ...
	I0729 19:49:10.218106 1120280 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:49:10.218134 1120280 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:49:10.233308 1120280 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34717
	I0729 19:49:10.233515 1120280 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45983
	I0729 19:49:10.233923 1120280 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:49:10.234065 1120280 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:49:10.234486 1120280 main.go:141] libmachine: Using API Version  1
	I0729 19:49:10.234511 1120280 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:49:10.234622 1120280 main.go:141] libmachine: Using API Version  1
	I0729 19:49:10.234646 1120280 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:49:10.234881 1120280 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:49:10.235095 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetState
	I0729 19:49:10.235124 1120280 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:49:10.236407 1120280 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37239
	I0729 19:49:10.236417 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetState
	I0729 19:49:10.236976 1120280 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:49:10.237510 1120280 main.go:141] libmachine: Using API Version  1
	I0729 19:49:10.237529 1120280 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:49:10.237603 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .DriverName
	I0729 19:49:10.238068 1120280 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:49:10.238462 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .DriverName
	I0729 19:49:10.238685 1120280 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:49:10.238717 1120280 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:49:10.239583 1120280 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0729 19:49:10.240247 1120280 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 19:49:09.758990 1120587 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.486239671s)
	I0729 19:49:09.759083 1120587 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 19:49:09.774752 1120587 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 19:49:09.785968 1120587 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 19:49:09.796242 1120587 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 19:49:09.796267 1120587 kubeadm.go:157] found existing configuration files:
	
	I0729 19:49:09.796320 1120587 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0729 19:49:09.805373 1120587 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 19:49:09.805446 1120587 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 19:49:09.814418 1120587 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0729 19:49:09.822923 1120587 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 19:49:09.822977 1120587 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 19:49:09.831784 1120587 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0729 19:49:09.840631 1120587 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 19:49:09.840670 1120587 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 19:49:09.850149 1120587 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0729 19:49:09.858648 1120587 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 19:49:09.858685 1120587 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
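The cleanup above (kubeadm.go:157-163) decides whether an existing kubeconfig file is stale by grepping it for the expected control-plane endpoint and deleting it on a miss. Below is a minimal local sketch of that decision, not minikube's actual code: it assumes direct file access instead of minikube's ssh_runner, and the endpoint string is simply the one visible in this log.

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    // removeIfStale deletes a kubeconfig-style file unless it already references
    // the expected control-plane endpoint. A missing file is treated as "nothing
    // to clean up", mirroring the "No such file or directory" branches above.
    func removeIfStale(path, endpoint string) error {
    	data, err := os.ReadFile(path)
    	if os.IsNotExist(err) {
    		return nil
    	}
    	if err != nil {
    		return err
    	}
    	if strings.Contains(string(data), endpoint) {
    		return nil // still points at the right endpoint, keep it
    	}
    	return os.Remove(path)
    }

    func main() {
    	endpoint := "https://control-plane.minikube.internal:8444"
    	for _, f := range []string{
    		"/etc/kubernetes/admin.conf",
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	} {
    		if err := removeIfStale(f, endpoint); err != nil {
    			fmt.Fprintln(os.Stderr, "cleanup:", f, err)
    		}
    	}
    }
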
	I0729 19:49:09.868191 1120587 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 19:49:09.918324 1120587 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0729 19:49:09.918439 1120587 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 19:49:10.082807 1120587 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 19:49:10.082977 1120587 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 19:49:10.083133 1120587 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0729 19:49:10.346327 1120587 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 19:49:10.347784 1120587 out.go:204]   - Generating certificates and keys ...
	I0729 19:49:10.347895 1120587 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 19:49:10.347974 1120587 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 19:49:10.348065 1120587 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0729 19:49:10.348152 1120587 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0729 19:49:10.348236 1120587 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0729 19:49:10.348312 1120587 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0729 19:49:10.348395 1120587 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0729 19:49:10.348479 1120587 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0729 19:49:10.348573 1120587 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0729 19:49:10.348672 1120587 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0729 19:49:10.348726 1120587 kubeadm.go:310] [certs] Using the existing "sa" key
	I0729 19:49:10.348806 1120587 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 19:49:10.558934 1120587 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 19:49:10.733434 1120587 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0729 19:49:11.026079 1120587 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 19:49:11.159826 1120587 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 19:49:11.277696 1120587 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 19:49:11.278383 1120587 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 19:49:11.281036 1120587 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 19:49:10.240921 1120280 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0729 19:49:10.240936 1120280 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0729 19:49:10.240952 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHHostname
	I0729 19:49:10.241651 1120280 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 19:49:10.241674 1120280 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0729 19:49:10.241693 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHHostname
	I0729 19:49:10.245407 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:49:10.245440 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:49:10.245923 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:9e:78", ip: ""} in network mk-embed-certs-358053: {Iface:virbr3 ExpiryTime:2024-07-29 20:43:42 +0000 UTC Type:0 Mac:52:54:00:b7:9e:78 Iaid: IPaddr:192.168.61.201 Prefix:24 Hostname:embed-certs-358053 Clientid:01:52:54:00:b7:9e:78}
	I0729 19:49:10.245922 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:9e:78", ip: ""} in network mk-embed-certs-358053: {Iface:virbr3 ExpiryTime:2024-07-29 20:43:42 +0000 UTC Type:0 Mac:52:54:00:b7:9e:78 Iaid: IPaddr:192.168.61.201 Prefix:24 Hostname:embed-certs-358053 Clientid:01:52:54:00:b7:9e:78}
	I0729 19:49:10.245947 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined IP address 192.168.61.201 and MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:49:10.245967 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined IP address 192.168.61.201 and MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:49:10.246145 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHPort
	I0729 19:49:10.246329 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHKeyPath
	I0729 19:49:10.246372 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHPort
	I0729 19:49:10.246511 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHUsername
	I0729 19:49:10.246672 1120280 sshutil.go:53] new ssh client: &{IP:192.168.61.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/embed-certs-358053/id_rsa Username:docker}
	I0729 19:49:10.246688 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHKeyPath
	I0729 19:49:10.246866 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHUsername
	I0729 19:49:10.246988 1120280 sshutil.go:53] new ssh client: &{IP:192.168.61.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/embed-certs-358053/id_rsa Username:docker}
	I0729 19:49:10.256682 1120280 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43353
	I0729 19:49:10.257146 1120280 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:49:10.257747 1120280 main.go:141] libmachine: Using API Version  1
	I0729 19:49:10.257760 1120280 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:49:10.258021 1120280 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:49:10.258264 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetState
	I0729 19:49:10.260096 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .DriverName
	I0729 19:49:10.260305 1120280 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0729 19:49:10.260322 1120280 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0729 19:49:10.260341 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHHostname
	I0729 19:49:10.263479 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:49:10.263914 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:9e:78", ip: ""} in network mk-embed-certs-358053: {Iface:virbr3 ExpiryTime:2024-07-29 20:43:42 +0000 UTC Type:0 Mac:52:54:00:b7:9e:78 Iaid: IPaddr:192.168.61.201 Prefix:24 Hostname:embed-certs-358053 Clientid:01:52:54:00:b7:9e:78}
	I0729 19:49:10.263942 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | domain embed-certs-358053 has defined IP address 192.168.61.201 and MAC address 52:54:00:b7:9e:78 in network mk-embed-certs-358053
	I0729 19:49:10.264099 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHPort
	I0729 19:49:10.264270 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHKeyPath
	I0729 19:49:10.264457 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .GetSSHUsername
	I0729 19:49:10.264566 1120280 sshutil.go:53] new ssh client: &{IP:192.168.61.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/embed-certs-358053/id_rsa Username:docker}
	I0729 19:49:10.461598 1120280 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 19:49:10.483007 1120280 node_ready.go:35] waiting up to 6m0s for node "embed-certs-358053" to be "Ready" ...
	I0729 19:49:10.492573 1120280 node_ready.go:49] node "embed-certs-358053" has status "Ready":"True"
	I0729 19:49:10.492601 1120280 node_ready.go:38] duration metric: took 9.562848ms for node "embed-certs-358053" to be "Ready" ...
	I0729 19:49:10.492611 1120280 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 19:49:10.498908 1120280 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-62wzl" in "kube-system" namespace to be "Ready" ...
	I0729 19:49:10.574473 1120280 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0729 19:49:10.574500 1120280 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0729 19:49:10.596936 1120280 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 19:49:10.598355 1120280 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0729 19:49:10.598373 1120280 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0729 19:49:10.618403 1120280 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 19:49:10.618430 1120280 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0729 19:49:10.642761 1120280 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 19:49:10.717699 1120280 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0729 19:49:11.218300 1120280 main.go:141] libmachine: Making call to close driver server
	I0729 19:49:11.218321 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .Close
	I0729 19:49:11.218615 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | Closing plugin on server side
	I0729 19:49:11.218664 1120280 main.go:141] libmachine: Successfully made call to close driver server
	I0729 19:49:11.218676 1120280 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 19:49:11.218687 1120280 main.go:141] libmachine: Making call to close driver server
	I0729 19:49:11.218695 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .Close
	I0729 19:49:11.219043 1120280 main.go:141] libmachine: Successfully made call to close driver server
	I0729 19:49:11.219060 1120280 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 19:49:11.758222 1120280 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.115410935s)
	I0729 19:49:11.758294 1120280 main.go:141] libmachine: Making call to close driver server
	I0729 19:49:11.758311 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .Close
	I0729 19:49:11.758416 1120280 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.040630579s)
	I0729 19:49:11.758489 1120280 main.go:141] libmachine: Making call to close driver server
	I0729 19:49:11.758534 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .Close
	I0729 19:49:11.758645 1120280 main.go:141] libmachine: Successfully made call to close driver server
	I0729 19:49:11.758666 1120280 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 19:49:11.758677 1120280 main.go:141] libmachine: Making call to close driver server
	I0729 19:49:11.758684 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .Close
	I0729 19:49:11.759085 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | Closing plugin on server side
	I0729 19:49:11.759123 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | Closing plugin on server side
	I0729 19:49:11.759133 1120280 main.go:141] libmachine: Successfully made call to close driver server
	I0729 19:49:11.759140 1120280 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 19:49:11.759151 1120280 addons.go:475] Verifying addon metrics-server=true in "embed-certs-358053"
	I0729 19:49:11.759242 1120280 main.go:141] libmachine: Successfully made call to close driver server
	I0729 19:49:11.759251 1120280 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 19:49:11.759265 1120280 main.go:141] libmachine: Making call to close driver server
	I0729 19:49:11.759273 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .Close
	I0729 19:49:11.759556 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | Closing plugin on server side
	I0729 19:49:11.759551 1120280 main.go:141] libmachine: Successfully made call to close driver server
	I0729 19:49:11.759576 1120280 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 19:49:11.821869 1120280 main.go:141] libmachine: Making call to close driver server
	I0729 19:49:11.821904 1120280 main.go:141] libmachine: (embed-certs-358053) Calling .Close
	I0729 19:49:11.822218 1120280 main.go:141] libmachine: Successfully made call to close driver server
	I0729 19:49:11.822239 1120280 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 19:49:11.822278 1120280 main.go:141] libmachine: (embed-certs-358053) DBG | Closing plugin on server side
	I0729 19:49:11.825097 1120280 out.go:177] * Enabled addons: storage-provisioner, metrics-server, default-storageclass
	I0729 19:49:10.696468 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:49:12.696754 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:49:11.826501 1120280 addons.go:510] duration metric: took 1.63766283s for enable addons: enabled=[storage-provisioner metrics-server default-storageclass]
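The addon manifests above are copied onto the node and then applied with the node-local kubectl binary and kubeconfig. A rough standalone equivalent of that apply step follows; it is only a sketch, assuming the same paths shown in the logged command and that it runs directly on the node rather than through ssh_runner.

    package main

    import (
    	"log"
    	"os"
    	"os/exec"
    )

    func main() {
    	// Paths are taken from the logged command; adjust for other Kubernetes versions.
    	kubectl := "/var/lib/minikube/binaries/v1.30.3/kubectl"
    	manifests := []string{
    		"/etc/kubernetes/addons/metrics-apiservice.yaml",
    		"/etc/kubernetes/addons/metrics-server-deployment.yaml",
    		"/etc/kubernetes/addons/metrics-server-rbac.yaml",
    		"/etc/kubernetes/addons/metrics-server-service.yaml",
    	}

    	args := []string{"apply"}
    	for _, m := range manifests {
    		args = append(args, "-f", m)
    	}

    	cmd := exec.Command(kubectl, args...)
    	cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
    	cmd.Stdout = os.Stdout
    	cmd.Stderr = os.Stderr
    	if err := cmd.Run(); err != nil {
    		log.Fatalf("kubectl apply failed: %v", err)
    	}
    }
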
	I0729 19:49:12.505464 1120280 pod_ready.go:102] pod "coredns-7db6d8ff4d-62wzl" in "kube-system" namespace has status "Ready":"False"
	I0729 19:49:13.005934 1120280 pod_ready.go:92] pod "coredns-7db6d8ff4d-62wzl" in "kube-system" namespace has status "Ready":"True"
	I0729 19:49:13.005962 1120280 pod_ready.go:81] duration metric: took 2.507029118s for pod "coredns-7db6d8ff4d-62wzl" in "kube-system" namespace to be "Ready" ...
	I0729 19:49:13.005972 1120280 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-rnpqh" in "kube-system" namespace to be "Ready" ...
	I0729 19:49:13.010162 1120280 pod_ready.go:92] pod "coredns-7db6d8ff4d-rnpqh" in "kube-system" namespace has status "Ready":"True"
	I0729 19:49:13.010183 1120280 pod_ready.go:81] duration metric: took 4.204506ms for pod "coredns-7db6d8ff4d-rnpqh" in "kube-system" namespace to be "Ready" ...
	I0729 19:49:13.010191 1120280 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-358053" in "kube-system" namespace to be "Ready" ...
	I0729 19:49:13.013871 1120280 pod_ready.go:92] pod "etcd-embed-certs-358053" in "kube-system" namespace has status "Ready":"True"
	I0729 19:49:13.013888 1120280 pod_ready.go:81] duration metric: took 3.691352ms for pod "etcd-embed-certs-358053" in "kube-system" namespace to be "Ready" ...
	I0729 19:49:13.013895 1120280 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-358053" in "kube-system" namespace to be "Ready" ...
	I0729 19:49:13.017787 1120280 pod_ready.go:92] pod "kube-apiserver-embed-certs-358053" in "kube-system" namespace has status "Ready":"True"
	I0729 19:49:13.017804 1120280 pod_ready.go:81] duration metric: took 3.903153ms for pod "kube-apiserver-embed-certs-358053" in "kube-system" namespace to be "Ready" ...
	I0729 19:49:13.017812 1120280 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-358053" in "kube-system" namespace to be "Ready" ...
	I0729 19:49:13.021807 1120280 pod_ready.go:92] pod "kube-controller-manager-embed-certs-358053" in "kube-system" namespace has status "Ready":"True"
	I0729 19:49:13.021826 1120280 pod_ready.go:81] duration metric: took 4.00839ms for pod "kube-controller-manager-embed-certs-358053" in "kube-system" namespace to be "Ready" ...
	I0729 19:49:13.021834 1120280 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-phmxr" in "kube-system" namespace to be "Ready" ...
	I0729 19:49:13.404663 1120280 pod_ready.go:92] pod "kube-proxy-phmxr" in "kube-system" namespace has status "Ready":"True"
	I0729 19:49:13.404691 1120280 pod_ready.go:81] duration metric: took 382.850052ms for pod "kube-proxy-phmxr" in "kube-system" namespace to be "Ready" ...
	I0729 19:49:13.404703 1120280 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-358053" in "kube-system" namespace to be "Ready" ...
	I0729 19:49:13.803883 1120280 pod_ready.go:92] pod "kube-scheduler-embed-certs-358053" in "kube-system" namespace has status "Ready":"True"
	I0729 19:49:13.803913 1120280 pod_ready.go:81] duration metric: took 399.201369ms for pod "kube-scheduler-embed-certs-358053" in "kube-system" namespace to be "Ready" ...
	I0729 19:49:13.803924 1120280 pod_ready.go:38] duration metric: took 3.31130157s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
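The pod_ready.go wait loop above keeps polling each system-critical pod until its Ready condition turns True or the 6m0s budget runs out. Here is a condensed client-go sketch of that check, under stated assumptions: the kubeconfig path and the pod name are hypothetical inputs lifted from this log, and the polling cadence is illustrative rather than minikube's.

    package main

    import (
    	"context"
    	"fmt"
    	"log"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // isPodReady reports whether the pod's Ready condition is True.
    func isPodReady(pod *corev1.Pod) bool {
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		log.Fatal(err)
    	}
    	client, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		log.Fatal(err)
    	}

    	deadline := time.Now().Add(6 * time.Minute)
    	for time.Now().Before(deadline) {
    		pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(),
    			"coredns-7db6d8ff4d-62wzl", metav1.GetOptions{})
    		if err == nil && isPodReady(pod) {
    			fmt.Println("pod is Ready")
    			return
    		}
    		time.Sleep(2 * time.Second)
    	}
    	log.Fatal("timed out waiting for pod to be Ready")
    }
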
	I0729 19:49:13.803944 1120280 api_server.go:52] waiting for apiserver process to appear ...
	I0729 19:49:13.804012 1120280 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:49:13.819097 1120280 api_server.go:72] duration metric: took 3.63029481s to wait for apiserver process to appear ...
	I0729 19:49:13.819127 1120280 api_server.go:88] waiting for apiserver healthz status ...
	I0729 19:49:13.819158 1120280 api_server.go:253] Checking apiserver healthz at https://192.168.61.201:8443/healthz ...
	I0729 19:49:13.825125 1120280 api_server.go:279] https://192.168.61.201:8443/healthz returned 200:
	ok
	I0729 19:49:13.826172 1120280 api_server.go:141] control plane version: v1.30.3
	I0729 19:49:13.826197 1120280 api_server.go:131] duration metric: took 7.062144ms to wait for apiserver health ...
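The healthz wait above boils down to issuing GETs against https://<node-ip>:8443/healthz until the apiserver answers 200/ok. A bare-bones sketch of such a probe is shown below; for brevity it skips TLS verification, whereas minikube's own check authenticates against the cluster, so treat this strictly as an illustration.

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"log"
    	"net/http"
    	"time"
    )

    func main() {
    	url := "https://192.168.61.201:8443/healthz" // node IP taken from the log above
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		Transport: &http.Transport{
    			// Assumption: skip verification for the sketch; real callers should
    			// trust the cluster CA (and present client certs) instead.
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}

    	deadline := time.Now().Add(4 * time.Minute)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				fmt.Printf("healthz: %d %s\n", resp.StatusCode, body)
    				return
    			}
    		}
    		time.Sleep(2 * time.Second)
    	}
    	log.Fatal("apiserver never became healthy")
    }
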
	I0729 19:49:13.826206 1120280 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 19:49:14.006726 1120280 system_pods.go:59] 9 kube-system pods found
	I0729 19:49:14.006762 1120280 system_pods.go:61] "coredns-7db6d8ff4d-62wzl" [c0cf63a3-98a8-4107-8b51-3b9a39695a6c] Running
	I0729 19:49:14.006769 1120280 system_pods.go:61] "coredns-7db6d8ff4d-rnpqh" [fd0f6d7f-a55a-4556-b5e3-8ed4e555aaea] Running
	I0729 19:49:14.006774 1120280 system_pods.go:61] "etcd-embed-certs-358053" [b4e6558f-195a-449e-83fb-3ad49f1f80b0] Running
	I0729 19:49:14.006780 1120280 system_pods.go:61] "kube-apiserver-embed-certs-358053" [8ce54a21-879a-44f6-9209-699b22fe60a3] Running
	I0729 19:49:14.006786 1120280 system_pods.go:61] "kube-controller-manager-embed-certs-358053" [658a8652-2864-4825-8239-cfbe96e604ab] Running
	I0729 19:49:14.006790 1120280 system_pods.go:61] "kube-proxy-phmxr" [73020161-bb80-445c-ae4f-d1486e18a32e] Running
	I0729 19:49:14.006795 1120280 system_pods.go:61] "kube-scheduler-embed-certs-358053" [f7734e37-b41d-495a-8098-c721b9d56d7c] Running
	I0729 19:49:14.006805 1120280 system_pods.go:61] "metrics-server-569cc877fc-gpz72" [cb992ca6-11f3-4826-b701-6789d3e3e9c0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 19:49:14.006810 1120280 system_pods.go:61] "storage-provisioner" [7c484501-fa8b-4d2d-b7c7-faea3b6b0891] Running
	I0729 19:49:14.006823 1120280 system_pods.go:74] duration metric: took 180.607932ms to wait for pod list to return data ...
	I0729 19:49:14.006836 1120280 default_sa.go:34] waiting for default service account to be created ...
	I0729 19:49:14.203009 1120280 default_sa.go:45] found service account: "default"
	I0729 19:49:14.203034 1120280 default_sa.go:55] duration metric: took 196.19138ms for default service account to be created ...
	I0729 19:49:14.203043 1120280 system_pods.go:116] waiting for k8s-apps to be running ...
	I0729 19:49:14.407217 1120280 system_pods.go:86] 9 kube-system pods found
	I0729 19:49:14.407253 1120280 system_pods.go:89] "coredns-7db6d8ff4d-62wzl" [c0cf63a3-98a8-4107-8b51-3b9a39695a6c] Running
	I0729 19:49:14.407261 1120280 system_pods.go:89] "coredns-7db6d8ff4d-rnpqh" [fd0f6d7f-a55a-4556-b5e3-8ed4e555aaea] Running
	I0729 19:49:14.407267 1120280 system_pods.go:89] "etcd-embed-certs-358053" [b4e6558f-195a-449e-83fb-3ad49f1f80b0] Running
	I0729 19:49:14.407273 1120280 system_pods.go:89] "kube-apiserver-embed-certs-358053" [8ce54a21-879a-44f6-9209-699b22fe60a3] Running
	I0729 19:49:14.407279 1120280 system_pods.go:89] "kube-controller-manager-embed-certs-358053" [658a8652-2864-4825-8239-cfbe96e604ab] Running
	I0729 19:49:14.407285 1120280 system_pods.go:89] "kube-proxy-phmxr" [73020161-bb80-445c-ae4f-d1486e18a32e] Running
	I0729 19:49:14.407291 1120280 system_pods.go:89] "kube-scheduler-embed-certs-358053" [f7734e37-b41d-495a-8098-c721b9d56d7c] Running
	I0729 19:49:14.407305 1120280 system_pods.go:89] "metrics-server-569cc877fc-gpz72" [cb992ca6-11f3-4826-b701-6789d3e3e9c0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 19:49:14.407316 1120280 system_pods.go:89] "storage-provisioner" [7c484501-fa8b-4d2d-b7c7-faea3b6b0891] Running
	I0729 19:49:14.407327 1120280 system_pods.go:126] duration metric: took 204.276761ms to wait for k8s-apps to be running ...
	I0729 19:49:14.407338 1120280 system_svc.go:44] waiting for kubelet service to be running ....
	I0729 19:49:14.407396 1120280 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 19:49:14.422219 1120280 system_svc.go:56] duration metric: took 14.869175ms WaitForService to wait for kubelet
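The system_svc.go check above is essentially the exit status of `systemctl is-active --quiet`. A small sketch of the same probe, assuming it runs as root on the node (so no sudo or ssh wrapper) and querying the kubelet unit directly:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // unitActive returns true when `systemctl is-active --quiet <unit>` exits 0,
    // which systemd defines as "the unit is active".
    func unitActive(unit string) bool {
    	return exec.Command("systemctl", "is-active", "--quiet", unit).Run() == nil
    }

    func main() {
    	if unitActive("kubelet") {
    		fmt.Println("kubelet is running")
    	} else {
    		fmt.Println("kubelet is not active")
    	}
    }
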
	I0729 19:49:14.422258 1120280 kubeadm.go:582] duration metric: took 4.233462765s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 19:49:14.422285 1120280 node_conditions.go:102] verifying NodePressure condition ...
	I0729 19:49:14.603042 1120280 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 19:49:14.603067 1120280 node_conditions.go:123] node cpu capacity is 2
	I0729 19:49:14.603079 1120280 node_conditions.go:105] duration metric: took 180.789494ms to run NodePressure ...
	I0729 19:49:14.603091 1120280 start.go:241] waiting for startup goroutines ...
	I0729 19:49:14.603098 1120280 start.go:246] waiting for cluster config update ...
	I0729 19:49:14.603108 1120280 start.go:255] writing updated cluster config ...
	I0729 19:49:14.603448 1120280 ssh_runner.go:195] Run: rm -f paused
	I0729 19:49:14.669359 1120280 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0729 19:49:14.671285 1120280 out.go:177] * Done! kubectl is now configured to use "embed-certs-358053" cluster and "default" namespace by default
	I0729 19:49:11.282743 1120587 out.go:204]   - Booting up control plane ...
	I0729 19:49:11.282887 1120587 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 19:49:11.283393 1120587 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 19:49:11.285899 1120587 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 19:49:11.306343 1120587 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 19:49:11.308692 1120587 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 19:49:11.308776 1120587 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 19:49:11.454703 1120587 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0729 19:49:11.454809 1120587 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0729 19:49:11.957070 1120587 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.339287ms
	I0729 19:49:11.957173 1120587 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0729 19:49:16.958829 1120587 kubeadm.go:310] [api-check] The API server is healthy after 5.001114911s
	I0729 19:49:16.975545 1120587 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0729 19:49:16.992433 1120587 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0729 19:49:17.029655 1120587 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0729 19:49:17.029911 1120587 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-024652 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0729 19:49:17.039761 1120587 kubeadm.go:310] [bootstrap-token] Using token: wivqw5.o681p65fyob7uctp
	I0729 19:49:17.040967 1120587 out.go:204]   - Configuring RBAC rules ...
	I0729 19:49:17.041098 1120587 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0729 19:49:17.047095 1120587 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0729 19:49:17.054741 1120587 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0729 19:49:17.057791 1120587 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0729 19:49:17.064906 1120587 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0729 19:49:17.068354 1120587 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0729 19:49:17.365660 1120587 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0729 19:49:17.803646 1120587 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0729 19:49:18.365942 1120587 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0729 19:49:18.367149 1120587 kubeadm.go:310] 
	I0729 19:49:18.367230 1120587 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0729 19:49:18.367239 1120587 kubeadm.go:310] 
	I0729 19:49:18.367301 1120587 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0729 19:49:18.367308 1120587 kubeadm.go:310] 
	I0729 19:49:18.367356 1120587 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0729 19:49:18.367435 1120587 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0729 19:49:18.367484 1120587 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0729 19:49:18.367490 1120587 kubeadm.go:310] 
	I0729 19:49:18.367564 1120587 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0729 19:49:18.367580 1120587 kubeadm.go:310] 
	I0729 19:49:18.367670 1120587 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0729 19:49:18.367689 1120587 kubeadm.go:310] 
	I0729 19:49:18.367767 1120587 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0729 19:49:18.367886 1120587 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0729 19:49:18.367990 1120587 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0729 19:49:18.368004 1120587 kubeadm.go:310] 
	I0729 19:49:18.368134 1120587 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0729 19:49:18.368245 1120587 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0729 19:49:18.368255 1120587 kubeadm.go:310] 
	I0729 19:49:18.368374 1120587 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token wivqw5.o681p65fyob7uctp \
	I0729 19:49:18.368509 1120587 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:53e118302d1159bf0acd8cfe005edb508199276288ff17d6630ff7f7307f10b1 \
	I0729 19:49:18.368547 1120587 kubeadm.go:310] 	--control-plane 
	I0729 19:49:18.368555 1120587 kubeadm.go:310] 
	I0729 19:49:18.368665 1120587 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0729 19:49:18.368675 1120587 kubeadm.go:310] 
	I0729 19:49:18.368786 1120587 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token wivqw5.o681p65fyob7uctp \
	I0729 19:49:18.368926 1120587 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:53e118302d1159bf0acd8cfe005edb508199276288ff17d6630ff7f7307f10b1 
	I0729 19:49:18.369333 1120587 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 19:49:18.369382 1120587 cni.go:84] Creating CNI manager for ""
	I0729 19:49:18.369398 1120587 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 19:49:18.371718 1120587 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 19:49:15.194685 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:49:17.195094 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:49:18.372851 1120587 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 19:49:18.385204 1120587 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0729 19:49:18.404504 1120587 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0729 19:49:18.404610 1120587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:18.404616 1120587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-024652 minikube.k8s.io/updated_at=2024_07_29T19_49_18_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=ff26f50543a99741e8a988a34136ffea2b41dbf0 minikube.k8s.io/name=default-k8s-diff-port-024652 minikube.k8s.io/primary=true
	I0729 19:49:18.442539 1120587 ops.go:34] apiserver oom_adj: -16
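The minikube-rbac step a few lines above grants cluster-admin to the kube-system default service account via kubectl. The same object expressed with client-go is sketched below as an illustrative alternative to shelling out; the kubeconfig path is an assumption, and this is not how minikube itself performs the step.

    package main

    import (
    	"context"
    	"log"

    	rbacv1 "k8s.io/api/rbac/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		log.Fatal(err)
    	}
    	client, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		log.Fatal(err)
    	}

    	// Equivalent of: kubectl create clusterrolebinding minikube-rbac
    	//   --clusterrole=cluster-admin --serviceaccount=kube-system:default
    	crb := &rbacv1.ClusterRoleBinding{
    		ObjectMeta: metav1.ObjectMeta{Name: "minikube-rbac"},
    		RoleRef: rbacv1.RoleRef{
    			APIGroup: "rbac.authorization.k8s.io",
    			Kind:     "ClusterRole",
    			Name:     "cluster-admin",
    		},
    		Subjects: []rbacv1.Subject{{
    			Kind:      "ServiceAccount",
    			Name:      "default",
    			Namespace: "kube-system",
    		}},
    	}

    	if _, err := client.RbacV1().ClusterRoleBindings().Create(context.TODO(), crb, metav1.CreateOptions{}); err != nil {
    		log.Fatal(err)
    	}
    	log.Println("created clusterrolebinding minikube-rbac")
    }
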
	I0729 19:49:18.580986 1120587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:19.081106 1120587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:19.581681 1120587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:20.081254 1120587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:20.581320 1120587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:21.081977 1120587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:19.195234 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:49:21.694987 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:49:23.695591 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:49:21.581543 1120587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:22.081511 1120587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:22.581732 1120587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:23.081975 1120587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:23.581374 1120587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:24.081970 1120587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:24.581928 1120587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:25.081446 1120587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:25.581218 1120587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:26.081680 1120587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:25.695771 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:49:27.698874 1119948 pod_ready.go:102] pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace has status "Ready":"False"
	I0729 19:49:26.581008 1120587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:27.081974 1120587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:27.581500 1120587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:28.082002 1120587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:28.581979 1120587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:29.081223 1120587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:29.581078 1120587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:30.081834 1120587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:30.581191 1120587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:31.081737 1120587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:31.581832 1120587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:49:31.661893 1120587 kubeadm.go:1113] duration metric: took 13.257342088s to wait for elevateKubeSystemPrivileges
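The repeated `kubectl get sa default` calls above are a simple poll: retry roughly every half second until the default service account exists or the budget expires, which is what the 13.257342088s figure measures. A generic sketch of that retry shape follows; the probe command mirrors the logged one but is assumed to run as root, and the interval and timeout are illustrative.

    package main

    import (
    	"errors"
    	"fmt"
    	"log"
    	"os/exec"
    	"time"
    )

    // defaultSAExists probes for the default service account the same way the log
    // does: by the exit status of the node-local kubectl.
    func defaultSAExists() bool {
    	cmd := exec.Command("/var/lib/minikube/binaries/v1.30.3/kubectl",
    		"get", "sa", "default", "--kubeconfig=/var/lib/minikube/kubeconfig")
    	return cmd.Run() == nil
    }

    // pollUntil retries check every interval until it succeeds or timeout elapses.
    func pollUntil(interval, timeout time.Duration, check func() bool) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		if check() {
    			return nil
    		}
    		time.Sleep(interval)
    	}
    	return errors.New("timed out")
    }

    func main() {
    	start := time.Now()
    	if err := pollUntil(500*time.Millisecond, 5*time.Minute, defaultSAExists); err != nil {
    		log.Fatal(err)
    	}
    	fmt.Printf("default service account ready after %s\n", time.Since(start))
    }
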
	I0729 19:49:31.661933 1120587 kubeadm.go:394] duration metric: took 5m14.024337116s to StartCluster
	I0729 19:49:31.661952 1120587 settings.go:142] acquiring lock: {Name:mk8657322241b3b1f65443d6cee1b2ccb99f315e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 19:49:31.662031 1120587 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19312-1055011/kubeconfig
	I0729 19:49:31.663828 1120587 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-1055011/kubeconfig: {Name:mkf834b33d9b214f3561db5b8f8958d26700afbb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 19:49:31.664068 1120587 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.100 Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 19:49:31.664116 1120587 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0729 19:49:31.664229 1120587 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-024652"
	I0729 19:49:31.664249 1120587 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-024652"
	I0729 19:49:31.664265 1120587 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-024652"
	W0729 19:49:31.664274 1120587 addons.go:243] addon storage-provisioner should already be in state true
	I0729 19:49:31.664265 1120587 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-024652"
	I0729 19:49:31.664286 1120587 config.go:182] Loaded profile config "default-k8s-diff-port-024652": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 19:49:31.664293 1120587 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-024652"
	I0729 19:49:31.664313 1120587 host.go:66] Checking if "default-k8s-diff-port-024652" exists ...
	I0729 19:49:31.664318 1120587 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-024652"
	W0729 19:49:31.664330 1120587 addons.go:243] addon metrics-server should already be in state true
	I0729 19:49:31.664370 1120587 host.go:66] Checking if "default-k8s-diff-port-024652" exists ...
	I0729 19:49:31.664689 1120587 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:49:31.664724 1120587 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:49:31.664775 1120587 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:49:31.664778 1120587 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:49:31.664817 1120587 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:49:31.664827 1120587 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:49:31.665472 1120587 out.go:177] * Verifying Kubernetes components...
	I0729 19:49:31.666773 1120587 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 19:49:31.684886 1120587 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36885
	I0729 19:49:31.684948 1120587 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40365
	I0729 19:49:31.685049 1120587 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46525
	I0729 19:49:31.685394 1120587 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:49:31.685443 1120587 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:49:31.685506 1120587 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:49:31.685916 1120587 main.go:141] libmachine: Using API Version  1
	I0729 19:49:31.685936 1120587 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:49:31.685961 1120587 main.go:141] libmachine: Using API Version  1
	I0729 19:49:31.685982 1120587 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:49:31.686343 1120587 main.go:141] libmachine: Using API Version  1
	I0729 19:49:31.686363 1120587 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:49:31.686378 1120587 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:49:31.686367 1120587 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:49:31.686564 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetState
	I0729 19:49:31.686713 1120587 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:49:31.687028 1120587 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:49:31.687071 1120587 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:49:31.687291 1120587 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:49:31.687340 1120587 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:49:31.690159 1120587 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-024652"
	W0729 19:49:31.690177 1120587 addons.go:243] addon default-storageclass should already be in state true
	I0729 19:49:31.690208 1120587 host.go:66] Checking if "default-k8s-diff-port-024652" exists ...
	I0729 19:49:31.690543 1120587 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:49:31.690586 1120587 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:49:31.705387 1120587 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41375
	I0729 19:49:31.705778 1120587 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34099
	I0729 19:49:31.706027 1120587 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:49:31.706144 1120587 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:49:31.706207 1120587 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33381
	I0729 19:49:31.706633 1120587 main.go:141] libmachine: Using API Version  1
	I0729 19:49:31.706652 1120587 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:49:31.706730 1120587 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:49:31.706990 1120587 main.go:141] libmachine: Using API Version  1
	I0729 19:49:31.707009 1120587 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:49:31.707198 1120587 main.go:141] libmachine: Using API Version  1
	I0729 19:49:31.707218 1120587 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:49:31.707376 1120587 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:49:31.707429 1120587 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:49:31.707627 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetState
	I0729 19:49:31.707689 1120587 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:49:31.707861 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetState
	I0729 19:49:31.708016 1120587 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:49:31.708065 1120587 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:49:31.710254 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .DriverName
	I0729 19:49:31.710315 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .DriverName
	I0729 19:49:31.711981 1120587 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 19:49:31.711996 1120587 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0729 19:49:31.713155 1120587 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0729 19:49:31.713179 1120587 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0729 19:49:31.713201 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHHostname
	I0729 19:49:31.713255 1120587 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 19:49:31.713270 1120587 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0729 19:49:31.713289 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHHostname
	I0729 19:49:31.717458 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:49:31.718017 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:73:cb", ip: ""} in network mk-default-k8s-diff-port-024652: {Iface:virbr4 ExpiryTime:2024-07-29 20:44:03 +0000 UTC Type:0 Mac:52:54:00:4c:73:cb Iaid: IPaddr:192.168.72.100 Prefix:24 Hostname:default-k8s-diff-port-024652 Clientid:01:52:54:00:4c:73:cb}
	I0729 19:49:31.718042 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined IP address 192.168.72.100 and MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:49:31.718355 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHPort
	I0729 19:49:31.718503 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHKeyPath
	I0729 19:49:31.718555 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:49:31.718750 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHUsername
	I0729 19:49:31.718888 1120587 sshutil.go:53] new ssh client: &{IP:192.168.72.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/default-k8s-diff-port-024652/id_rsa Username:docker}
	I0729 19:49:31.719190 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHPort
	I0729 19:49:31.719242 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:73:cb", ip: ""} in network mk-default-k8s-diff-port-024652: {Iface:virbr4 ExpiryTime:2024-07-29 20:44:03 +0000 UTC Type:0 Mac:52:54:00:4c:73:cb Iaid: IPaddr:192.168.72.100 Prefix:24 Hostname:default-k8s-diff-port-024652 Clientid:01:52:54:00:4c:73:cb}
	I0729 19:49:31.719255 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined IP address 192.168.72.100 and MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:49:31.719400 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHKeyPath
	I0729 19:49:31.719536 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHUsername
	I0729 19:49:31.719630 1120587 sshutil.go:53] new ssh client: &{IP:192.168.72.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/default-k8s-diff-port-024652/id_rsa Username:docker}
	I0729 19:49:31.726052 1120587 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42897
	I0729 19:49:31.726530 1120587 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:49:31.727089 1120587 main.go:141] libmachine: Using API Version  1
	I0729 19:49:31.727106 1120587 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:49:31.727404 1120587 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:49:31.727585 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetState
	I0729 19:49:31.729111 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .DriverName
	I0729 19:49:31.729730 1120587 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0729 19:49:31.729832 1120587 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0729 19:49:31.729853 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHHostname
	I0729 19:49:31.733855 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:49:31.734290 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:73:cb", ip: ""} in network mk-default-k8s-diff-port-024652: {Iface:virbr4 ExpiryTime:2024-07-29 20:44:03 +0000 UTC Type:0 Mac:52:54:00:4c:73:cb Iaid: IPaddr:192.168.72.100 Prefix:24 Hostname:default-k8s-diff-port-024652 Clientid:01:52:54:00:4c:73:cb}
	I0729 19:49:31.734307 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | domain default-k8s-diff-port-024652 has defined IP address 192.168.72.100 and MAC address 52:54:00:4c:73:cb in network mk-default-k8s-diff-port-024652
	I0729 19:49:31.734528 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHPort
	I0729 19:49:31.734735 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHKeyPath
	I0729 19:49:31.734923 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .GetSSHUsername
	I0729 19:49:31.735104 1120587 sshutil.go:53] new ssh client: &{IP:192.168.72.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/default-k8s-diff-port-024652/id_rsa Username:docker}
	I0729 19:49:31.896299 1120587 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 19:49:31.916363 1120587 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-024652" to be "Ready" ...
	I0729 19:49:31.946258 1120587 node_ready.go:49] node "default-k8s-diff-port-024652" has status "Ready":"True"
	I0729 19:49:31.946286 1120587 node_ready.go:38] duration metric: took 29.887552ms for node "default-k8s-diff-port-024652" to be "Ready" ...
	I0729 19:49:31.946297 1120587 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 19:49:31.986320 1120587 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0729 19:49:31.986901 1120587 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-wqbpm" in "kube-system" namespace to be "Ready" ...
	I0729 19:49:32.008401 1120587 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0729 19:49:32.008420 1120587 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0729 19:49:32.033950 1120587 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 19:49:32.060771 1120587 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0729 19:49:32.060808 1120587 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0729 19:49:32.108557 1120587 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 19:49:32.108587 1120587 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0729 19:49:32.153081 1120587 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 19:49:32.234814 1120587 main.go:141] libmachine: Making call to close driver server
	I0729 19:49:32.234854 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .Close
	I0729 19:49:32.235187 1120587 main.go:141] libmachine: Successfully made call to close driver server
	I0729 19:49:32.235247 1120587 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 19:49:32.235260 1120587 main.go:141] libmachine: Making call to close driver server
	I0729 19:49:32.235259 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | Closing plugin on server side
	I0729 19:49:32.235270 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .Close
	I0729 19:49:32.235530 1120587 main.go:141] libmachine: Successfully made call to close driver server
	I0729 19:49:32.235546 1120587 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 19:49:32.240556 1120587 main.go:141] libmachine: Making call to close driver server
	I0729 19:49:32.240572 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .Close
	I0729 19:49:32.240859 1120587 main.go:141] libmachine: Successfully made call to close driver server
	I0729 19:49:32.240880 1120587 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 19:49:32.240887 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | Closing plugin on server side
	I0729 19:49:32.510172 1120587 main.go:141] libmachine: Making call to close driver server
	I0729 19:49:32.510201 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .Close
	I0729 19:49:32.510518 1120587 main.go:141] libmachine: Successfully made call to close driver server
	I0729 19:49:32.510535 1120587 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 19:49:32.510558 1120587 main.go:141] libmachine: Making call to close driver server
	I0729 19:49:32.510566 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .Close
	I0729 19:49:32.511002 1120587 main.go:141] libmachine: Successfully made call to close driver server
	I0729 19:49:32.511031 1120587 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 19:49:32.511053 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | Closing plugin on server side
	I0729 19:49:32.755803 1120587 main.go:141] libmachine: Making call to close driver server
	I0729 19:49:32.755828 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .Close
	I0729 19:49:32.756119 1120587 main.go:141] libmachine: Successfully made call to close driver server
	I0729 19:49:32.756135 1120587 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 19:49:32.756144 1120587 main.go:141] libmachine: Making call to close driver server
	I0729 19:49:32.756151 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) Calling .Close
	I0729 19:49:32.756432 1120587 main.go:141] libmachine: (default-k8s-diff-port-024652) DBG | Closing plugin on server side
	I0729 19:49:32.756476 1120587 main.go:141] libmachine: Successfully made call to close driver server
	I0729 19:49:32.756488 1120587 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 19:49:32.756502 1120587 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-024652"
	I0729 19:49:32.758693 1120587 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0729 19:49:29.689616 1119948 pod_ready.go:81] duration metric: took 4m0.001003902s for pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace to be "Ready" ...
	E0729 19:49:29.689644 1119948 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-78fcd8795b-pcx9w" in "kube-system" namespace to be "Ready" (will not retry!)
	I0729 19:49:29.689670 1119948 pod_ready.go:38] duration metric: took 4m12.210774413s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 19:49:29.689724 1119948 kubeadm.go:597] duration metric: took 4m20.557808792s to restartPrimaryControlPlane
	W0729 19:49:29.689815 1119948 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0729 19:49:29.689855 1119948 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0729 19:49:32.759744 1120587 addons.go:510] duration metric: took 1.095628452s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0729 19:49:33.998542 1120587 pod_ready.go:102] pod "coredns-7db6d8ff4d-wqbpm" in "kube-system" namespace has status "Ready":"False"
	I0729 19:49:34.993504 1120587 pod_ready.go:92] pod "coredns-7db6d8ff4d-wqbpm" in "kube-system" namespace has status "Ready":"True"
	I0729 19:49:34.993529 1120587 pod_ready.go:81] duration metric: took 3.006601304s for pod "coredns-7db6d8ff4d-wqbpm" in "kube-system" namespace to be "Ready" ...
	I0729 19:49:34.993538 1120587 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-z8mxw" in "kube-system" namespace to be "Ready" ...
	I0729 19:49:34.999514 1120587 pod_ready.go:92] pod "coredns-7db6d8ff4d-z8mxw" in "kube-system" namespace has status "Ready":"True"
	I0729 19:49:34.999543 1120587 pod_ready.go:81] duration metric: took 5.998397ms for pod "coredns-7db6d8ff4d-z8mxw" in "kube-system" namespace to be "Ready" ...
	I0729 19:49:34.999556 1120587 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-024652" in "kube-system" namespace to be "Ready" ...
	I0729 19:49:35.004591 1120587 pod_ready.go:92] pod "etcd-default-k8s-diff-port-024652" in "kube-system" namespace has status "Ready":"True"
	I0729 19:49:35.004615 1120587 pod_ready.go:81] duration metric: took 5.050736ms for pod "etcd-default-k8s-diff-port-024652" in "kube-system" namespace to be "Ready" ...
	I0729 19:49:35.004626 1120587 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-024652" in "kube-system" namespace to be "Ready" ...
	I0729 19:49:35.009617 1120587 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-024652" in "kube-system" namespace has status "Ready":"True"
	I0729 19:49:35.009639 1120587 pod_ready.go:81] duration metric: took 5.004922ms for pod "kube-apiserver-default-k8s-diff-port-024652" in "kube-system" namespace to be "Ready" ...
	I0729 19:49:35.009649 1120587 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-024652" in "kube-system" namespace to be "Ready" ...
	I0729 19:49:35.015860 1120587 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-024652" in "kube-system" namespace has status "Ready":"True"
	I0729 19:49:35.015879 1120587 pod_ready.go:81] duration metric: took 6.221932ms for pod "kube-controller-manager-default-k8s-diff-port-024652" in "kube-system" namespace to be "Ready" ...
	I0729 19:49:35.015887 1120587 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-wfr8f" in "kube-system" namespace to be "Ready" ...
	I0729 19:49:35.392558 1120587 pod_ready.go:92] pod "kube-proxy-wfr8f" in "kube-system" namespace has status "Ready":"True"
	I0729 19:49:35.392595 1120587 pod_ready.go:81] duration metric: took 376.701757ms for pod "kube-proxy-wfr8f" in "kube-system" namespace to be "Ready" ...
	I0729 19:49:35.392604 1120587 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-024652" in "kube-system" namespace to be "Ready" ...
	I0729 19:49:35.791324 1120587 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-024652" in "kube-system" namespace has status "Ready":"True"
	I0729 19:49:35.791357 1120587 pod_ready.go:81] duration metric: took 398.744718ms for pod "kube-scheduler-default-k8s-diff-port-024652" in "kube-system" namespace to be "Ready" ...
	I0729 19:49:35.791368 1120587 pod_ready.go:38] duration metric: took 3.84505744s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 19:49:35.791389 1120587 api_server.go:52] waiting for apiserver process to appear ...
	I0729 19:49:35.791451 1120587 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:49:35.808765 1120587 api_server.go:72] duration metric: took 4.144664884s to wait for apiserver process to appear ...
	I0729 19:49:35.808795 1120587 api_server.go:88] waiting for apiserver healthz status ...
	I0729 19:49:35.808816 1120587 api_server.go:253] Checking apiserver healthz at https://192.168.72.100:8444/healthz ...
	I0729 19:49:35.813053 1120587 api_server.go:279] https://192.168.72.100:8444/healthz returned 200:
	ok
	I0729 19:49:35.814108 1120587 api_server.go:141] control plane version: v1.30.3
	I0729 19:49:35.814129 1120587 api_server.go:131] duration metric: took 5.326691ms to wait for apiserver health ...
	I0729 19:49:35.814135 1120587 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 19:49:35.994230 1120587 system_pods.go:59] 9 kube-system pods found
	I0729 19:49:35.994267 1120587 system_pods.go:61] "coredns-7db6d8ff4d-wqbpm" [96db74e9-67ca-4065-8758-a27a14b6d3d5] Running
	I0729 19:49:35.994274 1120587 system_pods.go:61] "coredns-7db6d8ff4d-z8mxw" [12aa4a13-f4af-4cda-b099-5e0e44836300] Running
	I0729 19:49:35.994280 1120587 system_pods.go:61] "etcd-default-k8s-diff-port-024652" [6c733608-bc36-40a8-a6d1-2fa10ee45ef7] Running
	I0729 19:49:35.994285 1120587 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-024652" [755ccaaa-70fc-4d21-bf24-55638ea6778a] Running
	I0729 19:49:35.994293 1120587 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-024652" [1ed4cda3-7de9-4562-be52-b2a5f3490979] Running
	I0729 19:49:35.994300 1120587 system_pods.go:61] "kube-proxy-wfr8f" [86699d3a-0843-4b82-b772-23c8f5b7c88a] Running
	I0729 19:49:35.994305 1120587 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-024652" [d51619f9-c388-4ca5-a3e7-2028f0f76d9a] Running
	I0729 19:49:35.994314 1120587 system_pods.go:61] "metrics-server-569cc877fc-rp2fk" [826ffadd-1c1c-4666-8c09-f43a82262912] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 19:49:35.994318 1120587 system_pods.go:61] "storage-provisioner" [ce612854-895f-44d4-8c33-30c3a7eff802] Running
	I0729 19:49:35.994329 1120587 system_pods.go:74] duration metric: took 180.186983ms to wait for pod list to return data ...
	I0729 19:49:35.994339 1120587 default_sa.go:34] waiting for default service account to be created ...
	I0729 19:49:36.191025 1120587 default_sa.go:45] found service account: "default"
	I0729 19:49:36.191057 1120587 default_sa.go:55] duration metric: took 196.710231ms for default service account to be created ...
	I0729 19:49:36.191066 1120587 system_pods.go:116] waiting for k8s-apps to be running ...
	I0729 19:49:36.395188 1120587 system_pods.go:86] 9 kube-system pods found
	I0729 19:49:36.395218 1120587 system_pods.go:89] "coredns-7db6d8ff4d-wqbpm" [96db74e9-67ca-4065-8758-a27a14b6d3d5] Running
	I0729 19:49:36.395224 1120587 system_pods.go:89] "coredns-7db6d8ff4d-z8mxw" [12aa4a13-f4af-4cda-b099-5e0e44836300] Running
	I0729 19:49:36.395229 1120587 system_pods.go:89] "etcd-default-k8s-diff-port-024652" [6c733608-bc36-40a8-a6d1-2fa10ee45ef7] Running
	I0729 19:49:36.395233 1120587 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-024652" [755ccaaa-70fc-4d21-bf24-55638ea6778a] Running
	I0729 19:49:36.395237 1120587 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-024652" [1ed4cda3-7de9-4562-be52-b2a5f3490979] Running
	I0729 19:49:36.395241 1120587 system_pods.go:89] "kube-proxy-wfr8f" [86699d3a-0843-4b82-b772-23c8f5b7c88a] Running
	I0729 19:49:36.395245 1120587 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-024652" [d51619f9-c388-4ca5-a3e7-2028f0f76d9a] Running
	I0729 19:49:36.395257 1120587 system_pods.go:89] "metrics-server-569cc877fc-rp2fk" [826ffadd-1c1c-4666-8c09-f43a82262912] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 19:49:36.395262 1120587 system_pods.go:89] "storage-provisioner" [ce612854-895f-44d4-8c33-30c3a7eff802] Running
	I0729 19:49:36.395272 1120587 system_pods.go:126] duration metric: took 204.199685ms to wait for k8s-apps to be running ...
	I0729 19:49:36.395280 1120587 system_svc.go:44] waiting for kubelet service to be running ....
	I0729 19:49:36.395327 1120587 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 19:49:36.414410 1120587 system_svc.go:56] duration metric: took 19.116999ms WaitForService to wait for kubelet
	I0729 19:49:36.414442 1120587 kubeadm.go:582] duration metric: took 4.750347675s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 19:49:36.414470 1120587 node_conditions.go:102] verifying NodePressure condition ...
	I0729 19:49:36.591019 1120587 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 19:49:36.591045 1120587 node_conditions.go:123] node cpu capacity is 2
	I0729 19:49:36.591058 1120587 node_conditions.go:105] duration metric: took 176.580075ms to run NodePressure ...
	I0729 19:49:36.591069 1120587 start.go:241] waiting for startup goroutines ...
	I0729 19:49:36.591076 1120587 start.go:246] waiting for cluster config update ...
	I0729 19:49:36.591086 1120587 start.go:255] writing updated cluster config ...
	I0729 19:49:36.591330 1120587 ssh_runner.go:195] Run: rm -f paused
	I0729 19:49:36.641571 1120587 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0729 19:49:36.643324 1120587 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-024652" cluster and "default" namespace by default
	I0729 19:49:55.819640 1119948 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.129754186s)
	I0729 19:49:55.819736 1119948 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 19:49:55.857245 1119948 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 19:49:55.874823 1119948 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 19:49:55.887767 1119948 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 19:49:55.887786 1119948 kubeadm.go:157] found existing configuration files:
	
	I0729 19:49:55.887826 1119948 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 19:49:55.898598 1119948 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 19:49:55.898659 1119948 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 19:49:55.919811 1119948 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 19:49:55.929490 1119948 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 19:49:55.929557 1119948 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 19:49:55.938832 1119948 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 19:49:55.952638 1119948 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 19:49:55.952698 1119948 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 19:49:55.965512 1119948 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 19:49:55.975116 1119948 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 19:49:55.975180 1119948 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 19:49:55.984448 1119948 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 19:49:56.040488 1119948 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0-beta.0
	I0729 19:49:56.040619 1119948 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 19:49:56.161648 1119948 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 19:49:56.161792 1119948 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 19:49:56.161913 1119948 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0729 19:49:56.171626 1119948 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 19:49:56.173709 1119948 out.go:204]   - Generating certificates and keys ...
	I0729 19:49:56.173830 1119948 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 19:49:56.173928 1119948 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 19:49:56.174047 1119948 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0729 19:49:56.174143 1119948 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0729 19:49:56.174232 1119948 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0729 19:49:56.174302 1119948 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0729 19:49:56.174382 1119948 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0729 19:49:56.174453 1119948 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0729 19:49:56.174572 1119948 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0729 19:49:56.174694 1119948 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0729 19:49:56.174750 1119948 kubeadm.go:310] [certs] Using the existing "sa" key
	I0729 19:49:56.174830 1119948 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 19:49:56.246122 1119948 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 19:49:56.355960 1119948 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0729 19:49:56.420777 1119948 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 19:49:56.496969 1119948 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 19:49:56.583932 1119948 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 19:49:56.584470 1119948 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 19:49:56.587115 1119948 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 19:49:56.588779 1119948 out.go:204]   - Booting up control plane ...
	I0729 19:49:56.588912 1119948 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 19:49:56.588986 1119948 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 19:49:56.589041 1119948 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 19:49:56.608126 1119948 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 19:49:56.614632 1119948 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 19:49:56.614696 1119948 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 19:49:56.754879 1119948 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0729 19:49:56.754999 1119948 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0729 19:49:57.257324 1119948 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.327954ms
	I0729 19:49:57.257465 1119948 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0729 19:50:02.762738 1119948 kubeadm.go:310] [api-check] The API server is healthy after 5.503528666s
	I0729 19:50:02.774459 1119948 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0729 19:50:02.788865 1119948 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0729 19:50:02.826192 1119948 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0729 19:50:02.826457 1119948 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-843792 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0729 19:50:02.839359 1119948 kubeadm.go:310] [bootstrap-token] Using token: yaj2k6.6nijnxczu3nl8yfv
	I0729 19:50:02.840952 1119948 out.go:204]   - Configuring RBAC rules ...
	I0729 19:50:02.841087 1119948 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0729 19:50:02.846969 1119948 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0729 19:50:02.861696 1119948 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0729 19:50:02.866680 1119948 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0729 19:50:02.871113 1119948 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0729 19:50:02.875148 1119948 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0729 19:50:03.170084 1119948 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0729 19:50:03.622188 1119948 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0729 19:50:04.170979 1119948 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0729 19:50:04.171916 1119948 kubeadm.go:310] 
	I0729 19:50:04.172017 1119948 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0729 19:50:04.172027 1119948 kubeadm.go:310] 
	I0729 19:50:04.172139 1119948 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0729 19:50:04.172149 1119948 kubeadm.go:310] 
	I0729 19:50:04.172183 1119948 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0729 19:50:04.172258 1119948 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0729 19:50:04.172337 1119948 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0729 19:50:04.172356 1119948 kubeadm.go:310] 
	I0729 19:50:04.172451 1119948 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0729 19:50:04.172480 1119948 kubeadm.go:310] 
	I0729 19:50:04.172570 1119948 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0729 19:50:04.172581 1119948 kubeadm.go:310] 
	I0729 19:50:04.172652 1119948 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0729 19:50:04.172755 1119948 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0729 19:50:04.172861 1119948 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0729 19:50:04.172876 1119948 kubeadm.go:310] 
	I0729 19:50:04.172944 1119948 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0729 19:50:04.173046 1119948 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0729 19:50:04.173056 1119948 kubeadm.go:310] 
	I0729 19:50:04.173171 1119948 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token yaj2k6.6nijnxczu3nl8yfv \
	I0729 19:50:04.173307 1119948 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:53e118302d1159bf0acd8cfe005edb508199276288ff17d6630ff7f7307f10b1 \
	I0729 19:50:04.173330 1119948 kubeadm.go:310] 	--control-plane 
	I0729 19:50:04.173334 1119948 kubeadm.go:310] 
	I0729 19:50:04.173405 1119948 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0729 19:50:04.173411 1119948 kubeadm.go:310] 
	I0729 19:50:04.173493 1119948 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token yaj2k6.6nijnxczu3nl8yfv \
	I0729 19:50:04.173666 1119948 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:53e118302d1159bf0acd8cfe005edb508199276288ff17d6630ff7f7307f10b1 
	I0729 19:50:04.175016 1119948 kubeadm.go:310] W0729 19:49:56.020841    2986 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0729 19:50:04.175395 1119948 kubeadm.go:310] W0729 19:49:56.021779    2986 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0729 19:50:04.175537 1119948 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 19:50:04.175567 1119948 cni.go:84] Creating CNI manager for ""
	I0729 19:50:04.175577 1119948 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 19:50:04.177050 1119948 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 19:50:04.178074 1119948 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 19:50:04.189753 1119948 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0729 19:50:04.212891 1119948 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0729 19:50:04.213003 1119948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:50:04.213014 1119948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-843792 minikube.k8s.io/updated_at=2024_07_29T19_50_04_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=ff26f50543a99741e8a988a34136ffea2b41dbf0 minikube.k8s.io/name=no-preload-843792 minikube.k8s.io/primary=true
	I0729 19:50:04.241948 1119948 ops.go:34] apiserver oom_adj: -16
	I0729 19:50:04.470011 1119948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:50:04.970139 1119948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:50:05.470618 1119948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:50:05.970968 1119948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:50:06.471036 1119948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:50:06.970260 1119948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:50:07.470060 1119948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:50:07.970455 1119948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:50:08.091380 1119948 kubeadm.go:1113] duration metric: took 3.878454801s to wait for elevateKubeSystemPrivileges
	I0729 19:50:08.091420 1119948 kubeadm.go:394] duration metric: took 4m59.009669918s to StartCluster
	I0729 19:50:08.091442 1119948 settings.go:142] acquiring lock: {Name:mk8657322241b3b1f65443d6cee1b2ccb99f315e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 19:50:08.091531 1119948 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19312-1055011/kubeconfig
	I0729 19:50:08.093926 1119948 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-1055011/kubeconfig: {Name:mkf834b33d9b214f3561db5b8f8958d26700afbb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 19:50:08.094254 1119948 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.248 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 19:50:08.094349 1119948 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0729 19:50:08.094445 1119948 addons.go:69] Setting storage-provisioner=true in profile "no-preload-843792"
	I0729 19:50:08.094490 1119948 addons.go:234] Setting addon storage-provisioner=true in "no-preload-843792"
	I0729 19:50:08.094489 1119948 config.go:182] Loaded profile config "no-preload-843792": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	W0729 19:50:08.094502 1119948 addons.go:243] addon storage-provisioner should already be in state true
	I0729 19:50:08.094506 1119948 addons.go:69] Setting default-storageclass=true in profile "no-preload-843792"
	I0729 19:50:08.094537 1119948 host.go:66] Checking if "no-preload-843792" exists ...
	I0729 19:50:08.094545 1119948 addons.go:69] Setting metrics-server=true in profile "no-preload-843792"
	I0729 19:50:08.094555 1119948 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-843792"
	I0729 19:50:08.094567 1119948 addons.go:234] Setting addon metrics-server=true in "no-preload-843792"
	W0729 19:50:08.094576 1119948 addons.go:243] addon metrics-server should already be in state true
	I0729 19:50:08.094606 1119948 host.go:66] Checking if "no-preload-843792" exists ...
	I0729 19:50:08.094992 1119948 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:50:08.095014 1119948 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:50:08.094991 1119948 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:50:08.095032 1119948 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:50:08.095032 1119948 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:50:08.095053 1119948 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:50:08.095990 1119948 out.go:177] * Verifying Kubernetes components...
	I0729 19:50:08.097297 1119948 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 19:50:08.111086 1119948 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39031
	I0729 19:50:08.111172 1119948 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35951
	I0729 19:50:08.111530 1119948 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:50:08.111611 1119948 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:50:08.112076 1119948 main.go:141] libmachine: Using API Version  1
	I0729 19:50:08.112096 1119948 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:50:08.112212 1119948 main.go:141] libmachine: Using API Version  1
	I0729 19:50:08.112236 1119948 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:50:08.112601 1119948 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:50:08.112598 1119948 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:50:08.113192 1119948 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:50:08.113222 1119948 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:50:08.113195 1119948 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:50:08.113331 1119948 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:50:08.113688 1119948 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43039
	I0729 19:50:08.114065 1119948 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:50:08.114550 1119948 main.go:141] libmachine: Using API Version  1
	I0729 19:50:08.114573 1119948 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:50:08.115130 1119948 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:50:08.115340 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetState
	I0729 19:50:08.118967 1119948 addons.go:234] Setting addon default-storageclass=true in "no-preload-843792"
	W0729 19:50:08.118988 1119948 addons.go:243] addon default-storageclass should already be in state true
	I0729 19:50:08.119018 1119948 host.go:66] Checking if "no-preload-843792" exists ...
	I0729 19:50:08.119367 1119948 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:50:08.119391 1119948 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:50:08.131330 1119948 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34509
	I0729 19:50:08.131868 1119948 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:50:08.132155 1119948 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44961
	I0729 19:50:08.132404 1119948 main.go:141] libmachine: Using API Version  1
	I0729 19:50:08.132427 1119948 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:50:08.132485 1119948 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:50:08.132795 1119948 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:50:08.133148 1119948 main.go:141] libmachine: Using API Version  1
	I0729 19:50:08.133167 1119948 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:50:08.133169 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetState
	I0729 19:50:08.133541 1119948 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:50:08.133802 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetState
	I0729 19:50:08.135456 1119948 main.go:141] libmachine: (no-preload-843792) Calling .DriverName
	I0729 19:50:08.135939 1119948 main.go:141] libmachine: (no-preload-843792) Calling .DriverName
	I0729 19:50:08.137341 1119948 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 19:50:08.137345 1119948 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0729 19:50:08.139247 1119948 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0729 19:50:08.139281 1119948 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0729 19:50:08.139303 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHHostname
	I0729 19:50:08.139373 1119948 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 19:50:08.139393 1119948 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0729 19:50:08.139411 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHHostname
	I0729 19:50:08.143427 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:50:08.143462 1119948 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40183
	I0729 19:50:08.143636 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:50:08.143916 1119948 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:50:08.143982 1119948 main.go:141] libmachine: (no-preload-843792) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:0e:8c", ip: ""} in network mk-no-preload-843792: {Iface:virbr2 ExpiryTime:2024-07-29 20:44:43 +0000 UTC Type:0 Mac:52:54:00:ae:0e:8c Iaid: IPaddr:192.168.50.248 Prefix:24 Hostname:no-preload-843792 Clientid:01:52:54:00:ae:0e:8c}
	I0729 19:50:08.143994 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined IP address 192.168.50.248 and MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:50:08.144028 1119948 main.go:141] libmachine: (no-preload-843792) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:0e:8c", ip: ""} in network mk-no-preload-843792: {Iface:virbr2 ExpiryTime:2024-07-29 20:44:43 +0000 UTC Type:0 Mac:52:54:00:ae:0e:8c Iaid: IPaddr:192.168.50.248 Prefix:24 Hostname:no-preload-843792 Clientid:01:52:54:00:ae:0e:8c}
	I0729 19:50:08.144061 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined IP address 192.168.50.248 and MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:50:08.144375 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHPort
	I0729 19:50:08.144420 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHPort
	I0729 19:50:08.144425 1119948 main.go:141] libmachine: Using API Version  1
	I0729 19:50:08.144437 1119948 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:50:08.144564 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHKeyPath
	I0729 19:50:08.144608 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHKeyPath
	I0729 19:50:08.144771 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHUsername
	I0729 19:50:08.144802 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHUsername
	I0729 19:50:08.144836 1119948 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:50:08.144947 1119948 sshutil.go:53] new ssh client: &{IP:192.168.50.248 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/no-preload-843792/id_rsa Username:docker}
	I0729 19:50:08.144951 1119948 sshutil.go:53] new ssh client: &{IP:192.168.50.248 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/no-preload-843792/id_rsa Username:docker}
	I0729 19:50:08.145438 1119948 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:50:08.145468 1119948 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:50:08.162100 1119948 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46055
	I0729 19:50:08.162705 1119948 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:50:08.163290 1119948 main.go:141] libmachine: Using API Version  1
	I0729 19:50:08.163312 1119948 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:50:08.163700 1119948 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:50:08.163887 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetState
	I0729 19:50:08.165757 1119948 main.go:141] libmachine: (no-preload-843792) Calling .DriverName
	I0729 19:50:08.165967 1119948 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0729 19:50:08.165983 1119948 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0729 19:50:08.166000 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHHostname
	I0729 19:50:08.169065 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:50:08.169515 1119948 main.go:141] libmachine: (no-preload-843792) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:0e:8c", ip: ""} in network mk-no-preload-843792: {Iface:virbr2 ExpiryTime:2024-07-29 20:44:43 +0000 UTC Type:0 Mac:52:54:00:ae:0e:8c Iaid: IPaddr:192.168.50.248 Prefix:24 Hostname:no-preload-843792 Clientid:01:52:54:00:ae:0e:8c}
	I0729 19:50:08.169535 1119948 main.go:141] libmachine: (no-preload-843792) DBG | domain no-preload-843792 has defined IP address 192.168.50.248 and MAC address 52:54:00:ae:0e:8c in network mk-no-preload-843792
	I0729 19:50:08.169694 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHPort
	I0729 19:50:08.169850 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHKeyPath
	I0729 19:50:08.170030 1119948 main.go:141] libmachine: (no-preload-843792) Calling .GetSSHUsername
	I0729 19:50:08.170144 1119948 sshutil.go:53] new ssh client: &{IP:192.168.50.248 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/no-preload-843792/id_rsa Username:docker}
	I0729 19:50:08.279563 1119948 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 19:50:08.297004 1119948 node_ready.go:35] waiting up to 6m0s for node "no-preload-843792" to be "Ready" ...
	I0729 19:50:08.308403 1119948 node_ready.go:49] node "no-preload-843792" has status "Ready":"True"
	I0729 19:50:08.308428 1119948 node_ready.go:38] duration metric: took 11.381814ms for node "no-preload-843792" to be "Ready" ...
	I0729 19:50:08.308437 1119948 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 19:50:08.326920 1119948 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5cfdc65f69-ck5zf" in "kube-system" namespace to be "Ready" ...
	I0729 19:50:08.394482 1119948 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0729 19:50:08.394511 1119948 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0729 19:50:08.431819 1119948 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0729 19:50:08.431850 1119948 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0729 19:50:08.432280 1119948 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 19:50:08.452951 1119948 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0729 19:50:08.512078 1119948 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 19:50:08.512110 1119948 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0729 19:50:08.636490 1119948 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 19:50:09.357187 1119948 main.go:141] libmachine: Making call to close driver server
	I0729 19:50:09.357212 1119948 main.go:141] libmachine: (no-preload-843792) Calling .Close
	I0729 19:50:09.357248 1119948 main.go:141] libmachine: Making call to close driver server
	I0729 19:50:09.357274 1119948 main.go:141] libmachine: (no-preload-843792) Calling .Close
	I0729 19:50:09.357564 1119948 main.go:141] libmachine: (no-preload-843792) DBG | Closing plugin on server side
	I0729 19:50:09.357633 1119948 main.go:141] libmachine: (no-preload-843792) DBG | Closing plugin on server side
	I0729 19:50:09.357646 1119948 main.go:141] libmachine: Successfully made call to close driver server
	I0729 19:50:09.357646 1119948 main.go:141] libmachine: Successfully made call to close driver server
	I0729 19:50:09.357659 1119948 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 19:50:09.357662 1119948 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 19:50:09.357671 1119948 main.go:141] libmachine: Making call to close driver server
	I0729 19:50:09.357679 1119948 main.go:141] libmachine: (no-preload-843792) Calling .Close
	I0729 19:50:09.357682 1119948 main.go:141] libmachine: Making call to close driver server
	I0729 19:50:09.357690 1119948 main.go:141] libmachine: (no-preload-843792) Calling .Close
	I0729 19:50:09.358945 1119948 main.go:141] libmachine: (no-preload-843792) DBG | Closing plugin on server side
	I0729 19:50:09.358969 1119948 main.go:141] libmachine: Successfully made call to close driver server
	I0729 19:50:09.359019 1119948 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 19:50:09.359042 1119948 main.go:141] libmachine: (no-preload-843792) DBG | Closing plugin on server side
	I0729 19:50:09.358989 1119948 main.go:141] libmachine: Successfully made call to close driver server
	I0729 19:50:09.359074 1119948 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 19:50:09.419421 1119948 main.go:141] libmachine: Making call to close driver server
	I0729 19:50:09.419445 1119948 main.go:141] libmachine: (no-preload-843792) Calling .Close
	I0729 19:50:09.419864 1119948 main.go:141] libmachine: (no-preload-843792) DBG | Closing plugin on server side
	I0729 19:50:09.419868 1119948 main.go:141] libmachine: Successfully made call to close driver server
	I0729 19:50:09.419905 1119948 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 19:50:09.938758 1119948 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.302197805s)
	I0729 19:50:09.938827 1119948 main.go:141] libmachine: Making call to close driver server
	I0729 19:50:09.938854 1119948 main.go:141] libmachine: (no-preload-843792) Calling .Close
	I0729 19:50:09.939241 1119948 main.go:141] libmachine: Successfully made call to close driver server
	I0729 19:50:09.939260 1119948 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 19:50:09.939270 1119948 main.go:141] libmachine: (no-preload-843792) DBG | Closing plugin on server side
	I0729 19:50:09.939273 1119948 main.go:141] libmachine: Making call to close driver server
	I0729 19:50:09.939284 1119948 main.go:141] libmachine: (no-preload-843792) Calling .Close
	I0729 19:50:09.939509 1119948 main.go:141] libmachine: Successfully made call to close driver server
	I0729 19:50:09.939526 1119948 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 19:50:09.939540 1119948 addons.go:475] Verifying addon metrics-server=true in "no-preload-843792"
	I0729 19:50:09.939558 1119948 main.go:141] libmachine: (no-preload-843792) DBG | Closing plugin on server side
	I0729 19:50:09.941050 1119948 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0729 19:50:09.942006 1119948 addons.go:510] duration metric: took 1.847661826s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0729 19:50:10.334878 1119948 pod_ready.go:102] pod "coredns-5cfdc65f69-ck5zf" in "kube-system" namespace has status "Ready":"False"
	I0729 19:50:12.834554 1119948 pod_ready.go:102] pod "coredns-5cfdc65f69-ck5zf" in "kube-system" namespace has status "Ready":"False"
	I0729 19:50:15.334388 1119948 pod_ready.go:102] pod "coredns-5cfdc65f69-ck5zf" in "kube-system" namespace has status "Ready":"False"
	I0729 19:50:16.843448 1119948 pod_ready.go:92] pod "coredns-5cfdc65f69-ck5zf" in "kube-system" namespace has status "Ready":"True"
	I0729 19:50:16.843480 1119948 pod_ready.go:81] duration metric: took 8.516527239s for pod "coredns-5cfdc65f69-ck5zf" in "kube-system" namespace to be "Ready" ...
	I0729 19:50:16.843494 1119948 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-843792" in "kube-system" namespace to be "Ready" ...
	I0729 19:50:16.847567 1119948 pod_ready.go:92] pod "etcd-no-preload-843792" in "kube-system" namespace has status "Ready":"True"
	I0729 19:50:16.847588 1119948 pod_ready.go:81] duration metric: took 4.086961ms for pod "etcd-no-preload-843792" in "kube-system" namespace to be "Ready" ...
	I0729 19:50:16.847597 1119948 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-843792" in "kube-system" namespace to be "Ready" ...
	I0729 19:50:16.857374 1119948 pod_ready.go:92] pod "kube-apiserver-no-preload-843792" in "kube-system" namespace has status "Ready":"True"
	I0729 19:50:16.857395 1119948 pod_ready.go:81] duration metric: took 9.790628ms for pod "kube-apiserver-no-preload-843792" in "kube-system" namespace to be "Ready" ...
	I0729 19:50:16.857403 1119948 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-843792" in "kube-system" namespace to be "Ready" ...
	I0729 19:50:16.861971 1119948 pod_ready.go:92] pod "kube-controller-manager-no-preload-843792" in "kube-system" namespace has status "Ready":"True"
	I0729 19:50:16.861990 1119948 pod_ready.go:81] duration metric: took 4.580287ms for pod "kube-controller-manager-no-preload-843792" in "kube-system" namespace to be "Ready" ...
	I0729 19:50:16.861998 1119948 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-843792" in "kube-system" namespace to be "Ready" ...
	I0729 19:50:16.865992 1119948 pod_ready.go:92] pod "kube-scheduler-no-preload-843792" in "kube-system" namespace has status "Ready":"True"
	I0729 19:50:16.866006 1119948 pod_ready.go:81] duration metric: took 4.002585ms for pod "kube-scheduler-no-preload-843792" in "kube-system" namespace to be "Ready" ...
	I0729 19:50:16.866012 1119948 pod_ready.go:38] duration metric: took 8.557565808s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 19:50:16.866026 1119948 api_server.go:52] waiting for apiserver process to appear ...
	I0729 19:50:16.866069 1119948 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:50:16.881797 1119948 api_server.go:72] duration metric: took 8.787509233s to wait for apiserver process to appear ...
	I0729 19:50:16.881817 1119948 api_server.go:88] waiting for apiserver healthz status ...
	I0729 19:50:16.881835 1119948 api_server.go:253] Checking apiserver healthz at https://192.168.50.248:8443/healthz ...
	I0729 19:50:16.886007 1119948 api_server.go:279] https://192.168.50.248:8443/healthz returned 200:
	ok
	I0729 19:50:16.886862 1119948 api_server.go:141] control plane version: v1.31.0-beta.0
	I0729 19:50:16.886882 1119948 api_server.go:131] duration metric: took 5.057536ms to wait for apiserver health ...
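	The healthz wait above is a plain HTTPS GET against the apiserver endpoint shown in the log. If a start ever stalls at this stage, the same probe can be reproduced by hand; this is only a sketch using the endpoint and context name from this run (a bare curl may return 401/403 when anonymous access is disabled, in which case the kubectl form is the reliable one):

	    curl -k https://192.168.50.248:8443/healthz
	    kubectl --context no-preload-843792 get --raw /healthz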
	I0729 19:50:16.886891 1119948 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 19:50:17.034651 1119948 system_pods.go:59] 9 kube-system pods found
	I0729 19:50:17.034684 1119948 system_pods.go:61] "coredns-5cfdc65f69-bk2nx" [662b0879-7c15-4ec3-a6b6-e49fd9597dcf] Running
	I0729 19:50:17.034689 1119948 system_pods.go:61] "coredns-5cfdc65f69-ck5zf" [ad6c9c9b-740c-464d-85c2-a9ae44663f63] Running
	I0729 19:50:17.034693 1119948 system_pods.go:61] "etcd-no-preload-843792" [e4cba264-21e2-499e-9768-417b316f6a04] Running
	I0729 19:50:17.034696 1119948 system_pods.go:61] "kube-apiserver-no-preload-843792" [24c2bd0e-2029-4985-836a-599ad2a2a7ab] Running
	I0729 19:50:17.034700 1119948 system_pods.go:61] "kube-controller-manager-no-preload-843792" [fb7ec8d7-5d48-428a-af99-f031d747fe2b] Running
	I0729 19:50:17.034704 1119948 system_pods.go:61] "kube-proxy-8hbrf" [3b64c7b2-cbed-4c0e-bc1b-2cef107b115c] Running
	I0729 19:50:17.034706 1119948 system_pods.go:61] "kube-scheduler-no-preload-843792" [fc166fdd-59e8-41f0-909c-71044da69f34] Running
	I0729 19:50:17.034712 1119948 system_pods.go:61] "metrics-server-78fcd8795b-fzt2k" [180acfb0-ec43-4f2e-b04a-048253d4b79e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 19:50:17.034716 1119948 system_pods.go:61] "storage-provisioner" [ee09516d-7ef7-4d66-9acf-7fd4cde3c673] Running
	I0729 19:50:17.034723 1119948 system_pods.go:74] duration metric: took 147.826766ms to wait for pod list to return data ...
	I0729 19:50:17.034731 1119948 default_sa.go:34] waiting for default service account to be created ...
	I0729 19:50:17.231811 1119948 default_sa.go:45] found service account: "default"
	I0729 19:50:17.231841 1119948 default_sa.go:55] duration metric: took 197.103306ms for default service account to be created ...
	I0729 19:50:17.231852 1119948 system_pods.go:116] waiting for k8s-apps to be running ...
	I0729 19:50:17.435766 1119948 system_pods.go:86] 9 kube-system pods found
	I0729 19:50:17.435801 1119948 system_pods.go:89] "coredns-5cfdc65f69-bk2nx" [662b0879-7c15-4ec3-a6b6-e49fd9597dcf] Running
	I0729 19:50:17.435809 1119948 system_pods.go:89] "coredns-5cfdc65f69-ck5zf" [ad6c9c9b-740c-464d-85c2-a9ae44663f63] Running
	I0729 19:50:17.435816 1119948 system_pods.go:89] "etcd-no-preload-843792" [e4cba264-21e2-499e-9768-417b316f6a04] Running
	I0729 19:50:17.435822 1119948 system_pods.go:89] "kube-apiserver-no-preload-843792" [24c2bd0e-2029-4985-836a-599ad2a2a7ab] Running
	I0729 19:50:17.435828 1119948 system_pods.go:89] "kube-controller-manager-no-preload-843792" [fb7ec8d7-5d48-428a-af99-f031d747fe2b] Running
	I0729 19:50:17.435835 1119948 system_pods.go:89] "kube-proxy-8hbrf" [3b64c7b2-cbed-4c0e-bc1b-2cef107b115c] Running
	I0729 19:50:17.435841 1119948 system_pods.go:89] "kube-scheduler-no-preload-843792" [fc166fdd-59e8-41f0-909c-71044da69f34] Running
	I0729 19:50:17.435849 1119948 system_pods.go:89] "metrics-server-78fcd8795b-fzt2k" [180acfb0-ec43-4f2e-b04a-048253d4b79e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 19:50:17.435856 1119948 system_pods.go:89] "storage-provisioner" [ee09516d-7ef7-4d66-9acf-7fd4cde3c673] Running
	I0729 19:50:17.435867 1119948 system_pods.go:126] duration metric: took 204.008054ms to wait for k8s-apps to be running ...
	I0729 19:50:17.435875 1119948 system_svc.go:44] waiting for kubelet service to be running ....
	I0729 19:50:17.435926 1119948 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 19:50:17.451816 1119948 system_svc.go:56] duration metric: took 15.929502ms WaitForService to wait for kubelet
	I0729 19:50:17.451848 1119948 kubeadm.go:582] duration metric: took 9.357563402s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 19:50:17.451872 1119948 node_conditions.go:102] verifying NodePressure condition ...
	I0729 19:50:17.632427 1119948 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 19:50:17.632465 1119948 node_conditions.go:123] node cpu capacity is 2
	I0729 19:50:17.632481 1119948 node_conditions.go:105] duration metric: took 180.602976ms to run NodePressure ...
	I0729 19:50:17.632497 1119948 start.go:241] waiting for startup goroutines ...
	I0729 19:50:17.632506 1119948 start.go:246] waiting for cluster config update ...
	I0729 19:50:17.632525 1119948 start.go:255] writing updated cluster config ...
	I0729 19:50:17.632908 1119948 ssh_runner.go:195] Run: rm -f paused
	I0729 19:50:17.687540 1119948 start.go:600] kubectl: 1.30.3, cluster: 1.31.0-beta.0 (minor skew: 1)
	I0729 19:50:17.689409 1119948 out.go:177] * Done! kubectl is now configured to use "no-preload-843792" cluster and "default" namespace by default
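	The line above notes a one-minor-version skew between the kubectl client (1.30.3) and the cluster (1.31.0-beta.0), which is within kubectl's supported skew. A quick way to confirm both versions and that the freshly configured context is healthy, using only standard kubectl commands (nothing specific to this test harness):

	    kubectl version
	    kubectl --context no-preload-843792 get nodes
	    kubectl --context no-preload-843792 -n kube-system get pods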
	I0729 19:50:40.036000 1120970 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0729 19:50:40.036324 1120970 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0729 19:50:40.038447 1120970 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0729 19:50:40.038603 1120970 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 19:50:40.038790 1120970 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 19:50:40.039225 1120970 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 19:50:40.039617 1120970 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0729 19:50:40.039731 1120970 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 19:50:40.041420 1120970 out.go:204]   - Generating certificates and keys ...
	I0729 19:50:40.041522 1120970 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 19:50:40.041589 1120970 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 19:50:40.041712 1120970 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0729 19:50:40.041810 1120970 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0729 19:50:40.041935 1120970 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0729 19:50:40.042019 1120970 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0729 19:50:40.042111 1120970 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0729 19:50:40.042190 1120970 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0729 19:50:40.042285 1120970 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0729 19:50:40.042401 1120970 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0729 19:50:40.042465 1120970 kubeadm.go:310] [certs] Using the existing "sa" key
	I0729 19:50:40.042535 1120970 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 19:50:40.042581 1120970 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 19:50:40.042628 1120970 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 19:50:40.042698 1120970 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 19:50:40.042781 1120970 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 19:50:40.042934 1120970 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 19:50:40.043061 1120970 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 19:50:40.043128 1120970 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 19:50:40.043208 1120970 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 19:50:40.044637 1120970 out.go:204]   - Booting up control plane ...
	I0729 19:50:40.044750 1120970 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 19:50:40.044847 1120970 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 19:50:40.044908 1120970 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 19:50:40.044976 1120970 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 19:50:40.045145 1120970 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0729 19:50:40.045212 1120970 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0729 19:50:40.045276 1120970 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 19:50:40.045442 1120970 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 19:50:40.045511 1120970 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 19:50:40.045697 1120970 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 19:50:40.045797 1120970 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 19:50:40.046043 1120970 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 19:50:40.046153 1120970 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 19:50:40.046441 1120970 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 19:50:40.046567 1120970 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 19:50:40.046878 1120970 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 19:50:40.046894 1120970 kubeadm.go:310] 
	I0729 19:50:40.046945 1120970 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0729 19:50:40.047019 1120970 kubeadm.go:310] 		timed out waiting for the condition
	I0729 19:50:40.047039 1120970 kubeadm.go:310] 
	I0729 19:50:40.047104 1120970 kubeadm.go:310] 	This error is likely caused by:
	I0729 19:50:40.047158 1120970 kubeadm.go:310] 		- The kubelet is not running
	I0729 19:50:40.047301 1120970 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0729 19:50:40.047312 1120970 kubeadm.go:310] 
	I0729 19:50:40.047465 1120970 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0729 19:50:40.047513 1120970 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0729 19:50:40.047558 1120970 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0729 19:50:40.047567 1120970 kubeadm.go:310] 
	I0729 19:50:40.047728 1120970 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0729 19:50:40.047859 1120970 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0729 19:50:40.047870 1120970 kubeadm.go:310] 
	I0729 19:50:40.048028 1120970 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0729 19:50:40.048161 1120970 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0729 19:50:40.048274 1120970 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0729 19:50:40.048387 1120970 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0729 19:50:40.048422 1120970 kubeadm.go:310] 
	W0729 19:50:40.048546 1120970 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0729 19:50:40.048632 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0729 19:50:40.512123 1120970 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 19:50:40.526973 1120970 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 19:50:40.540285 1120970 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 19:50:40.540322 1120970 kubeadm.go:157] found existing configuration files:
	
	I0729 19:50:40.540390 1120970 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 19:50:40.550130 1120970 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 19:50:40.550188 1120970 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 19:50:40.560312 1120970 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 19:50:40.570460 1120970 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 19:50:40.570513 1120970 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 19:50:40.579979 1120970 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 19:50:40.589806 1120970 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 19:50:40.589848 1120970 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 19:50:40.599351 1120970 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 19:50:40.609134 1120970 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 19:50:40.609190 1120970 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
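	The grep/rm sequence above is minikube's stale-config check: for each kubeconfig under /etc/kubernetes it greps for the expected control-plane endpoint and removes the file when that endpoint is absent (here the files do not exist yet, so every grep exits with status 2 and the rm is a no-op). The same pattern, expressed as a small shell sketch; the endpoint and file names are taken from the log, the loop itself is illustrative:

	    endpoint="https://control-plane.minikube.internal:8443"
	    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	        # keep the file only if it already points at the expected control plane
	        if ! sudo grep -q "$endpoint" "/etc/kubernetes/$f"; then
	            sudo rm -f "/etc/kubernetes/$f"
	        fi
	    done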
	I0729 19:50:40.618767 1120970 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 19:50:40.686644 1120970 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0729 19:50:40.686775 1120970 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 19:50:40.844131 1120970 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 19:50:40.844252 1120970 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 19:50:40.844357 1120970 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0729 19:50:41.018497 1120970 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 19:50:41.020295 1120970 out.go:204]   - Generating certificates and keys ...
	I0729 19:50:41.020404 1120970 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 19:50:41.020471 1120970 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 19:50:41.020559 1120970 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0729 19:50:41.020614 1120970 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0729 19:50:41.020675 1120970 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0729 19:50:41.020720 1120970 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0729 19:50:41.021041 1120970 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0729 19:50:41.021463 1120970 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0729 19:50:41.021868 1120970 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0729 19:50:41.022329 1120970 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0729 19:50:41.022411 1120970 kubeadm.go:310] [certs] Using the existing "sa" key
	I0729 19:50:41.022503 1120970 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 19:50:41.204952 1120970 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 19:50:41.438572 1120970 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 19:50:41.878587 1120970 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 19:50:42.428806 1120970 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 19:50:42.447931 1120970 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 19:50:42.448990 1120970 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 19:50:42.449131 1120970 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 19:50:42.580942 1120970 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 19:50:42.582493 1120970 out.go:204]   - Booting up control plane ...
	I0729 19:50:42.582600 1120970 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 19:50:42.589862 1120970 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 19:50:42.590833 1120970 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 19:50:42.591685 1120970 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 19:50:42.594079 1120970 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0729 19:51:22.596326 1120970 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0729 19:51:22.596639 1120970 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 19:51:22.596846 1120970 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 19:51:27.597439 1120970 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 19:51:27.597671 1120970 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 19:51:37.598638 1120970 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 19:51:37.598811 1120970 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 19:51:57.599401 1120970 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 19:51:57.599704 1120970 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 19:52:37.597710 1120970 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 19:52:37.597992 1120970 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 19:52:37.598034 1120970 kubeadm.go:310] 
	I0729 19:52:37.598090 1120970 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0729 19:52:37.598166 1120970 kubeadm.go:310] 		timed out waiting for the condition
	I0729 19:52:37.598179 1120970 kubeadm.go:310] 
	I0729 19:52:37.598228 1120970 kubeadm.go:310] 	This error is likely caused by:
	I0729 19:52:37.598326 1120970 kubeadm.go:310] 		- The kubelet is not running
	I0729 19:52:37.598515 1120970 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0729 19:52:37.598528 1120970 kubeadm.go:310] 
	I0729 19:52:37.598671 1120970 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0729 19:52:37.598715 1120970 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0729 19:52:37.598777 1120970 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0729 19:52:37.598806 1120970 kubeadm.go:310] 
	I0729 19:52:37.598984 1120970 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0729 19:52:37.599100 1120970 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0729 19:52:37.599114 1120970 kubeadm.go:310] 
	I0729 19:52:37.599266 1120970 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0729 19:52:37.599393 1120970 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0729 19:52:37.599499 1120970 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0729 19:52:37.599617 1120970 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0729 19:52:37.599637 1120970 kubeadm.go:310] 
	I0729 19:52:37.600349 1120970 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 19:52:37.600471 1120970 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0729 19:52:37.600641 1120970 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0729 19:52:37.600707 1120970 kubeadm.go:394] duration metric: took 7m57.951835284s to StartCluster
	I0729 19:52:37.600799 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:52:37.600929 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:52:37.643870 1120970 cri.go:89] found id: ""
	I0729 19:52:37.643913 1120970 logs.go:276] 0 containers: []
	W0729 19:52:37.643921 1120970 logs.go:278] No container was found matching "kube-apiserver"
	I0729 19:52:37.643928 1120970 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:52:37.643993 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:52:37.679484 1120970 cri.go:89] found id: ""
	I0729 19:52:37.679519 1120970 logs.go:276] 0 containers: []
	W0729 19:52:37.679529 1120970 logs.go:278] No container was found matching "etcd"
	I0729 19:52:37.679535 1120970 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:52:37.679602 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:52:37.716326 1120970 cri.go:89] found id: ""
	I0729 19:52:37.716358 1120970 logs.go:276] 0 containers: []
	W0729 19:52:37.716366 1120970 logs.go:278] No container was found matching "coredns"
	I0729 19:52:37.716372 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:52:37.716427 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:52:37.751441 1120970 cri.go:89] found id: ""
	I0729 19:52:37.751468 1120970 logs.go:276] 0 containers: []
	W0729 19:52:37.751477 1120970 logs.go:278] No container was found matching "kube-scheduler"
	I0729 19:52:37.751483 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:52:37.751555 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:52:37.791309 1120970 cri.go:89] found id: ""
	I0729 19:52:37.791334 1120970 logs.go:276] 0 containers: []
	W0729 19:52:37.791343 1120970 logs.go:278] No container was found matching "kube-proxy"
	I0729 19:52:37.791354 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:52:37.791409 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:52:37.824637 1120970 cri.go:89] found id: ""
	I0729 19:52:37.824664 1120970 logs.go:276] 0 containers: []
	W0729 19:52:37.824674 1120970 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 19:52:37.824682 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:52:37.824749 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:52:37.863031 1120970 cri.go:89] found id: ""
	I0729 19:52:37.863060 1120970 logs.go:276] 0 containers: []
	W0729 19:52:37.863068 1120970 logs.go:278] No container was found matching "kindnet"
	I0729 19:52:37.863074 1120970 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 19:52:37.863134 1120970 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 19:52:37.905864 1120970 cri.go:89] found id: ""
	I0729 19:52:37.905918 1120970 logs.go:276] 0 containers: []
	W0729 19:52:37.905931 1120970 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0729 19:52:37.905945 1120970 logs.go:123] Gathering logs for kubelet ...
	I0729 19:52:37.905965 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 19:52:37.958561 1120970 logs.go:123] Gathering logs for dmesg ...
	I0729 19:52:37.958601 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:52:37.983602 1120970 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:52:37.983635 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 19:52:38.080775 1120970 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 19:52:38.080810 1120970 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:52:38.080827 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 19:52:38.185475 1120970 logs.go:123] Gathering logs for container status ...
	I0729 19:52:38.185512 1120970 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0729 19:52:38.227581 1120970 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0729 19:52:38.227653 1120970 out.go:239] * 
	W0729 19:52:38.227722 1120970 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0729 19:52:38.227748 1120970 out.go:239] * 
	W0729 19:52:38.228777 1120970 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 19:52:38.231684 1120970 out.go:177] 
	W0729 19:52:38.232618 1120970 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0729 19:52:38.232683 1120970 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0729 19:52:38.232710 1120970 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0729 19:52:38.234472 1120970 out.go:177] 
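	The exit above (K8S_KUBELET_NOT_RUNNING) carries its own suggestion: inspect the kubelet journal and retry with an explicit cgroup driver. A hedged retry along those lines, using the profile name and Kubernetes version visible in this log and the kvm2/cri-o combination this job runs; the original invocation's full flag set is not shown here, so any other flags it used would need to be repeated as well:

	    minikube -p old-k8s-version-021528 ssh "sudo journalctl -xeu kubelet | tail -n 100"
	    minikube start -p old-k8s-version-021528 --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd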
	
	
	==> CRI-O <==
	Jul 29 20:04:15 old-k8s-version-021528 crio[648]: time="2024-07-29 20:04:15.331873590Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722283455331846863,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c352844e-392d-4260-a7d4-34818082ad9d name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 20:04:15 old-k8s-version-021528 crio[648]: time="2024-07-29 20:04:15.332419638Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2bcd7dac-4d1d-425a-803b-6af3689fe38f name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 20:04:15 old-k8s-version-021528 crio[648]: time="2024-07-29 20:04:15.332494750Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2bcd7dac-4d1d-425a-803b-6af3689fe38f name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 20:04:15 old-k8s-version-021528 crio[648]: time="2024-07-29 20:04:15.332528268Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=2bcd7dac-4d1d-425a-803b-6af3689fe38f name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 20:04:15 old-k8s-version-021528 crio[648]: time="2024-07-29 20:04:15.363459483Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f0c3d47a-8332-4f3d-b8eb-61e8178882ed name=/runtime.v1.RuntimeService/Version
	Jul 29 20:04:15 old-k8s-version-021528 crio[648]: time="2024-07-29 20:04:15.363541898Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f0c3d47a-8332-4f3d-b8eb-61e8178882ed name=/runtime.v1.RuntimeService/Version
	Jul 29 20:04:15 old-k8s-version-021528 crio[648]: time="2024-07-29 20:04:15.364334105Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6d3143b1-7535-4a75-9dd6-68ab0e0a89aa name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 20:04:15 old-k8s-version-021528 crio[648]: time="2024-07-29 20:04:15.364848700Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722283455364820145,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6d3143b1-7535-4a75-9dd6-68ab0e0a89aa name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 20:04:15 old-k8s-version-021528 crio[648]: time="2024-07-29 20:04:15.365274155Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6ef4c5be-dd0d-430e-b930-ed50c7dfedf0 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 20:04:15 old-k8s-version-021528 crio[648]: time="2024-07-29 20:04:15.365335819Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6ef4c5be-dd0d-430e-b930-ed50c7dfedf0 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 20:04:15 old-k8s-version-021528 crio[648]: time="2024-07-29 20:04:15.365371923Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=6ef4c5be-dd0d-430e-b930-ed50c7dfedf0 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 20:04:15 old-k8s-version-021528 crio[648]: time="2024-07-29 20:04:15.394540461Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=debe74c7-ac8b-46b1-9b2a-7589589188f9 name=/runtime.v1.RuntimeService/Version
	Jul 29 20:04:15 old-k8s-version-021528 crio[648]: time="2024-07-29 20:04:15.394616555Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=debe74c7-ac8b-46b1-9b2a-7589589188f9 name=/runtime.v1.RuntimeService/Version
	Jul 29 20:04:15 old-k8s-version-021528 crio[648]: time="2024-07-29 20:04:15.395726137Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=429aea03-4e25-4923-b86a-03cffb00b626 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 20:04:15 old-k8s-version-021528 crio[648]: time="2024-07-29 20:04:15.396284740Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722283455396250179,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=429aea03-4e25-4923-b86a-03cffb00b626 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 20:04:15 old-k8s-version-021528 crio[648]: time="2024-07-29 20:04:15.396876036Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=11bfe54b-837b-41ea-9f1d-e4c897c7af24 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 20:04:15 old-k8s-version-021528 crio[648]: time="2024-07-29 20:04:15.396944106Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=11bfe54b-837b-41ea-9f1d-e4c897c7af24 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 20:04:15 old-k8s-version-021528 crio[648]: time="2024-07-29 20:04:15.396994490Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=11bfe54b-837b-41ea-9f1d-e4c897c7af24 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 20:04:15 old-k8s-version-021528 crio[648]: time="2024-07-29 20:04:15.425627313Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8e4cefee-f6cf-4be4-a97b-43cfaf250ead name=/runtime.v1.RuntimeService/Version
	Jul 29 20:04:15 old-k8s-version-021528 crio[648]: time="2024-07-29 20:04:15.425681683Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8e4cefee-f6cf-4be4-a97b-43cfaf250ead name=/runtime.v1.RuntimeService/Version
	Jul 29 20:04:15 old-k8s-version-021528 crio[648]: time="2024-07-29 20:04:15.427428759Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=25933bb5-8238-41a1-833f-3b33ed216752 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 20:04:15 old-k8s-version-021528 crio[648]: time="2024-07-29 20:04:15.427924250Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722283455427900839,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=25933bb5-8238-41a1-833f-3b33ed216752 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 20:04:15 old-k8s-version-021528 crio[648]: time="2024-07-29 20:04:15.428633892Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=945700fc-8a8c-4e34-8433-1a440bd93779 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 20:04:15 old-k8s-version-021528 crio[648]: time="2024-07-29 20:04:15.428822442Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=945700fc-8a8c-4e34-8433-1a440bd93779 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 20:04:15 old-k8s-version-021528 crio[648]: time="2024-07-29 20:04:15.428906963Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=945700fc-8a8c-4e34-8433-1a440bd93779 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Jul29 19:44] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.055089] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.042985] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.117270] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.505686] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.586022] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.594595] systemd-fstab-generator[569]: Ignoring "noauto" option for root device
	[  +0.059829] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.057895] systemd-fstab-generator[581]: Ignoring "noauto" option for root device
	[  +0.197592] systemd-fstab-generator[595]: Ignoring "noauto" option for root device
	[  +0.124559] systemd-fstab-generator[607]: Ignoring "noauto" option for root device
	[  +0.248534] systemd-fstab-generator[633]: Ignoring "noauto" option for root device
	[  +6.328570] systemd-fstab-generator[896]: Ignoring "noauto" option for root device
	[  +0.064370] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.920147] systemd-fstab-generator[1022]: Ignoring "noauto" option for root device
	[ +12.715960] kauditd_printk_skb: 46 callbacks suppressed
	[Jul29 19:48] systemd-fstab-generator[5089]: Ignoring "noauto" option for root device
	[Jul29 19:50] systemd-fstab-generator[5365]: Ignoring "noauto" option for root device
	[  +0.071408] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 20:04:15 up 20 min,  0 users,  load average: 0.34, 0.11, 0.06
	Linux old-k8s-version-021528 5.10.207 #1 SMP Tue Jul 23 04:25:44 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Jul 29 20:04:13 old-k8s-version-021528 kubelet[6872]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client.func3(0xc000a10000)
	Jul 29 20:04:13 old-k8s-version-021528 kubelet[6872]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:346 +0x7b
	Jul 29 20:04:13 old-k8s-version-021528 kubelet[6872]: created by k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client
	Jul 29 20:04:13 old-k8s-version-021528 kubelet[6872]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:344 +0xefc
	Jul 29 20:04:13 old-k8s-version-021528 kubelet[6872]: goroutine 113 [syscall]:
	Jul 29 20:04:13 old-k8s-version-021528 kubelet[6872]: syscall.Syscall6(0xe8, 0xd, 0xc000c59b6c, 0x7, 0xffffffffffffffff, 0x0, 0x0, 0x0, 0x0, 0x0)
	Jul 29 20:04:13 old-k8s-version-021528 kubelet[6872]:         /usr/local/go/src/syscall/asm_linux_amd64.s:41 +0x5
	Jul 29 20:04:13 old-k8s-version-021528 kubelet[6872]: k8s.io/kubernetes/vendor/golang.org/x/sys/unix.EpollWait(0xd, 0xc000c59b6c, 0x7, 0x7, 0xffffffffffffffff, 0x0, 0x0, 0x0)
	Jul 29 20:04:13 old-k8s-version-021528 kubelet[6872]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/sys/unix/zsyscall_linux_amd64.go:76 +0x72
	Jul 29 20:04:13 old-k8s-version-021528 kubelet[6872]: k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify.(*fdPoller).wait(0xc000aea2c0, 0x0, 0x0, 0x0)
	Jul 29 20:04:13 old-k8s-version-021528 kubelet[6872]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify/inotify_poller.go:86 +0x91
	Jul 29 20:04:13 old-k8s-version-021528 kubelet[6872]: k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify.(*Watcher).readEvents(0xc0005894a0)
	Jul 29 20:04:13 old-k8s-version-021528 kubelet[6872]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify/inotify.go:192 +0x206
	Jul 29 20:04:13 old-k8s-version-021528 kubelet[6872]: created by k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify.NewWatcher
	Jul 29 20:04:13 old-k8s-version-021528 kubelet[6872]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify/inotify.go:59 +0x1a8
	Jul 29 20:04:13 old-k8s-version-021528 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Jul 29 20:04:13 old-k8s-version-021528 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jul 29 20:04:13 old-k8s-version-021528 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 141.
	Jul 29 20:04:13 old-k8s-version-021528 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jul 29 20:04:13 old-k8s-version-021528 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jul 29 20:04:13 old-k8s-version-021528 kubelet[6881]: I0729 20:04:13.786247    6881 server.go:416] Version: v1.20.0
	Jul 29 20:04:13 old-k8s-version-021528 kubelet[6881]: I0729 20:04:13.786455    6881 server.go:837] Client rotation is on, will bootstrap in background
	Jul 29 20:04:13 old-k8s-version-021528 kubelet[6881]: I0729 20:04:13.789100    6881 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Jul 29 20:04:13 old-k8s-version-021528 kubelet[6881]: W0729 20:04:13.790324    6881 manager.go:159] Cannot detect current cgroup on cgroup v2
	Jul 29 20:04:13 old-k8s-version-021528 kubelet[6881]: I0729 20:04:13.790448    6881 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-021528 -n old-k8s-version-021528
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-021528 -n old-k8s-version-021528: exit status 2 (241.294792ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-021528" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (151.98s)

                                                
                                    

Test pass (249/320)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 8.08
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.06
9 TestDownloadOnly/v1.20.0/DeleteAll 0.13
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.30.3/json-events 4.08
13 TestDownloadOnly/v1.30.3/preload-exists 0
17 TestDownloadOnly/v1.30.3/LogsDuration 0.06
18 TestDownloadOnly/v1.30.3/DeleteAll 0.17
19 TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds 0.12
21 TestDownloadOnly/v1.31.0-beta.0/json-events 4.71
22 TestDownloadOnly/v1.31.0-beta.0/preload-exists 0
26 TestDownloadOnly/v1.31.0-beta.0/LogsDuration 0.05
27 TestDownloadOnly/v1.31.0-beta.0/DeleteAll 0.13
28 TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds 0.12
30 TestBinaryMirror 0.55
31 TestOffline 150.14
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
36 TestAddons/Setup 187.21
40 TestAddons/serial/GCPAuth/Namespaces 0.14
42 TestAddons/parallel/Registry 16.39
44 TestAddons/parallel/InspektorGadget 10.92
46 TestAddons/parallel/HelmTiller 9.4
48 TestAddons/parallel/CSI 52.3
49 TestAddons/parallel/Headlamp 19.21
50 TestAddons/parallel/CloudSpanner 6.21
51 TestAddons/parallel/LocalPath 51.56
52 TestAddons/parallel/NvidiaDevicePlugin 5.81
53 TestAddons/parallel/Yakd 11.87
55 TestCertOptions 43.25
56 TestCertExpiration 296.47
58 TestForceSystemdFlag 50.42
59 TestForceSystemdEnv 71.42
61 TestKVMDriverInstallOrUpdate 1.28
65 TestErrorSpam/setup 43.81
66 TestErrorSpam/start 0.34
67 TestErrorSpam/status 0.72
68 TestErrorSpam/pause 1.55
69 TestErrorSpam/unpause 1.6
70 TestErrorSpam/stop 4.96
73 TestFunctional/serial/CopySyncFile 0
74 TestFunctional/serial/StartWithProxy 57.82
75 TestFunctional/serial/AuditLog 0
76 TestFunctional/serial/SoftStart 35.62
77 TestFunctional/serial/KubeContext 0.04
78 TestFunctional/serial/KubectlGetPods 0.07
81 TestFunctional/serial/CacheCmd/cache/add_remote 3.33
82 TestFunctional/serial/CacheCmd/cache/add_local 1.06
83 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
84 TestFunctional/serial/CacheCmd/cache/list 0.04
85 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.22
86 TestFunctional/serial/CacheCmd/cache/cache_reload 1.63
87 TestFunctional/serial/CacheCmd/cache/delete 0.09
88 TestFunctional/serial/MinikubeKubectlCmd 0.1
89 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
90 TestFunctional/serial/ExtraConfig 39.8
91 TestFunctional/serial/ComponentHealth 0.06
92 TestFunctional/serial/LogsCmd 1.37
93 TestFunctional/serial/LogsFileCmd 1.4
94 TestFunctional/serial/InvalidService 3.89
96 TestFunctional/parallel/ConfigCmd 0.33
97 TestFunctional/parallel/DashboardCmd 10.34
98 TestFunctional/parallel/DryRun 0.3
99 TestFunctional/parallel/InternationalLanguage 0.15
100 TestFunctional/parallel/StatusCmd 0.96
104 TestFunctional/parallel/ServiceCmdConnect 8.64
105 TestFunctional/parallel/AddonsCmd 0.14
106 TestFunctional/parallel/PersistentVolumeClaim 38.62
108 TestFunctional/parallel/SSHCmd 0.47
109 TestFunctional/parallel/CpCmd 1.44
110 TestFunctional/parallel/MySQL 28.1
111 TestFunctional/parallel/FileSync 0.28
112 TestFunctional/parallel/CertSync 1.66
116 TestFunctional/parallel/NodeLabels 0.08
118 TestFunctional/parallel/NonActiveRuntimeDisabled 0.54
120 TestFunctional/parallel/License 0.16
121 TestFunctional/parallel/ServiceCmd/DeployApp 10.21
122 TestFunctional/parallel/ProfileCmd/profile_not_create 0.37
123 TestFunctional/parallel/ProfileCmd/profile_list 0.33
124 TestFunctional/parallel/MountCmd/any-port 7.49
125 TestFunctional/parallel/ProfileCmd/profile_json_output 0.37
135 TestFunctional/parallel/MountCmd/specific-port 1.99
136 TestFunctional/parallel/ServiceCmd/List 0.4
137 TestFunctional/parallel/MountCmd/VerifyCleanup 2
138 TestFunctional/parallel/ServiceCmd/JSONOutput 0.39
139 TestFunctional/parallel/ServiceCmd/HTTPS 0.35
140 TestFunctional/parallel/ServiceCmd/Format 0.33
141 TestFunctional/parallel/ServiceCmd/URL 0.41
142 TestFunctional/parallel/Version/short 0.05
143 TestFunctional/parallel/Version/components 0.89
144 TestFunctional/parallel/ImageCommands/ImageListShort 0.79
145 TestFunctional/parallel/ImageCommands/ImageListTable 0.5
146 TestFunctional/parallel/ImageCommands/ImageListJson 0.42
147 TestFunctional/parallel/ImageCommands/ImageListYaml 0.57
149 TestFunctional/parallel/ImageCommands/Setup 0.39
150 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 2.84
151 TestFunctional/parallel/UpdateContextCmd/no_changes 0.09
152 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.09
153 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.1
154 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.94
155 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.18
156 TestFunctional/parallel/ImageCommands/ImageSaveToFile 2.4
157 TestFunctional/parallel/ImageCommands/ImageRemove 0.84
158 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.16
159 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.99
160 TestFunctional/delete_echo-server_images 0.03
161 TestFunctional/delete_my-image_image 0.01
162 TestFunctional/delete_minikube_cached_images 0.01
166 TestMultiControlPlane/serial/StartCluster 195.85
167 TestMultiControlPlane/serial/DeployApp 4.61
168 TestMultiControlPlane/serial/PingHostFromPods 1.22
169 TestMultiControlPlane/serial/AddWorkerNode 83.51
170 TestMultiControlPlane/serial/NodeLabels 0.07
171 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.52
172 TestMultiControlPlane/serial/CopyFile 12.59
174 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 3.5
176 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.4
178 TestMultiControlPlane/serial/DeleteSecondaryNode 17.09
179 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.37
181 TestMultiControlPlane/serial/RestartCluster 345.85
182 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.38
183 TestMultiControlPlane/serial/AddSecondaryNode 78.06
184 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.55
188 TestJSONOutput/start/Command 97.64
189 TestJSONOutput/start/Audit 0
191 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
192 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
194 TestJSONOutput/pause/Command 0.7
195 TestJSONOutput/pause/Audit 0
197 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
198 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
200 TestJSONOutput/unpause/Command 0.6
201 TestJSONOutput/unpause/Audit 0
203 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
204 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
206 TestJSONOutput/stop/Command 7.34
207 TestJSONOutput/stop/Audit 0
209 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
210 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
211 TestErrorJSONOutput 0.18
216 TestMainNoArgs 0.04
217 TestMinikubeProfile 89.28
220 TestMountStart/serial/StartWithMountFirst 30.94
221 TestMountStart/serial/VerifyMountFirst 0.36
222 TestMountStart/serial/StartWithMountSecond 26.9
223 TestMountStart/serial/VerifyMountSecond 0.36
224 TestMountStart/serial/DeleteFirst 0.68
225 TestMountStart/serial/VerifyMountPostDelete 0.36
226 TestMountStart/serial/Stop 1.27
227 TestMountStart/serial/RestartStopped 21.76
228 TestMountStart/serial/VerifyMountPostStop 0.36
231 TestMultiNode/serial/FreshStart2Nodes 117.67
232 TestMultiNode/serial/DeployApp2Nodes 3.62
233 TestMultiNode/serial/PingHostFrom2Pods 0.78
234 TestMultiNode/serial/AddNode 49.63
235 TestMultiNode/serial/MultiNodeLabels 0.06
236 TestMultiNode/serial/ProfileList 0.21
237 TestMultiNode/serial/CopyFile 7.03
238 TestMultiNode/serial/StopNode 2.31
239 TestMultiNode/serial/StartAfterStop 37.67
241 TestMultiNode/serial/DeleteNode 2.25
243 TestMultiNode/serial/RestartMultiNode 181.57
244 TestMultiNode/serial/ValidateNameConflict 44.01
251 TestScheduledStopUnix 114.35
255 TestRunningBinaryUpgrade 153.92
260 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
261 TestNoKubernetes/serial/StartWithK8s 118.23
262 TestNoKubernetes/serial/StartWithStopK8s 9.96
263 TestNoKubernetes/serial/Start 29.72
264 TestNoKubernetes/serial/VerifyK8sNotRunning 0.2
265 TestNoKubernetes/serial/ProfileList 1.05
266 TestNoKubernetes/serial/Stop 1.29
267 TestNoKubernetes/serial/StartNoArgs 43.08
275 TestNetworkPlugins/group/false 3
276 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.21
280 TestStoppedBinaryUpgrade/Setup 0.41
281 TestStoppedBinaryUpgrade/Upgrade 101.91
282 TestStoppedBinaryUpgrade/MinikubeLogs 0.85
291 TestPause/serial/Start 61.94
292 TestNetworkPlugins/group/auto/Start 75.29
293 TestNetworkPlugins/group/kindnet/Start 109.87
294 TestNetworkPlugins/group/calico/Start 143.17
296 TestNetworkPlugins/group/auto/KubeletFlags 0.23
297 TestNetworkPlugins/group/auto/NetCatPod 11.31
298 TestNetworkPlugins/group/auto/DNS 0.18
299 TestNetworkPlugins/group/auto/Localhost 0.14
300 TestNetworkPlugins/group/auto/HairPin 0.12
301 TestNetworkPlugins/group/custom-flannel/Start 78.35
302 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
303 TestNetworkPlugins/group/kindnet/KubeletFlags 0.24
304 TestNetworkPlugins/group/kindnet/NetCatPod 10.24
305 TestNetworkPlugins/group/kindnet/DNS 0.16
306 TestNetworkPlugins/group/kindnet/Localhost 0.14
307 TestNetworkPlugins/group/kindnet/HairPin 0.21
308 TestNetworkPlugins/group/enable-default-cni/Start 67.76
309 TestNetworkPlugins/group/flannel/Start 110
310 TestNetworkPlugins/group/calico/ControllerPod 6.01
311 TestNetworkPlugins/group/calico/KubeletFlags 0.21
312 TestNetworkPlugins/group/calico/NetCatPod 10.23
313 TestNetworkPlugins/group/calico/DNS 0.18
314 TestNetworkPlugins/group/calico/Localhost 0.13
315 TestNetworkPlugins/group/calico/HairPin 0.12
316 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.42
317 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.6
318 TestNetworkPlugins/group/bridge/Start 114.06
319 TestNetworkPlugins/group/custom-flannel/DNS 0.16
320 TestNetworkPlugins/group/custom-flannel/Localhost 0.15
321 TestNetworkPlugins/group/custom-flannel/HairPin 0.14
324 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.25
325 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.25
326 TestNetworkPlugins/group/enable-default-cni/DNS 0.17
327 TestNetworkPlugins/group/enable-default-cni/Localhost 0.12
328 TestNetworkPlugins/group/enable-default-cni/HairPin 0.13
330 TestStartStop/group/no-preload/serial/FirstStart 86.88
331 TestNetworkPlugins/group/flannel/ControllerPod 6.01
332 TestNetworkPlugins/group/flannel/KubeletFlags 0.45
333 TestNetworkPlugins/group/flannel/NetCatPod 11.99
334 TestNetworkPlugins/group/flannel/DNS 0.18
335 TestNetworkPlugins/group/flannel/Localhost 0.13
336 TestNetworkPlugins/group/flannel/HairPin 0.13
338 TestStartStop/group/embed-certs/serial/FirstStart 62.81
339 TestNetworkPlugins/group/bridge/KubeletFlags 0.22
340 TestNetworkPlugins/group/bridge/NetCatPod 11.26
341 TestNetworkPlugins/group/bridge/DNS 0.19
342 TestNetworkPlugins/group/bridge/Localhost 0.13
343 TestNetworkPlugins/group/bridge/HairPin 0.14
344 TestStartStop/group/no-preload/serial/DeployApp 10.32
346 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 61.52
347 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.05
349 TestStartStop/group/embed-certs/serial/DeployApp 7.29
350 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.04
352 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 7.26
353 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.93
356 TestStartStop/group/no-preload/serial/SecondStart 684.33
360 TestStartStop/group/embed-certs/serial/SecondStart 594.94
362 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 580.79
363 TestStartStop/group/old-k8s-version/serial/Stop 3.29
364 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.18
375 TestStartStop/group/newest-cni/serial/FirstStart 50.7
376 TestStartStop/group/newest-cni/serial/DeployApp 0
377 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.07
378 TestStartStop/group/newest-cni/serial/Stop 10.64
379 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.18
380 TestStartStop/group/newest-cni/serial/SecondStart 37.21
381 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
382 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
383 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.23
384 TestStartStop/group/newest-cni/serial/Pause 2.29
TestDownloadOnly/v1.20.0/json-events (8.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-385353 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-385353 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (8.083776943s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (8.08s)

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-385353
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-385353: exit status 85 (55.878061ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-385353 | jenkins | v1.33.1 | 29 Jul 24 18:17 UTC |          |
	|         | -p download-only-385353        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 18:17:07
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 18:17:07.838733 1062285 out.go:291] Setting OutFile to fd 1 ...
	I0729 18:17:07.839039 1062285 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 18:17:07.839051 1062285 out.go:304] Setting ErrFile to fd 2...
	I0729 18:17:07.839055 1062285 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 18:17:07.839285 1062285 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19312-1055011/.minikube/bin
	W0729 18:17:07.839408 1062285 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19312-1055011/.minikube/config/config.json: open /home/jenkins/minikube-integration/19312-1055011/.minikube/config/config.json: no such file or directory
	I0729 18:17:07.839963 1062285 out.go:298] Setting JSON to true
	I0729 18:17:07.841122 1062285 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":7180,"bootTime":1722269848,"procs":338,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 18:17:07.841181 1062285 start.go:139] virtualization: kvm guest
	I0729 18:17:07.843236 1062285 out.go:97] [download-only-385353] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	W0729 18:17:07.843345 1062285 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/preloaded-tarball: no such file or directory
	I0729 18:17:07.843420 1062285 notify.go:220] Checking for updates...
	I0729 18:17:07.844561 1062285 out.go:169] MINIKUBE_LOCATION=19312
	I0729 18:17:07.845782 1062285 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 18:17:07.846968 1062285 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19312-1055011/kubeconfig
	I0729 18:17:07.848101 1062285 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19312-1055011/.minikube
	I0729 18:17:07.849192 1062285 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0729 18:17:07.851205 1062285 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0729 18:17:07.851558 1062285 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 18:17:07.882753 1062285 out.go:97] Using the kvm2 driver based on user configuration
	I0729 18:17:07.882777 1062285 start.go:297] selected driver: kvm2
	I0729 18:17:07.882782 1062285 start.go:901] validating driver "kvm2" against <nil>
	I0729 18:17:07.883137 1062285 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 18:17:07.883231 1062285 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19312-1055011/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 18:17:07.898050 1062285 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0729 18:17:07.898105 1062285 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 18:17:07.898667 1062285 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0729 18:17:07.898841 1062285 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0729 18:17:07.898887 1062285 cni.go:84] Creating CNI manager for ""
	I0729 18:17:07.898908 1062285 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 18:17:07.898919 1062285 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 18:17:07.898992 1062285 start.go:340] cluster config:
	{Name:download-only-385353 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-385353 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 18:17:07.899216 1062285 iso.go:125] acquiring lock: {Name:mk0af61c0fec1fd47930e548d03010a532c687b7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 18:17:07.901019 1062285 out.go:97] Downloading VM boot image ...
	I0729 18:17:07.901056 1062285 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso.sha256 -> /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso
	I0729 18:17:11.083140 1062285 out.go:97] Starting "download-only-385353" primary control-plane node in "download-only-385353" cluster
	I0729 18:17:11.083162 1062285 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0729 18:17:11.107135 1062285 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0729 18:17:11.107176 1062285 cache.go:56] Caching tarball of preloaded images
	I0729 18:17:11.107344 1062285 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0729 18:17:11.108979 1062285 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0729 18:17:11.108999 1062285 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0729 18:17:11.133107 1062285 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 -> /home/jenkins/minikube-integration/19312-1055011/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-385353 host does not exist
	  To start a cluster, run: "minikube start -p download-only-385353"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.13s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-385353
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestDownloadOnly/v1.30.3/json-events (4.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-009744 --force --alsologtostderr --kubernetes-version=v1.30.3 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-009744 --force --alsologtostderr --kubernetes-version=v1.30.3 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (4.077325261s)
--- PASS: TestDownloadOnly/v1.30.3/json-events (4.08s)

                                                
                                    
TestDownloadOnly/v1.30.3/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/preload-exists
--- PASS: TestDownloadOnly/v1.30.3/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.3/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-009744
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-009744: exit status 85 (56.499416ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-385353 | jenkins | v1.33.1 | 29 Jul 24 18:17 UTC |                     |
	|         | -p download-only-385353        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 29 Jul 24 18:17 UTC | 29 Jul 24 18:17 UTC |
	| delete  | -p download-only-385353        | download-only-385353 | jenkins | v1.33.1 | 29 Jul 24 18:17 UTC | 29 Jul 24 18:17 UTC |
	| start   | -o=json --download-only        | download-only-009744 | jenkins | v1.33.1 | 29 Jul 24 18:17 UTC |                     |
	|         | -p download-only-009744        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 18:17:16
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 18:17:16.242746 1062490 out.go:291] Setting OutFile to fd 1 ...
	I0729 18:17:16.242866 1062490 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 18:17:16.242875 1062490 out.go:304] Setting ErrFile to fd 2...
	I0729 18:17:16.242881 1062490 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 18:17:16.243060 1062490 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19312-1055011/.minikube/bin
	I0729 18:17:16.243617 1062490 out.go:298] Setting JSON to true
	I0729 18:17:16.244654 1062490 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":7188,"bootTime":1722269848,"procs":336,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 18:17:16.244716 1062490 start.go:139] virtualization: kvm guest
	I0729 18:17:16.246526 1062490 out.go:97] [download-only-009744] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 18:17:16.246691 1062490 notify.go:220] Checking for updates...
	I0729 18:17:16.247846 1062490 out.go:169] MINIKUBE_LOCATION=19312
	I0729 18:17:16.248995 1062490 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 18:17:16.250058 1062490 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19312-1055011/kubeconfig
	I0729 18:17:16.250980 1062490 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19312-1055011/.minikube
	I0729 18:17:16.251996 1062490 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	
	
	* The control-plane node download-only-009744 host does not exist
	  To start a cluster, run: "minikube start -p download-only-009744"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.3/LogsDuration (0.06s)

                                                
                                    
TestDownloadOnly/v1.30.3/DeleteAll (0.17s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.30.3/DeleteAll (0.17s)

                                                
                                    
TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-009744
--- PASS: TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
TestDownloadOnly/v1.31.0-beta.0/json-events (4.71s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-881045 --force --alsologtostderr --kubernetes-version=v1.31.0-beta.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-881045 --force --alsologtostderr --kubernetes-version=v1.31.0-beta.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (4.708745275s)
--- PASS: TestDownloadOnly/v1.31.0-beta.0/json-events (4.71s)

                                                
                                    
TestDownloadOnly/v1.31.0-beta.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/preload-exists
--- PASS: TestDownloadOnly/v1.31.0-beta.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.0-beta.0/LogsDuration (0.05s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-881045
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-881045: exit status 85 (54.146055ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                Args                 |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only             | download-only-385353 | jenkins | v1.33.1 | 29 Jul 24 18:17 UTC |                     |
	|         | -p download-only-385353             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0        |                      |         |         |                     |                     |
	|         | --container-runtime=crio            |                      |         |         |                     |                     |
	|         | --driver=kvm2                       |                      |         |         |                     |                     |
	|         | --container-runtime=crio            |                      |         |         |                     |                     |
	| delete  | --all                               | minikube             | jenkins | v1.33.1 | 29 Jul 24 18:17 UTC | 29 Jul 24 18:17 UTC |
	| delete  | -p download-only-385353             | download-only-385353 | jenkins | v1.33.1 | 29 Jul 24 18:17 UTC | 29 Jul 24 18:17 UTC |
	| start   | -o=json --download-only             | download-only-009744 | jenkins | v1.33.1 | 29 Jul 24 18:17 UTC |                     |
	|         | -p download-only-009744             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3        |                      |         |         |                     |                     |
	|         | --container-runtime=crio            |                      |         |         |                     |                     |
	|         | --driver=kvm2                       |                      |         |         |                     |                     |
	|         | --container-runtime=crio            |                      |         |         |                     |                     |
	| delete  | --all                               | minikube             | jenkins | v1.33.1 | 29 Jul 24 18:17 UTC | 29 Jul 24 18:17 UTC |
	| delete  | -p download-only-009744             | download-only-009744 | jenkins | v1.33.1 | 29 Jul 24 18:17 UTC | 29 Jul 24 18:17 UTC |
	| start   | -o=json --download-only             | download-only-881045 | jenkins | v1.33.1 | 29 Jul 24 18:17 UTC |                     |
	|         | -p download-only-881045             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0 |                      |         |         |                     |                     |
	|         | --container-runtime=crio            |                      |         |         |                     |                     |
	|         | --driver=kvm2                       |                      |         |         |                     |                     |
	|         | --container-runtime=crio            |                      |         |         |                     |                     |
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 18:17:20
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 18:17:20.666931 1062681 out.go:291] Setting OutFile to fd 1 ...
	I0729 18:17:20.667019 1062681 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 18:17:20.667026 1062681 out.go:304] Setting ErrFile to fd 2...
	I0729 18:17:20.667031 1062681 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 18:17:20.667205 1062681 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19312-1055011/.minikube/bin
	I0729 18:17:20.667720 1062681 out.go:298] Setting JSON to true
	I0729 18:17:20.668740 1062681 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":7193,"bootTime":1722269848,"procs":336,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 18:17:20.668798 1062681 start.go:139] virtualization: kvm guest
	I0729 18:17:20.670550 1062681 out.go:97] [download-only-881045] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 18:17:20.670687 1062681 notify.go:220] Checking for updates...
	I0729 18:17:20.671985 1062681 out.go:169] MINIKUBE_LOCATION=19312
	I0729 18:17:20.673204 1062681 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 18:17:20.674479 1062681 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19312-1055011/kubeconfig
	I0729 18:17:20.675502 1062681 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19312-1055011/.minikube
	I0729 18:17:20.676464 1062681 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	
	
	* The control-plane node download-only-881045 host does not exist
	  To start a cluster, run: "minikube start -p download-only-881045"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.0-beta.0/LogsDuration (0.05s)

                                                
                                    
TestDownloadOnly/v1.31.0-beta.0/DeleteAll (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.0-beta.0/DeleteAll (0.13s)

                                                
                                    
TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-881045
--- PASS: TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
TestBinaryMirror (0.55s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-644927 --alsologtostderr --binary-mirror http://127.0.0.1:46501 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-644927" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-644927
--- PASS: TestBinaryMirror (0.55s)

                                                
                                    
TestOffline (150.14s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-254305 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-254305 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (2m29.120759466s)
helpers_test.go:175: Cleaning up "offline-crio-254305" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-254305
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-254305: (1.015514225s)
--- PASS: TestOffline (150.14s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-685520
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-685520: exit status 85 (49.815152ms)

                                                
                                                
-- stdout --
	* Profile "addons-685520" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-685520"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-685520
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-685520: exit status 85 (46.287716ms)

                                                
                                                
-- stdout --
	* Profile "addons-685520" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-685520"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
TestAddons/Setup (187.21s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p addons-685520 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:110: (dbg) Done: out/minikube-linux-amd64 start -p addons-685520 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller: (3m7.210830112s)
--- PASS: TestAddons/Setup (187.21s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.14s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context addons-685520 create ns new-namespace
addons_test.go:670: (dbg) Run:  kubectl --context addons-685520 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.14s)

                                                
                                    
TestAddons/parallel/Registry (16.39s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 3.642683ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-698f998955-grn4f" [ae9be054-2ae9-4bb2-91af-3a601d969805] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.004848423s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-sxvm2" [07822b9d-56b6-4aab-bce3-512310b7497f] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.006772069s
addons_test.go:342: (dbg) Run:  kubectl --context addons-685520 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-685520 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Done: kubectl --context addons-685520 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.173862496s)
addons_test.go:361: (dbg) Run:  out/minikube-linux-amd64 -p addons-685520 ip
2024/07/29 18:21:07 [DEBUG] GET http://192.168.39.137:5000
addons_test.go:390: (dbg) Run:  out/minikube-linux-amd64 -p addons-685520 addons disable registry --alsologtostderr -v=1
addons_test.go:390: (dbg) Done: out/minikube-linux-amd64 -p addons-685520 addons disable registry --alsologtostderr -v=1: (1.045929525s)
--- PASS: TestAddons/parallel/Registry (16.39s)
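
The registry addon is probed twice in this block: once from inside the cluster with wget against the registry Service, and once from the host against the node IP on port 5000 (the DEBUG GET line above). Purely as an illustration, here is a minimal Go sketch of the host-side probe, assuming the node IP printed by `minikube ip` and the default registry port; this is not the test suite's own helper.

package main

import (
	"fmt"
	"net/http"
	"time"
)

// probeRegistry issues a plain GET against the registry endpoint and reports
// whether it answered with any HTTP status at all, which is enough to show
// the addon's NodePort service is reachable from the host.
func probeRegistry(nodeIP string) error {
	client := &http.Client{Timeout: 5 * time.Second}
	resp, err := client.Get(fmt.Sprintf("http://%s:5000/", nodeIP))
	if err != nil {
		return fmt.Errorf("registry not reachable: %w", err)
	}
	defer resp.Body.Close()
	fmt.Printf("registry answered with HTTP %d\n", resp.StatusCode)
	return nil
}

func main() {
	// Node IP hard-coded from this run's log only for illustration.
	if err := probeRegistry("192.168.39.137"); err != nil {
		fmt.Println(err)
	}
}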

                                                
                                    
TestAddons/parallel/InspektorGadget (10.92s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-wv76k" [a273bd33-1b70-4636-82b0-9f0a7eba9abd] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.005996989s
addons_test.go:851: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-685520
addons_test.go:851: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-685520: (5.912332008s)
--- PASS: TestAddons/parallel/InspektorGadget (10.92s)

                                                
                                    
TestAddons/parallel/HelmTiller (9.4s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:458: tiller-deploy stabilized in 2.286533ms
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-6677d64bcd-nl6s4" [018ede57-0c16-4231-aab9-8a15f104da71] Running
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.004203102s
addons_test.go:475: (dbg) Run:  kubectl --context addons-685520 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Done: kubectl --context addons-685520 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (3.654486551s)
addons_test.go:492: (dbg) Run:  out/minikube-linux-amd64 -p addons-685520 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (9.40s)

                                                
                                    
TestAddons/parallel/CSI (52.3s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:567: csi-hostpath-driver pods stabilized in 8.779001ms
addons_test.go:570: (dbg) Run:  kubectl --context addons-685520 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-685520 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-685520 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-685520 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-685520 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-685520 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-685520 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-685520 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-685520 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-685520 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-685520 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-685520 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-685520 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:580: (dbg) Run:  kubectl --context addons-685520 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:585: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [d3efaa91-aed1-4555-a95c-039013f1f461] Pending
helpers_test.go:344: "task-pv-pod" [d3efaa91-aed1-4555-a95c-039013f1f461] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [d3efaa91-aed1-4555-a95c-039013f1f461] Running
addons_test.go:585: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 14.004042107s
addons_test.go:590: (dbg) Run:  kubectl --context addons-685520 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:595: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-685520 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-685520 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:600: (dbg) Run:  kubectl --context addons-685520 delete pod task-pv-pod
addons_test.go:606: (dbg) Run:  kubectl --context addons-685520 delete pvc hpvc
addons_test.go:612: (dbg) Run:  kubectl --context addons-685520 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-685520 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-685520 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-685520 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-685520 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-685520 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-685520 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-685520 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-685520 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-685520 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-685520 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:622: (dbg) Run:  kubectl --context addons-685520 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [376c939b-d867-44b6-9fc3-9e5ba79c001d] Pending
helpers_test.go:344: "task-pv-pod-restore" [376c939b-d867-44b6-9fc3-9e5ba79c001d] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [376c939b-d867-44b6-9fc3-9e5ba79c001d] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.004280917s
addons_test.go:632: (dbg) Run:  kubectl --context addons-685520 delete pod task-pv-pod-restore
addons_test.go:636: (dbg) Run:  kubectl --context addons-685520 delete pvc hpvc-restore
addons_test.go:640: (dbg) Run:  kubectl --context addons-685520 delete volumesnapshot new-snapshot-demo
addons_test.go:644: (dbg) Run:  out/minikube-linux-amd64 -p addons-685520 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-linux-amd64 -p addons-685520 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.6821506s)
addons_test.go:648: (dbg) Run:  out/minikube-linux-amd64 -p addons-685520 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:648: (dbg) Done: out/minikube-linux-amd64 -p addons-685520 addons disable volumesnapshots --alsologtostderr -v=1: (1.056839288s)
--- PASS: TestAddons/parallel/CSI (52.30s)
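
The long runs of identical kubectl invocations above are the helper polling the PVC's .status.phase until it reports Bound (first for hpvc, then for hpvc-restore). A minimal Go sketch of that polling loop, assuming kubectl is on PATH and using an illustrative interval and timeout; it is not the actual helpers_test.go implementation.

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForPVCPhase shells out to kubectl the same way the log above does,
// polling the PVC's .status.phase until it reaches the wanted phase or the
// deadline passes.
func waitForPVCPhase(ctx, name, ns, want string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", ctx,
			"get", "pvc", name, "-n", ns,
			"-o", "jsonpath={.status.phase}").Output()
		if err == nil && strings.TrimSpace(string(out)) == want {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pvc %s/%s did not reach phase %q within %s", ns, name, want, timeout)
}

func main() {
	if err := waitForPVCPhase("addons-685520", "hpvc", "default", "Bound", 6*time.Minute); err != nil {
		fmt.Println(err)
	}
}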

                                                
                                    
TestAddons/parallel/Headlamp (19.21s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-685520 --alsologtostderr -v=1
addons_test.go:830: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-685520 --alsologtostderr -v=1: (1.386549361s)
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7867546754-vhshf" [577d6e9d-095a-4986-9491-7ca5d8571d6a] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7867546754-vhshf" [577d6e9d-095a-4986-9491-7ca5d8571d6a] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 12.005799501s
addons_test.go:839: (dbg) Run:  out/minikube-linux-amd64 -p addons-685520 addons disable headlamp --alsologtostderr -v=1
addons_test.go:839: (dbg) Done: out/minikube-linux-amd64 -p addons-685520 addons disable headlamp --alsologtostderr -v=1: (5.815680945s)
--- PASS: TestAddons/parallel/Headlamp (19.21s)

                                                
                                    
TestAddons/parallel/CloudSpanner (6.21s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5455fb9b69-5hkr9" [c7b112c6-3052-4a3d-a847-e98a58105476] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003730096s
addons_test.go:870: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-685520
addons_test.go:870: (dbg) Done: out/minikube-linux-amd64 addons disable cloud-spanner -p addons-685520: (1.203493617s)
--- PASS: TestAddons/parallel/CloudSpanner (6.21s)

                                                
                                    
TestAddons/parallel/LocalPath (51.56s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:982: (dbg) Run:  kubectl --context addons-685520 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:988: (dbg) Run:  kubectl --context addons-685520 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:992: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-685520 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-685520 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-685520 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-685520 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-685520 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [40b54904-0e49-4b8b-9e05-3751ea5d5af9] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [40b54904-0e49-4b8b-9e05-3751ea5d5af9] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [40b54904-0e49-4b8b-9e05-3751ea5d5af9] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.006086975s
addons_test.go:1000: (dbg) Run:  kubectl --context addons-685520 get pvc test-pvc -o=json
addons_test.go:1009: (dbg) Run:  out/minikube-linux-amd64 -p addons-685520 ssh "cat /opt/local-path-provisioner/pvc-144acf15-a758-428b-874b-327ac7591c4a_default_test-pvc/file1"
addons_test.go:1021: (dbg) Run:  kubectl --context addons-685520 delete pod test-local-path
addons_test.go:1025: (dbg) Run:  kubectl --context addons-685520 delete pvc test-pvc
addons_test.go:1029: (dbg) Run:  out/minikube-linux-amd64 -p addons-685520 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1029: (dbg) Done: out/minikube-linux-amd64 -p addons-685520 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.744110919s)
--- PASS: TestAddons/parallel/LocalPath (51.56s)

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (5.81s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-4bzd5" [0edbc902-4717-462e-8c98-1e0af3da0c72] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.004774187s
addons_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-685520
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.81s)

                                                
                                    
TestAddons/parallel/Yakd (11.87s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-799879c74f-fwm9g" [de22c47b-7ca5-438a-a075-869b15be1fe9] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.004454206s
addons_test.go:1076: (dbg) Run:  out/minikube-linux-amd64 -p addons-685520 addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-linux-amd64 -p addons-685520 addons disable yakd --alsologtostderr -v=1: (5.868494511s)
--- PASS: TestAddons/parallel/Yakd (11.87s)

                                                
                                    
TestCertOptions (43.25s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-460863 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-460863 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (41.796970601s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-460863 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-460863 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-460863 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-460863" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-460863
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-460863: (1.017271306s)
--- PASS: TestCertOptions (43.25s)
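
This test starts the cluster with extra --apiserver-ips/--apiserver-names and a custom --apiserver-port, then inspects the generated apiserver certificate with openssl over ssh. Below is a minimal Go sketch of an equivalent SAN check using crypto/x509, assuming the certificate path shown in the ssh command and the flag values from this run; the real assertion in cert_options_test.go may differ.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"net"
	"os"
)

// checkAPIServerSANs parses the apiserver certificate and reports whether the
// expected IP and DNS subject alternative names are present, which is what
// the openssl invocation above lets a human verify by eye.
func checkAPIServerSANs(path string, wantIP net.IP, wantDNS string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return fmt.Errorf("no PEM block found in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return err
	}
	ipOK, dnsOK := false, false
	for _, ip := range cert.IPAddresses {
		if ip.Equal(wantIP) {
			ipOK = true
		}
	}
	for _, d := range cert.DNSNames {
		if d == wantDNS {
			dnsOK = true
		}
	}
	if !ipOK || !dnsOK {
		return fmt.Errorf("missing SANs: ip present=%v dns present=%v", ipOK, dnsOK)
	}
	return nil
}

func main() {
	// Path and SAN values taken from the flags and ssh command in this run.
	err := checkAPIServerSANs("/var/lib/minikube/certs/apiserver.crt",
		net.ParseIP("192.168.15.15"), "www.google.com")
	fmt.Println(err)
}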

                                                
                                    
TestCertExpiration (296.47s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-183319 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-183319 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (1m11.503121172s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-183319 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-183319 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (43.927513357s)
helpers_test.go:175: Cleaning up "cert-expiration-183319" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-183319
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-183319: (1.033494165s)
--- PASS: TestCertExpiration (296.47s)

                                                
                                    
TestForceSystemdFlag (50.42s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-819021 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-819021 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (49.246967927s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-819021 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-819021" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-819021
--- PASS: TestForceSystemdFlag (50.42s)
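
The check above reads /etc/crio/crio.conf.d/02-crio.conf over ssh; with --force-systemd the expectation is a systemd cgroup manager setting in that file. A minimal Go sketch of such a check follows, assuming the CRI-O key is cgroup_manager and reusing this run's profile name; both the key name and the exact assertion are assumptions, not taken from the test source.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Read the drop-in config over ssh, as the test does, then look for the
	// systemd cgroup manager setting (assumed key name).
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "force-systemd-flag-819021",
		"ssh", "cat /etc/crio/crio.conf.d/02-crio.conf").Output()
	if err != nil {
		fmt.Println(err)
		return
	}
	if strings.Contains(string(out), `cgroup_manager = "systemd"`) {
		fmt.Println("CRI-O is configured for the systemd cgroup manager")
	} else {
		fmt.Println("systemd cgroup manager setting not found (assumption may be wrong)")
	}
}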

                                                
                                    
TestForceSystemdEnv (71.42s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-278841 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-278841 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m10.627333872s)
helpers_test.go:175: Cleaning up "force-systemd-env-278841" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-278841
--- PASS: TestForceSystemdEnv (71.42s)

                                                
                                    
TestKVMDriverInstallOrUpdate (1.28s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (1.28s)

                                                
                                    
TestErrorSpam/setup (43.81s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-972607 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-972607 --driver=kvm2  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-972607 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-972607 --driver=kvm2  --container-runtime=crio: (43.805600581s)
--- PASS: TestErrorSpam/setup (43.81s)

                                                
                                    
TestErrorSpam/start (0.34s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-972607 --log_dir /tmp/nospam-972607 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-972607 --log_dir /tmp/nospam-972607 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-972607 --log_dir /tmp/nospam-972607 start --dry-run
--- PASS: TestErrorSpam/start (0.34s)

                                                
                                    
TestErrorSpam/status (0.72s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-972607 --log_dir /tmp/nospam-972607 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-972607 --log_dir /tmp/nospam-972607 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-972607 --log_dir /tmp/nospam-972607 status
--- PASS: TestErrorSpam/status (0.72s)

                                                
                                    
TestErrorSpam/pause (1.55s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-972607 --log_dir /tmp/nospam-972607 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-972607 --log_dir /tmp/nospam-972607 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-972607 --log_dir /tmp/nospam-972607 pause
--- PASS: TestErrorSpam/pause (1.55s)

                                                
                                    
TestErrorSpam/unpause (1.6s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-972607 --log_dir /tmp/nospam-972607 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-972607 --log_dir /tmp/nospam-972607 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-972607 --log_dir /tmp/nospam-972607 unpause
--- PASS: TestErrorSpam/unpause (1.60s)

                                                
                                    
TestErrorSpam/stop (4.96s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-972607 --log_dir /tmp/nospam-972607 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-972607 --log_dir /tmp/nospam-972607 stop: (2.3097862s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-972607 --log_dir /tmp/nospam-972607 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-972607 --log_dir /tmp/nospam-972607 stop: (1.335973542s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-972607 --log_dir /tmp/nospam-972607 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-972607 --log_dir /tmp/nospam-972607 stop: (1.309685794s)
--- PASS: TestErrorSpam/stop (4.96s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19312-1055011/.minikube/files/etc/test/nested/copy/1062272/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (57.82s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-amd64 start -p functional-728029 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
E0729 18:30:34.135461 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/addons-685520/client.crt: no such file or directory
E0729 18:30:34.142128 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/addons-685520/client.crt: no such file or directory
E0729 18:30:34.152361 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/addons-685520/client.crt: no such file or directory
E0729 18:30:34.173252 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/addons-685520/client.crt: no such file or directory
E0729 18:30:34.213642 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/addons-685520/client.crt: no such file or directory
E0729 18:30:34.293935 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/addons-685520/client.crt: no such file or directory
E0729 18:30:34.454339 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/addons-685520/client.crt: no such file or directory
E0729 18:30:34.774921 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/addons-685520/client.crt: no such file or directory
E0729 18:30:35.415827 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/addons-685520/client.crt: no such file or directory
E0729 18:30:36.696422 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/addons-685520/client.crt: no such file or directory
E0729 18:30:39.257353 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/addons-685520/client.crt: no such file or directory
E0729 18:30:44.377862 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/addons-685520/client.crt: no such file or directory
E0729 18:30:54.619066 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/addons-685520/client.crt: no such file or directory
E0729 18:31:15.099513 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/addons-685520/client.crt: no such file or directory
functional_test.go:2234: (dbg) Done: out/minikube-linux-amd64 start -p functional-728029 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (57.818065285s)
--- PASS: TestFunctional/serial/StartWithProxy (57.82s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (35.62s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:659: (dbg) Run:  out/minikube-linux-amd64 start -p functional-728029 --alsologtostderr -v=8
E0729 18:31:56.060239 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/addons-685520/client.crt: no such file or directory
functional_test.go:659: (dbg) Done: out/minikube-linux-amd64 start -p functional-728029 --alsologtostderr -v=8: (35.621152503s)
functional_test.go:663: soft start took 35.621878266s for "functional-728029" cluster.
--- PASS: TestFunctional/serial/SoftStart (35.62s)

                                                
                                    
TestFunctional/serial/KubeContext (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-728029 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.33s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-728029 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-728029 cache add registry.k8s.io/pause:3.1: (1.072176956s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-728029 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-728029 cache add registry.k8s.io/pause:3.3: (1.168329408s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-728029 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-728029 cache add registry.k8s.io/pause:latest: (1.093867237s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.33s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-728029 /tmp/TestFunctionalserialCacheCmdcacheadd_local3283974014/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-amd64 -p functional-728029 cache add minikube-local-cache-test:functional-728029
functional_test.go:1094: (dbg) Run:  out/minikube-linux-amd64 -p functional-728029 cache delete minikube-local-cache-test:functional-728029
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-728029
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.04s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.22s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-amd64 -p functional-728029 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.22s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.63s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-amd64 -p functional-728029 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-amd64 -p functional-728029 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-728029 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (204.566586ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-amd64 -p functional-728029 cache reload
functional_test.go:1163: (dbg) Run:  out/minikube-linux-amd64 -p functional-728029 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.63s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.09s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.09s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-amd64 -p functional-728029 kubectl -- --context functional-728029 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.10s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-728029 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

                                                
                                    
TestFunctional/serial/ExtraConfig (39.8s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-amd64 start -p functional-728029 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-amd64 start -p functional-728029 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (39.801618924s)
functional_test.go:761: restart took 39.801758401s for "functional-728029" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (39.80s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-728029 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)
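
This check lists the control-plane pods as JSON and confirms each is in phase Running with a Ready condition of True, which is what the phase/status lines above reflect. A minimal Go sketch of the same idea, decoding only the fields needed; kubectl on PATH and the context name come from this run, the rest is illustrative and not the test's code.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// podList mirrors just the fields of the Kubernetes PodList that a
// phase/readiness check needs.
type podList struct {
	Items []struct {
		Metadata struct {
			Name string `json:"name"`
		} `json:"metadata"`
		Status struct {
			Phase      string `json:"phase"`
			Conditions []struct {
				Type   string `json:"type"`
				Status string `json:"status"`
			} `json:"conditions"`
		} `json:"status"`
	} `json:"items"`
}

func main() {
	out, err := exec.Command("kubectl", "--context", "functional-728029",
		"get", "po", "-l", "tier=control-plane", "-n", "kube-system", "-o", "json").Output()
	if err != nil {
		fmt.Println(err)
		return
	}
	var pods podList
	if err := json.Unmarshal(out, &pods); err != nil {
		fmt.Println(err)
		return
	}
	for _, p := range pods.Items {
		ready := false
		for _, c := range p.Status.Conditions {
			if c.Type == "Ready" && c.Status == "True" {
				ready = true
			}
		}
		fmt.Printf("%s phase=%s ready=%v\n", p.Metadata.Name, p.Status.Phase, ready)
	}
}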

                                                
                                    
TestFunctional/serial/LogsCmd (1.37s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-amd64 -p functional-728029 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-amd64 -p functional-728029 logs: (1.368990666s)
--- PASS: TestFunctional/serial/LogsCmd (1.37s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.4s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-amd64 -p functional-728029 logs --file /tmp/TestFunctionalserialLogsFileCmd263985080/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-amd64 -p functional-728029 logs --file /tmp/TestFunctionalserialLogsFileCmd263985080/001/logs.txt: (1.39682813s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.40s)

                                                
                                    
TestFunctional/serial/InvalidService (3.89s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-728029 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-728029
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-728029: exit status 115 (276.090643ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.8:30876 |
	|-----------|-------------|-------------|---------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-728029 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (3.89s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-728029 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-728029 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-728029 config get cpus: exit status 14 (60.103426ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-728029 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-728029 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-728029 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-728029 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-728029 config get cpus: exit status 14 (47.725872ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.33s)

                                                
                                    
TestFunctional/parallel/DashboardCmd (10.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-728029 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-728029 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 1072187: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (10.34s)

                                                
                                    
TestFunctional/parallel/DryRun (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-amd64 start -p functional-728029 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-728029 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (143.588396ms)

                                                
                                                
-- stdout --
	* [functional-728029] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19312
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19312-1055011/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19312-1055011/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 18:33:11.559090 1071671 out.go:291] Setting OutFile to fd 1 ...
	I0729 18:33:11.559441 1071671 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 18:33:11.559458 1071671 out.go:304] Setting ErrFile to fd 2...
	I0729 18:33:11.559464 1071671 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 18:33:11.559782 1071671 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19312-1055011/.minikube/bin
	I0729 18:33:11.560523 1071671 out.go:298] Setting JSON to false
	I0729 18:33:11.562001 1071671 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":8143,"bootTime":1722269848,"procs":246,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 18:33:11.562085 1071671 start.go:139] virtualization: kvm guest
	I0729 18:33:11.564168 1071671 out.go:177] * [functional-728029] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 18:33:11.565276 1071671 out.go:177]   - MINIKUBE_LOCATION=19312
	I0729 18:33:11.565289 1071671 notify.go:220] Checking for updates...
	I0729 18:33:11.567277 1071671 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 18:33:11.568420 1071671 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19312-1055011/kubeconfig
	I0729 18:33:11.569535 1071671 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19312-1055011/.minikube
	I0729 18:33:11.570526 1071671 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 18:33:11.571667 1071671 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 18:33:11.573171 1071671 config.go:182] Loaded profile config "functional-728029": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 18:33:11.573562 1071671 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:33:11.573658 1071671 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:33:11.589983 1071671 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39185
	I0729 18:33:11.590383 1071671 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:33:11.591090 1071671 main.go:141] libmachine: Using API Version  1
	I0729 18:33:11.591117 1071671 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:33:11.591435 1071671 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:33:11.591631 1071671 main.go:141] libmachine: (functional-728029) Calling .DriverName
	I0729 18:33:11.591844 1071671 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 18:33:11.592142 1071671 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:33:11.592200 1071671 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:33:11.608363 1071671 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36197
	I0729 18:33:11.608896 1071671 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:33:11.609357 1071671 main.go:141] libmachine: Using API Version  1
	I0729 18:33:11.609381 1071671 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:33:11.609653 1071671 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:33:11.609838 1071671 main.go:141] libmachine: (functional-728029) Calling .DriverName
	I0729 18:33:11.642025 1071671 out.go:177] * Using the kvm2 driver based on existing profile
	I0729 18:33:11.643105 1071671 start.go:297] selected driver: kvm2
	I0729 18:33:11.643117 1071671 start.go:901] validating driver "kvm2" against &{Name:functional-728029 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-728029 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.8 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 18:33:11.643228 1071671 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 18:33:11.645006 1071671 out.go:177] 
	W0729 18:33:11.646082 1071671 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0729 18:33:11.647262 1071671 out.go:177] 

** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p functional-728029 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.30s)
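The dry-run pair exercises minikube's memory validation: 250MB is below the usable minimum of 1800MB, so the first invocation exits with code 23 (RSRC_INSUFFICIENT_REQ_MEMORY), while the second, run without --memory, passes. A hand-run sketch of the same check, using the flags from the log:

  minikube start -p functional-728029 --dry-run --memory 250MB --driver=kvm2 --container-runtime=crio
  echo $?   # expected: 23
  minikube start -p functional-728029 --dry-run --driver=kvm2 --container-runtime=crio
  echo $?   # expected: 0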

                                                
                                    
x
+
TestFunctional/parallel/InternationalLanguage (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-amd64 start -p functional-728029 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-728029 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (147.895882ms)

-- stdout --
	* [functional-728029] minikube v1.33.1 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19312
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19312-1055011/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19312-1055011/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0729 18:33:11.851252 1071744 out.go:291] Setting OutFile to fd 1 ...
	I0729 18:33:11.851410 1071744 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 18:33:11.851417 1071744 out.go:304] Setting ErrFile to fd 2...
	I0729 18:33:11.851423 1071744 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 18:33:11.851787 1071744 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19312-1055011/.minikube/bin
	I0729 18:33:11.852444 1071744 out.go:298] Setting JSON to false
	I0729 18:33:11.853829 1071744 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":8144,"bootTime":1722269848,"procs":248,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 18:33:11.853908 1071744 start.go:139] virtualization: kvm guest
	I0729 18:33:11.856080 1071744 out.go:177] * [functional-728029] minikube v1.33.1 sur Ubuntu 20.04 (kvm/amd64)
	I0729 18:33:11.857292 1071744 out.go:177]   - MINIKUBE_LOCATION=19312
	I0729 18:33:11.857318 1071744 notify.go:220] Checking for updates...
	I0729 18:33:11.859704 1071744 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 18:33:11.861063 1071744 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19312-1055011/kubeconfig
	I0729 18:33:11.862399 1071744 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19312-1055011/.minikube
	I0729 18:33:11.863947 1071744 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 18:33:11.865228 1071744 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 18:33:11.866917 1071744 config.go:182] Loaded profile config "functional-728029": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 18:33:11.867513 1071744 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:33:11.867564 1071744 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:33:11.890328 1071744 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34357
	I0729 18:33:11.890818 1071744 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:33:11.891434 1071744 main.go:141] libmachine: Using API Version  1
	I0729 18:33:11.891456 1071744 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:33:11.891837 1071744 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:33:11.892004 1071744 main.go:141] libmachine: (functional-728029) Calling .DriverName
	I0729 18:33:11.892251 1071744 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 18:33:11.892542 1071744 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 18:33:11.892574 1071744 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 18:33:11.907988 1071744 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45487
	I0729 18:33:11.908526 1071744 main.go:141] libmachine: () Calling .GetVersion
	I0729 18:33:11.908972 1071744 main.go:141] libmachine: Using API Version  1
	I0729 18:33:11.908993 1071744 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 18:33:11.909357 1071744 main.go:141] libmachine: () Calling .GetMachineName
	I0729 18:33:11.909525 1071744 main.go:141] libmachine: (functional-728029) Calling .DriverName
	I0729 18:33:11.941926 1071744 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0729 18:33:11.943092 1071744 start.go:297] selected driver: kvm2
	I0729 18:33:11.943105 1071744 start.go:901] validating driver "kvm2" against &{Name:functional-728029 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-728029 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.8 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 18:33:11.943238 1071744 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 18:33:11.945202 1071744 out.go:177] 
	W0729 18:33:11.946385 1071744 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0729 18:33:11.947454 1071744 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.15s)
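The French output is the point of this test: the same insufficient-memory dry run is repeated under a French locale, and "Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo" is the localized form of "Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: requested memory allocation 250MiB is less than the usable minimum of 1800MB". Assuming minikube selects translations from the usual locale environment variables, a manual reproduction could be:

  LC_ALL=fr_FR.UTF-8 minikube start -p functional-728029 --dry-run --memory 250MB --driver=kvm2 --container-runtime=crio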

                                                
                                    
x
+
TestFunctional/parallel/StatusCmd (0.96s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-amd64 -p functional-728029 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-amd64 -p functional-728029 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-amd64 -p functional-728029 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.96s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (8.64s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context functional-728029 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-728029 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-57b4589c47-wnrzw" [b8440532-878a-40d0-bbdc-399849e26de5] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-57b4589c47-wnrzw" [b8440532-878a-40d0-bbdc-399849e26de5] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.004495162s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-amd64 -p functional-728029 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.39.8:31697
functional_test.go:1675: http://192.168.39.8:31697: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-57b4589c47-wnrzw

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.8:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.39.8:31697
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (8.64s)
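The sequence being verified is a standard NodePort round trip; reproduced by hand (image, port, and names taken from the log) it is roughly:

  kubectl --context functional-728029 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
  kubectl --context functional-728029 expose deployment hello-node-connect --type=NodePort --port=8080
  URL=$(minikube -p functional-728029 service hello-node-connect --url)
  curl -s "$URL"   # echoserver reports the pod hostname and request headers, as shown above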

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-amd64 -p functional-728029 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-amd64 -p functional-728029 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.14s)

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (38.62s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [01eb039b-797d-4806-8f50-55d452bef8a5] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.003577948s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-728029 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-728029 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-728029 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-728029 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [1c9ac599-7fab-419f-ae3a-559a26e2cc36] Pending
helpers_test.go:344: "sp-pod" [1c9ac599-7fab-419f-ae3a-559a26e2cc36] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [1c9ac599-7fab-419f-ae3a-559a26e2cc36] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 14.004861489s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-728029 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-728029 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-728029 delete -f testdata/storage-provisioner/pod.yaml: (2.714522643s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-728029 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [2c770844-5ea0-4ef8-919c-3ecc742e0714] Pending
helpers_test.go:344: "sp-pod" [2c770844-5ea0-4ef8-919c-3ecc742e0714] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [2c770844-5ea0-4ef8-919c-3ecc742e0714] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 15.005048871s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-728029 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (38.62s)
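The test demonstrates that data written to the PVC-backed volume survives pod deletion; the equivalent manual steps (manifests are the ones shipped in testdata, and each apply needs the pod to reach Running) are approximately:

  kubectl --context functional-728029 apply -f testdata/storage-provisioner/pvc.yaml
  kubectl --context functional-728029 apply -f testdata/storage-provisioner/pod.yaml
  kubectl --context functional-728029 exec sp-pod -- touch /tmp/mount/foo
  kubectl --context functional-728029 delete -f testdata/storage-provisioner/pod.yaml
  kubectl --context functional-728029 apply -f testdata/storage-provisioner/pod.yaml
  kubectl --context functional-728029 exec sp-pod -- ls /tmp/mount   # foo is still there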

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-amd64 -p functional-728029 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-amd64 -p functional-728029 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.47s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (1.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-728029 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-728029 ssh -n functional-728029 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-728029 cp functional-728029:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd4057923410/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-728029 ssh -n functional-728029 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-728029 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-728029 ssh -n functional-728029 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.44s)
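The cp checks copy a file into the guest, back out, and to a new path, verifying contents over ssh each time; a condensed manual version with the paths from the log:

  minikube -p functional-728029 cp testdata/cp-test.txt /home/docker/cp-test.txt
  minikube -p functional-728029 ssh -n functional-728029 "sudo cat /home/docker/cp-test.txt"
  minikube -p functional-728029 cp functional-728029:/home/docker/cp-test.txt ./cp-test.txt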

                                                
                                    
x
+
TestFunctional/parallel/MySQL (28.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context functional-728029 replace --force -f testdata/mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-64454c8b5c-sqntb" [d668f3a6-e32e-43bc-a584-40eb81ed505f] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-64454c8b5c-sqntb" [d668f3a6-e32e-43bc-a584-40eb81ed505f] Running
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 25.008861229s
functional_test.go:1807: (dbg) Run:  kubectl --context functional-728029 exec mysql-64454c8b5c-sqntb -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-728029 exec mysql-64454c8b5c-sqntb -- mysql -ppassword -e "show databases;": exit status 1 (350.469908ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1807: (dbg) Run:  kubectl --context functional-728029 exec mysql-64454c8b5c-sqntb -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-728029 exec mysql-64454c8b5c-sqntb -- mysql -ppassword -e "show databases;": exit status 1 (131.128769ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1807: (dbg) Run:  kubectl --context functional-728029 exec mysql-64454c8b5c-sqntb -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (28.10s)
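The two ERROR 2002 failures are expected while mysqld is still creating its socket; the test simply retries until the query succeeds. A rough equivalent retry loop (assuming the deployment is named mysql, as in testdata/mysql.yaml; the pod suffix differs per run):

  until kubectl --context functional-728029 exec deploy/mysql -- mysql -ppassword -e "show databases;"; do
    sleep 2   # keep retrying until the server accepts connections
  done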

                                                
                                    
x
+
TestFunctional/parallel/FileSync (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/1062272/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-amd64 -p functional-728029 ssh "sudo cat /etc/test/nested/copy/1062272/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.28s)
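This check relies on minikube's file sync behaviour: files placed under $MINIKUBE_HOME/files on the host are copied into the guest at the same path, which is why /etc/test/nested/copy/1062272/hosts exists inside the VM (1062272 is just the test process pid). A hedged sketch, with "demo" as a hypothetical directory name:

  mkdir -p ~/.minikube/files/etc/test/nested/copy/demo
  echo "Test file for checking file sync process" > ~/.minikube/files/etc/test/nested/copy/demo/hosts
  minikube start -p functional-728029
  minikube -p functional-728029 ssh "sudo cat /etc/test/nested/copy/demo/hosts"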

                                                
                                    
x
+
TestFunctional/parallel/CertSync (1.66s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/1062272.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-728029 ssh "sudo cat /etc/ssl/certs/1062272.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/1062272.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-728029 ssh "sudo cat /usr/share/ca-certificates/1062272.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-728029 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/10622722.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-728029 ssh "sudo cat /etc/ssl/certs/10622722.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/10622722.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-728029 ssh "sudo cat /usr/share/ca-certificates/10622722.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-728029 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.66s)
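The .pem paths are the synced test certificate, and the .0 names appear to be OpenSSL subject-hash filenames for the same certificates; assuming that scheme, and assuming openssl is available in the guest, the hash can be recomputed to confirm the pairing:

  minikube -p functional-728029 ssh "openssl x509 -in /usr/share/ca-certificates/1062272.pem -noout -subject_hash"
  # if the assumption holds, this prints 51391683, matching /etc/ssl/certs/51391683.0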

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-728029 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.08s)

                                                
                                    
x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-728029 ssh "sudo systemctl is-active docker"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-728029 ssh "sudo systemctl is-active docker": exit status 1 (304.805777ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-728029 ssh "sudo systemctl is-active containerd"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-728029 ssh "sudo systemctl is-active containerd": exit status 1 (234.535629ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.54s)
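Both non-zero exits are the desired outcome here: with crio as the active runtime, docker and containerd should be inactive, and systemctl is-active exits non-zero for an inactive unit (status 3, surfaced above as "ssh: Process exited with status 3"). Checking all three by hand:

  minikube -p functional-728029 ssh "sudo systemctl is-active crio"         # active, exit 0
  minikube -p functional-728029 ssh "sudo systemctl is-active docker"       # inactive, non-zero exit
  minikube -p functional-728029 ssh "sudo systemctl is-active containerd"   # inactive, non-zero exit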

                                                
                                    
x
+
TestFunctional/parallel/License (0.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.16s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (10.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context functional-728029 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-728029 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6d85cfcfd8-mjjwh" [0873a523-a48e-4812-afd4-9f89b10b5c57] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6d85cfcfd8-mjjwh" [0873a523-a48e-4812-afd4-9f89b10b5c57] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 10.004610689s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (10.21s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_not_create (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.37s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_list (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1315: Took "257.996397ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1329: Took "68.407186ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.33s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/any-port (7.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-728029 /tmp/TestFunctionalparallelMountCmdany-port1649159433/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1722277981722207985" to /tmp/TestFunctionalparallelMountCmdany-port1649159433/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1722277981722207985" to /tmp/TestFunctionalparallelMountCmdany-port1649159433/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1722277981722207985" to /tmp/TestFunctionalparallelMountCmdany-port1649159433/001/test-1722277981722207985
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-728029 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-728029 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (235.38664ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-728029 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-728029 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jul 29 18:33 created-by-test
-rw-r--r-- 1 docker docker 24 Jul 29 18:33 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jul 29 18:33 test-1722277981722207985
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-728029 ssh cat /mount-9p/test-1722277981722207985
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-728029 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [99f7b733-1aee-4ca2-9c7e-6fc59d388c65] Pending
helpers_test.go:344: "busybox-mount" [99f7b733-1aee-4ca2-9c7e-6fc59d388c65] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [99f7b733-1aee-4ca2-9c7e-6fc59d388c65] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [99f7b733-1aee-4ca2-9c7e-6fc59d388c65] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.004205823s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-728029 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-728029 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-728029 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-728029 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-728029 /tmp/TestFunctionalparallelMountCmdany-port1649159433/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.49s)
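The test mounts a host temp directory into the guest over 9p, confirms the mount with findmnt, runs the busybox-mount pod against it, and tears the mount down. A manual version, with ./shared as a placeholder for the host directory:

  minikube mount -p functional-728029 ./shared:/mount-9p &
  minikube -p functional-728029 ssh "findmnt -T /mount-9p | grep 9p"
  minikube -p functional-728029 ssh "ls -la /mount-9p"
  minikube -p functional-728029 ssh "sudo umount -f /mount-9p"
  kill %1   # stop the background mount process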

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_json_output (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1366: Took "323.867288ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1379: Took "47.54823ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.37s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/specific-port (1.99s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-728029 /tmp/TestFunctionalparallelMountCmdspecific-port1136104795/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-728029 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-728029 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (195.718106ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-728029 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-728029 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-728029 /tmp/TestFunctionalparallelMountCmdspecific-port1136104795/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-728029 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-728029 ssh "sudo umount -f /mount-9p": exit status 1 (230.576867ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-728029 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-728029 /tmp/TestFunctionalparallelMountCmdspecific-port1136104795/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.99s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/List (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-amd64 -p functional-728029 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.40s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/VerifyCleanup (2s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-728029 /tmp/TestFunctionalparallelMountCmdVerifyCleanup413205873/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-728029 /tmp/TestFunctionalparallelMountCmdVerifyCleanup413205873/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-728029 /tmp/TestFunctionalparallelMountCmdVerifyCleanup413205873/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-728029 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-728029 ssh "findmnt -T" /mount1: exit status 1 (329.621308ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-728029 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-728029 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-728029 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-728029 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-728029 /tmp/TestFunctionalparallelMountCmdVerifyCleanup413205873/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-728029 /tmp/TestFunctionalparallelMountCmdVerifyCleanup413205873/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-728029 /tmp/TestFunctionalparallelMountCmdVerifyCleanup413205873/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.00s)
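The cleanup path being exercised is the --kill flag, which terminates every mount process for the profile in one step instead of stopping each background mount individually:

  minikube mount -p functional-728029 --kill=true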

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/JSONOutput (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-amd64 -p functional-728029 service list -o json
functional_test.go:1494: Took "392.347319ms" to run "out/minikube-linux-amd64 -p functional-728029 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.39s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-amd64 -p functional-728029 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.39.8:30180
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.35s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-amd64 -p functional-728029 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.33s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-amd64 -p functional-728029 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.39.8:30180
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.41s)

                                                
                                    
x
+
TestFunctional/parallel/Version/short (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-amd64 -p functional-728029 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

                                                
                                    
x
+
TestFunctional/parallel/Version/components (0.89s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-amd64 -p functional-728029 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.89s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListShort (0.79s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-728029 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-728029 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.30.3
registry.k8s.io/kube-proxy:v1.30.3
registry.k8s.io/kube-controller-manager:v1.30.3
registry.k8s.io/kube-apiserver:v1.30.3
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.1
localhost/minikube-local-cache-test:functional-728029
localhost/kicbase/echo-server:functional-728029
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/kindest/kindnetd:v20240715-585640e9
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-728029 image ls --format short --alsologtostderr:
I0729 18:33:24.008875 1072824 out.go:291] Setting OutFile to fd 1 ...
I0729 18:33:24.009007 1072824 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 18:33:24.009018 1072824 out.go:304] Setting ErrFile to fd 2...
I0729 18:33:24.009025 1072824 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 18:33:24.009346 1072824 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19312-1055011/.minikube/bin
I0729 18:33:24.010173 1072824 config.go:182] Loaded profile config "functional-728029": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0729 18:33:24.010332 1072824 config.go:182] Loaded profile config "functional-728029": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0729 18:33:24.010934 1072824 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0729 18:33:24.010998 1072824 main.go:141] libmachine: Launching plugin server for driver kvm2
I0729 18:33:24.028301 1072824 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42157
I0729 18:33:24.028881 1072824 main.go:141] libmachine: () Calling .GetVersion
I0729 18:33:24.029591 1072824 main.go:141] libmachine: Using API Version  1
I0729 18:33:24.029618 1072824 main.go:141] libmachine: () Calling .SetConfigRaw
I0729 18:33:24.029953 1072824 main.go:141] libmachine: () Calling .GetMachineName
I0729 18:33:24.030128 1072824 main.go:141] libmachine: (functional-728029) Calling .GetState
I0729 18:33:24.032146 1072824 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0729 18:33:24.032193 1072824 main.go:141] libmachine: Launching plugin server for driver kvm2
I0729 18:33:24.048762 1072824 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35869
I0729 18:33:24.049179 1072824 main.go:141] libmachine: () Calling .GetVersion
I0729 18:33:24.049724 1072824 main.go:141] libmachine: Using API Version  1
I0729 18:33:24.049750 1072824 main.go:141] libmachine: () Calling .SetConfigRaw
I0729 18:33:24.050050 1072824 main.go:141] libmachine: () Calling .GetMachineName
I0729 18:33:24.050232 1072824 main.go:141] libmachine: (functional-728029) Calling .DriverName
I0729 18:33:24.050416 1072824 ssh_runner.go:195] Run: systemctl --version
I0729 18:33:24.050444 1072824 main.go:141] libmachine: (functional-728029) Calling .GetSSHHostname
I0729 18:33:24.053094 1072824 main.go:141] libmachine: (functional-728029) DBG | domain functional-728029 has defined MAC address 52:54:00:de:13:09 in network mk-functional-728029
I0729 18:33:24.053435 1072824 main.go:141] libmachine: (functional-728029) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:13:09", ip: ""} in network mk-functional-728029: {Iface:virbr1 ExpiryTime:2024-07-29 19:30:48 +0000 UTC Type:0 Mac:52:54:00:de:13:09 Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:functional-728029 Clientid:01:52:54:00:de:13:09}
I0729 18:33:24.053476 1072824 main.go:141] libmachine: (functional-728029) DBG | domain functional-728029 has defined IP address 192.168.39.8 and MAC address 52:54:00:de:13:09 in network mk-functional-728029
I0729 18:33:24.053732 1072824 main.go:141] libmachine: (functional-728029) Calling .GetSSHPort
I0729 18:33:24.053873 1072824 main.go:141] libmachine: (functional-728029) Calling .GetSSHKeyPath
I0729 18:33:24.054002 1072824 main.go:141] libmachine: (functional-728029) Calling .GetSSHUsername
I0729 18:33:24.054117 1072824 sshutil.go:53] new ssh client: &{IP:192.168.39.8 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/functional-728029/id_rsa Username:docker}
I0729 18:33:24.191563 1072824 ssh_runner.go:195] Run: sudo crictl images --output json
I0729 18:33:24.640250 1072824 main.go:141] libmachine: Making call to close driver server
I0729 18:33:24.640268 1072824 main.go:141] libmachine: (functional-728029) Calling .Close
I0729 18:33:24.640519 1072824 main.go:141] libmachine: Successfully made call to close driver server
I0729 18:33:24.640536 1072824 main.go:141] libmachine: Making call to close connection to plugin binary
I0729 18:33:24.640552 1072824 main.go:141] libmachine: Making call to close driver server
I0729 18:33:24.640560 1072824 main.go:141] libmachine: (functional-728029) Calling .Close
I0729 18:33:24.640786 1072824 main.go:141] libmachine: Successfully made call to close driver server
I0729 18:33:24.640811 1072824 main.go:141] libmachine: (functional-728029) DBG | Closing plugin on server side
I0729 18:33:24.640830 1072824 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.79s)
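Note: the short-format listing above can be reproduced by hand against the same profile. A minimal sketch, assuming the functional-728029 profile is still running:

  # list the images known to the CRI-O runtime inside the minikube node
  out/minikube-linux-amd64 -p functional-728029 image ls --format short
  # equivalently, shell into the node and query the runtime directly,
  # which is what the command does internally (see the crictl call in the stderr log above)
  out/minikube-linux-amd64 -p functional-728029 ssh -- sudo crictl images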

TestFunctional/parallel/ImageCommands/ImageListTable (0.5s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-728029 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-728029 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| registry.k8s.io/coredns/coredns         | v1.11.1            | cbb01a7bd410d | 61.2MB |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| localhost/minikube-local-cache-test     | functional-728029  | d46c2df8c6f78 | 3.33kB |
| registry.k8s.io/etcd                    | 3.5.12-0           | 3861cfcd7c04c | 151MB  |
| registry.k8s.io/kube-proxy              | v1.30.3            | 55bb025d2cfa5 | 86MB   |
| registry.k8s.io/kube-scheduler          | v1.30.3            | 3edc18e7b7672 | 63.1MB |
| docker.io/library/nginx                 | latest             | a72860cb95fd5 | 192MB  |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| docker.io/kindest/kindnetd              | v20240715-585640e9 | 5cc3abe5717db | 87.2MB |
| localhost/kicbase/echo-server           | functional-728029  | 9056ab77afb8e | 4.94MB |
| registry.k8s.io/kube-apiserver          | v1.30.3            | 1f6d574d502f3 | 118MB  |
| registry.k8s.io/kube-controller-manager | v1.30.3            | 76932a3b37d7e | 112MB  |
| registry.k8s.io/pause                   | 3.9                | e6f1816883972 | 750kB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-728029 image ls --format table --alsologtostderr:
I0729 18:33:25.210904 1072945 out.go:291] Setting OutFile to fd 1 ...
I0729 18:33:25.211037 1072945 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 18:33:25.211047 1072945 out.go:304] Setting ErrFile to fd 2...
I0729 18:33:25.211052 1072945 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 18:33:25.211254 1072945 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19312-1055011/.minikube/bin
I0729 18:33:25.211816 1072945 config.go:182] Loaded profile config "functional-728029": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0729 18:33:25.211916 1072945 config.go:182] Loaded profile config "functional-728029": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0729 18:33:25.212335 1072945 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0729 18:33:25.212376 1072945 main.go:141] libmachine: Launching plugin server for driver kvm2
I0729 18:33:25.227240 1072945 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35915
I0729 18:33:25.227741 1072945 main.go:141] libmachine: () Calling .GetVersion
I0729 18:33:25.228362 1072945 main.go:141] libmachine: Using API Version  1
I0729 18:33:25.228387 1072945 main.go:141] libmachine: () Calling .SetConfigRaw
I0729 18:33:25.228767 1072945 main.go:141] libmachine: () Calling .GetMachineName
I0729 18:33:25.228993 1072945 main.go:141] libmachine: (functional-728029) Calling .GetState
I0729 18:33:25.231046 1072945 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0729 18:33:25.231084 1072945 main.go:141] libmachine: Launching plugin server for driver kvm2
I0729 18:33:25.246502 1072945 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45757
I0729 18:33:25.246932 1072945 main.go:141] libmachine: () Calling .GetVersion
I0729 18:33:25.247428 1072945 main.go:141] libmachine: Using API Version  1
I0729 18:33:25.247451 1072945 main.go:141] libmachine: () Calling .SetConfigRaw
I0729 18:33:25.247771 1072945 main.go:141] libmachine: () Calling .GetMachineName
I0729 18:33:25.247981 1072945 main.go:141] libmachine: (functional-728029) Calling .DriverName
I0729 18:33:25.248219 1072945 ssh_runner.go:195] Run: systemctl --version
I0729 18:33:25.248262 1072945 main.go:141] libmachine: (functional-728029) Calling .GetSSHHostname
I0729 18:33:25.250769 1072945 main.go:141] libmachine: (functional-728029) DBG | domain functional-728029 has defined MAC address 52:54:00:de:13:09 in network mk-functional-728029
I0729 18:33:25.251120 1072945 main.go:141] libmachine: (functional-728029) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:13:09", ip: ""} in network mk-functional-728029: {Iface:virbr1 ExpiryTime:2024-07-29 19:30:48 +0000 UTC Type:0 Mac:52:54:00:de:13:09 Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:functional-728029 Clientid:01:52:54:00:de:13:09}
I0729 18:33:25.251158 1072945 main.go:141] libmachine: (functional-728029) DBG | domain functional-728029 has defined IP address 192.168.39.8 and MAC address 52:54:00:de:13:09 in network mk-functional-728029
I0729 18:33:25.251254 1072945 main.go:141] libmachine: (functional-728029) Calling .GetSSHPort
I0729 18:33:25.251405 1072945 main.go:141] libmachine: (functional-728029) Calling .GetSSHKeyPath
I0729 18:33:25.251564 1072945 main.go:141] libmachine: (functional-728029) Calling .GetSSHUsername
I0729 18:33:25.251700 1072945 sshutil.go:53] new ssh client: &{IP:192.168.39.8 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/functional-728029/id_rsa Username:docker}
I0729 18:33:25.376780 1072945 ssh_runner.go:195] Run: sudo crictl images --output json
I0729 18:33:25.663077 1072945 main.go:141] libmachine: Making call to close driver server
I0729 18:33:25.663100 1072945 main.go:141] libmachine: (functional-728029) Calling .Close
I0729 18:33:25.663395 1072945 main.go:141] libmachine: Successfully made call to close driver server
I0729 18:33:25.663416 1072945 main.go:141] libmachine: Making call to close connection to plugin binary
I0729 18:33:25.663435 1072945 main.go:141] libmachine: Making call to close driver server
I0729 18:33:25.663445 1072945 main.go:141] libmachine: (functional-728029) Calling .Close
I0729 18:33:25.663684 1072945 main.go:141] libmachine: Successfully made call to close driver server
I0729 18:33:25.663697 1072945 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.50s)
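Note: the stderr log shows that every "image ls" variant ultimately runs "sudo crictl images --output json" inside the node. A minimal sketch for inspecting that raw output from the host; this assumes jq is installed and that crictl's JSON uses a top-level "images" array:

  # dump the runtime's image list as JSON and print just the tags
  out/minikube-linux-amd64 -p functional-728029 ssh -- sudo crictl images --output json \
    | jq -r '.images[].repoTags[]?'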

TestFunctional/parallel/ImageCommands/ImageListJson (0.42s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-728029 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-728029 image ls --format json --alsologtostderr:
[{"id":"3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","repoDigests":["registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62","registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"],"repoTags":["registry.k8s.io/etcd:3.5.12-0"],"size":"150779692"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["localhost/kicbase/echo-server:functional-728029"],"size":"4943877"},{"i
d":"d46c2df8c6f78b4d82b28dc2f993884a708a8a4994b163071235f6756f183a0b","repoDigests":["localhost/minikube-local-cache-test@sha256:e3ac8edcbe9b64472c10ca255f5366ab943a4add97ac34fae2eee9b26738cfa5"],"repoTags":["localhost/minikube-local-cache-test:functional-728029"],"size":"3330"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","repoDigests":["registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1","registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"61245718"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":["registry.k8s.io/pause@sha
256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097","registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"750414"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f","repoDigests":["docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115","docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493"],"repoTags":["docker.io/kindest/kindnetd:v20240715-585640e9"],"size":"87165492"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1
d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"a72860cb95fd59e9c696c66441c64f18e66915fa26b249911e83c3854477ed9a","repoDigests":["docker.io/library/nginx@sha256:6af79ae5de407283dcea8b00d5c37ace95
441fd58a8b1d2aa1ed93f5511bb18c","docker.io/library/nginx@sha256:baa881b012a49e3c2cd6ab9d80f9fcd2962a98af8ede947d0ef930a427b28afc"],"repoTags":["docker.io/library/nginx:latest"],"size":"191750286"},{"id":"55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1","repoDigests":["registry.k8s.io/kube-proxy@sha256:8c178447597867a03bbcdf0d1ce43fc8f6807ead2321bd1ec0e845a2f12dad80","registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65"],"repoTags":["registry.k8s.io/kube-proxy:v1.30.3"],"size":"85953945"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"
],"size":"686139"},{"id":"1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d","repoDigests":["registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c","registry.k8s.io/kube-apiserver@sha256:a3a6c80030a6e720734ae3291448388f70b6f1d463f103e4f06f358f8a170315"],"repoTags":["registry.k8s.io/kube-apiserver:v1.30.3"],"size":"117609954"},{"id":"76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7","registry.k8s.io/kube-controller-manager@sha256:fa179d147c6bacddd1586f6d12ff79a844e951c7b159fdcb92cdf56f3033d91e"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.30.3"],"size":"112198984"},{"id":"3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2","repoDigests":["registry.k8s.io/kube-scheduler@sha256:1738178fb116d10e7cde2cfc3671f5dfdad518d773677af740483f2dfe674266","registry.k8s.io/kube-scheduler@sha256:
2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4"],"repoTags":["registry.k8s.io/kube-scheduler:v1.30.3"],"size":"63051080"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-728029 image ls --format json --alsologtostderr:
I0729 18:33:24.794911 1072898 out.go:291] Setting OutFile to fd 1 ...
I0729 18:33:24.795064 1072898 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 18:33:24.795076 1072898 out.go:304] Setting ErrFile to fd 2...
I0729 18:33:24.795084 1072898 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 18:33:24.795388 1072898 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19312-1055011/.minikube/bin
I0729 18:33:24.796296 1072898 config.go:182] Loaded profile config "functional-728029": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0729 18:33:24.796459 1072898 config.go:182] Loaded profile config "functional-728029": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0729 18:33:24.797110 1072898 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0729 18:33:24.797171 1072898 main.go:141] libmachine: Launching plugin server for driver kvm2
I0729 18:33:24.814459 1072898 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37471
I0729 18:33:24.814997 1072898 main.go:141] libmachine: () Calling .GetVersion
I0729 18:33:24.815661 1072898 main.go:141] libmachine: Using API Version  1
I0729 18:33:24.815682 1072898 main.go:141] libmachine: () Calling .SetConfigRaw
I0729 18:33:24.816094 1072898 main.go:141] libmachine: () Calling .GetMachineName
I0729 18:33:24.816340 1072898 main.go:141] libmachine: (functional-728029) Calling .GetState
I0729 18:33:24.818390 1072898 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0729 18:33:24.818438 1072898 main.go:141] libmachine: Launching plugin server for driver kvm2
I0729 18:33:24.834477 1072898 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35947
I0729 18:33:24.834916 1072898 main.go:141] libmachine: () Calling .GetVersion
I0729 18:33:24.835489 1072898 main.go:141] libmachine: Using API Version  1
I0729 18:33:24.835514 1072898 main.go:141] libmachine: () Calling .SetConfigRaw
I0729 18:33:24.835968 1072898 main.go:141] libmachine: () Calling .GetMachineName
I0729 18:33:24.836192 1072898 main.go:141] libmachine: (functional-728029) Calling .DriverName
I0729 18:33:24.836439 1072898 ssh_runner.go:195] Run: systemctl --version
I0729 18:33:24.836470 1072898 main.go:141] libmachine: (functional-728029) Calling .GetSSHHostname
I0729 18:33:24.838990 1072898 main.go:141] libmachine: (functional-728029) DBG | domain functional-728029 has defined MAC address 52:54:00:de:13:09 in network mk-functional-728029
I0729 18:33:24.839428 1072898 main.go:141] libmachine: (functional-728029) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:13:09", ip: ""} in network mk-functional-728029: {Iface:virbr1 ExpiryTime:2024-07-29 19:30:48 +0000 UTC Type:0 Mac:52:54:00:de:13:09 Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:functional-728029 Clientid:01:52:54:00:de:13:09}
I0729 18:33:24.839460 1072898 main.go:141] libmachine: (functional-728029) DBG | domain functional-728029 has defined IP address 192.168.39.8 and MAC address 52:54:00:de:13:09 in network mk-functional-728029
I0729 18:33:24.839535 1072898 main.go:141] libmachine: (functional-728029) Calling .GetSSHPort
I0729 18:33:24.839700 1072898 main.go:141] libmachine: (functional-728029) Calling .GetSSHKeyPath
I0729 18:33:24.839822 1072898 main.go:141] libmachine: (functional-728029) Calling .GetSSHUsername
I0729 18:33:24.839951 1072898 sshutil.go:53] new ssh client: &{IP:192.168.39.8 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/functional-728029/id_rsa Username:docker}
I0729 18:33:24.957904 1072898 ssh_runner.go:195] Run: sudo crictl images --output json
I0729 18:33:25.156636 1072898 main.go:141] libmachine: Making call to close driver server
I0729 18:33:25.156653 1072898 main.go:141] libmachine: (functional-728029) Calling .Close
I0729 18:33:25.156957 1072898 main.go:141] libmachine: (functional-728029) DBG | Closing plugin on server side
I0729 18:33:25.157049 1072898 main.go:141] libmachine: Successfully made call to close driver server
I0729 18:33:25.157072 1072898 main.go:141] libmachine: Making call to close connection to plugin binary
I0729 18:33:25.157082 1072898 main.go:141] libmachine: Making call to close driver server
I0729 18:33:25.157094 1072898 main.go:141] libmachine: (functional-728029) Calling .Close
I0729 18:33:25.157369 1072898 main.go:141] libmachine: (functional-728029) DBG | Closing plugin on server side
I0729 18:33:25.157400 1072898 main.go:141] libmachine: Successfully made call to close driver server
I0729 18:33:25.157429 1072898 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.42s)
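Note: the JSON format is a flat array of objects (id, repoDigests, repoTags, size), which makes it the easiest variant to script against. A minimal sketch, assuming jq on the host:

  # print each image's first tag (or <none> for untagged images) together with its size in bytes
  out/minikube-linux-amd64 -p functional-728029 image ls --format json \
    | jq -r '.[] | "\(.repoTags[0] // "<none>")\t\(.size)"'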

TestFunctional/parallel/ImageCommands/ImageListYaml (0.57s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-728029 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-728029 image ls --format yaml --alsologtostderr:
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: a72860cb95fd59e9c696c66441c64f18e66915fa26b249911e83c3854477ed9a
repoDigests:
- docker.io/library/nginx@sha256:6af79ae5de407283dcea8b00d5c37ace95441fd58a8b1d2aa1ed93f5511bb18c
- docker.io/library/nginx@sha256:baa881b012a49e3c2cd6ab9d80f9fcd2962a98af8ede947d0ef930a427b28afc
repoTags:
- docker.io/library/nginx:latest
size: "191750286"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c
- registry.k8s.io/kube-apiserver@sha256:a3a6c80030a6e720734ae3291448388f70b6f1d463f103e4f06f358f8a170315
repoTags:
- registry.k8s.io/kube-apiserver:v1.30.3
size: "117609954"
- id: 55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1
repoDigests:
- registry.k8s.io/kube-proxy@sha256:8c178447597867a03bbcdf0d1ce43fc8f6807ead2321bd1ec0e845a2f12dad80
- registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65
repoTags:
- registry.k8s.io/kube-proxy:v1.30.3
size: "85953945"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: d46c2df8c6f78b4d82b28dc2f993884a708a8a4994b163071235f6756f183a0b
repoDigests:
- localhost/minikube-local-cache-test@sha256:e3ac8edcbe9b64472c10ca255f5366ab943a4add97ac34fae2eee9b26738cfa5
repoTags:
- localhost/minikube-local-cache-test:functional-728029
size: "3330"
- id: 3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:1738178fb116d10e7cde2cfc3671f5dfdad518d773677af740483f2dfe674266
- registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4
repoTags:
- registry.k8s.io/kube-scheduler:v1.30.3
size: "63051080"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1
- registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "61245718"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: 3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899
repoDigests:
- registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62
- registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b
repoTags:
- registry.k8s.io/etcd:3.5.12-0
size: "150779692"
- id: 5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f
repoDigests:
- docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115
- docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493
repoTags:
- docker.io/kindest/kindnetd:v20240715-585640e9
size: "87165492"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- localhost/kicbase/echo-server:functional-728029
size: "4943877"
- id: 76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7
- registry.k8s.io/kube-controller-manager@sha256:fa179d147c6bacddd1586f6d12ff79a844e951c7b159fdcb92cdf56f3033d91e
repoTags:
- registry.k8s.io/kube-controller-manager:v1.30.3
size: "112198984"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
- registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10
repoTags:
- registry.k8s.io/pause:3.9
size: "750414"
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-728029 image ls --format yaml --alsologtostderr:
I0729 18:33:23.996745 1072823 out.go:291] Setting OutFile to fd 1 ...
I0729 18:33:23.997039 1072823 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 18:33:23.997051 1072823 out.go:304] Setting ErrFile to fd 2...
I0729 18:33:23.997058 1072823 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 18:33:23.997283 1072823 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19312-1055011/.minikube/bin
I0729 18:33:23.997869 1072823 config.go:182] Loaded profile config "functional-728029": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0729 18:33:23.997997 1072823 config.go:182] Loaded profile config "functional-728029": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0729 18:33:23.998384 1072823 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0729 18:33:23.998440 1072823 main.go:141] libmachine: Launching plugin server for driver kvm2
I0729 18:33:24.015505 1072823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36621
I0729 18:33:24.015996 1072823 main.go:141] libmachine: () Calling .GetVersion
I0729 18:33:24.016633 1072823 main.go:141] libmachine: Using API Version  1
I0729 18:33:24.016656 1072823 main.go:141] libmachine: () Calling .SetConfigRaw
I0729 18:33:24.016980 1072823 main.go:141] libmachine: () Calling .GetMachineName
I0729 18:33:24.017196 1072823 main.go:141] libmachine: (functional-728029) Calling .GetState
I0729 18:33:24.019089 1072823 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0729 18:33:24.019138 1072823 main.go:141] libmachine: Launching plugin server for driver kvm2
I0729 18:33:24.034657 1072823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34743
I0729 18:33:24.035350 1072823 main.go:141] libmachine: () Calling .GetVersion
I0729 18:33:24.035868 1072823 main.go:141] libmachine: Using API Version  1
I0729 18:33:24.035895 1072823 main.go:141] libmachine: () Calling .SetConfigRaw
I0729 18:33:24.036261 1072823 main.go:141] libmachine: () Calling .GetMachineName
I0729 18:33:24.036435 1072823 main.go:141] libmachine: (functional-728029) Calling .DriverName
I0729 18:33:24.036641 1072823 ssh_runner.go:195] Run: systemctl --version
I0729 18:33:24.036687 1072823 main.go:141] libmachine: (functional-728029) Calling .GetSSHHostname
I0729 18:33:24.039690 1072823 main.go:141] libmachine: (functional-728029) DBG | domain functional-728029 has defined MAC address 52:54:00:de:13:09 in network mk-functional-728029
I0729 18:33:24.040171 1072823 main.go:141] libmachine: (functional-728029) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:de:13:09", ip: ""} in network mk-functional-728029: {Iface:virbr1 ExpiryTime:2024-07-29 19:30:48 +0000 UTC Type:0 Mac:52:54:00:de:13:09 Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:functional-728029 Clientid:01:52:54:00:de:13:09}
I0729 18:33:24.040206 1072823 main.go:141] libmachine: (functional-728029) DBG | domain functional-728029 has defined IP address 192.168.39.8 and MAC address 52:54:00:de:13:09 in network mk-functional-728029
I0729 18:33:24.040495 1072823 main.go:141] libmachine: (functional-728029) Calling .GetSSHPort
I0729 18:33:24.040697 1072823 main.go:141] libmachine: (functional-728029) Calling .GetSSHKeyPath
I0729 18:33:24.040850 1072823 main.go:141] libmachine: (functional-728029) Calling .GetSSHUsername
I0729 18:33:24.040980 1072823 sshutil.go:53] new ssh client: &{IP:192.168.39.8 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/functional-728029/id_rsa Username:docker}
I0729 18:33:24.171617 1072823 ssh_runner.go:195] Run: sudo crictl images --output json
I0729 18:33:24.388897 1072823 main.go:141] libmachine: Making call to close driver server
I0729 18:33:24.388911 1072823 main.go:141] libmachine: (functional-728029) Calling .Close
I0729 18:33:24.389282 1072823 main.go:141] libmachine: (functional-728029) DBG | Closing plugin on server side
I0729 18:33:24.389315 1072823 main.go:141] libmachine: Successfully made call to close driver server
I0729 18:33:24.389322 1072823 main.go:141] libmachine: Making call to close connection to plugin binary
I0729 18:33:24.389335 1072823 main.go:141] libmachine: Making call to close driver server
I0729 18:33:24.389346 1072823 main.go:141] libmachine: (functional-728029) Calling .Close
I0729 18:33:24.389655 1072823 main.go:141] libmachine: (functional-728029) DBG | Closing plugin on server side
I0729 18:33:24.389675 1072823 main.go:141] libmachine: Successfully made call to close driver server
I0729 18:33:24.389703 1072823 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.57s)

TestFunctional/parallel/ImageCommands/Setup (0.39s)
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-728029
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.39s)
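Note: the Setup step only prepares a throwaway, profile-scoped tag in the host Docker daemon for the later load/save subtests. A minimal sketch of the same preparation:

  # pull a small test image and retag it with the profile name
  docker pull kicbase/echo-server:1.0
  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-728029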

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (2.84s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-amd64 -p functional-728029 image load --daemon kicbase/echo-server:functional-728029 --alsologtostderr
functional_test.go:355: (dbg) Done: out/minikube-linux-amd64 -p functional-728029 image load --daemon kicbase/echo-server:functional-728029 --alsologtostderr: (2.585883628s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-728029 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (2.84s)
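Note: loading from the host daemon into the cluster's image store and verifying the result uses the same two commands the test runs. A minimal sketch:

  # copy the image from the local Docker daemon into the minikube node, then confirm it is listed
  out/minikube-linux-amd64 -p functional-728029 image load --daemon kicbase/echo-server:functional-728029
  out/minikube-linux-amd64 -p functional-728029 image ls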

TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-728029 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-728029 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.1s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-728029 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.10s)
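Note: all three UpdateContextCmd subtests above exercise the same command, which refreshes the kubeconfig entry for the profile (for example after an IP or port change). A minimal sketch; add -v=2 --alsologtostderr for the verbose output the test captures:

  # rewrite the kubeconfig context for the profile, then confirm kubectl can use it
  out/minikube-linux-amd64 -p functional-728029 update-context
  kubectl --context functional-728029 get nodes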

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.94s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p functional-728029 image load --daemon kicbase/echo-server:functional-728029 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-728029 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.94s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.18s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-728029
functional_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p functional-728029 image load --daemon kicbase/echo-server:functional-728029 --alsologtostderr
E0729 18:33:17.981189 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/addons-685520/client.crt: no such file or directory
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-728029 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.18s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (2.4s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-728029 image save kicbase/echo-server:functional-728029 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:380: (dbg) Done: out/minikube-linux-amd64 -p functional-728029 image save kicbase/echo-server:functional-728029 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr: (2.402261652s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (2.40s)
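Note: saving an image out of the cluster writes a tar archive on the host. A minimal sketch; the destination path here is an arbitrary example, not the workspace path used by the test:

  # export the image from the node's container storage to a tarball on the host
  out/minikube-linux-amd64 -p functional-728029 image save kicbase/echo-server:functional-728029 /tmp/echo-server-save.tar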

TestFunctional/parallel/ImageCommands/ImageRemove (0.84s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-amd64 -p functional-728029 image rm kicbase/echo-server:functional-728029 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-728029 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.84s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.16s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-amd64 -p functional-728029 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
2024/07/29 18:33:21 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-728029 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.16s)
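Note: the reverse direction loads a previously saved tarball back into the node. A minimal sketch, reusing the example archive path from the save step above:

  # import the tar archive into the cluster's image store, then verify
  out/minikube-linux-amd64 -p functional-728029 image load /tmp/echo-server-save.tar
  out/minikube-linux-amd64 -p functional-728029 image ls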

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.99s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-728029
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-728029 image save --daemon kicbase/echo-server:functional-728029 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-728029
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.99s)
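Note: "image save --daemon" pushes the image back into the host Docker daemon rather than to a file, and, as the inspect step above shows, it arrives under a localhost/ prefix. A minimal sketch of the same round trip:

  # drop the host-side tag, restore the image from the cluster, then inspect the restored copy
  docker rmi kicbase/echo-server:functional-728029
  out/minikube-linux-amd64 -p functional-728029 image save --daemon kicbase/echo-server:functional-728029
  docker image inspect localhost/kicbase/echo-server:functional-728029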

TestFunctional/delete_echo-server_images (0.03s)
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-728029
--- PASS: TestFunctional/delete_echo-server_images (0.03s)

TestFunctional/delete_my-image_image (0.01s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-728029
--- PASS: TestFunctional/delete_my-image_image (0.01s)

TestFunctional/delete_minikube_cached_images (0.01s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-728029
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

TestMultiControlPlane/serial/StartCluster (195.85s)
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-344156 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0729 18:35:34.134880 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/addons-685520/client.crt: no such file or directory
E0729 18:36:01.822039 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/addons-685520/client.crt: no such file or directory
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-344156 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m15.179973993s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-344156 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (195.85s)
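Note: the HA cluster used by the remaining MultiControlPlane subtests comes from a single start invocation with the --ha flag. A minimal sketch of the same flags (the verbose -v=7 --alsologtostderr options are omitted):

  # start a multi-control-plane cluster on the kvm2 driver with CRI-O, then check its status
  out/minikube-linux-amd64 start -p ha-344156 --wait=true --memory=2200 --ha --driver=kvm2 --container-runtime=crio
  out/minikube-linux-amd64 -p ha-344156 status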

TestMultiControlPlane/serial/DeployApp (4.61s)
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-344156 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-344156 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-344156 -- rollout status deployment/busybox: (2.497151545s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-344156 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-344156 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-344156 -- exec busybox-fc5497c4f-9sbfq -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-344156 -- exec busybox-fc5497c4f-np547 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-344156 -- exec busybox-fc5497c4f-q7sxh -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-344156 -- exec busybox-fc5497c4f-9sbfq -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-344156 -- exec busybox-fc5497c4f-np547 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-344156 -- exec busybox-fc5497c4f-q7sxh -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-344156 -- exec busybox-fc5497c4f-9sbfq -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-344156 -- exec busybox-fc5497c4f-np547 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-344156 -- exec busybox-fc5497c4f-q7sxh -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (4.61s)
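Note: the DNS checks above can be replayed against any pod of the busybox test deployment. A minimal sketch; the pod name is a placeholder to be taken from the get pods output:

  # wait for the deployment to roll out, list its pods, and resolve an in-cluster name from one of them
  kubectl --context ha-344156 rollout status deployment/busybox
  kubectl --context ha-344156 get pods -o jsonpath='{.items[*].metadata.name}'
  kubectl --context ha-344156 exec <busybox-pod-name> -- nslookup kubernetes.default.svc.cluster.local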

TestMultiControlPlane/serial/PingHostFromPods (1.22s)
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-344156 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-344156 -- exec busybox-fc5497c4f-9sbfq -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-344156 -- exec busybox-fc5497c4f-9sbfq -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-344156 -- exec busybox-fc5497c4f-np547 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-344156 -- exec busybox-fc5497c4f-np547 -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-344156 -- exec busybox-fc5497c4f-q7sxh -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-344156 -- exec busybox-fc5497c4f-q7sxh -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.22s)

TestMultiControlPlane/serial/AddWorkerNode (83.51s)
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-344156 -v=7 --alsologtostderr
E0729 18:38:00.968804 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/functional-728029/client.crt: no such file or directory
E0729 18:38:00.974122 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/functional-728029/client.crt: no such file or directory
E0729 18:38:00.984420 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/functional-728029/client.crt: no such file or directory
E0729 18:38:01.004756 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/functional-728029/client.crt: no such file or directory
E0729 18:38:01.045116 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/functional-728029/client.crt: no such file or directory
E0729 18:38:01.125518 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/functional-728029/client.crt: no such file or directory
E0729 18:38:01.285969 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/functional-728029/client.crt: no such file or directory
E0729 18:38:01.606523 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/functional-728029/client.crt: no such file or directory
E0729 18:38:02.246968 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/functional-728029/client.crt: no such file or directory
E0729 18:38:03.527540 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/functional-728029/client.crt: no such file or directory
E0729 18:38:06.088057 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/functional-728029/client.crt: no such file or directory
E0729 18:38:11.208470 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/functional-728029/client.crt: no such file or directory
E0729 18:38:21.448959 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/functional-728029/client.crt: no such file or directory
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-344156 -v=7 --alsologtostderr: (1m22.681373101s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-344156 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (83.51s)
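Note: adding the worker node is a single node add call followed by a status check. A minimal sketch:

  # join an extra node to the ha-344156 cluster, then re-check cluster status
  out/minikube-linux-amd64 node add -p ha-344156
  out/minikube-linux-amd64 -p ha-344156 status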

TestMultiControlPlane/serial/NodeLabels (0.07s)
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-344156 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.52s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.52s)

TestMultiControlPlane/serial/CopyFile (12.59s)
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-amd64 -p ha-344156 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-344156 cp testdata/cp-test.txt ha-344156:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-344156 ssh -n ha-344156 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-344156 cp ha-344156:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile289939917/001/cp-test_ha-344156.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-344156 ssh -n ha-344156 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-344156 cp ha-344156:/home/docker/cp-test.txt ha-344156-m02:/home/docker/cp-test_ha-344156_ha-344156-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-344156 ssh -n ha-344156 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-344156 ssh -n ha-344156-m02 "sudo cat /home/docker/cp-test_ha-344156_ha-344156-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-344156 cp ha-344156:/home/docker/cp-test.txt ha-344156-m03:/home/docker/cp-test_ha-344156_ha-344156-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-344156 ssh -n ha-344156 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-344156 ssh -n ha-344156-m03 "sudo cat /home/docker/cp-test_ha-344156_ha-344156-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-344156 cp ha-344156:/home/docker/cp-test.txt ha-344156-m04:/home/docker/cp-test_ha-344156_ha-344156-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-344156 ssh -n ha-344156 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-344156 ssh -n ha-344156-m04 "sudo cat /home/docker/cp-test_ha-344156_ha-344156-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-344156 cp testdata/cp-test.txt ha-344156-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-344156 ssh -n ha-344156-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-344156 cp ha-344156-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile289939917/001/cp-test_ha-344156-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-344156 ssh -n ha-344156-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-344156 cp ha-344156-m02:/home/docker/cp-test.txt ha-344156:/home/docker/cp-test_ha-344156-m02_ha-344156.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-344156 ssh -n ha-344156-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-344156 ssh -n ha-344156 "sudo cat /home/docker/cp-test_ha-344156-m02_ha-344156.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-344156 cp ha-344156-m02:/home/docker/cp-test.txt ha-344156-m03:/home/docker/cp-test_ha-344156-m02_ha-344156-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-344156 ssh -n ha-344156-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-344156 ssh -n ha-344156-m03 "sudo cat /home/docker/cp-test_ha-344156-m02_ha-344156-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-344156 cp ha-344156-m02:/home/docker/cp-test.txt ha-344156-m04:/home/docker/cp-test_ha-344156-m02_ha-344156-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-344156 ssh -n ha-344156-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-344156 ssh -n ha-344156-m04 "sudo cat /home/docker/cp-test_ha-344156-m02_ha-344156-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-344156 cp testdata/cp-test.txt ha-344156-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-344156 ssh -n ha-344156-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-344156 cp ha-344156-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile289939917/001/cp-test_ha-344156-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-344156 ssh -n ha-344156-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-344156 cp ha-344156-m03:/home/docker/cp-test.txt ha-344156:/home/docker/cp-test_ha-344156-m03_ha-344156.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-344156 ssh -n ha-344156-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-344156 ssh -n ha-344156 "sudo cat /home/docker/cp-test_ha-344156-m03_ha-344156.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-344156 cp ha-344156-m03:/home/docker/cp-test.txt ha-344156-m02:/home/docker/cp-test_ha-344156-m03_ha-344156-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-344156 ssh -n ha-344156-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-344156 ssh -n ha-344156-m02 "sudo cat /home/docker/cp-test_ha-344156-m03_ha-344156-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-344156 cp ha-344156-m03:/home/docker/cp-test.txt ha-344156-m04:/home/docker/cp-test_ha-344156-m03_ha-344156-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-344156 ssh -n ha-344156-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-344156 ssh -n ha-344156-m04 "sudo cat /home/docker/cp-test_ha-344156-m03_ha-344156-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-344156 cp testdata/cp-test.txt ha-344156-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-344156 ssh -n ha-344156-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-344156 cp ha-344156-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile289939917/001/cp-test_ha-344156-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-344156 ssh -n ha-344156-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-344156 cp ha-344156-m04:/home/docker/cp-test.txt ha-344156:/home/docker/cp-test_ha-344156-m04_ha-344156.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-344156 ssh -n ha-344156-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-344156 ssh -n ha-344156 "sudo cat /home/docker/cp-test_ha-344156-m04_ha-344156.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-344156 cp ha-344156-m04:/home/docker/cp-test.txt ha-344156-m02:/home/docker/cp-test_ha-344156-m04_ha-344156-m02.txt
E0729 18:38:41.929390 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/functional-728029/client.crt: no such file or directory
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-344156 ssh -n ha-344156-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-344156 ssh -n ha-344156-m02 "sudo cat /home/docker/cp-test_ha-344156-m04_ha-344156-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-344156 cp ha-344156-m04:/home/docker/cp-test.txt ha-344156-m03:/home/docker/cp-test_ha-344156-m04_ha-344156-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-344156 ssh -n ha-344156-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-344156 ssh -n ha-344156-m03 "sudo cat /home/docker/cp-test_ha-344156-m04_ha-344156-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (12.59s)
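The CopyFile steps above all follow one pattern: stage a file on a node with "minikube cp", then read it back over "minikube ssh" to confirm the transfer. A minimal sketch of that pattern against the ha-344156 profile from this run (out/minikube-linux-amd64 is the locally built binary used throughout this report):

	out/minikube-linux-amd64 -p ha-344156 cp testdata/cp-test.txt ha-344156-m02:/home/docker/cp-test.txt
	out/minikube-linux-amd64 -p ha-344156 ssh -n ha-344156-m02 "sudo cat /home/docker/cp-test.txt"
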

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (3.5s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:390: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (3.499606684s)
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (3.50s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.4s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.40s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DeleteSecondaryNode (17.09s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-amd64 -p ha-344156 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-amd64 -p ha-344156 node delete m03 -v=7 --alsologtostderr: (16.364091882s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-amd64 -p ha-344156 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (17.09s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.37s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.37s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartCluster (345.85s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-amd64 start -p ha-344156 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0729 18:53:00.968313 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/functional-728029/client.crt: no such file or directory
E0729 18:54:24.011474 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/functional-728029/client.crt: no such file or directory
E0729 18:55:34.135066 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/addons-685520/client.crt: no such file or directory
ha_test.go:560: (dbg) Done: out/minikube-linux-amd64 start -p ha-344156 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (5m45.076996904s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-amd64 -p ha-344156 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (345.85s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.38s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.38s)

                                                
                                    
x
+
TestMultiControlPlane/serial/AddSecondaryNode (78.06s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-344156 --control-plane -v=7 --alsologtostderr
E0729 18:58:00.968805 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/functional-728029/client.crt: no such file or directory
ha_test.go:605: (dbg) Done: out/minikube-linux-amd64 node add -p ha-344156 --control-plane -v=7 --alsologtostderr: (1m17.226511213s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-amd64 -p ha-344156 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (78.06s)
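AddSecondaryNode grows the HA cluster back to its earlier size by adding a control-plane node to the existing profile and then re-checking status. Stripped of the verbosity flags (-v=7 --alsologtostderr) the test passes, the two commands are:

	out/minikube-linux-amd64 node add -p ha-344156 --control-plane
	out/minikube-linux-amd64 -p ha-344156 status
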

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.55s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.55s)

                                                
                                    
x
+
TestJSONOutput/start/Command (97.64s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-597671 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-597671 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (1m37.638269448s)
--- PASS: TestJSONOutput/start/Command (97.64s)

                                                
                                    
x
+
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/Command (0.7s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-597671 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.70s)

                                                
                                    
x
+
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/Command (0.6s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-597671 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.60s)

                                                
                                    
x
+
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/Command (7.34s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-597671 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-597671 --output=json --user=testUser: (7.337376286s)
--- PASS: TestJSONOutput/stop/Command (7.34s)

                                                
                                    
x
+
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestErrorJSONOutput (0.18s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-202889 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-202889 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (56.917135ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"9a66f8bd-b512-412d-a12c-e2cd62358985","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-202889] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"f729c25d-eda4-49be-a23e-0ef77d93133c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19312"}}
	{"specversion":"1.0","id":"1eaec347-4ba2-4cf1-869a-70e28a346232","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"125859c5-7694-4aa2-a652-66d805bf3a14","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19312-1055011/kubeconfig"}}
	{"specversion":"1.0","id":"96dc019d-a547-4834-aca5-f813a9ebf971","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19312-1055011/.minikube"}}
	{"specversion":"1.0","id":"c4518b29-fa5b-4105-8464-64b56e2efc2c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"329a38b5-e71a-40e9-a5e6-bac8633f4090","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"ae8ca425-a4c1-4681-be25-1ea2179953cb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-202889" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-202889
--- PASS: TestErrorJSONOutput (0.18s)
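Each line in the stdout block above is a standalone CloudEvents JSON object, so the --output=json stream can be post-processed line by line. A small sketch, assuming jq is installed, that pulls the step messages out of a start invocation like the one in TestJSONOutput/start/Command (error events carry an exitcode and a name such as DRV_UNSUPPORTED_OS and can be selected the same way):

	out/minikube-linux-amd64 start -p json-output-597671 --output=json --user=testUser \
	  | jq -r 'select(.type == "io.k8s.sigs.minikube.step") | .data.message'
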

                                                
                                    
x
+
TestMainNoArgs (0.04s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.04s)

                                                
                                    
x
+
TestMinikubeProfile (89.28s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-199904 --driver=kvm2  --container-runtime=crio
E0729 19:00:34.137455 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/addons-685520/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-199904 --driver=kvm2  --container-runtime=crio: (43.585269232s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-202846 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-202846 --driver=kvm2  --container-runtime=crio: (43.04154791s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-199904
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-202846
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-202846" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-202846
helpers_test.go:175: Cleaning up "first-199904" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-199904
--- PASS: TestMinikubeProfile (89.28s)
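TestMinikubeProfile exercises the profile subcommand: "minikube profile NAME" switches the active profile, and "minikube profile list -ojson" returns the machine-readable listing the test inspects. The same two calls, taken from the run above:

	out/minikube-linux-amd64 profile first-199904   # make first-199904 the active profile
	out/minikube-linux-amd64 profile list -ojson    # JSON listing of all profiles
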

                                                
                                    
x
+
TestMountStart/serial/StartWithMountFirst (30.94s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-831293 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-831293 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (29.941170073s)
--- PASS: TestMountStart/serial/StartWithMountFirst (30.94s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountFirst (0.36s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-831293 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-831293 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.36s)
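The two MountStart tests above amount to starting a Kubernetes-free VM with a 9p host mount and then checking that mount from inside the guest. A trimmed sketch of those commands (the full test also pins --mount-gid, --mount-uid and --mount-msize, as shown in StartWithMountFirst):

	out/minikube-linux-amd64 start -p mount-start-1-831293 --memory=2048 --mount --mount-port 46464 --no-kubernetes --driver=kvm2 --container-runtime=crio
	out/minikube-linux-amd64 -p mount-start-1-831293 ssh -- ls /minikube-host
	out/minikube-linux-amd64 -p mount-start-1-831293 ssh -- mount | grep 9p
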

                                                
                                    
x
+
TestMountStart/serial/StartWithMountSecond (26.9s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-860327 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-860327 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (25.894961102s)
--- PASS: TestMountStart/serial/StartWithMountSecond (26.90s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountSecond (0.36s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-860327 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-860327 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.36s)

                                                
                                    
x
+
TestMountStart/serial/DeleteFirst (0.68s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-831293 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.68s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostDelete (0.36s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-860327 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-860327 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.36s)

                                                
                                    
x
+
TestMountStart/serial/Stop (1.27s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-860327
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-860327: (1.274190833s)
--- PASS: TestMountStart/serial/Stop (1.27s)

                                                
                                    
x
+
TestMountStart/serial/RestartStopped (21.76s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-860327
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-860327: (20.757010311s)
--- PASS: TestMountStart/serial/RestartStopped (21.76s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostStop (0.36s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-860327 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-860327 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.36s)

                                                
                                    
x
+
TestMultiNode/serial/FreshStart2Nodes (117.67s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-370772 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0729 19:03:00.968516 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/functional-728029/client.crt: no such file or directory
E0729 19:03:37.182550 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/addons-685520/client.crt: no such file or directory
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-370772 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m57.261492729s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-370772 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (117.67s)

                                                
                                    
x
+
TestMultiNode/serial/DeployApp2Nodes (3.62s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-370772 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-370772 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-370772 -- rollout status deployment/busybox: (2.128503021s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-370772 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-370772 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-370772 -- exec busybox-fc5497c4f-6l2ht -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-370772 -- exec busybox-fc5497c4f-6ppmr -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-370772 -- exec busybox-fc5497c4f-6l2ht -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-370772 -- exec busybox-fc5497c4f-6ppmr -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-370772 -- exec busybox-fc5497c4f-6l2ht -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-370772 -- exec busybox-fc5497c4f-6ppmr -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (3.62s)
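DeployApp2Nodes applies a busybox deployment with two replicas and then verifies in-cluster DNS from each pod. The essential sequence, using the bundled kubectl and a pod name reported in this run (pod names vary between runs):

	out/minikube-linux-amd64 kubectl -p multinode-370772 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
	out/minikube-linux-amd64 kubectl -p multinode-370772 -- rollout status deployment/busybox
	out/minikube-linux-amd64 kubectl -p multinode-370772 -- exec busybox-fc5497c4f-6l2ht -- nslookup kubernetes.default.svc.cluster.local
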

                                                
                                    
x
+
TestMultiNode/serial/PingHostFrom2Pods (0.78s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-370772 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-370772 -- exec busybox-fc5497c4f-6l2ht -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-370772 -- exec busybox-fc5497c4f-6l2ht -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-370772 -- exec busybox-fc5497c4f-6ppmr -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-370772 -- exec busybox-fc5497c4f-6ppmr -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.78s)

                                                
                                    
x
+
TestMultiNode/serial/AddNode (49.63s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-370772 -v 3 --alsologtostderr
E0729 19:05:34.135060 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/addons-685520/client.crt: no such file or directory
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-370772 -v 3 --alsologtostderr: (49.07667168s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-370772 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (49.63s)

                                                
                                    
x
+
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-370772 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
x
+
TestMultiNode/serial/ProfileList (0.21s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.21s)

                                                
                                    
x
+
TestMultiNode/serial/CopyFile (7.03s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-370772 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-370772 cp testdata/cp-test.txt multinode-370772:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-370772 ssh -n multinode-370772 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-370772 cp multinode-370772:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile623438728/001/cp-test_multinode-370772.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-370772 ssh -n multinode-370772 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-370772 cp multinode-370772:/home/docker/cp-test.txt multinode-370772-m02:/home/docker/cp-test_multinode-370772_multinode-370772-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-370772 ssh -n multinode-370772 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-370772 ssh -n multinode-370772-m02 "sudo cat /home/docker/cp-test_multinode-370772_multinode-370772-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-370772 cp multinode-370772:/home/docker/cp-test.txt multinode-370772-m03:/home/docker/cp-test_multinode-370772_multinode-370772-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-370772 ssh -n multinode-370772 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-370772 ssh -n multinode-370772-m03 "sudo cat /home/docker/cp-test_multinode-370772_multinode-370772-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-370772 cp testdata/cp-test.txt multinode-370772-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-370772 ssh -n multinode-370772-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-370772 cp multinode-370772-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile623438728/001/cp-test_multinode-370772-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-370772 ssh -n multinode-370772-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-370772 cp multinode-370772-m02:/home/docker/cp-test.txt multinode-370772:/home/docker/cp-test_multinode-370772-m02_multinode-370772.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-370772 ssh -n multinode-370772-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-370772 ssh -n multinode-370772 "sudo cat /home/docker/cp-test_multinode-370772-m02_multinode-370772.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-370772 cp multinode-370772-m02:/home/docker/cp-test.txt multinode-370772-m03:/home/docker/cp-test_multinode-370772-m02_multinode-370772-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-370772 ssh -n multinode-370772-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-370772 ssh -n multinode-370772-m03 "sudo cat /home/docker/cp-test_multinode-370772-m02_multinode-370772-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-370772 cp testdata/cp-test.txt multinode-370772-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-370772 ssh -n multinode-370772-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-370772 cp multinode-370772-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile623438728/001/cp-test_multinode-370772-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-370772 ssh -n multinode-370772-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-370772 cp multinode-370772-m03:/home/docker/cp-test.txt multinode-370772:/home/docker/cp-test_multinode-370772-m03_multinode-370772.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-370772 ssh -n multinode-370772-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-370772 ssh -n multinode-370772 "sudo cat /home/docker/cp-test_multinode-370772-m03_multinode-370772.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-370772 cp multinode-370772-m03:/home/docker/cp-test.txt multinode-370772-m02:/home/docker/cp-test_multinode-370772-m03_multinode-370772-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-370772 ssh -n multinode-370772-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-370772 ssh -n multinode-370772-m02 "sudo cat /home/docker/cp-test_multinode-370772-m03_multinode-370772-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.03s)

                                                
                                    
x
+
TestMultiNode/serial/StopNode (2.31s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-370772 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-370772 node stop m03: (1.458051175s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-370772 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-370772 status: exit status 7 (429.38916ms)

                                                
                                                
-- stdout --
	multinode-370772
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-370772-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-370772-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-370772 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-370772 status --alsologtostderr: exit status 7 (420.776716ms)

                                                
                                                
-- stdout --
	multinode-370772
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-370772-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-370772-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 19:05:50.227055 1090408 out.go:291] Setting OutFile to fd 1 ...
	I0729 19:05:50.227166 1090408 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 19:05:50.227174 1090408 out.go:304] Setting ErrFile to fd 2...
	I0729 19:05:50.227178 1090408 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 19:05:50.227371 1090408 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19312-1055011/.minikube/bin
	I0729 19:05:50.227525 1090408 out.go:298] Setting JSON to false
	I0729 19:05:50.227552 1090408 mustload.go:65] Loading cluster: multinode-370772
	I0729 19:05:50.227596 1090408 notify.go:220] Checking for updates...
	I0729 19:05:50.227913 1090408 config.go:182] Loaded profile config "multinode-370772": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 19:05:50.227928 1090408 status.go:255] checking status of multinode-370772 ...
	I0729 19:05:50.228305 1090408 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:05:50.228348 1090408 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:05:50.247500 1090408 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42441
	I0729 19:05:50.248001 1090408 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:05:50.248599 1090408 main.go:141] libmachine: Using API Version  1
	I0729 19:05:50.248616 1090408 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:05:50.248960 1090408 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:05:50.249124 1090408 main.go:141] libmachine: (multinode-370772) Calling .GetState
	I0729 19:05:50.250804 1090408 status.go:330] multinode-370772 host status = "Running" (err=<nil>)
	I0729 19:05:50.250824 1090408 host.go:66] Checking if "multinode-370772" exists ...
	I0729 19:05:50.251125 1090408 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:05:50.251161 1090408 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:05:50.265948 1090408 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46799
	I0729 19:05:50.266386 1090408 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:05:50.266887 1090408 main.go:141] libmachine: Using API Version  1
	I0729 19:05:50.266913 1090408 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:05:50.267197 1090408 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:05:50.267367 1090408 main.go:141] libmachine: (multinode-370772) Calling .GetIP
	I0729 19:05:50.269817 1090408 main.go:141] libmachine: (multinode-370772) DBG | domain multinode-370772 has defined MAC address 52:54:00:0a:42:f8 in network mk-multinode-370772
	I0729 19:05:50.270198 1090408 main.go:141] libmachine: (multinode-370772) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:42:f8", ip: ""} in network mk-multinode-370772: {Iface:virbr1 ExpiryTime:2024-07-29 20:03:03 +0000 UTC Type:0 Mac:52:54:00:0a:42:f8 Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:multinode-370772 Clientid:01:52:54:00:0a:42:f8}
	I0729 19:05:50.270226 1090408 main.go:141] libmachine: (multinode-370772) DBG | domain multinode-370772 has defined IP address 192.168.39.180 and MAC address 52:54:00:0a:42:f8 in network mk-multinode-370772
	I0729 19:05:50.270367 1090408 host.go:66] Checking if "multinode-370772" exists ...
	I0729 19:05:50.270768 1090408 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:05:50.270810 1090408 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:05:50.286785 1090408 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34133
	I0729 19:05:50.287134 1090408 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:05:50.287558 1090408 main.go:141] libmachine: Using API Version  1
	I0729 19:05:50.287581 1090408 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:05:50.287868 1090408 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:05:50.288064 1090408 main.go:141] libmachine: (multinode-370772) Calling .DriverName
	I0729 19:05:50.288268 1090408 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 19:05:50.288289 1090408 main.go:141] libmachine: (multinode-370772) Calling .GetSSHHostname
	I0729 19:05:50.290825 1090408 main.go:141] libmachine: (multinode-370772) DBG | domain multinode-370772 has defined MAC address 52:54:00:0a:42:f8 in network mk-multinode-370772
	I0729 19:05:50.291238 1090408 main.go:141] libmachine: (multinode-370772) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:42:f8", ip: ""} in network mk-multinode-370772: {Iface:virbr1 ExpiryTime:2024-07-29 20:03:03 +0000 UTC Type:0 Mac:52:54:00:0a:42:f8 Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:multinode-370772 Clientid:01:52:54:00:0a:42:f8}
	I0729 19:05:50.291264 1090408 main.go:141] libmachine: (multinode-370772) DBG | domain multinode-370772 has defined IP address 192.168.39.180 and MAC address 52:54:00:0a:42:f8 in network mk-multinode-370772
	I0729 19:05:50.291378 1090408 main.go:141] libmachine: (multinode-370772) Calling .GetSSHPort
	I0729 19:05:50.291520 1090408 main.go:141] libmachine: (multinode-370772) Calling .GetSSHKeyPath
	I0729 19:05:50.291642 1090408 main.go:141] libmachine: (multinode-370772) Calling .GetSSHUsername
	I0729 19:05:50.291782 1090408 sshutil.go:53] new ssh client: &{IP:192.168.39.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/multinode-370772/id_rsa Username:docker}
	I0729 19:05:50.379833 1090408 ssh_runner.go:195] Run: systemctl --version
	I0729 19:05:50.385729 1090408 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 19:05:50.400055 1090408 kubeconfig.go:125] found "multinode-370772" server: "https://192.168.39.180:8443"
	I0729 19:05:50.400082 1090408 api_server.go:166] Checking apiserver status ...
	I0729 19:05:50.400127 1090408 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 19:05:50.413157 1090408 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1123/cgroup
	W0729 19:05:50.422205 1090408 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1123/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0729 19:05:50.422272 1090408 ssh_runner.go:195] Run: ls
	I0729 19:05:50.426708 1090408 api_server.go:253] Checking apiserver healthz at https://192.168.39.180:8443/healthz ...
	I0729 19:05:50.431778 1090408 api_server.go:279] https://192.168.39.180:8443/healthz returned 200:
	ok
	I0729 19:05:50.431800 1090408 status.go:422] multinode-370772 apiserver status = Running (err=<nil>)
	I0729 19:05:50.431810 1090408 status.go:257] multinode-370772 status: &{Name:multinode-370772 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 19:05:50.431832 1090408 status.go:255] checking status of multinode-370772-m02 ...
	I0729 19:05:50.432137 1090408 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:05:50.432177 1090408 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:05:50.447552 1090408 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38185
	I0729 19:05:50.447923 1090408 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:05:50.448389 1090408 main.go:141] libmachine: Using API Version  1
	I0729 19:05:50.448408 1090408 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:05:50.448732 1090408 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:05:50.448924 1090408 main.go:141] libmachine: (multinode-370772-m02) Calling .GetState
	I0729 19:05:50.450321 1090408 status.go:330] multinode-370772-m02 host status = "Running" (err=<nil>)
	I0729 19:05:50.450337 1090408 host.go:66] Checking if "multinode-370772-m02" exists ...
	I0729 19:05:50.450628 1090408 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:05:50.450665 1090408 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:05:50.465579 1090408 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45095
	I0729 19:05:50.466010 1090408 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:05:50.466503 1090408 main.go:141] libmachine: Using API Version  1
	I0729 19:05:50.466523 1090408 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:05:50.466780 1090408 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:05:50.466964 1090408 main.go:141] libmachine: (multinode-370772-m02) Calling .GetIP
	I0729 19:05:50.469263 1090408 main.go:141] libmachine: (multinode-370772-m02) DBG | domain multinode-370772-m02 has defined MAC address 52:54:00:12:73:68 in network mk-multinode-370772
	I0729 19:05:50.469615 1090408 main.go:141] libmachine: (multinode-370772-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:73:68", ip: ""} in network mk-multinode-370772: {Iface:virbr1 ExpiryTime:2024-07-29 20:04:15 +0000 UTC Type:0 Mac:52:54:00:12:73:68 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:multinode-370772-m02 Clientid:01:52:54:00:12:73:68}
	I0729 19:05:50.469642 1090408 main.go:141] libmachine: (multinode-370772-m02) DBG | domain multinode-370772-m02 has defined IP address 192.168.39.127 and MAC address 52:54:00:12:73:68 in network mk-multinode-370772
	I0729 19:05:50.469771 1090408 host.go:66] Checking if "multinode-370772-m02" exists ...
	I0729 19:05:50.470171 1090408 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:05:50.470214 1090408 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:05:50.484804 1090408 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34609
	I0729 19:05:50.485204 1090408 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:05:50.485657 1090408 main.go:141] libmachine: Using API Version  1
	I0729 19:05:50.485677 1090408 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:05:50.485924 1090408 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:05:50.486097 1090408 main.go:141] libmachine: (multinode-370772-m02) Calling .DriverName
	I0729 19:05:50.486250 1090408 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 19:05:50.486273 1090408 main.go:141] libmachine: (multinode-370772-m02) Calling .GetSSHHostname
	I0729 19:05:50.488601 1090408 main.go:141] libmachine: (multinode-370772-m02) DBG | domain multinode-370772-m02 has defined MAC address 52:54:00:12:73:68 in network mk-multinode-370772
	I0729 19:05:50.489006 1090408 main.go:141] libmachine: (multinode-370772-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:73:68", ip: ""} in network mk-multinode-370772: {Iface:virbr1 ExpiryTime:2024-07-29 20:04:15 +0000 UTC Type:0 Mac:52:54:00:12:73:68 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:multinode-370772-m02 Clientid:01:52:54:00:12:73:68}
	I0729 19:05:50.489036 1090408 main.go:141] libmachine: (multinode-370772-m02) DBG | domain multinode-370772-m02 has defined IP address 192.168.39.127 and MAC address 52:54:00:12:73:68 in network mk-multinode-370772
	I0729 19:05:50.489136 1090408 main.go:141] libmachine: (multinode-370772-m02) Calling .GetSSHPort
	I0729 19:05:50.489310 1090408 main.go:141] libmachine: (multinode-370772-m02) Calling .GetSSHKeyPath
	I0729 19:05:50.489443 1090408 main.go:141] libmachine: (multinode-370772-m02) Calling .GetSSHUsername
	I0729 19:05:50.489609 1090408 sshutil.go:53] new ssh client: &{IP:192.168.39.127 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1055011/.minikube/machines/multinode-370772-m02/id_rsa Username:docker}
	I0729 19:05:50.570478 1090408 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 19:05:50.584872 1090408 status.go:257] multinode-370772-m02 status: &{Name:multinode-370772-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0729 19:05:50.584935 1090408 status.go:255] checking status of multinode-370772-m03 ...
	I0729 19:05:50.585276 1090408 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:05:50.585326 1090408 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:05:50.600883 1090408 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39943
	I0729 19:05:50.601283 1090408 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:05:50.601766 1090408 main.go:141] libmachine: Using API Version  1
	I0729 19:05:50.601793 1090408 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:05:50.602087 1090408 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:05:50.602295 1090408 main.go:141] libmachine: (multinode-370772-m03) Calling .GetState
	I0729 19:05:50.603830 1090408 status.go:330] multinode-370772-m03 host status = "Stopped" (err=<nil>)
	I0729 19:05:50.603843 1090408 status.go:343] host is not running, skipping remaining checks
	I0729 19:05:50.603858 1090408 status.go:257] multinode-370772-m03 status: &{Name:multinode-370772-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.31s)

TestMultiNode/serial/StartAfterStop (37.67s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-370772 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-370772 node start m03 -v=7 --alsologtostderr: (37.050249151s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-370772 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (37.67s)

TestMultiNode/serial/DeleteNode (2.25s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-370772 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-370772 node delete m03: (1.735481692s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-370772 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.25s)

TestMultiNode/serial/RestartMultiNode (181.57s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-370772 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0729 19:15:34.136908 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/addons-685520/client.crt: no such file or directory
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-370772 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m1.058959682s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-370772 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (181.57s)

TestMultiNode/serial/ValidateNameConflict (44.01s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-370772
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-370772-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-370772-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (57.66866ms)

                                                
                                                
-- stdout --
	* [multinode-370772-m02] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19312
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19312-1055011/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19312-1055011/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-370772-m02' is duplicated with machine name 'multinode-370772-m02' in profile 'multinode-370772'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-370772-m03 --driver=kvm2  --container-runtime=crio
E0729 19:18:00.968589 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/functional-728029/client.crt: no such file or directory
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-370772-m03 --driver=kvm2  --container-runtime=crio: (42.695188891s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-370772
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-370772: exit status 80 (218.533947ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-370772 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-370772-m03 already exists in multinode-370772-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_1.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-370772-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-370772-m03: (1.001041291s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (44.01s)

TestScheduledStopUnix (114.35s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-868480 --memory=2048 --driver=kvm2  --container-runtime=crio
E0729 19:23:00.969014 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/functional-728029/client.crt: no such file or directory
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-868480 --memory=2048 --driver=kvm2  --container-runtime=crio: (42.813398281s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-868480 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-868480 -n scheduled-stop-868480
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-868480 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-868480 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-868480 -n scheduled-stop-868480
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-868480
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-868480 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-868480
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-868480: exit status 7 (65.636317ms)

                                                
                                                
-- stdout --
	scheduled-stop-868480
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-868480 -n scheduled-stop-868480
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-868480 -n scheduled-stop-868480: exit status 7 (65.047484ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-868480" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-868480
--- PASS: TestScheduledStopUnix (114.35s)

TestRunningBinaryUpgrade (153.92s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.2187939109 start -p running-upgrade-933580 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.2187939109 start -p running-upgrade-933580 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m7.307774059s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-933580 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-933580 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m24.819958568s)
helpers_test.go:175: Cleaning up "running-upgrade-933580" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-933580
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-933580: (1.261119612s)
--- PASS: TestRunningBinaryUpgrade (153.92s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-370700 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-370700 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (85.3552ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-370700] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19312
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19312-1055011/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19312-1055011/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

TestNoKubernetes/serial/StartWithK8s (118.23s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-370700 --driver=kvm2  --container-runtime=crio
E0729 19:25:34.134983 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/addons-685520/client.crt: no such file or directory
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-370700 --driver=kvm2  --container-runtime=crio: (1m57.984507156s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-370700 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (118.23s)

TestNoKubernetes/serial/StartWithStopK8s (9.96s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-370700 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-370700 --no-kubernetes --driver=kvm2  --container-runtime=crio: (8.659803937s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-370700 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-370700 status -o json: exit status 2 (257.18194ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-370700","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-370700
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-370700: (1.043663511s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (9.96s)

TestNoKubernetes/serial/Start (29.72s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-370700 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-370700 --no-kubernetes --driver=kvm2  --container-runtime=crio: (29.723825323s)
--- PASS: TestNoKubernetes/serial/Start (29.72s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.2s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-370700 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-370700 "sudo systemctl is-active --quiet service kubelet": exit status 1 (201.911901ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.20s)

TestNoKubernetes/serial/ProfileList (1.05s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.05s)

TestNoKubernetes/serial/Stop (1.29s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-370700
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-370700: (1.287202096s)
--- PASS: TestNoKubernetes/serial/Stop (1.29s)

TestNoKubernetes/serial/StartNoArgs (43.08s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-370700 --driver=kvm2  --container-runtime=crio
E0729 19:27:44.013270 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/functional-728029/client.crt: no such file or directory
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-370700 --driver=kvm2  --container-runtime=crio: (43.081038288s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (43.08s)

TestNetworkPlugins/group/false (3s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-184620 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-184620 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (106.599614ms)

                                                
                                                
-- stdout --
	* [false-184620] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19312
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19312-1055011/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19312-1055011/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 19:28:00.729888 1101277 out.go:291] Setting OutFile to fd 1 ...
	I0729 19:28:00.730028 1101277 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 19:28:00.730041 1101277 out.go:304] Setting ErrFile to fd 2...
	I0729 19:28:00.730047 1101277 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 19:28:00.730352 1101277 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19312-1055011/.minikube/bin
	I0729 19:28:00.731199 1101277 out.go:298] Setting JSON to false
	I0729 19:28:00.732654 1101277 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":11433,"bootTime":1722269848,"procs":217,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 19:28:00.732729 1101277 start.go:139] virtualization: kvm guest
	I0729 19:28:00.734933 1101277 out.go:177] * [false-184620] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 19:28:00.736139 1101277 out.go:177]   - MINIKUBE_LOCATION=19312
	I0729 19:28:00.736153 1101277 notify.go:220] Checking for updates...
	I0729 19:28:00.738422 1101277 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 19:28:00.739674 1101277 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19312-1055011/kubeconfig
	I0729 19:28:00.740719 1101277 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19312-1055011/.minikube
	I0729 19:28:00.741718 1101277 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 19:28:00.742766 1101277 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 19:28:00.744331 1101277 config.go:182] Loaded profile config "NoKubernetes-370700": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I0729 19:28:00.744426 1101277 config.go:182] Loaded profile config "cert-expiration-183319": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 19:28:00.744535 1101277 config.go:182] Loaded profile config "kubernetes-upgrade-261955": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0729 19:28:00.744629 1101277 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 19:28:00.781414 1101277 out.go:177] * Using the kvm2 driver based on user configuration
	I0729 19:28:00.782497 1101277 start.go:297] selected driver: kvm2
	I0729 19:28:00.782514 1101277 start.go:901] validating driver "kvm2" against <nil>
	I0729 19:28:00.782529 1101277 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 19:28:00.784627 1101277 out.go:177] 
	W0729 19:28:00.785884 1101277 out.go:239] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0729 19:28:00.787112 1101277 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-184620 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-184620

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-184620

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-184620

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-184620

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-184620

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-184620

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-184620

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-184620

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-184620

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-184620

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-184620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-184620"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-184620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-184620"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-184620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-184620"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-184620

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-184620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-184620"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-184620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-184620"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-184620" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-184620" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-184620" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-184620" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-184620" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-184620" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-184620" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-184620" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-184620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-184620"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-184620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-184620"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-184620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-184620"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-184620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-184620"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-184620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-184620"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-184620" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-184620" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-184620" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-184620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-184620"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-184620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-184620"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-184620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-184620"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-184620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-184620"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-184620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-184620"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 29 Jul 2024 19:26:58 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: cluster_info
    server: https://192.168.72.202:8443
  name: cert-expiration-183319
contexts:
- context:
    cluster: cert-expiration-183319
    extensions:
    - extension:
        last-update: Mon, 29 Jul 2024 19:26:58 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: context_info
    namespace: default
    user: cert-expiration-183319
  name: cert-expiration-183319
current-context: ""
kind: Config
preferences: {}
users:
- name: cert-expiration-183319
  user:
    client-certificate: /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/cert-expiration-183319/client.crt
    client-key: /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/cert-expiration-183319/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-184620

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-184620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-184620"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-184620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-184620"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-184620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-184620"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-184620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-184620"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-184620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-184620"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-184620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-184620"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-184620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-184620"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-184620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-184620"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-184620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-184620"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-184620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-184620"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-184620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-184620"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-184620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-184620"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-184620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-184620"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-184620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-184620"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-184620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-184620"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-184620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-184620"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-184620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-184620"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-184620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-184620"

                                                
                                                
----------------------- debugLogs end: false-184620 [took: 2.753825731s] --------------------------------
helpers_test.go:175: Cleaning up "false-184620" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-184620
--- PASS: TestNetworkPlugins/group/false (3.00s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.21s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-370700 "sudo systemctl is-active --quiet service kubelet"
E0729 19:28:00.969125 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/functional-728029/client.crt: no such file or directory
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-370700 "sudo systemctl is-active --quiet service kubelet": exit status 1 (205.26726ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.21s)

TestStoppedBinaryUpgrade/Setup (0.41s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.41s)

TestStoppedBinaryUpgrade/Upgrade (101.91s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.4000622346 start -p stopped-upgrade-336676 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.4000622346 start -p stopped-upgrade-336676 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (54.400735688s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.4000622346 -p stopped-upgrade-336676 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.4000622346 -p stopped-upgrade-336676 stop: (2.138633318s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-336676 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-336676 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (45.367812264s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (101.91s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.85s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-336676
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.85s)

TestPause/serial/Start (61.94s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-464015 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
E0729 19:30:34.135025 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/addons-685520/client.crt: no such file or directory
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-464015 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m1.940174097s)
--- PASS: TestPause/serial/Start (61.94s)

TestNetworkPlugins/group/auto/Start (75.29s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-184620 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-184620 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (1m15.292530472s)
--- PASS: TestNetworkPlugins/group/auto/Start (75.29s)

TestNetworkPlugins/group/kindnet/Start (109.87s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-184620 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-184620 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m49.872018245s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (109.87s)

TestNetworkPlugins/group/calico/Start (143.17s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-184620 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-184620 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (2m23.170716129s)
--- PASS: TestNetworkPlugins/group/calico/Start (143.17s)

TestNetworkPlugins/group/auto/KubeletFlags (0.23s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-184620 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.23s)

TestNetworkPlugins/group/auto/NetCatPod (11.31s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-184620 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-vf5tg" [b3514b21-fc93-42e5-adfb-20eeec03ff3c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-vf5tg" [b3514b21-fc93-42e5-adfb-20eeec03ff3c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.004147483s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.31s)

TestNetworkPlugins/group/auto/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-184620 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.18s)

TestNetworkPlugins/group/auto/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-184620 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.14s)

TestNetworkPlugins/group/auto/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-184620 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.12s)

TestNetworkPlugins/group/custom-flannel/Start (78.35s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-184620 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-184620 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m18.346015122s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (78.35s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-hl8pg" [925715bf-d83a-4751-89ff-ec55d6858a74] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004648822s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.24s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-184620 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.24s)

TestNetworkPlugins/group/kindnet/NetCatPod (10.24s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-184620 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-m6ppg" [fafef86b-37ad-4983-9384-f04ea13c5d1f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-m6ppg" [fafef86b-37ad-4983-9384-f04ea13c5d1f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.006112755s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.24s)

TestNetworkPlugins/group/kindnet/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-184620 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.16s)

TestNetworkPlugins/group/kindnet/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-184620 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-184620 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.21s)
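Note: taken together, the DNS, Localhost and HairPin steps probe the cluster network from inside the netcat deployment: service DNS resolution, a plain TCP connect to localhost, and a connection back through the pod's own Service name (hairpin traffic). The commands are the ones logged above:

    CTX=kindnet-184620
    # service DNS resolution from inside the pod
    kubectl --context "$CTX" exec deployment/netcat -- nslookup kubernetes.default
    # plain TCP connect to localhost:8080
    kubectl --context "$CTX" exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
    # hairpin: connect back to the pod through its own Service name
    kubectl --context "$CTX" exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"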

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (67.76s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-184620 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-184620 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (1m7.755756675s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (67.76s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (110s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-184620 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-184620 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (1m49.998482356s)
--- PASS: TestNetworkPlugins/group/flannel/Start (110.00s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-94msm" [74c5d050-8351-42ac-84f9-94ed7c7969fb] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.005622648s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-184620 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (10.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-184620 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-9z79h" [bd1fa4e8-e63a-4430-9463-96027bb78db2] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-9z79h" [bd1fa4e8-e63a-4430-9463-96027bb78db2] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.004129246s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-184620 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-184620 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-184620 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.42s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-184620 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.42s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (11.6s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-184620 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-h4tx6" [ecc808c4-c40d-40eb-9cf6-fff57faf39bf] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-h4tx6" [ecc808c4-c40d-40eb-9cf6-fff57faf39bf] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.008962181s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.60s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (114.06s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-184620 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-184620 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (1m54.058709654s)
--- PASS: TestNetworkPlugins/group/bridge/Start (114.06s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-184620 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-184620 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-184620 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-184620 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-184620 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-5r4tb" [0e9a3ddf-97c2-4c6a-a435-fba36ee1dbbb] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-5r4tb" [0e9a3ddf-97c2-4c6a-a435-fba36ee1dbbb] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.003842085s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-184620 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-184620 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-184620 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (86.88s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-843792 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-843792 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0: (1m26.883785732s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (86.88s)
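Note: with --preload=false the cluster comes up without the preloaded image tarball, so images have to be pulled individually; that reading of the flag is an assumption rather than something the log states, though it fits the longer FirstStart time here. A quick way to inspect what ended up in the CRI-O image store afterwards:

    minikube start -p no-preload-843792 --memory=2200 --preload=false \
      --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.31.0-beta.0
    # list the images CRI-O actually holds inside the VM
    minikube ssh -p no-preload-843792 "sudo crictl images"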

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-hw7b4" [534ef128-cf54-4e1e-9ce0-3c7924777e2c] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.006057677s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.45s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-184620 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.45s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (11.99s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-184620 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-rvd6x" [d6545377-78a9-4beb-950b-1f27faf35442] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-rvd6x" [d6545377-78a9-4beb-950b-1f27faf35442] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.004157383s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.99s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-184620 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-184620 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-184620 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.13s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/FirstStart (62.81s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-358053 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.3
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-358053 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.3: (1m2.814355067s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (62.81s)
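Note: --embed-certs is generally understood to write the client certificate and key data inline into kubeconfig instead of referencing files on disk; that description is an assumption about the flag, not something this log asserts. One way to check the resulting kubeconfig for the profile:

    # with embedded certs, the minified config carries client-certificate-data inline
    kubectl config view --raw --minify --context embed-certs-358053 | grep -c 'client-certificate-data'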

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-184620 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (11.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-184620 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-m7tbj" [b3cc6bdd-8a39-4b1a-bae3-d920ae693609] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-m7tbj" [b3cc6bdd-8a39-4b1a-bae3-d920ae693609] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.004550984s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-184620 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-184620 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-184620 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.14s)
E0729 20:05:01.508468 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/flannel-184620/client.crt: no such file or directory

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (10.32s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-843792 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [8f5fac22-90e8-4389-8c07-4c496850d1f1] Pending
helpers_test.go:344: "busybox" [8f5fac22-90e8-4389-8c07-4c496850d1f1] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [8f5fac22-90e8-4389-8c07-4c496850d1f1] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.005135293s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-843792 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.32s)
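Note: DeployApp creates a busybox pod from testdata/busybox.yaml, waits for it to run, then reads the open-file limit inside it. A manual equivalent of the logged steps, where the kubectl wait line stands in for the helper's 8-minute polling:

    CTX=no-preload-843792
    kubectl --context "$CTX" create -f testdata/busybox.yaml
    kubectl --context "$CTX" wait --for=condition=Ready pod/busybox --timeout=480s
    kubectl --context "$CTX" exec busybox -- /bin/sh -c "ulimit -n"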

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (61.52s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-024652 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.3
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-024652 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.3: (1m1.519205424s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (61.52s)
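Note: this profile moves the API server off the default 8443 via --apiserver-port=8444. A quick check that kubeconfig picked up the non-default port (the jsonpath expression is just one convenient way to read the server URL, not part of the test):

    kubectl config view --minify --context default-k8s-diff-port-024652 \
      -o jsonpath='{.clusters[0].cluster.server}'
    # expected to end in :8444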

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.05s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-843792 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-843792 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.05s)
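Note: EnableAddonWhileActive turns on metrics-server with its image and registry overridden (fake.domain presumably keeps the pod from ever pulling successfully), then inspects the deployment. The commands are the logged ones plus one extra jsonpath read, added here to show how the override can be confirmed:

    minikube addons enable metrics-server -p no-preload-843792 \
      --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
    # confirm the deployment now points at the overridden registry/image
    kubectl --context no-preload-843792 -n kube-system get deploy metrics-server \
      -o jsonpath='{.spec.template.spec.containers[0].image}'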

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (7.29s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-358053 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [46ec4db0-9e68-4a90-80ee-c538b8efa052] Pending
helpers_test.go:344: "busybox" [46ec4db0-9e68-4a90-80ee-c538b8efa052] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [46ec4db0-9e68-4a90-80ee-c538b8efa052] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 7.004199666s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-358053 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (7.29s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.04s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-358053 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-358053 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.04s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (7.26s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-024652 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [ff07714d-a051-4ce2-a4e7-c49bab27874f] Pending
E0729 19:37:17.090658 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/auto-184620/client.crt: no such file or directory
helpers_test.go:344: "busybox" [ff07714d-a051-4ce2-a4e7-c49bab27874f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [ff07714d-a051-4ce2-a4e7-c49bab27874f] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 7.003908336s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-024652 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (7.26s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.93s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-024652 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-024652 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.93s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (684.33s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-843792 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0
E0729 19:38:54.373079 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/custom-flannel-184620/client.crt: no such file or directory
E0729 19:38:56.950899 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/kindnet-184620/client.crt: no such file or directory
E0729 19:38:57.372171 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/calico-184620/client.crt: no such file or directory
E0729 19:39:04.613717 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/custom-flannel-184620/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-843792 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0: (11m24.0502772s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-843792 -n no-preload-843792
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (684.33s)
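Note: SecondStart re-runs the same start command against the existing, previously stopped profile and then confirms the host is back up. The pattern, taken directly from the logged commands:

    # same flags as FirstStart, reusing the stopped profile
    minikube start -p no-preload-843792 --memory=2200 --preload=false \
      --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.31.0-beta.0
    # host state check; "Running" is the expected output
    minikube status --format='{{.Host}}' -p no-preload-843792 -n no-preload-843792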

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (594.94s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-358053 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.3
E0729 19:39:24.803715 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/enable-default-cni-184620/client.crt: no such file or directory
E0729 19:39:25.094298 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/custom-flannel-184620/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-358053 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.3: (9m54.670071142s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-358053 -n embed-certs-358053
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (594.94s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (580.79s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-024652 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.3
E0729 19:40:01.508039 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/flannel-184620/client.crt: no such file or directory
E0729 19:40:01.513385 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/flannel-184620/client.crt: no such file or directory
E0729 19:40:01.523679 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/flannel-184620/client.crt: no such file or directory
E0729 19:40:01.544010 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/flannel-184620/client.crt: no such file or directory
E0729 19:40:01.584338 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/flannel-184620/client.crt: no such file or directory
E0729 19:40:01.664675 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/flannel-184620/client.crt: no such file or directory
E0729 19:40:01.825154 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/flannel-184620/client.crt: no such file or directory
E0729 19:40:02.145803 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/flannel-184620/client.crt: no such file or directory
E0729 19:40:02.786788 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/flannel-184620/client.crt: no such file or directory
E0729 19:40:04.067563 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/flannel-184620/client.crt: no such file or directory
E0729 19:40:06.054540 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/custom-flannel-184620/client.crt: no such file or directory
E0729 19:40:06.627805 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/flannel-184620/client.crt: no such file or directory
E0729 19:40:11.748259 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/flannel-184620/client.crt: no such file or directory
E0729 19:40:18.872097 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/kindnet-184620/client.crt: no such file or directory
E0729 19:40:21.988907 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/flannel-184620/client.crt: no such file or directory
E0729 19:40:34.134692 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/addons-685520/client.crt: no such file or directory
E0729 19:40:36.486496 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/enable-default-cni-184620/client.crt: no such file or directory
E0729 19:40:42.469150 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/flannel-184620/client.crt: no such file or directory
E0729 19:40:46.437336 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/bridge-184620/client.crt: no such file or directory
E0729 19:40:46.442594 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/bridge-184620/client.crt: no such file or directory
E0729 19:40:46.452879 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/bridge-184620/client.crt: no such file or directory
E0729 19:40:46.473124 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/bridge-184620/client.crt: no such file or directory
E0729 19:40:46.513416 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/bridge-184620/client.crt: no such file or directory
E0729 19:40:46.593777 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/bridge-184620/client.crt: no such file or directory
E0729 19:40:46.754316 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/bridge-184620/client.crt: no such file or directory
E0729 19:40:47.075082 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/bridge-184620/client.crt: no such file or directory
E0729 19:40:47.716028 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/bridge-184620/client.crt: no such file or directory
E0729 19:40:48.996678 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/bridge-184620/client.crt: no such file or directory
E0729 19:40:51.557167 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/bridge-184620/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-024652 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.3: (9m40.537496535s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-024652 -n default-k8s-diff-port-024652
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (580.79s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (3.29s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-021528 --alsologtostderr -v=3
E0729 19:40:56.677369 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/bridge-184620/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-021528 --alsologtostderr -v=3: (3.286634971s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (3.29s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-021528 -n old-k8s-version-021528
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-021528 -n old-k8s-version-021528: exit status 7 (61.185083ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-021528 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)
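Note: EnableAddonAfterStop checks that addons can be toggled while the cluster is down. minikube status prints Stopped and exits 7 for the stopped host, and the test treats that as acceptable before enabling the dashboard addon. A shell sketch of the same tolerance for exit code 7:

    minikube status --format='{{.Host}}' -p old-k8s-version-021528   # prints "Stopped"
    rc=$?
    [ "$rc" -eq 0 ] || [ "$rc" -eq 7 ] || exit "$rc"   # exit 7 corresponds to the Stopped state above
    minikube addons enable dashboard -p old-k8s-version-021528 \
      --images=MetricsScraper=registry.k8s.io/echoserver:1.4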

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (50.7s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-584186 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-584186 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0: (50.699375192s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (50.70s)
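Note: the newest-cni profile starts with --network-plugin=cni and pushes a pod CIDR into kubeadm via --extra-config, while --wait is narrowed to apiserver, system_pods and default_sa, presumably because no CNI is installed yet (the later "cni mode requires additional setup" warnings point the same way). The start invocation as logged, plus a basic sanity check that is not part of the test:

    minikube start -p newest-cni-584186 --memory=2200 \
      --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true \
      --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 \
      --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.31.0-beta.0
    # control-plane pods should be up even though workloads cannot schedule yet
    kubectl --context newest-cni-584186 -n kube-system get pods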

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.07s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-584186 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-584186 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.071513139s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.07s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (10.64s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-584186 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-584186 --alsologtostderr -v=3: (10.639896738s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (10.64s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-584186 -n newest-cni-584186
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-584186 -n newest-cni-584186: exit status 7 (63.83375ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-584186 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (37.21s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-584186 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0
E0729 20:05:34.134544 1062272 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/addons-685520/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-584186 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0-beta.0: (36.958756369s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-584186 -n newest-cni-584186
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (37.21s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-584186 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240715-f6ad1f6e
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.23s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (2.29s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-584186 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-584186 -n newest-cni-584186
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-584186 -n newest-cni-584186: exit status 2 (230.617397ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-584186 -n newest-cni-584186
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-584186 -n newest-cni-584186: exit status 2 (232.442179ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-584186 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-584186 -n newest-cni-584186
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-584186 -n newest-cni-584186
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.29s)
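Note: the Pause step flips the control plane and kubelet off and back on, using status with per-component format strings to verify each state; status exits 2 while components are paused or stopped and the test tolerates that. The sequence mirrors the logged commands:

    P=newest-cni-584186
    minikube pause -p "$P"
    minikube status --format='{{.APIServer}}' -p "$P"   # "Paused", exit status 2
    minikube status --format='{{.Kubelet}}'   -p "$P"   # "Stopped", exit status 2
    minikube unpause -p "$P"
    minikube status --format='{{.APIServer}}' -p "$P"   # back to running
    minikube status --format='{{.Kubelet}}'   -p "$P"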

                                                
                                    

Test skip (40/320)

Order skipped test Duration
5 TestDownloadOnly/v1.20.0/cached-images 0
6 TestDownloadOnly/v1.20.0/binaries 0
7 TestDownloadOnly/v1.20.0/kubectl 0
14 TestDownloadOnly/v1.30.3/cached-images 0
15 TestDownloadOnly/v1.30.3/binaries 0
16 TestDownloadOnly/v1.30.3/kubectl 0
23 TestDownloadOnly/v1.31.0-beta.0/cached-images 0
24 TestDownloadOnly/v1.31.0-beta.0/binaries 0
25 TestDownloadOnly/v1.31.0-beta.0/kubectl 0
29 TestDownloadOnlyKic 0
38 TestAddons/serial/Volcano 0
47 TestAddons/parallel/Olm 0
57 TestDockerFlags 0
60 TestDockerEnvContainerd 0
62 TestHyperKitDriverInstallOrUpdate 0
63 TestHyperkitDriverSkipUpgrade 0
114 TestFunctional/parallel/DockerEnv 0
115 TestFunctional/parallel/PodmanEnv 0
127 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
128 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
129 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
130 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
131 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
132 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
133 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
134 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
163 TestGvisorAddon 0
185 TestImageBuild 0
212 TestKicCustomNetwork 0
213 TestKicExistingNetwork 0
214 TestKicCustomSubnet 0
215 TestKicStaticIP 0
247 TestChangeNoneUser 0
250 TestScheduledStopWindows 0
252 TestSkaffold 0
254 TestInsufficientStorage 0
258 TestMissingContainerUpgrade 0
270 TestNetworkPlugins/group/kubenet 2.86
279 TestNetworkPlugins/group/cilium 3.19
288 TestStartStop/group/disable-driver-mounts 0.31
x
+
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.3/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.3/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.3/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.3/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.3/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.30.3/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0-beta.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.0-beta.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0-beta.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.0-beta.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0-beta.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.0-beta.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestAddons/serial/Volcano (0s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:879: skipping: crio not supported
--- SKIP: TestAddons/serial/Volcano (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:463: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)
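Note: all seven TunnelCmd subtests above skip for the same reason — the test user cannot run 'route' without an interactive sudo password, so the tunnel cannot alter the host routing table. A minimal sketch of the kind of pre-flight check that would produce this skip, assuming a hypothetical requirePasswordlessRoute helper (not the actual functional_test_tunnel_test.go code):

    package tunneltest

    import (
    	"os/exec"
    	"testing"
    )

    // requirePasswordlessRoute skips the calling test when 'route' cannot be
    // run without a sudo password prompt. Hypothetical helper for illustration.
    func requirePasswordlessRoute(t *testing.T) {
    	t.Helper()
    	// "sudo -n" exits non-zero instead of prompting when a password would
    	// be required, which is the condition reported in the skips above.
    	if err := exec.Command("sudo", "-n", "route").Run(); err != nil {
    		t.Skipf("password required to execute 'route', skipping testTunnel: %v", err)
    	}
    }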

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
x
+
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
x
+
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
x
+
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
x
+
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (2.86s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:626: 
----------------------- debugLogs start: kubenet-184620 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-184620

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-184620

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-184620

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-184620

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-184620

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-184620

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-184620

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-184620

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-184620

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-184620

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-184620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-184620"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-184620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-184620"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-184620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-184620"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-184620

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-184620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-184620"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-184620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-184620"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-184620" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-184620" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-184620" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-184620" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-184620" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-184620" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-184620" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-184620" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-184620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-184620"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-184620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-184620"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-184620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-184620"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-184620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-184620"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-184620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-184620"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-184620" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-184620" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-184620" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-184620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-184620"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-184620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-184620"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-184620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-184620"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-184620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-184620"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-184620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-184620"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 29 Jul 2024 19:26:58 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: cluster_info
    server: https://192.168.72.202:8443
  name: cert-expiration-183319
contexts:
- context:
    cluster: cert-expiration-183319
    extensions:
    - extension:
        last-update: Mon, 29 Jul 2024 19:26:58 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: context_info
    namespace: default
    user: cert-expiration-183319
  name: cert-expiration-183319
current-context: ""
kind: Config
preferences: {}
users:
- name: cert-expiration-183319
  user:
    client-certificate: /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/cert-expiration-183319/client.crt
    client-key: /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/cert-expiration-183319/client.key
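Note: the kubeconfig dumped above defines only the cert-expiration-183319 entry and leaves current-context empty, which is why every debugLogs step reports that the kubenet-184620 context was not found. A minimal sketch of a guard that checks for a context before collecting such logs, assuming a hypothetical contextExists helper and kubectl on PATH (not part of the minikube test suite):

    package debuglogs

    import (
    	"os/exec"
    	"strings"
    )

    // contextExists reports whether the named kubectl context is defined in
    // the active kubeconfig. Hypothetical helper for illustration only.
    func contextExists(name string) (bool, error) {
    	out, err := exec.Command("kubectl", "config", "get-contexts", "-o", "name").Output()
    	if err != nil {
    		return false, err
    	}
    	for _, ctx := range strings.Split(strings.TrimSpace(string(out)), "\n") {
    		if ctx == name {
    			return true, nil
    		}
    	}
    	return false, nil
    }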

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-184620

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-184620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-184620"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-184620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-184620"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-184620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-184620"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-184620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-184620"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-184620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-184620"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-184620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-184620"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-184620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-184620"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-184620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-184620"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-184620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-184620"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-184620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-184620"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-184620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-184620"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-184620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-184620"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-184620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-184620"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-184620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-184620"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-184620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-184620"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-184620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-184620"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-184620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-184620"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-184620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-184620"

                                                
                                                
----------------------- debugLogs end: kubenet-184620 [took: 2.694307442s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-184620" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-184620
--- SKIP: TestNetworkPlugins/group/kubenet (2.86s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (3.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-184620 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-184620

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-184620

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-184620

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-184620

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-184620

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-184620

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-184620

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-184620

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-184620

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-184620

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-184620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-184620"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-184620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-184620"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-184620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-184620"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-184620

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-184620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-184620"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-184620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-184620"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-184620" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-184620" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-184620" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-184620" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-184620" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-184620" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-184620" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-184620" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-184620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-184620"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-184620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-184620"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-184620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-184620"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-184620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-184620"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-184620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-184620"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-184620

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-184620

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-184620" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-184620" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-184620

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-184620

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-184620" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-184620" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-184620" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-184620" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-184620" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-184620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-184620"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-184620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-184620"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-184620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-184620"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-184620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-184620"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-184620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-184620"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19312-1055011/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 29 Jul 2024 19:26:58 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: cluster_info
    server: https://192.168.72.202:8443
  name: cert-expiration-183319
contexts:
- context:
    cluster: cert-expiration-183319
    extensions:
    - extension:
        last-update: Mon, 29 Jul 2024 19:26:58 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: context_info
    namespace: default
    user: cert-expiration-183319
  name: cert-expiration-183319
current-context: ""
kind: Config
preferences: {}
users:
- name: cert-expiration-183319
  user:
    client-certificate: /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/cert-expiration-183319/client.crt
    client-key: /home/jenkins/minikube-integration/19312-1055011/.minikube/profiles/cert-expiration-183319/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-184620

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-184620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-184620"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-184620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-184620"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-184620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-184620"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-184620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-184620"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-184620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-184620"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-184620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-184620"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-184620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-184620"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-184620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-184620"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-184620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-184620"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-184620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-184620"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-184620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-184620"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-184620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-184620"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-184620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-184620"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-184620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-184620"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-184620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-184620"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-184620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-184620"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-184620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-184620"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-184620" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-184620"

                                                
                                                
----------------------- debugLogs end: cilium-184620 [took: 3.054488113s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-184620" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-184620
--- SKIP: TestNetworkPlugins/group/cilium (3.19s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.31s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-251895" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-251895
--- SKIP: TestStartStop/group/disable-driver-mounts (0.31s)

                                                
                                    